Why a 30/60/90 Plan Is Better Than "Let Us Just Automate"
Automation is often seen as a project with a start and end date. In practice, successful automation is an operating system: status tracking, ownership, monitoring, documentation, and continuous improvement.
The firms that fail at automation share common patterns:
- They try to automate everything at once
- Nobody owns the workflow after launch
- There is no monitoring until something breaks
- Documentation does not exist or is outdated
With a 30/60/90 plan, you prevent these failures by forcing focus, defining ownership, and building operational discipline before scaling.
The Philosophy: One Workflow Done Right
The goal is not "automate as much as possible." The goal is "get one workflow into stable production."
Why this matters:
- One working workflow teaches you more than five half-finished ones
- Operational patterns (monitoring, error handling, documentation) transfer to future workflows
- You build internal credibility with measurable results
- Partners see automation as reliable, not experimental
The 30/60/90 structure:
- Days 0-30: Build and validate
- Days 31-60: Stabilize and operationalize
- Days 61-90: Measure, learn, prepare to scale
Days 0-30: Goal, Scope, First Prototype
Output at End of Day 30
A workflow that runs in a controlled scope with real data.
Week 1: Define Success
Target KPI Selection
Pick ONE metric that matters. Examples:
- Response time to new inquiries: < 12 hours
- Document generation time: < 30 minutes
- Data entry accuracy: > 95%
- Follow-up completion rate: 100%
The KPI must be:
- Measurable (you can track it today)
- Meaningful (it affects client experience or revenue)
- Movable (automation can realistically improve it)
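To make "measurable today" concrete, here is a minimal baseline sketch in Python for the response-time KPI, assuming your inquiries can be exported with received and first-response timestamps (the field names are illustrative):

```python
from datetime import datetime

# Illustrative inquiry records: received and first-response timestamps.
inquiries = [
    {"received": datetime(2024, 3, 4, 9, 15), "responded": datetime(2024, 3, 4, 14, 2)},
    {"received": datetime(2024, 3, 4, 16, 40), "responded": datetime(2024, 3, 5, 10, 5)},
    {"received": datetime(2024, 3, 5, 8, 30), "responded": datetime(2024, 3, 5, 9, 45)},
]

TARGET_HOURS = 12  # the single KPI: response time < 12 hours

hours = [(i["responded"] - i["received"]).total_seconds() / 3600 for i in inquiries]
within_target = sum(1 for h in hours if h < TARGET_HOURS)

print(f"Average response time: {sum(hours) / len(hours):.1f} h")
print(f"Within {TARGET_HOURS} h target: {within_target}/{len(hours)}")
```

Run this against a few weeks of historical data and you have your baseline before writing any automation.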
Scope Definition
Draw a clear boundary around what this workflow does and does not do.
| In Scope | Out of Scope |
|---|---|
| New client inquiries via web form | Phone inquiries |
| Employment law matters | Other practice areas |
| Weekday business hours | Weekend/holiday handling |
| Standard intake flow | Complex conflict scenarios |
Week 2: Map the Process
Current State Documentation
Before automating, understand what happens today:
- What triggers the process?
- What steps occur (in what order)?
- Who does each step?
- Where does data come from and go to?
- What decisions are made and by whom?
- What can go wrong?
Minimal Status Model
Define 3-5 statuses that track progress:
- New (just received)
- In Review (being processed)
- Pending Response (waiting on client)
- Complete (done)
- Error (needs attention)
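One way to keep the status model honest is to encode the statuses and their legal transitions explicitly, so an illegal jump surfaces as an error instead of silently corrupting state. A minimal sketch; the transitions are an assumption for a typical intake flow, so adjust them to your process:

```python
from enum import Enum

class Status(Enum):
    NEW = "new"
    IN_REVIEW = "in_review"
    PENDING_RESPONSE = "pending_response"
    COMPLETE = "complete"
    ERROR = "error"

# Legal transitions; anything else is a bug worth surfacing.
TRANSITIONS = {
    Status.NEW: {Status.IN_REVIEW, Status.ERROR},
    Status.IN_REVIEW: {Status.PENDING_RESPONSE, Status.COMPLETE, Status.ERROR},
    Status.PENDING_RESPONSE: {Status.IN_REVIEW, Status.COMPLETE, Status.ERROR},
    Status.COMPLETE: set(),
    Status.ERROR: {Status.IN_REVIEW},  # re-enters the flow after manual review
}

def advance(current: Status, new: Status) -> Status:
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {new.value}")
    return new

status = advance(Status.NEW, Status.IN_REVIEW)
```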
Week 3: Build the Prototype
Keep It Simple
The first version should be embarrassingly simple. If you are integrating more than two systems, you are overcomplicating it.
Test Cases
Create 3-5 test cases from real historical data:
- Happy path (everything works)
- Edge case (unusual but valid input)
- Error case (invalid input, missing data)
- Boundary case (high volume, large files)
Run each test case manually before automation.
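A minimal pytest sketch of these cases, using a hypothetical process_inquiry function as a stand-in for your workflow's entry point:

```python
import pytest

def process_inquiry(inquiry: dict) -> str:
    """Hypothetical intake step; replace with your real workflow entry point."""
    if not inquiry.get("email"):
        raise ValueError("missing email")
    return "in_review"

def test_happy_path():
    assert process_inquiry({"email": "a@b.com", "matter": "employment"}) == "in_review"

def test_edge_case():
    # Unusual but valid: a very long matter description
    assert process_inquiry({"email": "a@b.com", "matter": "x" * 10_000}) == "in_review"

def test_error_case():
    # Invalid input: missing email
    with pytest.raises(ValueError):
        process_inquiry({"matter": "employment"})
```

Keep these test cases; they become the regression suite for every later change to the workflow.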
Week 4: Controlled Launch
Pilot Group
Start with one attorney or one practice area. Not the whole firm.
Parallel Running
Keep the manual process running alongside automation for 1-2 weeks. Compare results.
Daily Check-ins
Someone looks at the workflow output every day. No exceptions.
Anti-Patterns to Avoid
Integration overload: "While we are at it, let us also connect the billing system, the calendar, and the document management." No. One integration at a time.
Perfection paralysis: "We cannot launch until we handle every edge case." Launch with 80% coverage, handle exceptions manually.
Invisible automation: "It just runs in the background." Every automation needs visible status and logging.
Days 31-60: Stabilize (Operations)
Output at End of Day 60
Operations are "quiet": the workflow runs reliably with minimal intervention.
Week 5-6: Monitoring and Alerting
What to Monitor
- Execution success/failure rate
- Processing time per item
- Queue depth (items waiting)
- Error types and frequency
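A minimal sketch of computing these metrics from an execution log; the record fields and the queue-depth source are assumptions, so adapt them to whatever your platform exposes:

```python
# Illustrative execution log: one record per processed item.
runs = [
    {"ok": True, "seconds": 42, "error": None},
    {"ok": True, "seconds": 55, "error": None},
    {"ok": False, "seconds": 120, "error": "timeout"},
]
queue_depth = 7  # items currently waiting, from your queue or intake inbox

success_rate = sum(r["ok"] for r in runs) / len(runs)
avg_seconds = sum(r["seconds"] for r in runs) / len(runs)
error_types = [r["error"] for r in runs if not r["ok"]]

print(f"Success rate: {success_rate:.0%}")
print(f"Avg processing time: {avg_seconds:.0f} s")
print(f"Queue depth: {queue_depth}")
print(f"Errors: {error_types}")
```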
Alert Philosophy
Only alert on actionable items. If an alert fires and the response is "ignore it," remove the alert.
Alert Tiers
| Tier | Condition | Response Time | Channel |
|---|---|---|---|
| Critical | Workflow stopped | < 1 hour | SMS + Email |
| Warning | Error rate > 10% | < 4 hours | Email |
| Info | Unusual volume | Next business day | Dashboard |
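A minimal routing sketch that mirrors the table; the thresholds and channel names are illustrative:

```python
# Map each tier to its channel(s); thresholds mirror the table above.
ALERT_CHANNELS = {
    "critical": ["sms", "email"],
    "warning": ["email"],
    "info": ["dashboard"],
}

def classify(workflow_stopped: bool, error_rate: float, unusual_volume: bool) -> str | None:
    if workflow_stopped:
        return "critical"
    if error_rate > 0.10:
        return "warning"
    if unusual_volume:
        return "info"
    return None  # no alert: nothing actionable

tier = classify(workflow_stopped=False, error_rate=0.15, unusual_volume=False)
if tier:
    print(f"Send {tier} alert via {ALERT_CHANNELS[tier]}")
```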
Week 7-8: Error Handling
Retry Logic
Not every failure is permanent. Build automatic retries for transient failures:
- API timeouts: Retry 3x with exponential backoff
- Rate limits: Retry after delay
- Network errors: Retry 2x
Dead Letter Queue
Items that fail all retries go to a dead letter queue for manual review. Never silently drop failures.
Graceful Degradation
If a non-critical step fails, continue the workflow and flag it for follow-up rather than blocking the entire process (see the sketch below).
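A minimal sketch combining all three patterns, with a hypothetical send_confirmation step standing in for a transient, non-critical failure:

```python
import time

dead_letter_queue = []  # items that exhausted retries; reviewed manually, never dropped

def run_with_retries(step, item, max_attempts=3, base_delay=1.0):
    """Retry transient failures with exponential backoff; dead-letter on exhaustion."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step(item)
        except TimeoutError as exc:  # stand-in for transient errors (API timeout, rate limit)
            if attempt == max_attempts:
                dead_letter_queue.append({"item": item, "error": str(exc)})
                return None
            time.sleep(base_delay * 2 ** (attempt - 1))  # back off: 1 s, then 2 s, ...

def send_confirmation(item):
    """Hypothetical non-critical step that times out."""
    raise TimeoutError("upstream API timed out")

item = {"id": 17}
if run_with_retries(send_confirmation, item) is None:
    item["needs_followup"] = True  # graceful degradation: flag and continue, do not block
print(dead_letter_queue)
```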
Week 7-8: Documentation
Runbook Minimum
One page that answers:
- What does this workflow do?
- How do I know if it is working?
- What do I do if it breaks?
- Who do I contact for help?
Architecture Diagram
A diagram showing the trigger, steps, integrations, and outputs. It does not need to be pretty; it does need to be accurate.
Ownership Model
Primary Owner
One person responsible for workflow health. They get alerts, they fix issues, they approve changes.
Backup Owner
One person who can cover during PTO/illness. Trained on runbook, has access to all systems.
Escalation Path
If the owner and backup cannot resolve an issue, who gets called next?
The 10-Minute Rule
If you cannot diagnose and begin fixing a problem within 10 minutes, the workflow is not production-ready. This means:
- Logs are accessible and searchable
- Error messages are meaningful
- Rollback procedure exists and is tested
- Someone knows how the workflow works
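Searchable logs usually come down to structure. A minimal sketch using one JSON object per log line, so failures can be found with grep or jq; the field names are illustrative:

```python
import json
import logging

# One JSON object per line keeps logs grep- and jq-searchable.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("intake-workflow")

def log_event(item_id: str, status: str, detail: str = "") -> None:
    log.info(json.dumps({
        "workflow": "intake",
        "item_id": item_id,
        "status": status,
        "detail": detail,
    }))

# A meaningful error message says what failed, for which item, and why.
log_event("INQ-2041", "error", "CRM rejected contact: missing required field 'email'")
```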
Days 61-90: Scale (Without Complexity Explosion)
Output at End of Day 90
1-2 additional use cases identified and scoped. Operational patterns documented for reuse.
Week 9-10: Measure and Learn
KPI Review
Compare target KPI before and after automation:
- Baseline (before): What was the metric?
- Current (after): What is it now?
- Delta: What changed?
- Attribution: How much is due to automation vs. other factors?
Operational Metrics
- How many items were processed?
- What is the error rate?
- How much manual intervention was required?
- What is the cost per item (if measurable)?
Lessons Learned
Document what worked, what did not, what you would do differently. This informs the next workflow.
Week 11-12: Prepare to Scale
Identify Bottlenecks
What limits throughput?
- Data quality issues upstream
- Manual approval steps
- Integration rate limits
- Processing capacity
Second Use Case Selection
Score potential next workflows on:
- Impact (time saved, revenue affected)
- Feasibility (data available, integrations possible)
- Risk (what happens if it fails)
- Dependencies (what else needs to change)
Pick the highest-impact, highest-feasibility, lowest-risk option.
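A minimal scoring sketch; the weights, candidate workflows, and 1-5 scores are all illustrative, not a recommendation:

```python
# Illustrative weights; tune to your firm's priorities.
WEIGHTS = {"impact": 0.4, "feasibility": 0.3, "risk": 0.2, "dependencies": 0.1}

# 1-5 scales; risk and dependencies are scored inverted (5 = low risk / few deps).
candidates = {
    "document generation": {"impact": 4, "feasibility": 5, "risk": 4, "dependencies": 4},
    "conflict checking":   {"impact": 5, "feasibility": 2, "risk": 1, "dependencies": 2},
}

def score(c: dict) -> float:
    return sum(WEIGHTS[k] * c[k] for k in WEIGHTS)

for name, c in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(c):.1f}")
```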
Pattern Documentation
What from this workflow can be reused?
- Monitoring setup
- Error handling patterns
- Documentation templates
- Testing approach
Success Criteria Checklist
Day 30 Checkpoint
- Target KPI defined and baselined
- Scope documented (in/out)
- Workflow runs with test data
- 3-5 test cases passing
- Pilot group identified
Day 60 Checkpoint
- Monitoring dashboard live
- Alerts configured and tested
- Error handling implemented
- Runbook written
- Owner and backup assigned
- 10-minute rule verified
Day 90 Checkpoint
- KPI improvement measured
- Operational metrics tracked
- Lessons learned documented
- Second use case scoped
- Patterns documented for reuse
Common Failure Modes
"We will document it later"
Documentation debt compounds. If you cannot explain the workflow today, you cannot maintain it tomorrow.
"It works, ship it"
Working is not the same as production-ready. Monitoring, error handling, and ownership must exist before "done."
"Let us add one more feature"
Scope creep kills timelines. The 30/60/90 plan is about discipline. Additions go to the next cycle.
"The vendor will handle it"
Vendors provide tools, not outcomes. Internal ownership is non-negotiable.
Next Step
Map your first 30-day scope:
- Pick one workflow with a clear KPI
- Define scope boundaries
- Identify pilot group
- Assign owner
Start small, prove value, then scale.