The Problem: Everything Feels Important
Every law firm has dozens of processes that could be automated: intake, billing, document generation, client communication, compliance tracking, calendar management.
The trap: trying to automate everything at once. Result: nothing gets done well.
Use case scoring solves this by forcing prioritization. You evaluate each potential automation against objective criteria and pick the highest-value targets first.
The Scoring Framework
Score each potential automation on four dimensions:
| Dimension | Question | Score Range |
|---|---|---|
| Impact | How much time/money does this save? | 1-5 |
| Feasibility | How hard is this to implement? | 1-5 |
| Risk | What happens if it fails? | 1-5 (inverted: 5 = low risk) |
| Strategic Fit | Does this align with firm priorities? | 1-5 |
Total Score = Impact + Feasibility + Risk + Strategic Fit
Maximum possible: 20 points. Minimum threshold for action: 12 points.
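If you track candidates in a script rather than a spreadsheet, the arithmetic is easy to encode. A minimal Python sketch (the function name and constant are illustrative; only the 1-5 ranges and the 12-point threshold come from the framework):

```python
ACTION_THRESHOLD = 12  # minimum total worth pursuing

def total_score(impact: int, feasibility: int, risk: int, strategic_fit: int) -> int:
    """Sum the four dimension scores. Each is 1-5; risk is already inverted
    (5 = low risk), so a simple sum works."""
    for score in (impact, feasibility, risk, strategic_fit):
        if not 1 <= score <= 5:
            raise ValueError("each dimension must be scored 1-5")
    return impact + feasibility + risk + strategic_fit

total = total_score(4, 4, 4, 5)
print(total, total >= ACTION_THRESHOLD)  # 17 True
```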
Dimension 1: Impact
Impact measures the value created by automation. Consider both time savings and business value.
Time Savings Calculation
Formula:
Weekly hours saved = (Current time per task) × (Tasks per week) × (Automation coverage %)
Example: Client intake routing
- Current: 15 minutes per inquiry to review and route
- Volume: 40 inquiries per week
- Automation coverage: 80% (20% need manual review)
- Weekly savings: 0.25 hours × 40 × 0.80 = 8 hours/week
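The same calculation as a small Python helper, using the intake example's numbers (function and parameter names are illustrative):

```python
def weekly_hours_saved(minutes_per_task: float, tasks_per_week: float,
                       coverage: float) -> float:
    """Hours saved per week; coverage is the automated fraction (0-1)."""
    return (minutes_per_task / 60) * tasks_per_week * coverage

# Client intake routing: 15 min/inquiry, 40 inquiries/week, 80% coverage
print(weekly_hours_saved(15, 40, 0.80))  # 8.0 hours/week
```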
Impact Scoring Guide
| Score | Criteria |
|---|---|
| 5 | >10 hours/week saved OR direct revenue impact |
| 4 | 5-10 hours/week saved |
| 3 | 2-5 hours/week saved |
| 2 | 1-2 hours/week saved |
| 1 | <1 hour/week saved |
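To apply the guide consistently, the mapping can be made mechanical. A sketch, assuming boundary values (exactly 5 or exactly 2 hours) round up to the higher band, and treating the revenue override as a flag you set by hand:

```python
def impact_score(hours_saved: float, direct_revenue_impact: bool = False) -> int:
    """Map weekly hours saved to the 1-5 impact score from the guide above.
    The table's bands share endpoints; this sketch resolves ties upward."""
    if direct_revenue_impact or hours_saved > 10:
        return 5
    if hours_saved >= 5:
        return 4
    if hours_saved >= 2:
        return 3
    if hours_saved >= 1:
        return 2
    return 1

print(impact_score(8))  # 4, matching the intake routing example
```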
Beyond Time: Business Value
Some automations have impact beyond time savings:
- Client experience improvement (faster response, fewer errors)
- Compliance risk reduction
- Capacity for growth without hiring
- Competitive differentiation
Factor these into your impact score when relevant.
Dimension 2: Feasibility
Feasibility measures implementation difficulty. Lower difficulty = higher score.
Feasibility Factors
Data availability:
- Is the data structured and accessible?
- Are there APIs or integrations available?
- Is data quality sufficient?
Process clarity:
- Is the current process documented?
- Are there clear rules or is it judgment-based?
- How many exceptions exist?
Technical requirements:
- What systems need to be connected?
- Are credentials and access available?
- What is the technical complexity?
Organizational readiness:
- Is there stakeholder buy-in?
- Who will own the workflow?
- Is there capacity to implement?
Feasibility Scoring Guide
| Score | Criteria |
|---|---|
| 5 | Simple integration, clear rules, data ready, buy-in exists |
| 4 | Moderate complexity, minor data work needed |
| 3 | Multiple systems, some judgment calls, needs stakeholder alignment |
| 2 | Complex logic, significant data work, organizational resistance |
| 1 | Highly complex, poor data quality, no clear process, no buy-in |
Dimension 3: Risk
Risk measures consequences of failure. This score is inverted: lower risk = higher score.
Risk Categories
Operational risk:
- What happens if the workflow fails silently?
- How quickly would you detect a problem?
- What is the blast radius of errors?
Compliance risk:
- Does this touch regulated data?
- Are there professional responsibility implications?
- What are the audit requirements?
Client impact risk:
- Could errors affect client matters?
- Is there reputational risk?
- What is the recovery path from errors?
Risk Scoring Guide
| Score | Criteria |
|---|---|
| 5 | Internal process only, easy to detect errors, simple rollback |
| 4 | Limited external exposure, monitoring catches issues quickly |
| 3 | Some client touchpoints, moderate detection difficulty |
| 2 | Direct client impact, compliance implications, hard to detect |
| 1 | High compliance risk, significant client exposure, hard to recover |
Risk Mitigation
A low score on the risk dimension does not disqualify an automation. It signals the need for stronger safeguards.
For high-risk automations:
- Human approval steps before external actions
- Enhanced monitoring and alerting
- Comprehensive audit logging
- Gradual rollout with close supervision
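As a toy illustration of the first and third safeguards, an external action can be wrapped in an approval gate that writes an audit entry for every decision. Nothing here is tied to a specific platform; all names are hypothetical:

```python
import functools
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("automation.audit")

def require_approval(action: Callable[..., None]) -> Callable[..., None]:
    """Hold an external action until a human explicitly approves it,
    logging the request and the decision for the audit trail."""
    @functools.wraps(action)
    def gated(*args, **kwargs):
        audit.info("Pending approval: %s args=%r", action.__name__, args)
        if input(f"Approve {action.__name__}? [y/N] ").strip().lower() == "y":
            audit.info("Approved: %s", action.__name__)
            return action(*args, **kwargs)
        audit.info("Rejected: %s", action.__name__)
    return gated

@require_approval
def send_client_email(to: str, body: str) -> None:
    print(f"Sending to {to}: {body}")
```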
Dimension 4: Strategic Fit
Strategic fit measures alignment with firm priorities and direction.
Strategic Considerations
Firm priorities:
- What are the stated goals for this year?
- Where is leadership focused?
- What pain points are most discussed?
Growth trajectory:
- Will this process scale with growth?
- Does automation enable expansion?
- Is this a bottleneck for growth?
Competitive position:
- Do competitors have this automated?
- Is this a differentiator?
- Does this improve market position?
Strategic Fit Scoring Guide
| Score | Criteria |
|---|---|
| 5 | Direct alignment with top firm priorities, leadership champion |
| 4 | Supports stated firm goals, general support |
| 3 | Neutral alignment, no opposition |
| 2 | Tangential to priorities, limited interest |
| 1 | Misaligned with direction, active resistance |
Scoring in Practice: Examples
Example 1: New Client Intake Routing
| Dimension | Score | Rationale |
|---|---|---|
| Impact | 4 | 8 hours/week saved, faster client response |
| Feasibility | 4 | Web form data structured, clear routing rules |
| Risk | 4 | Internal routing, errors caught quickly |
| Strategic Fit | 5 | Firm priority: improve client experience |
| Total | 17 | High priority |
Example 2: Automated Legal Research Summary
| Dimension | Score | Rationale |
|---|---|---|
| Impact | 5 | Significant time savings, high-value task |
| Feasibility | 2 | AI quality uncertain, judgment-intensive |
| Risk | 2 | Direct work product, malpractice risk |
| Strategic Fit | 3 | Interesting but not a stated priority |
| Total | 12 | Borderline - needs pilot first |
Example 3: Office Supply Ordering
| Dimension | Score | Rationale |
|---|---|---|
| Impact | 1 | Less than 1 hour per week saved |
| Feasibility | 5 | Simple process, easy integration |
| Risk | 5 | Zero client impact |
| Strategic Fit | 1 | Not a firm priority |
| Total | 12 | Low priority despite being easy |
Example 4: Conflict Check Automation
| Dimension | Score | Rationale |
|---|---|---|
| Impact | 4 | 5+ hours/week, faster new matter opening |
| Feasibility | 3 | Complex matching logic, multiple data sources |
| Risk | 1 | Professional responsibility critical |
| Strategic Fit | 5 | Compliance priority, growth enabler |
| Total | 13 | Medium priority - invest in safeguards |
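Scored in code, the four examples rank themselves. A sketch with an illustrative data structure:

```python
# (impact, feasibility, risk, strategic_fit) per the tables above
candidates = {
    "Client intake routing":     (4, 4, 4, 5),
    "Legal research summary":    (5, 2, 2, 3),
    "Office supply ordering":    (1, 5, 5, 1),
    "Conflict check automation": (4, 3, 1, 5),
}

ranked = sorted(candidates.items(), key=lambda kv: sum(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{sum(scores):>2}  {name}")
# 17  Client intake routing
# 13  Conflict check automation
# 12  Legal research summary
# 12  Office supply ordering
```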
The Scoring Process
Step 1: Generate Candidates
List all processes that could potentially be automated. Do not filter yet. Include:
- Repetitive manual tasks
- Data entry and transfer
- Communication workflows
- Document generation
- Scheduling and reminders
- Reporting and tracking
Step 2: Quick Filter
Eliminate obvious non-starters:
- Processes that require human judgment at every step
- Tasks done less than once per week
- Processes with no clear owner
- Tasks where automation cost exceeds lifetime value
Step 3: Score Remaining Candidates
For each remaining candidate:
- Estimate time savings (be realistic, not optimistic)
- Assess technical and organizational feasibility
- Evaluate risk profile
- Check strategic alignment
Step 4: Rank and Select
Sort by total score. Consider:
- Natural groupings (all intake vs. all billing)
- Dependencies (A must happen before B)
- Quick wins vs. strategic investments
Step 5: Validate Top Candidates
Before committing, validate top 3-5 candidates:
- Talk to people who do the work today
- Check data quality assumptions
- Confirm stakeholder support
- Estimate implementation timeline
Common Scoring Mistakes
Mistake 1: Overestimating Impact
"This will save hours" often becomes "this saves 20 minutes." Be conservative. Use actual data where possible.
Mistake 2: Underestimating Feasibility Challenges
"The data is in the system" does not mean the data is accessible, clean, or in the right format.
Mistake 3: Ignoring Risk
The exciting automation with high impact but high risk can damage client relationships and create compliance issues.
Mistake 4: Forgetting Strategic Fit
A perfectly feasible, impactful automation that nobody cares about will not get adopted or maintained.
Mistake 5: Scoring Once
Business priorities change. Volume changes. Technology changes. Re-score quarterly.
Template: Use Case Scoring Sheet
| Use Case | Impact (1-5) | Feasibility (1-5) | Risk (1-5) | Strategic Fit (1-5) | Total | Priority |
|---|---|---|---|---|---|---|
| [Name] | | | | | | |
Priority Guide:
- 16-20: High priority, start immediately
- 12-15: Medium priority, plan for next quarter
- 8-11: Low priority, revisit later
- Below 8: Do not pursue
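The band boundaries translate directly into a lookup helper (a sketch; the labels are just the guide above):

```python
def priority(total: int) -> str:
    """Map a total score (4-20) to the priority bands in the guide."""
    if total >= 16:
        return "High priority - start immediately"
    if total >= 12:
        return "Medium priority - plan for next quarter"
    if total >= 8:
        return "Low priority - revisit later"
    return "Do not pursue"

print(priority(17))  # High priority - start immediately
```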
Next Step
- List 10 potential automation candidates
- Score each using this framework
- Validate top 3 with stakeholders
- Pick one to start with
The goal is not to automate everything. The goal is to automate the right things first.