The Core Question
Every law firm with intake volume eventually asks: Should we use simple routing rules or invest in AI screening?
The answer is not "AI is better" or "rules are enough." The answer depends on your volume, your data quality, and your tolerance for false positives.
What Routing Rules Actually Do
Routing rules are deterministic: IF condition THEN action.
Example:
- IF practice_area = "employment" AND urgency = "high" THEN route to Partner A
- IF source = "referral" THEN priority = high
- IF matter_value > 100000 THEN route to senior team
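As a minimal sketch, rules like these translate directly into plain conditionals. The field names (practice_area, urgency, source, matter_value) and assignee names are taken from the examples above and are assumptions about your intake schema, not a prescribed one:

```python
# Minimal sketch of deterministic routing rules as plain conditionals.
# Field names and assignees mirror the examples above; adapt them to
# your own intake form or CRM schema.

def route_inquiry(inquiry: dict) -> dict:
    """Return a routing decision for a single intake record."""
    decision = {"assignee": None, "priority": "normal"}

    if inquiry.get("practice_area") == "employment" and inquiry.get("urgency") == "high":
        decision["assignee"] = "Partner A"

    if inquiry.get("source") == "referral":
        decision["priority"] = "high"

    if (inquiry.get("matter_value") or 0) > 100_000:
        decision["assignee"] = "senior team"

    # Unmatched inquiries fall through to a default queue for human triage.
    if decision["assignee"] is None:
        decision["assignee"] = "intake queue"
    return decision


print(route_inquiry({"practice_area": "employment", "urgency": "high", "source": "web"}))
# {'assignee': 'Partner A', 'priority': 'normal'}
```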
Strengths:
- Predictable behavior every time
- Easy to explain to partners
- No training data required
- Works from day one
- Easy to audit and modify
Weaknesses:
- Cannot handle nuance in free-text descriptions
- Requires well-structured input fields
- Breaks when data is messy or incomplete
- Adding new conditions = manual work
What AI Screening Actually Does
AI screening uses machine learning to classify inquiries based on patterns in historical data.
What it can detect:
- Practice area from unstructured case descriptions
- Urgency signals in language (legal deadlines mentioned, emotional distress)
- Quality indicators (detail level, coherence, spam patterns)
- Potential conflicts (entity recognition)
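As a rough illustration of the classification piece, here is a sketch assuming scikit-learn and labeled historical inquiries. The descriptions and label names below are invented for illustration, and a real model needs far more data (see the data requirements under Question 3):

```python
# Sketch of practice-area classification from free-text descriptions,
# assuming scikit-learn and a labeled history of past inquiries.
# The example data and label names are illustrative, not a real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice these would come from hundreds of labeled historical inquiries.
descriptions = [
    "I was fired after reporting unpaid overtime to HR",
    "My landlord refuses to return the security deposit",
    "We need help drafting a software licensing agreement",
]
labels = ["employment", "landlord_tenant", "commercial"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(descriptions, labels)

new_inquiry = "Terminated two days after asking about overtime pay"
probabilities = model.predict_proba([new_inquiry])[0]
best = probabilities.argmax()
print(model.classes_[best], round(float(probabilities[best]), 2))
```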
Strengths:
- Handles messy, unstructured input
- Improves with more data
- Catches patterns humans miss
- Scales without adding rules
Weaknesses:
- Requires training data (hundreds of labeled examples minimum)
- Black box problem: hard to explain specific decisions
- Can be confidently wrong
- Needs ongoing monitoring and retraining
- Higher implementation cost
Decision Framework: 5 Questions
Question 1: What is your monthly inquiry volume?
| Volume | Recommendation |
|---|---|
| < 50/month | Rules only. AI overhead not justified. |
| 50-200/month | Rules first, consider AI for specific pain points. |
| > 200/month | AI screening likely provides ROI. |
Question 2: How structured is your intake form?
| Data Quality | Recommendation |
|---|---|
| Dropdown-heavy, minimal free text | Rules work well. |
| Mixed structured + free text | Consider AI for free text classification. |
| Mostly free text descriptions | AI significantly outperforms rules. |
Question 3: Do you have historical data?
AI needs training examples. If you have:
- < 100 labeled historical inquiries: Start with rules
- 100-500 labeled examples: AI possible but limited
- > 500 labeled examples: AI can be effective
No historical data? Rules are your only option until you build a dataset.
Question 4: How costly are routing mistakes?
| Mistake Cost | Recommendation |
|---|---|
| Low (easy reassignment) | AI acceptable, mistakes get corrected |
| Medium (delays, client frustration) | Rules for high-stakes, AI for triage |
| High (missed deadlines, malpractice risk) | Rules with human review, AI as assist only |
Question 5: What is your technical capacity?
| Technical Resources | Recommendation |
|---|---|
| No dedicated IT | Rules in existing tools (CRM, practice management) |
| Some technical support | Consider low-code AI tools with rules fallback |
| Strong technical team | Custom AI models possible |
The Hybrid Approach (Usually Best)
Most firms benefit from combining both:
Layer 1: Deterministic Rules (Always Runs First)
- Route known referral sources directly
- Flag high-value matters based on explicit indicators
- Apply practice-area routing from dropdown selections
- Enforce compliance rules (conflicts, jurisdiction)
Layer 2: AI Screening (For Ambiguous Cases)
- Classify practice area from free-text descriptions
- Detect urgency signals in language
- Score lead quality
- Suggest routing when rules cannot decide
Layer 3: Human Review (For Edge Cases)
- AI confidence below threshold
- High-stakes matters
- Novel case types
- Potential conflicts flagged
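Assuming a rules function and a text classifier along the lines of the earlier sketches, plus an illustrative confidence threshold, the three layers might compose like this:

```python
# Sketch of the three-layer hybrid flow: rules first, AI for ambiguous cases,
# human review when the model is not confident. The threshold, the rule and
# model interfaces, and the queue names are assumptions for illustration.
CONFIDENCE_THRESHOLD = 0.75  # tune against your own error tolerance

def screen_inquiry(inquiry, rules_engine, model):
    # Layer 1: deterministic rules always run first.
    decision = rules_engine(inquiry)
    if decision["assignee"] != "intake queue":
        return {"route": decision["assignee"], "layer": "rules"}

    # Layer 2: AI screening for cases the rules could not decide.
    probs = model.predict_proba([inquiry["description"]])[0]
    best = probs.argmax()
    if probs[best] >= CONFIDENCE_THRESHOLD:
        return {"route": model.classes_[best], "layer": "ai", "confidence": float(probs[best])}

    # Layer 3: low-confidence, high-stakes, or novel cases go to a human.
    return {"route": "human review", "layer": "review", "confidence": float(probs[best])}
```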
Implementation order:
- Start with rules covering 60-70% of cases
- Identify where rules fail or create bottlenecks
- Add AI for those specific gaps
- Monitor and refine both
Common Patterns That Work
Pattern A: Rules + AI Triage
- Rules handle clear-cut routing
- AI scores lead quality for follow-up prioritization
- Human reviews AI-flagged edge cases
Pattern B: AI First, Rules Override
- AI classifies all incoming inquiries
- Rules override AI for specific conditions (VIP sources, compliance requirements)
- Reduces manual work while maintaining control
Pattern C: Parallel Processing
- Rules and AI both process every inquiry
- Discrepancies flagged for human review
- Useful during AI testing phase
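A minimal sketch of the parallel pattern, assuming the rules engine and the model each produce a comparable label (such as a practice area) for every inquiry:

```python
# Sketch of Pattern C: rules and AI both classify every inquiry, and any
# disagreement is queued for human review.
def compare_routes(inquiries, rules_label, ai_label):
    """Yield inquiries where the two systems disagree."""
    for inquiry in inquiries:
        rule_result = rules_label(inquiry)
        ai_result = ai_label(inquiry)
        if rule_result != ai_result:
            yield {"inquiry": inquiry, "rules": rule_result, "ai": ai_result}

# During a testing phase, the disagreement rate is itself a useful metric:
# a high rate usually means either the rules or the training labels need work.
```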
Red Flags: When AI Is Oversold
Be skeptical if a vendor promises:
"Our AI handles everything automatically"
Reality: You still need fallback rules and human review.
"No training data required"
Reality: Generic models work poorly for legal intake. Your data is different.
"100% accuracy"
Reality: No classification system is 100% accurate. What matters is error handling.
"Set it and forget it"
Reality: AI models degrade over time. Ongoing monitoring is required.
Implementation Checklist
For Rules-Based Routing
- Map all routing scenarios (who handles what)
- Define clear conditions (no ambiguity)
- Build escalation paths for unmatched cases
- Test with historical data
- Document all rules for maintenance
- Set up monitoring for unrouted inquiries
For AI Screening
- Audit historical data quality
- Define classification categories
- Label training data (at least 100 examples per category)
- Set confidence thresholds
- Build fallback to rules for low-confidence cases
- Plan for ongoing retraining
- Document model behavior for compliance
Cost Comparison
| Factor | Rules Only | AI Screening |
|---|---|---|
| Setup cost | Low | Medium-High |
| Ongoing maintenance | Manual rule updates | Model monitoring + retraining |
| Scalability | Linear (more rules = more work) | Sublinear (model handles complexity) |
| Error cost | Predictable | Can be surprising |
| Explainability | High | Lower |
Next Step
Before deciding on AI, answer these questions:
- What percentage of inquiries do current rules handle correctly?
- Where specifically do rules fail?
- Do you have historical data to train AI?
- What is your tolerance for AI errors?
If rules handle 80%+ correctly and you lack training data: optimize rules first.
If rules struggle with free-text classification and you have data: explore AI for that specific gap.