Problem: "Sounds Good" Is Not a Quality Definition
AI-generated texts can read fluently and still be dangerous:
- unclear claims,
- overgeneralization,
- missing process logic,
- formulations that leave you open to challenge.
Goal: a verifiable QA standard.
QA Criteria (Short Version)
- Does it help a real decision-maker? (concrete, actionable)
- Claims are clean (no unprovable promises)
- Process before tools (no laundry list of tools)
- Compliance sensitivity (no legal advice, no no-go claims)
- Internal links are meaningful (Pillar + 2 Cluster)
Artifact: QA Checklist (Copy/Paste)
A) Content & Value
- 1 clear angle (checklist/decision aid/anti-pattern)
- At least 1 artifact (template, table, text block)
- No repeated filler paragraphs
B) Claims & Risk
- No "guaranteed", "always", "legally secure" etc.
- No implicit legal advice
- For sensitive points: safer alternative formulations offered
C) Structure
- Lead (1-2 sentences)
- 3-5 sections (##)
- Conclusion + CTA (scope/KPI)
D) Consistency
- Terms consistent (status, owner, SLA)
- Examples fit the law firm context
E) Links
- Link to the Pillar article
- 2 links to related cluster articles (pre-check sketch below)
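Several checklist items are judgment calls, but the mechanical ones can be pre-checked automatically before review. Below is a minimal Python sketch, assuming drafts are Markdown files; the phrase list, the relative-link pattern, and the function name `precheck_draft` are illustrative assumptions to adapt to your own no-gos and URL structure.

```python
import re

# Phrases that typically signal unprovable promises (adapt to your own no-go list).
RISKY_PHRASES = ["guaranteed", "always", "legally secure", "100% safe"]

def precheck_draft(markdown_text: str) -> dict:
    """Pre-check the mechanically verifiable QA items on a Markdown draft."""
    text_lower = markdown_text.lower()

    # B) Claims & Risk: which risky phrases appear at all?
    risky_hits = [p for p in RISKY_PHRASES if p in text_lower]

    # C) Structure: count H2 sections (lines starting with "## ").
    sections = len(re.findall(r"^## ", markdown_text, flags=re.MULTILINE))

    # E) Links: count internal Markdown links (relative URLs starting with "/").
    internal_links = len(re.findall(r"\]\(/[^)]*\)", markdown_text))

    return {
        "risky_hits": risky_hits,
        "sections_ok": 3 <= sections <= 5,
        "internal_links": internal_links,  # target: 1 Pillar + 2 cluster = 3
    }
```

Anything the script flags still goes to a human reviewer; the sketch only narrows down where to look.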
Stop Rules (No Publish)
- More than 2 risky claims without safeguards → Stop
- No artifacts/no concrete steps → Stop
- Interlinking missing → Stop (see the gate sketch below)
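These rules can sit as a simple publish gate on top of the pre-check result. A minimal sketch, reusing the hypothetical `precheck_draft` output from above; `has_artifact` stays a manual flag, and "without safeguards" remains a human judgment that the raw count only approximates.

```python
def publish_gate(check: dict, has_artifact: bool) -> tuple[bool, list[str]]:
    """Apply the stop rules to a pre-check result; returns (publish_ok, reasons)."""
    reasons = []
    # The count only flags candidates; whether a safeguard is present is judged manually.
    if len(check["risky_hits"]) > 2:
        reasons.append("more than 2 risky claims flagged")
    if not has_artifact:
        reasons.append("no artifact / no concrete steps")
    if check["internal_links"] < 3:  # 1 Pillar + 2 cluster links
        reasons.append("interlinking missing")
    return (len(reasons) == 0, reasons)

# Usage: ok, reasons = publish_gate(precheck_draft(draft_text), has_artifact=True)
```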
KPI Block
Correction rate after QA: target < 30% (otherwise the briefing/prompt needs rework)
Internal link coverage: target 100% (Pillar + 2 cluster); a worked example follows below
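Both KPIs are plain ratios over a publishing period. A worked example, assuming you log per-article QA outcomes; the field names `needed_correction` and `links_complete` are illustrative.

```python
# Illustrative per-article QA log for one period.
articles = [
    {"needed_correction": True,  "links_complete": True},
    {"needed_correction": False, "links_complete": True},
    {"needed_correction": False, "links_complete": False},
    {"needed_correction": False, "links_complete": True},
]

correction_rate = sum(a["needed_correction"] for a in articles) / len(articles)
link_coverage = sum(a["links_complete"] for a in articles) / len(articles)

print(f"Correction rate: {correction_rate:.0%} (target < 30%)")      # 25%
print(f"Internal link coverage: {link_coverage:.0%} (target 100%)")  # 75%
```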
Deep dive: Content Approval in 10 Minutes (Review Flow)
Deep dive: No-Go Claims: Safe Alternatives
Next Step
If you have 1 example article plus your internal no-gos, I can adapt the QA checklist so your team can apply it in 10 minutes.