# The Math Didn't Work
The insurer's fraud detection system was a patchwork of rules and ML models. A recent LLM proof-of-concept showed promise, but projected costs made AI fraud detection unaffordable at scale.
## Detection vs. Cost Trade-off
| Approach | Detection Rate | Challenge |
|---|---|---|
| Pure Rules | 58% | Too many fraud cases missed |
| Rules + ML | 71% | 40% false-positive rate |
| Rules + ML + LLM | 89% | $540K/year; unaffordable at scale |
## The Business Case Gap
The CFO set clear requirements for any AI investment:
- Estimated $12M annual fraud losses
- LLM solution would cost $540K/year
- CFO demanded 10:1 ROI minimum
- Solution needed to cost <$120K/year
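The arithmetic behind these requirements can be sketched directly. This is a minimal illustration using only the figures listed above; the `roi()` definition (annual savings divided by annual cost) is an assumption, not the insurer's actual formula:

```python
# Hedged sketch of the CFO's constraint.
# Figures are taken from the requirements list; the ROI definition
# (savings / cost) is an assumption for illustration.

ANNUAL_FRAUD_LOSSES = 12_000_000  # estimated $12M/year in fraud losses
LLM_SOLUTION_COST = 540_000       # $540K/year quoted for the LLM stack
MIN_ROI = 10                      # CFO's 10:1 minimum
BUDGET_CAP = 120_000              # <$120K/year cost target

def roi(annual_savings: float, annual_cost: float) -> float:
    """Savings-to-cost ratio (assumed ROI definition)."""
    return annual_savings / annual_cost

# The quoted LLM solution fails the budget cap outright:
print(LLM_SOLUTION_COST <= BUDGET_CAP)        # False

# At the $120K cap, a 10:1 ROI requires at least $1.2M in
# demonstrable annual savings:
required_savings = MIN_ROI * BUDGET_CAP
print(required_savings)                       # 1200000

# ...which is 10% of the estimated annual fraud losses:
print(required_savings / ANNUAL_FRAUD_LOSSES) # 0.1
```

In other words, under this assumed ROI definition, any solution within budget must prevent at least ten cents of fraud per dollar of estimated annual losses to satisfy the CFO.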
## Operational Problems
### Investigator Overload
The 40% false-positive rate meant investigators spent more time chasing false alarms than investigating real fraud.
### Sophisticated Fraud Slipping Through
Rule-based systems caught obvious patterns but missed complex, coordinated fraud schemes spanning multiple claims.
### No Explanations
When claims were flagged, investigators couldn't understand why. Black-box decisions slowed review.
### CFO Skepticism
Previous AI investments hadn't delivered promised ROI. The fraud team needed proven economics.
> "Everyone told us we needed AI for fraud detection. Nobody told us we'd go bankrupt running it at scale."
>
> — Chief Risk Officer