The Agent Assist Math Problem: Measuring Everything but Impact
After failing with three clients, we rebuilt Agent Assist from the ground up—focused on the 21% of calls where it actually matters.
We Measured Everything Except Whether It Actually Helped
My Agent Assist face-plant post buried the lede. We're a BPO. We torched this thing with three different clients before I asked the obvious: What are we actually trying to accomplish here?
The Vanity Metrics Dashboard:
Adoption rate: 87%! (agents clicking buttons)
Queries per hour: 45! (agents clicking more buttons)
Knowledge surfaced: 10K articles! (agents closing popups)
Cool metrics. Did it actually help anyone?
So We Asked Agents: "When Do You Actually Need Help?"
Their answers:
Edge cases – 8% of calls
New scenarios – 3%
Policy conflicts – 4%
"I know this but can't remember" – 6%
21% of calls. That’s the whole game.
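If you want to run the same tally on your own call logs, it takes a dozen lines. A minimal sketch, assuming (hypothetically) each call record carries a `flag` field that agents or a classifier set when they actually wanted help:

```python
# Minimal sketch, not a production pipeline: count how often a call is flagged
# as one where the agent actually wanted help. The flag values are hypothetical.
ASSIST_WORTHY = {"edge_case", "new_scenario", "policy_conflict", "recall_gap"}

def assist_worthy_share(calls):
    """calls: list of dicts like {"id": 123, "flag": "edge_case"}; flag may be None."""
    calls = list(calls)
    if not calls:
        return 0.0
    flagged = sum(1 for c in calls if c.get("flag") in ASSIST_WORTHY)
    return flagged / len(calls)
```

For us, per the breakdown above, that number came out around 0.21. Yours will differ, but the point is you can measure it instead of guessing.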
We were force-feeding AI suggestions into the other 79% of calls, the ones agents could handle blindfolded. They don't need help updating addresses. They don't need your refund policy script. They don't need to be told how to greet a customer for the 10,000th time.
The Measurement Circus
What Actually Matters (on that 21%):
Did agents give the right info? (not “did they click”)
Did customers accept it? (not “was knowledge surfaced”)
Did it prevent escalation? (not “adoption rate”)
Did the customer call back? (not “query volume”)
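None of this requires the vendor's dashboard. A rough sketch of the comparison that actually tells you something, with hypothetical field names, measured only on the calls that needed help:

```python
# Sketch only: compare outcomes on the hard calls, assisted vs. not,
# instead of counting clicks. Field names are hypothetical.
def outcome_report(calls):
    """calls: list of dicts with boolean fields
    'needed_help', 'assisted', 'resolved', 'escalated', 'called_back_7d'."""
    def rates(group):
        n = len(group) or 1
        return {
            "resolution": sum(c["resolved"] for c in group) / n,
            "escalation": sum(c["escalated"] for c in group) / n,
            "callback_7d": sum(c["called_back_7d"] for c in group) / n,
        }
    hard = [c for c in calls if c["needed_help"]]  # the 21%
    return {
        "assisted": rates([c for c in hard if c["assisted"]]),
        "unassisted": rates([c for c in hard if not c["assisted"]]),
    }
```

If the "assisted" column doesn't beat the "unassisted" one on those three rates, the tool isn't helping, no matter what adoption looks like.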
The Real Problem:
Your Agent Assist vendor can’t measure what matters. They track their buttons. They don’t see that CSAT tanked, escalations spiked, or callbacks surged.
You bought a speedometer for a parked car.
Your Choice:
Impact 100% of calls by 1%? Or 21% of calls by 50%?
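The arithmetic isn't close. A back-of-the-envelope check, using the post's numbers as rough illustration:

```python
# Illustrative expected lift only: improve every call a little,
# or improve the hard 21% a lot.
broad  = 1.00 * 0.01   # 100% of calls, 1% better  -> 0.010 (~1 point)
narrow = 0.21 * 0.50   # 21% of calls, 50% better  -> 0.105 (~10.5 points)
print(f"broad: {broad:.3f}, narrow: {narrow:.3f}")  # narrow wins by ~10x
```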
Agents already know. They’re clicking to dismiss popups while solving real issues with real skills.
You get what you measure. Make sure you're measuring the right things.
How long does it take to close 10k pop-ups, btw?