Feb 16, 2026

You wanted AI in support. Now what?

Two years ago, nobody wanted AI in support. Now everyone wants it yesterday.

Summary: The shift from "we can't justify AI" to "deploy it by Monday" happened almost overnight for most support teams. But the rush to adopt is outpacing the readiness to do it well. Your docs aren't ready, and the same person being asked to deploy the bot is also doing everything else. We spoke to support and knowledge teams about what's actually going wrong - and what to fix first.

A couple of years ago, mention AI in a support conversation and you'd be met with suspicious looks and raised brows. "AI will take over humans!" has been replaced by "If you don't learn to work with AI, you'll be seeing the door." So what actually changed?

Before AI tools gained traction, support teams that wanted to automate had a problem that had nothing to do with the technology. There was no guide or playbook to follow. All people could see was a wave of new-age products making big claims, and nobody senior enough was willing to bet on them. Just a couple of years back, in 2024, 76% of customer service teams said that simply identifying relevant AI use cases was a challenge, let alone getting the budget for one.

"The higher ups were like, we can't justify it. Just get an agent to help out." 
— Knowledge manager, consumer app company

Spectacular, give me 100 right now

The switch flipped almost overnight, starting with senior leaders. 75% of CS teams admitted that the push to adopt GenAI came directly from executive leadership.

Seriously, well done to the AI vendors' marketing teams, because their campaigns paid off handsomely. AI adoption rates skyrocketed, with 85% of customer service leaders planning to pilot AI chatbots in 2025. The help center went from afterthought to the thing the AI chatbot depends on. What exactly convinced them, whether it was vendor marketing, board pressure, or just watching competitors deploy, is harder to pin down. But the result was the same: AI became the centerpiece of modern-day support.

A massive shift, and one happening too fast for teams to prepare for.

"Senior managers are now realizing that the help center is more important... because it's feeding Fin."
 — Enablement lead, customer success platform

What breaks when you skip the prep 

Just like with makeup, bad prep ruins everything. Our conversations brought the same pattern to light. Teams deploy the bot, and within weeks they run into the same problems. The AI wasn't bad, but the knowledge feeding it wasn't ready.

Your docs aren't written for a bot (Sorry friends, but this one’s major)

Unexpectedly, this one’s causing the biggest snags. Help center articles written for humans don't work the same way for AI. Humans skim, fill in gaps, and rely heavily on context. Bots take everything literally, and if something’s not written explicitly, they won’t understand it.

One team we spoke to found they were referencing a blue button across ten articles - but here’s the problem - the button hadn’t existed in months. A human reading it would skim past that. The bot will tell your customer to click it.

"We thought it [our Help Center] was good, but it wasn’t good enough for [the AI bot]."
— Operations lead, B2B platform

61% of customer service leaders say they have a backlog of articles to edit, and more than a third have no formal process for revising outdated content. These are the teams now deploying AI on top of that same content. Can you see why I said this is a major issue?

Your metrics don't mean what you think they mean

If there’s just one thing to take from this piece, it’s this. Don’t rely on your metrics blindly.

We’ve learned that the bot’s claimed resolution rate tends to be inflated. Bots can close conversations without actually answering the user’s question.

If you just rely on those metrics, you’re celebrating a number that doesn’t reflect reality. Even AI vendors are starting to admit it - containment rate counts every closed conversation as a win, whether the customer got an answer or just gave up trying.

The person deploying the bot is also doing everything else

Bot rollout rarely comes with extra headcount. Typically, it gets added on top of someone who's already managing help centers, handling escalations, and answering tickets. 70% of challenges in AI initiatives come from people and process problems, not the technology itself. An overworked support team member simply does not have the time to alter their help articles to be bot-friendly. Cut them some slack!

What to fix before you deploy (or right now, if you already have)

Yes, I do have ~ solutions ~ 

These are things the teams further along told us they wish they'd done from the start.

  • Protect your person's time

If the person deploying the bot is also doing everything else, something will suffer. Lighten their workload somehow, even temporarily. The teams who gave their person dedicated time to focus on the bot saw better results because they actually had space to do the work.

  • Fix your content first, not your bot

The bot is only as good as what it reads. Before you touch any AI settings, go through your help center. Look for outdated screenshots, contradictory articles, and dead features still being referenced.

  • Manually check what "resolved" actually means

Pull twenty "resolved" conversations and read them. Was the customer's question actually answered, or did the bot just close the chat? This is fifteen minutes of work that could completely change how you evaluate your bot metrics.

  • Look at what the bot gets wrong first

Most teams launch and focus on what the bot handles well. The more useful exercise is the opposite: where is it breaking? What questions is it getting wrong? Those are your content gaps, and they're where your effort should go.

  • Use your skeptics (but don’t tell them why!) 

The person on your team who doesn't trust AI? Hand them the bot. They'll find every wrong answer, every edge case. That's free QA.

The real timeline

You're not going to hit "AI handles 50% of our tickets" in month one. Rebuilding your help center takes patience, and the work keeps evolving.

The realistic version:

- Month 1, the bot handles basic questions and exposes how many gaps your docs have. 

- Month 3, you've patched the worst of it and the bot is starting to be genuinely useful. 

- Month 6, you finally have enough data to know whether this is working.

The shift from "no" to "go" happened fast. But the work that it takes to make AI in support actually deliver? That part doesn't have a shortcut. 

The teams getting the best results right now aren't the ones with the fanciest tools. They're the ones who took the time to fix what was underneath first.

"I don't fully trust AI... but it's nice to use as a way to become more efficient." 
— Knowledge manager, consumer app company

That's probably the healthiest take on all of this.


Image Courtesy National Gallery of Art, Washington

Documentation,
finally done right.

We’d love to show you how Pageloop works.


2026 Pageloop. All rights reserved.

Talk to the founders
