Why Your AI Isn’t Working (And What to Do About It)
If you’re like many support leaders, you introduced AI not too long ago, and the shine may have worn off a bit.
Maybe your auto-QA is a little too generous. Maybe your automation refuses to work the way it’s supposed to. Maybe your resolution rates keep dropping. Maybe your customers just seem frustrated by the whole experience.
If this is you, you’re not alone. In fact, you’re in good company! Generative AI and its business use cases are still fresh, so everyone is learning.
Before you decide to switch vendors, or trash your AI strategy altogether, there are a few important questions to ask and levers you can pull to help get things moving in the right direction.
What Were Your Expectations?
To make sense of your current situation, we have to begin here:
What did you think AI was going to do for your team?
Personally, I’ve heard founders at small businesses say that once their AI solution was launched, their team wouldn’t need to worry about tickets anymore. (Not going to happen.)
I’ve also heard vendors promise potential users “turn it on and forget it” setups with no maintenance. (Not true.)
I’ve heard CX leaders commit to very specific automation percentages that may or may not ever be achievable based on their business model, internal processes, and inquiry types.
I’ve also heard from teams who set no expectations at all - they just feel like the product should be better.
If you’ve fallen into one of these categories, it may be as simple as realigning your expectations.
You could do some benchmarking. You could talk to other brands in your industry. You could ask about the vendor’s other users. All of this can help you develop a more realistic goal to measure your AI’s success against; however, your real performance potential is going to be business-specific.
I used to work for a leader who always wanted to know how our ticket volume compared to the “industry average,” and it drove me nuts. Ticket volume can depend on your product quality, your support staffing, your website’s user experience, the clarity of your marketing emails, the timeliness of your logistics operations, the reliability of your carrier, and so on.
So it makes sense AI and automation would be similar. Two nearly identical retailers can have wildly different automation potential simply because they have different policies and processes in place. The best metric to measure success against is yourself.
Did You Actually Manage the Tool?
If your expectations were reasonable, the next step is to ask yourself another very honest question:
How much time and energy did you actually dedicate to managing your AI?
For example:
If your system flagged a gap in your content, did you create that content?
If sentiment analysis misclassified a delighted customer simply because they used the f-word (with love!), did you correct the model so it wouldn’t make that mistake again?
When you launched a new product, did you upload the relevant content, or did that task quietly fall off a crowded to-do list?
When your solution suggested the wrong article to an agent, did you train it to give the right one?
There’s no guilt, shame, or judgement here! I know what early startups look like, and tasks like maintaining knowledge content can easily go ignored when it seems like something else is on fire every day.
But AI is like a garden. It needs watering and weeding, and without that attention, it’s unable to produce any fruit.
If you didn’t invest much time into your AI solution this year, what can you change for next year? Could you build a dedicated role or partial role for AI management? Could you operationalize your oversight with daily checklists or metric thresholds? Could you implement monitoring so you’re alerted when resolution or accuracy metrics drop so things don’t go too far off the rails?
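If “implement monitoring” sounds abstract, it can start as something very small. Here’s a minimal sketch of threshold-based alerting, assuming you can export daily metrics (say, from your vendor’s reporting dashboard or API) into a simple lookup; the metric names and floor values below are illustrative, not from any specific product:

```python
# Illustrative floors - tune these to your own baseline, not an industry average.
THRESHOLDS = {
    "resolution_rate": 0.60,   # alert if automated resolution dips below 60%
    "answer_accuracy": 0.90,   # alert if QA-sampled accuracy dips below 90%
    "csat": 4.2,               # alert if post-chat CSAT dips below 4.2 out of 5
}

def check_metrics(daily_metrics: dict) -> list[str]:
    """Return an alert message for every metric that fell below its floor."""
    alerts = []
    for name, floor in THRESHOLDS.items():
        value = daily_metrics.get(name)
        if value is not None and value < floor:
            alerts.append(f"{name} at {value:.2f}, below floor {floor:.2f}")
    return alerts

# Example: yesterday's numbers, with resolution rate slipping
for alert in check_metrics(
    {"resolution_rate": 0.52, "answer_accuracy": 0.93, "csat": 4.4}
):
    print("ALERT:", alert)  # in practice, route this to email or Slack
```

Even a ten-line script like this, run once a day, means a quiet degradation gets caught in twenty-four hours instead of at the quarterly business review.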
What Content Are You Feeding Your AI?
One of the most common reasons teams aren’t getting the results they want from their AI solution has to do with its knowledge sources. Your AI is only as good as the information it receives.
Many leaders assume their content is up to date, only to discover they have multiple versions of a policy lingering on the website, contradictory return guidelines in SOPs, or expired promotional content still alive and well, living on the backend of their AI Agent and handing out discount codes like candy. If your AI Agent is responding with information that seems wrong, dated, or straight-up invented, that’s a sign to start looking at your knowledge sources.
You should also verify that the content you’re giving the AI to consume is in a format that your solution can read. Not all AI solutions can read the same materials. Some can’t read images. Some can’t read spreadsheets. Some can’t read diagrams.
Depending on your solution, you may have also written “rules” or “prompts” or other instructions for your AI Agent to follow when it encounters specific situations. You’ll want to review these prompts for clarity (ELI5) and ensure the “guardrails” you’ve set have left no room for interpretation. Understand this will take some trial and error.
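To make “no room for interpretation” concrete, here’s a hypothetical before-and-after for a single guardrail. The wording and the 30-day policy are invented for illustration; your own rules will depend on your platform and policies:

```text
Vague (leaves room for interpretation):
  If the customer is upset about a return, try to help them out.

Clear (explicit trigger, action, and boundary):
  If the customer requests a return more than 30 days after delivery,
  do NOT offer a refund. Apologize, explain the 30-day return policy,
  and offer to connect them with a human agent.
```

The second version spells out when the rule applies, what the AI must do, and what it must never do, which is exactly the kind of clarity that reduces trial and error.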
When Was The Last Time You Talked to Your Account Rep?
If you’re hitting all these basics and you’re still not sure what may be going on, then you may want to have a call with your account rep, if one is available to you.
You would be shocked how many CX leaders struggle with their technology, consider leaving a tool (sometimes in search of a feature they don’t know their product already has), and have never, not once, met with their account rep. They don’t even know who their rep is, or whether they have one.
Meanwhile, the rep is sending friendly “heyyyy, want to book a call?” emails into the universe.
Are all account reps amazing? Nah. But many account reps are incredibly helpful and your relationships with them can span well into your career as you each journey from one organization to the next. The cost of your account rep is built into your subscription so you’re already paying for them, and their whole job is literally to help you succeed!
Did you know that you can ask to meet more regularly than the “once per week” or “once per month” cadence they’ve pitched you? You can also ask for other people to be on the call, like a solutions engineer for example.
Not every provider will say yes. Vendors come in different sizes, have different resources, and some charge for different tiers of support, but in my experience, vendors are quite willing to accommodate your requests. They want your renewal, especially in a competitive space like AI.
There are times when the relationship with the account rep just isn’t working. I once had an account rep who canceled on me every month, never rescheduled, and couldn’t answer any questions when he did come around. In cases like this, you can ask the vendor to work with someone else. Yes, it’s a little awkward and sensitive, but having a new rep turned the experience around.
If you haven’t yet tried your account rep as a resource, do it. Most teams don’t get nearly as much value out of that relationship as they could.
That covers some of the most basic issues you may be experiencing with AI. Now for some of the more challenging problems:
How Clean Are Your Service Workflows?
Some leaders may find themselves several months into an AI purchase, and frustratingly, they still haven’t launched the darn thing.
Others may have launched, only to find their solution somewhat works, because just a handful of the product’s features are usable to them right away.
Messy launches are more common than you’d think, and they often happen because backend structure development or integration work is occurring in tandem with the AI and automation implementation.
While there are many advantages to getting “AI ready” before purchasing an AI solution, let’s be honest - not all brands have this luxury. Some brands pull the trigger because a solution is on sale that they could never afford otherwise. Some brands are up against crushing ticket volumes. Some have had to downsize and are scrambling for coverage. Some brands just didn’t know any better, because you don’t know what you don’t know! Launching AI without the operational structure to fully support it isn’t an ideal situation, but it’s a very normal and very real one.
Before you can get the most impact from your solution, the structural foundations will need to be addressed. What does this look like? This may mean creating the QA scorecard that didn’t previously exist so you can set up auto QA, classifying agent skills and updating ticket categorizations so you can leverage dynamic routing, sunsetting old products from your Shopify store so it’s not suggesting discontinued items, creating knowledge content for the things your team just “knows,” integrating with an ERP, and so on.
It’s a lot of work, but you’ll get there. Until then, try not to hate on your solution. Foundations are still being set. It needs more time before it can be evaluated.
Are You Automating the Right Things?
Perhaps the issue with your AI is less about functionality and more about adoption. It’s not working because nobody is using it.
There are more than two dozen ways DTC brands are using AI in CX today, and most teams are only exploring a small handful.
You may have built a beautiful pre-purchase automated assistant, but customers still head straight to the live chat for a product recommendation.
You may have launched a Co-Pilot to save agents time, but your agents are still relying on macros and manual writing.
Changing customer behavior is very hard. So hard that many leaders in CX believe it shouldn’t be attempted, and instead, all solutions should follow the customer’s lead. (Omni-channel experience anyone?)
Changing employee behavior is also hard. It requires intentional design, buy-in, coaching, reinforcement, and time.
If you’re finding yourself with low adoption rates on a tool you’ve recently implemented, you need to gather information from your users, analyze the situation, and evaluate your investment.
For what reason was this solution launched? Do your users know the product exists? What benefits are there to leveraging this product? What is stopping them from using the product? How could you remove those friction points? Is it worth continuing to invest in these areas, or could AI be better used in another area of your support function?
Does Your AI Match Your Current CX Philosophy?
A lot can change in a year.
Maybe you launched your AI solution during an unexpected surge of support volume, when speed and deflection (I said it) seemed critical. But today you’re more focused on providing customers with the best possible experience, including an easy route to a human if that’s the customer preference.
Maybe your team initially wanted most topics to be escalated to a live agent, in fear of AI getting it wrong, but now it’s escalating so much, you’re not sure why you’re paying for this thing.
I worked with a team that purchased generative AI, but still scripted everything out as a precaution. Over time, they became unhappy with how limited and repetitive the AI’s responses seemed to be. The technology didn’t change. Their expectations for AI in CX changed.
When your philosophy changes, your flows, intents, guardrails, and automation logic simply no longer align with your current service strategy and need to be rethought.
Are You Using the Right Technology?
Many support teams get their first exposure to AI through the capabilities that come bundled with their ticketing or telephony platforms. Sometimes this is enough. Other times, the brand outgrows the tool. Sometimes the native AI was never a great fit to begin with.
Startups may also be trying to make a tool do something it’s not designed to do. You may be patching things with creative dev work because purchasing a new tool feels premature.
You may have invested in a tool early on, and while you were satisfied at the time, its advancements are not keeping up with the quickly evolving market. You’ve done everything right, and the tool simply doesn’t do what you need it to do.
If you’ve maxed out the AI solution you have - you’ve dedicated time and resources, you’ve made frequent updates, you’ve been working with their internal team - then it’s time to start asking if this is really the right tool for the job.
Final Thoughts
If you’re re-evaluating your AI right now, or feeling disappointed in its performance, you’re not behind. You’re doing what good leaders do: pausing, assessing, and getting honest about what needs to change.
AI is powerful, but it’s not magic. It’s not plug-and-play. And it’s not self-maintaining.
But when you set realistic expectations, manage it like another member of your team, and align it with your current CX philosophy, it becomes one of the most transformative assets in modern customer experience.