...but it won't be because of the AI itself
by Matt Queen
3/16/2026
Not so long ago in a galaxy not so far away, a company seeking to implement an AI solution did everything right, at least on paper. They identified a high-value operational problem. They secured executive sponsorship. They found a credible vendor with a proven model. The use case was personalized intelligent call routing for their contact center: route each incoming call, in real time, to the agent best suited to that customer, improving conversion and satisfaction.
By the time the initiative surfaced its fatal flaw, the company had spent months of internal effort and significant resources. The model wasn't the problem. The vendor wasn't the problem. The problem was the data: specifically, the years of accumulated inconsistency sitting quietly in systems that were "good enough," in fields no one had standardized, capturing behaviors no one had agreed on the definition of.
The model needed clean, consistent transaction data to learn from. What the company had was noise dressed up as data. The initiative paused. The vendor relationship went south. Company leaders acknowledged the failure quietly and moved on.
The single most common reason AI initiatives fail in consumer-facing companies isn't model sophistication or capability. It's a lack of robust data governance, and it's almost always invisible until it isn't.
Here's the pattern I've observed: data infrastructure is often built to run operations, not to train models. Your data may be just good enough to support your day-to-day operations, but without sufficient detail and proper structure, it may not be accurate enough to train an AI model. Reservation systems, loyalty platforms, CRM tools, contact center software: each one was implemented to do a job, and each one made local decisions about how to capture data that made perfect sense in isolation.
A stay is a stay until you realize that one system counts a comp night as a stay, another doesn't, and a third captures it differently depending on which property entered it. A complaint is a complaint until you discover that front-line teams in different markets categorized it under six different codes that no one ever reconciled. A call is a call until you find out that handle time is measured from answer in one region and from IVR entry in another.
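To make that concrete, here is a minimal sketch in Python of the kind of cross-system profiling that surfaces these mismatches. The system names and complaint codes below are hypothetical; the technique is the point: pull the "same" field from each source system and compare the vocabularies they actually use.

```python
# Minimal sketch: profile how each source system codes the "same" event.
# All system names, field names, and codes here are hypothetical.
from collections import Counter

# Pretend extracts from three systems that all claim to record "complaints".
records = [
    {"system": "crm",         "complaint_code": "SVC-DELAY"},
    {"system": "crm",         "complaint_code": "SVC-DELAY"},
    {"system": "contact_ctr", "complaint_code": "late_service"},
    {"system": "contact_ctr", "complaint_code": "LATE"},
    {"system": "pms",         "complaint_code": "9"},  # legacy numeric code
]

# Tally the codes each system actually uses for the same concept.
codes_by_system = {}
for r in records:
    codes_by_system.setdefault(r["system"], Counter())[r["complaint_code"]] += 1

for system, counts in codes_by_system.items():
    print(f"{system}: {dict(counts)}")

# If the union of codes is much larger than any one system's vocabulary,
# you don't have one definition of "complaint"; you have several.
all_codes = set().union(*(set(c) for c in codes_by_system.values()))
print(f"distinct codes across systems: {len(all_codes)}")
```

Run against real extracts, a profile like this turns "our data is probably fine" into a countable list of disagreements that someone has to reconcile before training begins.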
None of this is negligence. It's the accumulated consequence of systems built by different teams, at different times, under different pressures, each to support its own operational needs (and sometimes barely that), without a unified data philosophy. It's completely normal. But it will derail your AI initiative if you don't address it.
The vendor demo never shows the messy data. It shows a clean, prepared dataset performing beautifully. And when asked, the vendor will assure you that they can help prep your data in case it "isn't perfect" (what they don't mention at that point is how much that prep will cost). The RFP process evaluates model architecture, not data readiness. The business case is built on the vendor's benchmark results, which were produced in a controlled environment with clean inputs. By the time your actual data enters the picture, the contract is signed, the project has been announced internally (sometimes externally), and the company is under pressure to deliver.
AI vendors sell model capability. What they can't sell you is data readiness, because they don't own it and can't control it. That responsibility sits entirely on your side of the table, usually with people who are also trying their best to run the business, hit quarterly objectives, and do it all at the lowest possible cost.
The most honest conversation you can have with any AI vendor is: "Before we talk about what your model can do, walk me through what our data needs to look like for it to work. Then let's talk about the gap."
Most vendors will try to answer this honestly if asked, but many clients never get past surface-level, check-the-box questions. And to be fair, the vendor doesn't know how inconsistent and dirty your data is. That's especially true of modern tech vendors who came up through companies founded within the last decade; they may assume a level of data structure and integrity that simply doesn't exist in a business running on systems minimally upgraded over the last 30 years.
Three questions cut to the heart of it. They are not deep technical questions. They are strategic questions that any senior leader can drive, regardless of whether they have a data background.
Do we agree on what we're measuring? Not "do we have data on X", but do the people who capture X, the people who store X, and the people who will use X to train a model, all mean the same thing when they say X? Definitional consistency is the foundation. If it doesn't exist, the model will learn from a mixture of different things masquerading as one thing.
How much of our historical data reflects the world we're trying to model today? Behavioral data ages. Customer behavior three years ago, through a pandemic disruption, under a different loyalty structure, at a different scale; that data may actively mislead a model trying to optimize for today's reality. Volume is not a substitute for relevance.
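One way to pressure-test relevance is a simple drift check between an older slice of your data and a recent one. The sketch below uses the Population Stability Index, a common drift measure; the channel mix, the date split, and the 0.25 threshold are illustrative assumptions, not benchmarks from any real dataset.

```python
# Minimal sketch: does older training data still resemble today's reality?
# The numbers and the 0.25 rule of thumb are illustrative, not standards.
import math

def psi(expected, actual):
    """Population Stability Index between two share distributions."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

# Hypothetical share of bookings by channel: voice, web, app.
shares_then = [0.55, 0.30, 0.15]  # three years ago
shares_now = [0.30, 0.35, 0.35]   # trailing twelve months

drift = psi(shares_then, shares_now)
print(f"PSI: {drift:.3f}")
if drift > 0.25:
    print("Major shift: the older data may mislead a model tuned for today.")
```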
Who owns data quality, and what does that accountability actually look like? If the answer is "IT owns the systems and the business owns the definitions and nobody owns the gap between them", that gap is where your initiative will fail. Data governance is not a technology problem. It is an organizational design problem, and it requires a human being with authority, not just a policy document.
None of this is an argument against AI. It is an argument for going in with eyes open. The companies I've seen get real value from AI initiatives, not just proof-of-concept demos but durable operational improvement, did a version of the same thing: they audited before they built. They had honest conversations about data quality that were uncomfortable and inconvenient. They slowed down the front end so they could accelerate the back end.
The hospitality and travel industry in particular (my home over the last two decades) has a structural advantage here that it often doesn't fully leverage: the potential to capture enormous volumes of behavioral data (whether it is doing so or not). Stay history, spend patterns, service interactions, channel preferences, complaint trajectories: this is rich, detailed, longitudinal data on real customer behavior. When it's clean and well-governed, it is genuinely powerful training material, and if it isn't all being captured, cleaned, and stored for model training and analysis, it should be.
The question is not whether to pursue AI. The question is whether you're willing to do the less glamorous work that makes the glamorous results possible.
Data governance is where AI initiatives actually succeed or fail. It just rarely shows up in the press release.
Before your next AI vendor conversation, or before you revive an initiative that stalled, consider doing a focused data readiness assessment. Not a multi-month audit project, not a technology overhaul. A structured, honest look at the specific data your proposed use case depends on, evaluated against the three questions above.
It will surface uncomfortable things. That discomfort is the point. Better to find it in a controlled assessment than six months into a project that can't deliver what it promised.
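If it helps to make that assessment tangible, here is a hypothetical skeleton in Python: one check per question above, each answerable with an honest yes or no before any contract is signed. The checks themselves are placeholders for whatever your use case actually depends on.

```python
# Hypothetical skeleton of a data readiness assessment, one check per
# question above. Replace the prompts with the fields your use case needs.
CHECKS = [
    ("Definitional consistency",
     "Do all systems that capture this field share one agreed definition?"),
    ("Historical relevance",
     "Does the training window reflect current operations, not a past era?"),
    ("Ownership",
     "Is one named person accountable for this data's quality end to end?"),
]

def assess(answers):
    """answers: one honest boolean per check, from the readiness review."""
    for (name, question), ok in zip(CHECKS, answers):
        print(f"[{'PASS' if ok else 'GAP '}] {name}: {question}")
    print(f"{answers.count(False)} gap(s) to close before signing anything.")

# Example: definitions are agreed, but relevance and ownership are not.
assess([True, False, False])
```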
The leaders I respect most in this space have stopped asking "can we do AI?" and started asking "what would we need to be true for AI to work for us?" That shift in framing, from capability to readiness, is where the real advantage lives.
QUANDRAI advises consumer-facing companies on loyalty strategy, consumer intelligence, and practical AI implementation.
Whether you need quick clarity on a specific decision or a comprehensive strategic engagement, let's discuss how customer intelligence and financial rigor can give you the confidence to move forward.
Reach out and let's discuss your current challenges. No obligation.