
I am not a gamer. I am a mom of a gamer. And not just any gamer — a gamer who is deeply skeptical of AI. So when NVIDIA announced DLSS 5 at its GPU Technology Conference last week, it did not land as industry news in our house. It landed as a confrontation.
He started pulling up screenshots: side-by-side comparisons between what NVIDIA was showing in its promotional materials and what the game actually looked like on his screen at high-medium settings. One image showed Grace Ashcroft from Resident Evil: Requiem as she appears on the official box art — a deliberate, carefully rendered character design that artists spent years developing. What DLSS 5 did to her was immediately visible: smoother, shinier, more “photorealistic” in the way that AI defines photorealism, and completely unrecognizable as the character players had come to know. As he put it, “the AI version looks nothing like what she looks like on the box art.”

He was not debating frame rates or performance benchmarks. He kept coming back to one thing: “This is wrong.”
He wasn’t alone in his frustration. The community’s reaction online was swift and pointed. Across social media and Reddit, it was nearly impossible to find anyone genuinely positive about DLSS 5, with the word “slop” appearing in reactions with striking regularity. Game developers were equally blunt. As Wired reported, video game artist and designer James Brady described it as something that “devalues an artist’s creativity and intent on a basic level,” functioning on the surface as little more than a Snapchat filter applied to someone else’s work. Developer Rami Ismail described it as having “a megacorporation smear the most dystopian slop all over what is generally two to three years of my life’s work.” Perhaps most damning, developers at studios whose games appeared in the DLSS 5 announcement — including Capcom and Ubisoft — said they found out about the reveal at the same time as the public. NVIDIA had used their work without telling them what it would look like.
NVIDIA CEO Jensen Huang’s response to the backlash was to tell critics they were “completely wrong.” That response matters, not because of the technical argument it was making, but because of what it revealed about the disconnect between the people building the technology and the people whose trust they were depending on.
This Is Not a Technical Debate. It Is the Latest Break in a Long Line of Broken Trust.
We have been living through two decades of eroding trust. Misinformation spread through social media has made people question what is real. Deepfakes have made people question what they see. Algorithmic feeds have made people question what is being surfaced for them and why. The average person has learned, through repeated experience, that the thing being presented to them may not be what it appears to be.
AI did not create this erosion. But it is accelerating it at a pace that most organizations are not taking seriously enough.
What is happening with DLSS 5 is a sharp example of why. When a company deploys AI in a way that changes the experience without consent, without transparency, and without regard for what people actually want, it does not just generate backlash. It confirms a suspicion that was already building. It validates the distrust that was already there. As Wired noted, the negative response toward DLSS 5 may partly stem from widespread anti-AI sentiment, but that does not make the criticisms less valid. Similar to AI-generated text, images, and video, there is something dehumanizing about applying a technology’s concept of improvement on top of what creators originally made — like watching a film and then letting an AI do a final visual pass over everything, without telling anyone.
When customers say “this is wrong,” they are not giving you a product review. They are telling you that something has been taken from them without permission. And in a trust environment that is already this fragile, that is not a small thing. That is the beginning of lost market share.
The Pattern Every Organization Is Moving Through
There is a predictable pattern in how AI shows up inside organizations, and understanding it matters, because each stage moves AI closer to the customer.
At first, AI accelerates execution. It reduces effort, increases speed, and allows teams to produce more. At this stage, it is mostly invisible to the customer and often improves the experience.
Then AI begins to influence decisions. It starts shaping recommendations, prioritization, and how work is approached. Organizations begin to rely on it to guide thinking, not just execution.
The real shift happens when AI begins shaping the final output. That is when it stops being an internal tool and starts becoming part of the customer experience. That is where alignment either holds or breaks. And that is where trust — and market share — become vulnerable.

When Optimization Becomes Reinterpretation
DLSS was initially embraced because it improved performance without changing the core experience. It made games run better while preserving what made them compelling. As it evolved, it began enhancing visuals in ways that felt aligned with the original design intent.
What is happening with DLSS 5 is different. It is influencing how the experience itself is rendered. Lighting, textures, facial features, and character interpretation are being shaped by AI in ways that change how the game feels and whether the characters you see on screen match the ones you came to care about. That is no longer optimization. That is reinterpretation. And when customers feel that shift, even if they cannot fully explain it, they begin to question whether the product is still what they signed up for. That is where trust starts to erode. And once trust erodes, customers start looking for alternatives.
When This Stops Being Theoretical
This is not limited to gaming. It is already showing up in everyday business interactions in ways that are much harder to recover from.
Over Thanksgiving week, my mom had a kidney stone and needed to see her urologist. She called the office, spoke with a woman who took her information, and scheduled her for an appointment the Friday after Thanksgiving. Everything about the interaction felt normal. There was no indication that she was not speaking with a real person.
When my mom arrived for her appointment, the office was closed. As she stood there trying to figure out what had happened, other patients began arriving as well. Each of them had scheduled appointments the same way. Each of them had spoken to what they believed was a person. Each of them had been given incorrect information. It became clear that they had all interacted with an AI agent, and that no one had ever disclosed it. That omission alone is wrong.
This Is Where Trust Breaks Fast
In that moment, the issue was not efficiency or cost savings. It was trust. Patients were in pain. They had taken time to get there. They followed the process exactly as instructed. And the system failed them in a way that felt both avoidable and misleading.
What made it worse was not just the error. It was the lack of transparency. They believed they were speaking with someone who was accountable, who understood their situation, and who could be trusted to provide accurate information. Instead, they were interacting with a system that had no awareness of its own limitations and no clear boundary for when it should defer to a human.
That is not a technology issue. That is a leadership issue.
And it sits inside the same pattern as what happened with DLSS 5. In both cases, a company deployed AI in a way that changed the experience without the customer’s knowledge or consent. In both cases, the customers who felt the impact believed they had been wronged. What was taken from them — the character design, the appointment, the sense that they were dealing with something real and accountable — mattered to them. Perhaps the AI was deployed to save money or time or both, but those companies are now paying for it in trust. And trust is where market share begins to move.
How Value Is Quietly Lost
Most leaders think about risk in terms of major failures, but that is rarely how value is lost. Value is usually lost through small moments where the experience feels off.
The product may still work. It may even be more efficient. But if it no longer feels as intentional, as consistent, or as trustworthy, customers will feel what has been best described as “the ick.”
Engagement begins to decline and loyalty weakens. Over time, retention becomes less predictable. From an investor’s perspective, this is where risk compounds. When trust erodes, revenue volatility increases. When revenue becomes less predictable, valuation suffers. This is how market share is lost — not through a single catastrophic failure, but through a gradual erosion of trust at the customer level that leaders often don’t see (or won’t see?) until it is too late.
Why This Is Now a Competitive Advantage Issue
Access to AI is no longer the differentiator. Most companies have access to similar tools, similar capabilities, and similar opportunities to improve efficiency. What cannot be easily replicated is judgment.
The companies that will win are the ones that know how to apply AI without degrading the customer experience. They understand where it enhances value and where it risks undermining it. They move quickly internally while maintaining consistency externally. That requires alignment across the organization — not just on tools, but on standards, decisions, and what the experience should feel like. The companies that get this right will not just retain their customers. They will absorb the market share that their competitors are quietly losing.

Internal Misalignment Becomes External Inconsistency
What customers experience externally is a reflection of what is happening internally. Different teams adopt AI in different ways. Marketing prioritizes speed. Product prioritizes innovation. Customer service prioritizes efficiency. Without alignment, those differences create inconsistency. Some interactions feel thoughtful and intentional. Others feel generic or disconnected. Customers do not see the internal decisions behind this. They feel the inconsistency. And inconsistency is what breaks trust.
The Workforce Shift Is Amplifying the Risk
At the same time, the workforce is evolving. The advantage is no longer just technical expertise. It is the ability to apply AI within the context of the business. This means more decisions are being made by people closest to the work. When those people are aligned, AI becomes a force multiplier. When they are not, AI amplifies inconsistency, and that inconsistency shows up directly in the customer experience.
Where Most Leaders Get It Wrong
Most leaders focus on capability and implementation. They focus on what AI can do and how quickly it can be deployed. What they do not focus on enough is how the experience is being shaped by those decisions.
There is an assumption that if the system works, the right outcome will follow. It will not. Outcomes follow alignment. If leaders do not define what the experience should feel like and where AI should or should not be used, those decisions will be made inconsistently across the organization. And your customer will feel it. And when enough customers feel it, they leave.
The Bottom Line
You can improve efficiency. You can accelerate execution. You can implement the most advanced AI tools available. But if your customer starts to feel like your product is no longer what it was meant to be — if they start saying, in whatever words they have for it, that this is wrong — none of it matters.
Because the fastest way to lose market share is to lose the trust of the customer you were trying to serve. And right now, customers are paying attention. They have been burned enough times to know when something has changed. They are not confused. They are not overreacting. They are telling you something important.
Pay attention when your customer says this is wrong. That is the signal you cannot afford to ignore.
If you are integrating AI and you are not actively evaluating how it is shaping your customer experience, you are taking on more risk than you think. I built a trust assessment to help leaders see where alignment is strong and where it is beginning to break. And in my trust workshop, we work through how decisions are made, how AI is being used, and what your experience should consistently feel like across the organization. Because once trust erodes, it is much harder to rebuild than it is to protect.


