Avoiding AI Failures: Lessons from 2025’s biggest brand missteps and wins

TL;DR

AI reshaped branding in 2025, but not always in the way brands intended. Several companies learned the hard way that AI without human oversight creates reputational risk, from hallucinated government reports to uncanny holiday ads. This article reviews the year’s key examples, explains what went wrong and shows how deliberate use of AI, supported by a dedicated AI team, lets brands innovate safely and responsibly.

It started with excitement. Then came the automation.

It started with excitement.

Then came automation. Next came the hallucinations, bizarre adverts and lawsuits.

In 2025, AI officially joined the marketing and communications team, acting as your creative intern, copywriter, researcher and media buyer all rolled into one tireless algorithm. But somewhere between speeding up workflows and generating synthetic perfection, things got weird. Really weird.

From holiday adverts that left viewers feeling uneasy, to global reports written by machines that made things up with confidence, AI proved that it is not just a tool, but a liability when left unchecked. This is a reality check on what happens when brands forget that creativity, ethics, and cultural intelligence require something that machines still lack: judgement.

Here are the most spectacular AI failures of 2025, and the lessons your brand can learn from them before your next campaign goes live.

Is AI creating more content, or just more noise?

Welcome to the age of ‘AI slop’

This new term entered our collective vocabulary in 2025. Coined by internet communities, ‘AI slop’ refers to the overwhelming flood of low-quality, AI-generated content, including blog posts, product listings, images and even entire news stories, that lacks originality, depth or human relevance. These aren’t just annoying; they’re actively eroding trust in digital communication.

The problem? Many of these pieces are created to manipulate algorithms rather than engage audiences. They’re optimised for clicks, but devoid of insight. The result is a growing sense that we’re drowning in content but starved of meaning.

So, what does ‘AI slop’ actually look like in practice?

Paramount’s “AI slop” promo videos

Fans and critics alike described Paramount’s promotional videos for major film releases as “lifeless” and “algorithmically assembled”. In 2025, generative AI became an essential tool in brands’ business and marketing strategies, but some relied on it as a shortcut rather than a creative engine. Paramount’s content felt as though it had been created merely to fulfil a brief, rather than to move an audience. Comments were disabled. The videos were quietly withdrawn. The story spread everywhere anyway.

H-E-B’s chili character mystery 

H-E-B’s quirky, anthropomorphic food characters sparked a different kind of reaction: not outrage, but confusion. People spent more time debating whether the adverts were AI-generated than engaging with the brand itself. In an era where authenticity is currency, ambiguity is a liability. H-E-B never clarified, and the conversation moved on without it.

The Glasgow Wonka experience 

The Glasgow Wonka experience is perhaps the starkest example of what happens when AI-generated content has no basis in reality. The event was promoted as an immersive, interactive experience for families, illustrated with ‘dreamlike’ AI-generated images. Ticket buyers were promised enchanted gardens, magical candy worlds and a richly designed fantasy environment. In reality, however, it was a sparsely decorated warehouse containing only a few props, some jelly beans, and lemonade. Police were called. Families demanded refunds.
The actor playing Willy Wonka told BBC Radio that he had been given a 15-page script of AI-generated gibberish to learn just days before the event. The event went viral, but not for the intended reasons, highlighting generative AI’s potential to fuel deceptive advertising and sparking ongoing debate about the need for regulatory guidance on its use.

 

💡 Lesson: AI slop isn’t just bad content. It’s a trust-destroying force. When audiences can’t tell what’s real — or when AI-generated promises lead to real-world disappointment — the reputational damage lands on the brand, not the algorithm.

The hidden cost of moving too fast

Speed is seductive. AI promises to compress timelines, reduce production costs and enable overstretched teams to achieve more with fewer resources. And, in many cases, it genuinely can. However, speed without strategy creates a new category of risk: reputational damage that spreads faster than the content that caused it.

Coca-Cola’s Christmas campaign — two years, same mistake

For the second year in a row, Coca-Cola’s festive advert was AI-generated. The year before, the company had sparked widespread controversy by using artificial intelligence for its Christmas campaign. Clearly unfazed by the backlash, the brand doubled down in 2025.

The 2025 ad was widely described by viewers and creatives as “soulless”, “lifeless”, “bland” and “digital slop”. They argued that the AI imitation was ‘missing the emotional warmth’ that made the original 1995 Christmas advert iconic. Technical issues compounded the problem: eagle-eyed viewers spotted inconsistencies in the Coca-Cola trucks, such as changes in shape and wheel arrangement, and noted that the AI-generated animals looked “part shiny, part plastic”.

What made matters worse was the company’s response. Pratik Thakar, head of generative AI at Coca-Cola, defended the decision: ‘We need to keep moving forward and pushing the boundaries. The genie is out of the bottle and you’re not going to put it back in.’ The comment spread rapidly and became as controversial as the advert itself, reigniting the debate about cutting human creative labour to save costs.

Some analysts estimated that generative AI tools could reduce the cost of large-scale holiday productions, typically $1–3 million, by 60–70%. However, when the result is two consecutive years of brand-damaging headlines, the calculation looks very different.

 

McDonald’s Netherlands: festive or frightening?

In the Netherlands, McDonald’s aired an AI-generated Christmas advert full of surreal winter scenes with an unintentionally bleak tone. Viewers criticised it as “soulless”, and the brand was forced to disable comments and pull the ad entirely. That only amplified the story, a reminder that the fastest way to make a bad campaign go viral is to try to quietly remove it.

Deloitte’s hallucinated government reports: a $1.9 million lesson

The Coca-Cola story is an uncomfortable one. The Deloitte story is something else entirely.

Deloitte’s Australian member firm submitted a report worth $290,000 to the Department of Employment and Workplace Relations. Sydney University researcher Chris Rudge highlighted that the report contained fabricated references. Among these was a reference to a non-existent book supposedly written by a real Sydney University professor in a field entirely outside her area of expertise. ‘I knew instantly that it had either been hallucinated by AI or was the world’s best-kept secret,’ Rudge told the Associated Press.

Deloitte reviewed the 237-page document and confirmed that some of the footnotes and references were incorrect. They then quietly published a revised version, disclosing that Azure OpenAI had been used in its creation. They agreed to a partial refund.

Then it happened again. A Deloitte healthcare report commissioned by the Canadian government, costing nearly $1.6 million CAD, was found to contain AI-generated errors. The 526-page report featured false citations pulled from fabricated academic papers, credited real researchers with papers they hadn’t worked on, and paired researchers who had never collaborated on made-up studies.

One researcher, Gail Tomblin Murphy, was cited in an academic paper that ‘does not exist’. She had only worked with three of the other six authors named in the false citation. Deloitte’s statement that ‘AI was not used to write the report’ but was ‘selectively used to support a small number of research citations’ did little to contain the damage. When AI hallucinations appear in government-commissioned policy documents, the credibility of the entire institution is at stake.

Meta’s AI personas and underage users

Meta rolled out AI personas on a large scale across Instagram, Facebook and WhatsApp, ostensibly to improve user experience and boost engagement. However, this resulted in one of the most serious brand safety failures of the year.

A Reuters investigation revealed that Meta’s internal ‘GenAI: Content Risk Standards’ document allowed chatbots to engage in ‘sensual’ conversations with users known to be underage; one response the document flagged as acceptable was ‘Your youthful form is a work of art’. A coalition of 44 state attorneys general wrote to Meta demanding action. Senator Josh Hawley launched a formal probe. And more than 80 civil society organisations signed an open letter urging Mark Zuckerberg to immediately stop deploying AI companion bots to users under the age of 18.

Although Meta acknowledged that the policies were being revised, the damage had already been done. In one documented case, a vulnerable adult developed a relationship with a Meta AI persona that presented itself as real and proposed a meeting in New York. The user subsequently died after falling on his way to meet the fictitious character.

Deploying AI at scale without meaningful human oversight of edge cases and vulnerable users is not an acceptable product decision. It’s an ethical failure.

💡 Lesson: Moving quickly is only advantageous if what you produce is accurate and safe. In professional and regulated contexts, and in any context involving vulnerable users, the consequences of AI-generated errors cannot be fully undone by a press release. Proper human review is always cheaper than a public retraction.

What responsible AI use actually looks like

Not every brand got it wrong. The same year that produced AI slop and hallucinated reports also produced genuinely effective, ethically sound AI campaigns: faster, further-reaching and more impactful than traditional workflows alone could manage. It wasn’t the technology that set them apart. It was the team and the methodology behind them.

Keep humans in the creative and strategic loop

AI was used to speed up the ideation process, produce initial drafts, explore different visual styles and automate repetitive tasks. However, experienced human teams remained responsible for the final judgement on tone, cultural fit, emotional resonance and ethical alignment.

H&M announced plans to create 30 hyper-realistic digital twins of real models using generative AI. The aim was to generate diverse, scalable assets for use across multiple marketing channels and campaigns. Crucially, H&M’s creative teams collaborated closely with AI specialists to design and refine the digital twins, with each one reflecting the individuality of the real person. The real models owned and controlled their digital twins.

While Guess was accused of erasing real women by replacing them with AI-generated models, H&M’s approach started with real people and protected their rights throughout. This campaign set a new standard for the responsible use of AI in creative marketing.

Model Mathilda Gvarliani and her digital twin (source: H&M)

Build review checkpoints, not just review steps

When AI is involved throughout a workflow, a single final approval isn’t enough. Responsible teams introduced structured checkpoints at each stage, including brief validation, concept review, cultural and legal screening, and audience testing, before anything went live. This is the structural difference between Deloitte’s approach and best practice.

At Amazon Ads, the approach was explicit: ‘Innovation and integrity have to go hand in hand. We’ve built clear guardrails into our systems from the outset, including brand safety controls, content review pipelines and mechanisms to ensure that generated content aligns with the intent of advertisers, platform policies and broader societal standards.’
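
To make that structural difference concrete, here is a minimal sketch of checkpoints-as-gates in code. It is illustrative only: the checkpoint names come from the list above, and the stub reviewers stand in for real human sign-offs and screening tools.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Draft:
    text: str
    approvals: List[str] = field(default_factory=list)

def require(checkpoint: str, reviewer: Callable[[Draft], bool]):
    """Wrap a review step as a hard gate: the draft only moves on
    if the reviewer (human or automated screen) signs off."""
    def gate(draft: Draft) -> Draft:
        if not reviewer(draft):
            raise ValueError(f"Blocked at checkpoint: {checkpoint}")
        draft.approvals.append(checkpoint)
        return draft
    return gate

# The checkpoints named above; each lambda is a stub standing in
# for a real reviewer or screening tool.
pipeline = [
    require("brief validation", lambda d: bool(d.text.strip())),
    require("concept review", lambda d: "TODO" not in d.text),
    require("cultural and legal screening", lambda d: True),
    require("audience testing", lambda d: True),
]

draft = Draft(text="AI-assisted holiday campaign copy ...")
for gate in pipeline:
    draft = gate(draft)
print("Cleared:", draft.approvals)
```

The design point is that a failed checkpoint raises an error and stops the pipeline entirely; review is structurally enforced rather than a courtesy read at the end.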

Set creative parameters and let AI execute within them

Nutella used an AI algorithm to create seven million unique jar designs, all of which were sold across Italy. No two jars looked the same. However, the algorithm operated within strict parameters, combining elements in millions of unique ways within approved colour palettes, pattern types and compositional rules.

The result? Supermarket shelves turned into mini art galleries. Sales spiked during the campaign. Brand love increased, and Nutella became a brand that people wanted to talk about again. The lesson is not that AI can generate endlessly, but that it works brilliantly when humans set the creative boundaries and AI executes within them. Nutella didn’t hand AI the keys to its brand. It gave it a precisely defined brief and let it run.
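
The mechanic behind ‘parameters first’ is easy to sketch. In the illustrative example below (the palettes, patterns and rules are invented, not Nutella’s actual system), the generator can only ever combine elements that humans have already approved:

```python
import itertools
import random

# Invented creative parameters: the human-approved brief.
APPROVED_PALETTES = [("crimson", "gold"), ("teal", "cream"), ("navy", "coral")]
PATTERN_TYPES = ["stripes", "dots", "waves", "mosaic"]
COMPOSITIONS = ["centered", "diagonal", "radial"]

def generate_designs(n: int, seed: int = 42):
    """Yield n unique designs, each drawn only from the approved
    parameter space; the algorithm cannot step outside the brief."""
    space = list(itertools.product(APPROVED_PALETTES, PATTERN_TYPES, COMPOSITIONS))
    rng = random.Random(seed)
    rng.shuffle(space)
    # The approved space bounds how many unique designs exist;
    # to scale further, humans widen the parameters, not the rules.
    for palette, pattern, composition in space[:n]:
        yield {"palette": palette, "pattern": pattern, "composition": composition}

for design in generate_designs(5):
    print(design)
```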

💡 Lesson: Responsible AI use isn’t about doing less; it’s about doing things deliberately. Set creative parameters. Keep humans involved in every strategic decision. Ask not just “Can AI do this?”, but also “What does this communicate about who we are?”

The brands that got it right

The clearest way to understand what responsible AI looks like is to put it directly alongside the failures: same technology, very different outcomes.

Where Coca-Cola stumbled, Heinz found its footing

Heinz launched an AI image generation campaign with a simple premise: ask an AI to generate an image of ketchup. Regardless of how the prompt was phrased (‘impressionist ketchup’, ‘ketchup in space’, ‘ketchup painted by Picasso’), the AI consistently produced images that looked unmistakably like Heinz ketchup.

The campaign used AI to demonstrate brand strength rather than to replace human creativity. Audiences engaged. The campaign was celebrated, not criticised. The difference? Heinz used AI to show something true about its brand, not as a substitute for human judgement.

The numbers backed this up: over 1.15 billion earned impressions globally, a social media engagement rate 38% higher than previous campaigns and an ROI on media investment of 2,500%. The campaign won a Clio Gold Award and a D&AD Award in 2023.

Orange’s Women’s World Cup campaign, using AI’s reputation against itself

The French telecoms company used AI facial editing technology in an advert that was shown during the Women’s World Cup. The advert first showed what appeared to be the French men’s national football team, before revealing that the players were actually from the women’s national team, with their faces digitally swapped using AI.

Orange succeeded because they understood the wider cultural conversation about AI, including the fact that it could be used to mislead people. ‘They demonstrated that this same ability of AI to alter appearances could also be used for good,’ said Julian De Freitas, professor at Harvard Business School.

This is an example of AI being used with genuine strategic intelligence: understanding the cultural moment, anticipating audience perception and transforming a potentially controversial technology into a vehicle for an important message.

The pattern that separates success from failure

The most successful AI-driven marketing campaigns share three key features: real-time personalisation on a large scale, creative adaptation based on performance data and quantifiable business impact that goes beyond vanity metrics. Campaigns that fail typically ask AI to define strategy or make brand decisions, despite it lacking the judgement and context that humans possess in these areas.

Simply put, the brands that got it right treated AI as a powerful execution engine guided by human strategic direction. Those that got it wrong reversed that relationship — and paid for it publicly.

💡 Lesson: The question isn’t whether to use AI. Rather, it’s about ensuring that the important decisions in your organisation are still being made by humans. Brand positioning, cultural fit, ethical alignment and emotional resonance are not tasks that can be delegated to a model. They require individuals who understand what’s at stake.

Five questions to ask before your next AI campaign goes live

Before publishing any AI-assisted content, whether you’re running a global brand or a regional team, these five questions should be non-negotiable:

  1. Has someone with the relevant cultural knowledge reviewed this? AI doesn’t understand nuances, regional sensitivities or cultural taboos. Someone who understands these must have the final say on the work.
  2. Could any element of this be factually incorrect? If AI contributed to the research, statistics or technical descriptions, every claim must be verified independently; part of that check can be automated, as in the sketch after this list. Hallucination is not a bug; it’s a built-in risk.
  3. Does this accurately and ethically represent people, real or synthetic? AI-generated models, revived historical figures and synthetic personas all carry ethical weight. Treat them with the same care that you would apply to real talent.
  4. Is the emotional tone right, not just the visual execution? Technically impressive yet emotionally hollow work is one of the most common failure modes of AI. Ask audiences, not just colleagues, whether it feels human.
  5. Are we being transparent where transparency is required? Know your regulatory environment and your platforms’ disclosure requirements. And when in doubt, disclose.
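
On question 2, part of the verification can be automated. Here is a minimal sketch using Crossref’s public REST API (the example DOIs are purely illustrative): it confirms that a cited DOI exists at all, before a human checks what the source actually says.

```python
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves in Crossref's public index.
    A 404 is the classic signature of a hallucinated citation.
    Existence is necessary but not sufficient: a real DOI can still
    be attached to the wrong claim, so a human must read it too."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

print(doi_exists("10.1038/nature14539"))     # a real paper -> True
print(doi_exists("10.9999/not.a.real.doi"))  # hallucinated -> False
```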

So what can brands do to avoid these pitfalls?

At Admind, we’ve analysed dozens of AI-assisted branding initiatives, and one pattern is consistent: AI succeeds only when humans stay in the loop.

  • Human + AI workflows cut error rates.
  • AI needs cultural, ethical and strategic safeguards. Creativity isn’t enough; context matters.
  • Launching AI campaigns without testing is the #1 source of reputational risk. A controlled sandbox-first approach, sketched below, reduces that risk dramatically.
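
What ‘sandbox-first’ can look like in practice, as a minimal sketch: the stage names, percentages and function below are assumptions rather than an industry standard, but the principle is that AI-generated content reaches the full audience only after every earlier stage has cleared review.

```python
import hashlib

# Illustrative rollout stages for a sandbox-first launch.
STAGES = {
    "sandbox": 0.0,   # internal reviewers only, nothing public
    "pilot": 0.05,    # 5% of the audience, monitored daily
    "general": 1.0,   # full launch, once pilot metrics clear review
}

def sees_ai_variant(user_id: str, stage: str) -> bool:
    """Deterministically bucket a user into [0, 1) so the same person
    always gets the same variant for the duration of a stage."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 2**32
    return bucket < STAGES[stage]

print(sees_ai_variant("user-123", "sandbox"))  # always False: nothing public
print(sees_ai_variant("user-123", "pilot"))    # True for ~5% of users
print(sees_ai_variant("user-123", "general"))  # always True at full launch
```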

The future belongs to brands that stay human at scale

AI is not going away. Nor should it. The capabilities it offers, such as increased speed, personalisation, global consistency and creative exploration, are genuinely transformative for brand-building.

However, the brands that will thrive over the next five years are not necessarily the ones that automate the most. They are the ones that recognise where human judgement is irreplaceable and safeguard it.

Creativity requires empathy. Strategy requires context. Ethics require accountability. These are not things you can prompt your way to.

At Admind, we believe that the future of branding is both human-led and AI-enabled. Not the other way around. Every AI tool we use, every workflow we design and every campaign we help bring to life is built on this principle.

Ultimately, the brands that people love are the ones that feel like they were made by people who care. No algorithm has cracked that yet.