Navigating AI: Growing Hallucinations and Lack of Trust

In this edition of the newsletter, we explore OpenAI’s growing hallucination problem, which must be solved or it risks sinking the entire product; the lack of consumer enthusiasm for AI adoption; and the growing stigma around openly using AI.
Now is the time to register for our annual AI Trailblazers Summer Summit, happening on June 5th in New York, NY. This year’s theme, “From Hype to Hands-On: AI in the Real World,” will spotlight how industry leaders are moving beyond speculation to real-world impact. Registration is now open - don’t wait to secure your spot, as space is limited.
Our past summits have featured senior leaders, both as speakers and guests, from Adobe, Amazon, Axios, Barclays, Bloomberg, Cisco, Citibank, Comcast Ventures, Diageo, Insight Partners, MasterCard, Mondelez International, Morgan Stanley, Nextdoor, PepsiCo, Publicis Groupe, Salesforce, Skadden, The New York Times, The Wall Street Journal, Uber, Unilever, the United Nations, and others.
AI Hallucinations Are Skyrocketing and So Are the Stakes

As ChatGPT gets smarter, it’s also getting more unreliable - at least when it comes to telling the truth. As TechRadar reports, OpenAI’s newest models, o3 and o4-mini, are designed to reason more deeply and mimic human logic, but they’re also hallucinating at record rates. The result is an AI that’s more capable than ever, yet increasingly prone to confidently making things up - raising serious questions about trust, safety, and the future of AI in high-stakes environments.
Newer ChatGPT models are smarter, but less accurate.
OpenAI’s latest models, o3 and o4-mini, are designed for more advanced reasoning than earlier versions. However, their hallucination rates have spiked, with o3 inventing facts in 33% of cases and o4-mini in a staggering 48%. This marks a significant increase from the previous o1 model, raising concerns about reliability.
Advanced reasoning may be fueling more errors.
The models’ ability to "think through" problems leads them into speculative territory, where they often generate plausible but false information. The more paths they consider, the more likely they are to stray from factual grounding. In essence, their complexity makes them better at improvisation - but that includes making things up.
The hallucination problem is growing in everyday tasks.
In general-knowledge benchmarks, hallucination rates ballooned to 51% for o3 and 79% for o4-mini. That’s not just a bug; it’s a systemic issue that undermines the core promise of AI assistants. While these models shine in tasks like coding or test-taking, they still risk generating dangerously false information.
As AI use spreads, hallucinations become higher stakes.
AI is now helping draft resumes, analyze medical data, and answer legal questions - domains where accuracy is critical. But as these tools gain sophistication, users may trust them more, making hallucinations even riskier. Until these issues are addressed, ChatGPT may be brilliant - but it’s still the overconfident coworker you can’t trust without fact-checking.
AI Trailblazer Takeaways: As ChatGPT gets smarter, it’s also becoming harder to trust—a paradox at the heart of today’s AI evolution. The same reasoning power that makes newer models feel more human is also what drives them to fabricate with confidence. In high-stakes domains, that’s not just a flaw—it’s a fundamental risk. Until hallucinations are solved, every leap in capability must be matched with an even greater leap in skepticism.
Vendors Go All-In on AI—Users Say “Not Interested”

Despite the AI gold rush in tech, most Americans aren’t buying the hype - literally. A new ZDNET/Aberdeen survey reveals that only 8% of U.S. adults are willing to pay extra for AI features, and many actively avoid them. As vendors embed AI assistants into everything from phones to productivity tools, a growing disconnect is emerging between what companies are building and what users actually want.
There’s a major gap between vendor hype and user demand for AI.
While tech companies rush to embed AI assistants into every product, most users are not eager to pay for them. According to a March 2025 survey, only 8% of U.S. adults would pay extra for AI features. Even among Gen Z, enthusiasm is lukewarm, with just 16% saying they’d pay for AI - far from a groundswell of demand.
Most users don’t want AI assistants managing their tasks.
Features like AI-assisted scheduling, travel planning, and task management were among the least appealing. In fact, 64% of respondents said they would never use these tools, and many would disable them or even abandon a product over them. Even in younger generations, about half say they have no interest in using AI for daily task support.
Frequent use of AI remains very low across all demographics.
Only one capability - using AI to answer questions - had more than 10% of users reporting frequent use. In most cases, interest in AI tools for writing, editing, or productivity tasks stays in the single digits. Even Gen Z, often assumed to be AI-native, shows only modest engagement and remains largely indifferent.
Vendors may be overbuilding for a market that isn’t ready.
The report suggests AI assistants are entering the “trough of disillusionment,” as users push back against intrusive or unnecessary features. A full 31% of users say they’d stop using a product if they couldn’t disable AI, with another 38% unsure. Instead of forcing AI into every interaction, companies may need to refocus on where users actually find value.
AI Trailblazer Takeaways: We all knew AI would have its hype cycle, followed by the “trough of disillusionment.” But make no mistake: AI is not going away. It is here to stay, and its adoption will gain significant traction as companies realize how easy and useful it can be.
Using AI at Work? Keep It Quiet—Or Risk It All

As AI tools become more common in the workplace, a surprising new study reviewed by The Conversation reveals a troubling side effect: honesty about using AI can actually make people trust you less. Researchers from the University of Arizona found that openly admitting to AI use—whether for writing, grading, or creative tasks—often undermines credibility, even among tech-savvy peers. It’s a transparency trap that puts professionals in a bind: be upfront and risk skepticism, or stay quiet and face even worse consequences if discovered later.
Admitting to AI use can hurt your credibility.
A University of Arizona study found that disclosing the use of AI tools on the job - like for writing or grading - makes people trust you less. Across 13 experiments and 5,000+ participants, including hiring managers and investors, those who admitted using AI were consistently rated as less trustworthy. Even tech-savvy individuals showed the same bias, though slightly less strongly.
People still expect human effort in creative and intellectual work.
When AI is credited for tasks traditionally done by people, it can make the final output seem less legitimate or authentic. This expectation applies across industries, from academia to advertising. The implication: transparency about AI use may feel honest, but it signals lower effort or originality to others.
Not disclosing AI use can be even worse - if you get caught.
The study found that being secretly exposed for using AI leads to the sharpest drop in trust. This puts professionals in a bind: disclose and lose some credibility, or hide it and risk a bigger backlash if found out. As AI becomes more common, this transparency dilemma is only growing in relevance.
Organizations face tough choices around AI norms.
There’s no consensus yet on how to handle AI disclosure - voluntary honesty, strict rules, or cultural normalization are all options. The researchers suggest that creating a workplace where AI use is accepted and understood may reduce the trust penalty over time. Until then, users must navigate a tricky balance between openness and optics.
AI Trailblazer Takeaways: Being known as an AI user shouldn’t be a badge of shame but a badge of honor: it means you know what AI is, what it can do, and how to use it. You’re ahead of the curve. And if it makes you feel any better, people on horseback once scoffed at those driving a horseless carriage.
Quote of the Week
“Artificial intelligence will be part of our future. It’s inevitable.”
- Sundar Pichai, CEO of Google
Magnificent 7 Links
Apple Eyes Move to AI Search, Ending Era Defined by Google (Bloomberg.com)
Links of the Week
Everyone Is Cheating Their Way Through College (New York Magazine)
Find and Buy with AI: Visa Unveils New Era of Commerce (businesswire.com)
Anthropic hires a top Biden official to lead its new 'AI for social good' team (exclusive) (Fast Company)
This six-figure role was predicted to be the next big thing—it's already obsolete thanks to AI (Fortune)
New AI model uses NHS data to predict future disease and complications (The Independent)
AI Trailblazers Upcoming Forums Calendar
June 5th, 2025 - AI Trailblazers Summer Summit, New York, NY
December 3rd, 2025 - AI Trailblazers Winter Summit, New York, NY
Get in touch for speaking and sponsorship opportunities at these upcoming forums.
Partner with AI Trailblazers
Our community of business leaders is growing fast, and we are planning more forums. To discuss partnership opportunities that connect partners with AI decision-makers, including at our summits and in this newsletter, email us.
What is AI Trailblazers?
AI Trailblazers is a vibrant platform dedicated to uniting corporate marketers, technologists, entrepreneurs, and venture capitalists at the forefront of artificial intelligence (AI). Our mission is to fuel growth, innovation, and career development among our members, all of whom want to incorporate artificial intelligence into their businesses and their lives for strategic advantage. More information here.