Are We Entering A New AI "Trust Economy"?
The Next Frontier in AI: Separating Hype and Memes from Real Experience
Recently, the narrative surrounding AI has been shifting. Fueled by the underwhelming release of OpenAI’s GPT-5, achieving superintelligence is once again turning from a short-term ambition into a long-term goal.
Industry analyst firm Gartner has moved Generative AI into the "Trough of Disillusionment" in its latest Hype Cycle. Is the industry hitting a wall? Ex-crypto influencers turned (pseudo) AI experts are filling our social feeds with breaking news, doomsday predictions, and the evergreen get-rich-quick schemes. You’ve got to give it to them: they know what motivates us to read their content and return.
But here is where they often fall short: Real-world experience.
From the Information Economy to the Trust Economy
In recent years, we’ve quickly moved from the information economy to the attention economy. If you can build an audience, get it hooked on your content, and monetize it, you’re golden. If anything, Generative AI and Agentic AI have only accelerated this development.
YouTube is littered with tutorials about creating faceless content generators that churn out videos in seconds and make you thousands of dollars in ad revenue per month. But this is the sort of AI slop that doesn’t add value or offer any insights.
But while AI creates opportunities to make money even for those who have never scoped, built, sold, or delivered a single AI product, it creates a gap for established businesses whose business model is not content farming. See, on the way up the hype curve, entertainment and breaking news are great. They let us poke fun at the early adopters and tech bros.
But when the going gets tough and your house is on fire (because your AI project has gone sideways), you don’t call the same band of jesters and clowns that made you laugh before. (And you probably shouldn’t.) Their skills are vastly different from what leaders need now.
How did we even get here? You can think about it in these three phases:
The internet has solved access to information and its distribution.
Social media has solved capturing attention and reaching worldwide audiences.
Generative AI has solved creating information rapidly and at scale.
As a result, information is abundant, and we have a filtering problem. But what information is even relevant? And how do you separate the signal from the noise? The answer is trust. While the term trust economy has thus far been used to describe the trust between consumers and businesses, it will sooner or later expand to individuals and leaders as well.
Building, maintaining, and expanding trust requires constant attention. This means that your actions, as well as the quality and the relevance of your work, need to exceed expectations—every time.
What Leaders Can Do to Maintain Trust
The internet is being flooded with AI-generated content. That’s why the question of trust and trustworthiness is becoming more important than ever. Companies publish dozens of blog posts optimized to rank on search engines but offering little substance or a unique perspective; AI-generated “first drafts” land verbatim in your inbox for you to review; and AI-generated music and videos make it onto children’s channels on YouTube. The question of whom you trust thus extends to your own role as well.
When to Use AI (and When Not to)
Em dashes (—), formulaic “it’s not this, but that” constructions: nearly three years into the Generative AI hype, professionals are recognizing the patterns of AI slop (or think that they do). So, when should you use AI, and when not?
Here’s a simple framework you can apply:
Are you creating anything that is associated with your name? Use AI lightly to generate ideas, edit, or revise.
Are you writing an anonymous corporate blog? Your own opinion and name matter far less than factual accuracy and hitting the right tone.
Are you summarizing existing information or pointing to a different piece of content? Use AI and check its output. The main information is there, and you’re likely not adding any additional value in your summary. But you should check that it’s correct.
Are you using an AI avatar of yourself? Write the script yourself, the words as you would speak them, and use AI as a shortcut to creation and production, not as a replacement for a personal message.
Are you responding to a personal situation or giving feedback? Write it yourself, even if you are not a prolific writer. You understand the nuances, and you will need to be able to confidently answer any questions the recipient might have.
Although some of these points may seem rather obvious, they bear repeating. As a simple rule of thumb, put yourself in the recipient’s shoes and ask how you would react if you found out that the sender had generated the text you are reading with AI.
Concrete Steps for Your Team or Organization
Leaders can build trust while encouraging the use of AI:
Explore bringing back in-person experiences—from business reviews and town halls to hiring interviews, 1:1s, and the like.
Follow a handful of trusted and vetted sources known for objective, factual information.
Develop guidelines within your team and organization for the use of AI that keep both quality and accountability for the results high. (Stay tuned for my new LinkedIn Learning course.)
And finally, look for ways to authenticate individuals and information, including real-world expertise.
Summary
The internet is being flooded with AI-generated slop. When the bottleneck moves from information creation to filtering, trust becomes the next frontier. A rapidly developing trust economy that enables vetting of individuals and information might just be within reach. Leaders should provide guidance to their teams to maintain a high bar for quality and accountability for the results while simultaneously encouraging the use of AI. At the same time, leaders should think about how to ensure they become and remain trusted by their teams and organizations.
Send me a note to help your team balance AI-driven efficiency and human trust.
Explore related articles
Become an AI Leader
Join my bi-weekly live stream and podcast for leaders and hands-on practitioners. Each episode features a different guest who shares their AI journey and actionable insights. Learn from your peers how you can lead artificial intelligence, generative AI, agentic AI, and automation in business with confidence.
Join us live
September 09 - Alison McCauley (Author, Speaker, and Digital Strategist) will share how leaders can support their teams in times of AI-driven uncertainty.
September 23 - Jon Reed (Industry Analyst & Co-Founder of diginomica) is back on the show to discuss what’s next with AI agents.
October 07 - Danielle Gifford (Managing Director of AI at PwC) will discuss how hybrid teams of agents and humans can best collaborate. [More details to follow on my LinkedIn profile…]
October 21 - Christian Muehlroth (CEO of ITONICS) will share his perspective on effectively driving radical innovation with AI. [More details to follow on my LinkedIn profile…]
November 04 - Tim Williams will join and talk about how to evolve Agentic AI identity, security, and trust. [More details to follow on my LinkedIn profile…]
Watch the latest episodes or listen to the podcast
Follow me on LinkedIn for daily posts about how you can lead AI in business with confidence. Activate notifications (🔔) and never miss an update.
Together, let’s turn hype into outcome. 👍🏻
—Andreas