Building Trust in Generative AI: Top Strategies for Leaders
Key actions for responsible AI, policymaking, and the future of work that you can implement today
On April 25, Ramsay Brown (Founder & CEO, Mission Control) joined me on “What’s the BUZZ?” and shared how business leaders can build trust in generative AI. Ramsay shared examples from four critical topics related to (generative) AI that were discussed at last month’s summit of leading responsible AI experts at the Intellectual Forum at Jesus College, Cambridge: (1) the alignment between the use of AI and ESG goals, (2) recommendations for policymakers, (3) the state of responsible AI, and (4) preparing for the impact of generative AI on the future of work. Here’s what we talked about…
» Watch the latest episodes on YouTube or listen wherever you get your podcasts. «
Insights from Leading Thinkers on Responsible AI
In March, Ramsay co-organized a summit on responsible AI at the Intellectual Forum at Jesus College, Cambridge, bringing together leading thinkers and practitioners in the field. They discussed four big topics.
The first topic focused on the risk of responsible AI becoming similar to greenwashing in the environmental, social, and governance (ESG) space, where large organizations merely create the appearance of responsible AI without truly operationalizing it. The second topic looked at recommendations for how policymakers can create better policies faster in a rapidly changing world.
» One of the big takeaways was that policymakers are predominantly trying to write policy for a world that's previously existed as opposed to a world that is going to exist. «
— Ramsay Brown
The third topic explored what is and isn’t working in the responsible AI movement and how we can come to trust AI. The compliance and data science sides of organizations are often disconnected due to different incentive structures, leaving a gap where data scientists lack knowledge about compliance and vice versa. While some individuals understand both areas, better collaboration and shared ownership of outcomes are needed to improve performance. This lack of overlap between departments creates friction: a cultural problem that requires more than a technical solution.

Lastly, the fourth topic addressed the future of work and how we should prepare for a world where the cost of knowledge work might drop to zero within the next 18 months. Instead of applying AI only to specific tasks like prediction or classification, generative AI can perform a wide range of jobs within an organization. The consequences of this shift will likely become a major topic of social discourse going forward.
Three Strategies for Leaders to Improve Trust in Generative AI
To build trust in generative AI, leaders should focus on three aspects: people, processes, and technology. For people, success with and trust in generative AI comes down to culture and training. Teams that succeed with AI gain a competitive advantage, which is why organizations need to continually review how they are winning with these tools. This culture comes from the top, with leaders encouraging employees to use AI to get their jobs done more efficiently. Leaders should also instill a culture that critically evaluates AI-generated outputs rather than assuming they are always accurate. By applying critical thinking to AI outputs, employees can accelerate their work without blindly trusting the AI.
The second aspect is processes. Organizations need to review their business value creation flows, from understanding the market to customer success and corporate strategy, to determine where generative AI tools fit in. Analyzing these processes helps determine where AI can amplify or augment how people are already operating. To build trust, governance policies should be in place, including specific, actionable, measurable, recordable, accountable, and documentable steps that teams are taking while using AI tools. This demonstrates accountability and seriousness about mission success with generative AI.
The third aspect is technology. Leaders should either develop in-house capabilities around generative AI or use third-party tools, such as OpenAI, Midjourney, or Stable Diffusion. However, there is a fundamental barrier to trust surrounding data security and data privacy with these tools. Recent incidents of trade secrets leaking into generative AI systems have led some organizations to consider banning the technology. To counteract this, solutions are being developed that allow organizations to use generative AI without risking a leak of their data, thus improving overall trust.
Moving away from reliance on checklists, businesses can actively intervene in automated processes to enhance safety and scalability. The emerging fields of generative ops, foundation model ops, and prompt ops provide the business intelligence tooling and trust layers that are missing from how AI tools operate today. By focusing on people, processes, and technology, leaders can ensure their teams are knowledgeable about and incentivized to use AI, retune their business processes to capture value from AI, and invest in tooling that creates trust layers between their organizations and the third-party services they depend on.
In summary, to build trust in generative AI, leaders must prioritize their people, processes, and technology by fostering a culture of critical thinking, identifying where AI fits into their business processes, and investing in tools that address data security and privacy concerns.
A Helpful Lens for Business Decision Makers
Large enterprises often move slowly in adopting new technologies, while others race to build artificial general intelligence. The concept of “creative destruction” suggests that capital markets are effective at breaking down slow organizations and redirecting resources toward faster ones. For businesses, this underscores the importance of accelerating the adoption of new technologies such as AI to stay competitive in an ever-evolving landscape.
How do you use generative AI in your role?
What’s next?
Appearances
June 8 - Panel discussion with Transatlantic AI eXchange on Web 3.0 Generative and Synthetic Data Application
Join us for the upcoming episodes of “What’s the BUZZ?”:
May 9 - Brian Evergreen, Founder & CEO, The Profitable Good Company, and author, will discuss how manufacturing businesses can Create A Human Future With AI.
June 8 - Ravit Dotan, Director, The Collaborative AI Responsibility Lab at the University of Pittsburgh, will join when we cover how responsible AI practices evolve in times of generative AI.
Follow me on LinkedIn for daily posts about how you can set up & scale your AI program in the enterprise. Activate notifications (🔔) and never miss an update.
Together, let’s turn hype into outcome. 👍🏻
—Andreas