Corporate AI Safety Councils: Preparing For AI-Generated Disinformation
Gain Insights Into The Drastic Transformation Generative AI Brings About And How To Prepare Your Business For It
On May 23, Rod Schatz (Data & Digital Transformation Executive) joined me on “What’s the BUZZ?” to share his perspective on generative AI’s potential for misuse and the regulatory challenges it presents. Rod discussed the power of education as a tool for discerning fact from AI-generated fiction, the value of setting up AI safety councils, and the potential impact of AI on the workforce. Here’s what we talked about…
» Watch the latest episodes on YouTube or listen wherever you get your podcasts. «
The Role of Generative AI in Disinformation
Generative AI has been a hot topic over the last few months. However, the technology comes with its fair share of risks. Generative AI models are trained on data that often carries biases, and they can be misused by ill-intentioned actors to spread disinformation or cause harm. Such disinformation can target individuals, medium-sized and large companies, and even entire political systems. Generative AI also amplifies existing societal issues, and AI-generated falsehoods will become harder to spot, especially for vulnerable groups of the population. Recent incidents involving generative AI have shown its potential for spreading misinformation, highlighting the urgent need to keep pace with this rapidly evolving technology.
AI Safety Councils: Guardians of Ethics and Risk
One of the most effective defenses against misinformation is education. We must learn to distinguish between trustworthy and deceptive content. Technological literacy, especially for those in leadership roles, is equally crucial: today's divide between executives and their understanding of the technology that supports their business can be bridged with education. Furthermore, companies must prepare for the wave of digital transformation that generative AI will bring. They need to formulate strategies, prepare for disinformation incidents, and create AI safety councils to handle the ethical and risk management aspects of AI deployment.
» Organizations need to develop an AI safety council, which is taking on all the ethical components of it, but it's also looking at how to deal with all the risk. «
— Rod Schatz, Data & Digital Transformation Executive
Corporate Efficiency vs. Human Cost: The Ethical Dilemma
Generative AI also has the potential to reshape organizations drastically, often at a human cost. By enabling automation, it could lead to downsizing across departments. This raises an ethical question: should companies prioritize societal good or corporate efficiency? Historically, innovation has led to job displacement, followed by skill acquisition and redeployment. Discussions on this theme are necessary within organizations, starting with a clear leadership vision.
» Do I make the right decision, which is to the good of society? Do I make the corporate decision, which is to let people go? I think that's definitely a question that all organizations need to start to discuss internally and come up with what their game plan is. «
— Rod Schatz, Data & Digital Transformation Executive
Moreover, self-regulation might not work for corporations, given their profit-driven nature. What is needed is regulation as a collective effort involving governments, international organizations, and industry associations. Industry-specific AI safety councils can also play a crucial role in navigating this disruption.
Summary
Generative AI exposes us to the risk of misinformation and disinformation, and governments are currently struggling to regulate it effectively. This technology revolution can also lead to job displacement, calling for an ethical balancing act between societal good and corporate efficiency. One way to tackle these challenges is comprehensive education, equipping everyone to discern trustworthy content from deceptive content. Organizations need to be proactive in building strategies to handle AI-induced disruption and in creating AI safety councils for ethics and risk management.
What about your organization? Do you have a strategy to combat AI-generated disinformation?
» Watch the latest episodes on YouTube or listen wherever you get your podcasts. «
What’s next?
Appearances
June 8 - Panel discussion with Transatlantic AI eXchange on Web 3.0, Generative and Synthetic Data Applications
Join us for the upcoming episodes of “What’s the BUZZ?”:
June 8 - Ravit Dotan, Director of The Collaborative AI Responsibility Lab at the University of Pittsburgh, will join as we cover how responsible AI practices evolve in times of generative AI.
June 20 - Aurélie Pols, Data Privacy Expert & Advisor, will join as we discuss how leaders can shape their business’ accountability for generative AI.
July 6 - Abi Aryan, Machine Learning Engineer & LLMOps Expert, will share how you can fine-tune and operate large language models in practice.
Follow me on LinkedIn for daily posts about how you can set up & scale your AI program in the enterprise. Activate notifications (🔔) and never miss an update.
Together, let’s turn hype into outcome. 👍🏻
—Andreas