From Boardrooms to Living Rooms: Are We Ready to Delegate the Responsible Use of AI to Everyday Users?
How Generative AI Tools Are Reshaping Personal Accountability and What AI Leaders Can Do to Shape It
We are all responsible — responsible for the ethical use of this new generation of AI tools. We? — Yes. Every individual user. And it has happened overnight. What has changed from previous generations of AI, and are we ready to trust the general public to make responsible choices when using generative AI?
» Watch the latest episodes of “What’s the BUZZ?” on YouTube or listen to it wherever you get your podcasts. «
The Emergence of AI Ethics in Business
During the last AI hype cycle (2016-2020), AI promised to revolutionize the way we work and to drive automation of tasks across industries and departments. Everything would happen at a fast pace: from self-driving cars to smart assistants anticipating your every wish. And we'd have lots of time to pursue our true, creative passions. That hasn't quite happened (yet). But very quickly, we saw the first negative examples of using AI: credit application denials, recidivism risk assessments, biased resume matching. As a result, voices demanding ethical AI practices got louder, and companies started drafting AI ethics principles, setting up AI ethics advisory boards, and putting ethics into practice.
Companies did most of the assessing of which AI scenarios to build and what to use them for: Is this AI scenario aligned with our ethics principles and values? Does it comply with existing rules and regulations? How can we remove or mitigate bias? Individuals using AI-driven software were primarily consumers of the information (e.g., predictions or recommendations) that a model generated: What will our liquidity and cash flow look like next quarter? Which sales opportunities have the highest propensity to close? Which products are frequently bought together?
There are two key reasons why users were information consumers. One is the technology itself: AI models used to classify and predict data are fundamentally different from the foundation models that now create new data. The other is the set of prerequisites that only large companies could fulfill, such as access to large amounts of historical financial, sales, and eCommerce data, and access to scalable infrastructure.
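To make that distinction concrete, here is a minimal sketch. The features, labels, and the commented-out `llm.generate` call are illustrative assumptions, not taken from any real system: a classic predictive model consumes historical data and returns one of a fixed set of known outcomes, while a generative model produces new, open-ended content.

```python
# A minimal sketch of the distinction, assuming scikit-learn is installed.
# The features, labels, and model are toy stand-ins, not a real credit model.
from sklearn.linear_model import LogisticRegression

# Predictive AI (previous generation): maps known inputs to a fixed set
# of outcomes, e.g., approve/deny, learned from historical data.
X = [[620, 1], [710, 0], [540, 1], [760, 0]]  # e.g., credit score, prior default
y = [0, 1, 0, 1]                              # 0 = deny, 1 = approve
model = LogisticRegression().fit(X, y)
print(model.predict([[680, 0]]))              # returns one of the known labels

# Generative AI (foundation models): produces new content instead of a label.
# Hypothetical call shown for contrast -- any hosted LLM endpoint looks similar:
# print(llm.generate("Draft a memo announcing our new travel policy."))
```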
Hence, AI ethics was primarily a concern for companies and the experts building AI-driven products. Trust was mainly a question of user acceptance: Can you trust that this prediction is accurate, and do you understand how it was made? And why is that prediction more accurate than my own experience? But this has recently changed.
Redefining Responsibility in the Age of Generative AI
Fast-forward to the release of generative AI tools like ChatGPT (text), MidJourney (image), and D-ID (video) last year. This new kind of AI has become accessible to anyone with an internet connection, often with free trials or entirely free of charge. While the requirements largely remain the same for companies that build AI scenarios (e.g., business case, ethics review, infrastructure), the shift to generative AI changes the responsibility for ethical behavior for individual users: users who previously acted on predicted or recommended information (governed by corporate ethics) are now becoming creators of new information (governed by their individual ethics). And the question of trust evolves: Can you trust this output to be accurate?
Generative AI can create new information that includes factual inaccuracies (accidental or intentional) — and users might not even notice. In addition, the purposes for which individuals use generative AI vary greatly, from drafting memos to writing essays to creating disinformation. What is considered ethical now depends on each individual's own moral compass. But unlike formal corporate AI ethics programs and guidelines, the general public is poorly educated on this topic. This creates a risk of misuse and of misaligned incentives.
We are already seeing the first examples of this, and we will likely see more: information security violations, misinformation, and so on. Anyone using generative AI needs to ask themselves: What data do I input? What do I want the tool to do? What data does it output? What do I do with that output? Although responsibility is currently shifting from corporations to individuals, the choices individuals make, based on their understanding and the strength of their ethics, will in turn affect companies and consumers of information in a cyclical fashion. And individuals making poor ethical choices — however we define them — can have adverse effects on the company.
What AI Leaders Can Do To Help Their Organizations
AI leaders play a critical role in their organizations. In addition to building AI-driven products, there are four simple ways AI leaders can work with users and prevent harm to the organization:
Educate users on the use of generative AI, including its opportunities and limitations
Expand AI ethics guidelines to cover the use and purpose of generative AI in line with company values and principles
Advise users which kinds of data they should not enter into generative AI tools (e.g., personal data, confidential information, and corporate IP)
Use API versions of large language models and read the vendor's terms and conditions for how any input you provide is handled (see the sketch after this list)
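To illustrate the last two points, here is a minimal sketch of what such a guardrail could look like in practice. It assumes the OpenAI Python client as the API-based LLM; the `SENSITIVE_PATTERNS` list and the `ask_llm` wrapper are hypothetical examples for this article, not a complete data-loss-prevention solution:

```python
# A minimal sketch: a thin wrapper that screens prompts for obviously
# sensitive patterns before sending them to an LLM over the API.
# The regexes below are illustrative assumptions -- real corporate
# data-protection rules would be far more thorough.
import os
import re

from openai import OpenAI  # pip install openai

# Hypothetical patterns for data that should never leave the company.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US Social Security numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # credit-card-like digit runs
    re.compile(r"(?i)\bconfidential\b"),     # documents marked confidential
]

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def ask_llm(prompt: str) -> str:
    """Send a prompt to the API only if it passes the sensitivity screen."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt appears to contain sensitive data; not sent.")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_llm("Summarize the benefits of using API versions of LLMs."))
```

Unlike the consumer web interfaces, API access typically comes with vendor terms that govern how input data is handled, which is why routing usage through a wrapper like this also gives the organization a single place to enforce its policies.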
Summary
The field of AI ethics has rapidly evolved over the past seven years. While companies used to define what constitutes an ethical use of AI, that responsibility is now quickly being transferred to the individuals using generative AI as the technology becomes widely and easily accessible. Individuals now need to determine what use of generative AI is ethical and acceptable — and what is not. However, individuals also need to be equipped with proper training and knowledge to make these decisions. AI leaders have an opportunity to educate business users on generative AI and to evolve the training and policies that guide the ethical use of AI.
Are we ready to put so much power into the hands of the general public? Are we ready to deal with the consequences?
» Watch the latest episodes of “What’s the BUZZ?” on YouTube or listen to it wherever you get your podcasts. «
What’s next?
Join us for the upcoming episodes of “What’s the BUZZ?”:
April 25 - Ramsay Brown, Founder & CEO of Mission Control, will be on the show when we talk about How Businesses Can Trust Generative AI in times of rapid innovation.
May 9 - Brian Evergreen, Founder & CEO of The Profitable Good Company and author, will discuss how manufacturing businesses can Create A Human Future With AI.
June 8 - Ravit Dotan, Director of The Collaborative AI Responsibility Lab at the University of Pittsburgh, will join when we cover how responsible AI practices evolve in times of generative AI.
Follow me on LinkedIn for daily posts about how you can set up & scale your AI program in the enterprise. Activate notifications (🔔) and never miss an update.
Together, let’s turn hype into outcome. 👍🏻
—Andreas