Navigating The Promise And The Pitfalls Of AI Ethics
How Leaders Can Identify And Address Ethical Dilemmas In An AI-Driven World
Reid Blackman (Founder & CEO, Virtue and Author of “Ethical Machines”) joined me on “What’s the BUZZ?” at the end of last year and shared how AI leaders can put AI ethics into practice. As AI's influence permeates every aspect of our lives, the ethical risks and dilemmas that emerge are challenging companies and societies alike. From globally recognized brands facing regulatory investigations due to potential AI bias, to the systemic approach needed to mitigate such bias, the journey is fraught with complexity. Add to this the notorious 'black box' problem that shrouds AI's decision-making processes in mystery, and you have a world where ethical navigation becomes not just necessary, but a matter of survival. Here’s what we talked about…
» Watch the latest episodes on YouTube or listen wherever you get your podcasts. «
Why AI Ethics Matters
There is no shortage of companies that have run into trouble over the ethical risks posed by AI. Several multinationals have found themselves at the center of investigations, such as Goldman Sachs, which was investigated to determine whether the AI-set credit limits for the Apple Card were biased against women. Similarly, Optum Healthcare was investigated over an AI system that allegedly suggested healthcare providers pay more attention to Caucasian patients than to African American patients.
» There's just lots of instances in which companies are getting in trouble for realizing AI ethical risks. […] There's all sorts of reputational risks that have gotten increased attention over the past few years. «
— Reid Blackman
There are also risks involved with AI-driven systems like self-driving cars, and issues with algorithms used by social media giants like Facebook. Given such reputational risks, combined with impending regulations like the EU AI Act, it's easy to see why AI ethics is garnering attention. AI has massive impacts and operates at a large scale, so it's inevitable that people would focus on whether those impacts are positive or negative.
The Struggle To Implement AI Ethics Statements
While bias in AI is a significant issue, an AI ethical risk program should consider other ethical risks too. Companies need to embed a systematic approach to identifying and mitigating bias throughout the AI lifecycle, from the conceptual phase through design, development, deployment, monitoring, and maintenance. However, many companies are not doing enough: some are taking these issues seriously, others are barely doing anything, and many are simply waiting to see the outcome of the EU AI Act. Companies also tend to start by drafting an AI ethics statement, but then struggle to implement it.
The Need for Explainability in AI
The issue of explainability in AI is a major concern due to the 'black box' problem. As machine learning identifies patterns in vast quantities of data, understanding those processes can be a daunting task for non-specialists. Explainability becomes even more important in high-stakes situations like medical diagnoses, loan approvals, or job applications. Companies currently tackle the issue with technical tools like LIME and SHAP, which provide simplified explanations. However, whether these tools are effective, and whether their output is comprehensible to non-data scientists, is debatable. A crucial part of the explainability discussion is therefore to identify who needs the explanation and in what format it should be delivered.
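To make the idea behind such post-hoc explanation tools concrete, here is a minimal sketch using permutation importance, a simpler technique in the same spirit as LIME and SHAP: perturb one input feature at a time and measure how much the model's predictions change. The model, weights, and data below are entirely hypothetical, invented for illustration only.

```python
import random

# Hypothetical "black box": a credit-limit score driven mostly by income,
# only slightly by age. In practice this would be a trained model.
def model(income, age):
    return 0.9 * income + 0.1 * age

random.seed(0)
# Synthetic applicants: (income in $k, age in years)
data = [(random.uniform(20, 200), random.uniform(18, 80)) for _ in range(500)]

def mean_abs_change(feature_index):
    """Shuffle one feature across applicants and measure the average
    absolute change in the model's output (permutation importance)."""
    shuffled = [row[feature_index] for row in data]
    random.shuffle(shuffled)
    total = 0.0
    for (income, age), new_val in zip(data, shuffled):
        base = model(income, age)
        if feature_index == 0:
            perturbed = model(new_val, age)
        else:
            perturbed = model(income, new_val)
        total += abs(perturbed - base)
    return total / len(data)

importance = {"income": mean_abs_change(0), "age": mean_abs_change(1)}
print(importance)  # income should dominate age by a wide margin
```

Even this toy example illustrates the comprehensibility gap the episode raises: the output is a set of numbers that a data scientist can read, but a loan applicant who was denied credit would still need those numbers translated into a plain-language explanation.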
Balancing The Impact And Ethics Of AI
AI ethics is a complex, multilayered issue that requires urgent attention. Ethical risks and the resulting reputational damage have already drawn several multinationals into investigations, and AI's widespread impact is leading to more intense scrutiny of its effects, positive or negative. Companies need a systematic approach to identifying and mitigating bias throughout the AI lifecycle, and they must go beyond merely drafting AI ethics statements. Greater transparency in AI's decision-making processes is also needed, making explainability a key issue: current tools provide technical explanations but often fail to make them comprehensible to non-data scientists.
Listen to this episode on the podcast: Apple Podcasts | Spotify
What’s next?
Join us for the upcoming episodes of “What’s the BUZZ?”
August 1 - Scott Taylor, aka “The Data Whisperer”, will let us in on how effective storytelling helps you get your AI projects funded.
August 17 - Supreet Kaur, AI Product Evangelist, and I will talk about how you can upskill your product teams on generative AI.
August 29 - Eric Fraser, Culture Change Executive, will join and share his first-hand experience of how much of his leadership role he is able to automate with generative AI.
Follow me on LinkedIn for daily posts about how you can set up & scale your AI program in the enterprise. Activate notifications (🔔) and never miss an update.
Together, let’s turn hype into outcome. 👍🏻
—Andreas