Managing Generative AI: Between Guardrails And Guidelines
How Leaders Can Innovate Responsibly With Generative AI
On June 8, Ravit Dotan (Director of the Collaborative AI Responsibility Lab at the University of Pittsburgh) joined me on “What’s the BUZZ?” and shared how businesses can put guardrails in place when rolling out generative AI. Ravit discussed the risk of misinformation, the biases that sneak into model outputs, the illusion of human-like interaction, and how existing regulations could be complemented by new laws to manage these challenges. Here’s what we talked about…
» Watch the latest episodes on YouTube or listen wherever you get your podcasts. «
AI’s Illusion of Human-Like Intelligence
The most notable challenge related to generative AI is misinformation. People sometimes treat AI like a search engine, assuming it will always present the truth. However, there are instances when AI unintentionally generates inaccurate information. For example, Ravit shared how a friend once asked ChatGPT for a list of lawyers with a specific expertise in New York. The AI provided a plausible-sounding list of ten lawyers who didn't actually exist. This shows how easily AI can unintentionally create false information. Another issue is gender-biased assumptions, which generative AI might amplify.
» There's lower awareness of unintentional misinformation, because people think of generative AI as a search engine or as a source of truth. That is simply not the case. «
— Ravit Dotan
Lastly, another major risk is the misconception that chatbots are intelligent, human-like entities. This is largely due to the design of these tools, which have them say things like "I think" or "I will tell you," making us feel as though we're interacting with a human. But it's crucial to remember that this is simply an illusion created by the design.
Balancing Benefits and Ethical Concerns for AI Regulation
Regulating AI requires a combination of enforcing current laws and creating new ones. Laws that already exist, like non-discrimination laws, need to be enforced. However, there may be certain AI capabilities that aren't covered by existing laws. In such cases, we'll need to amend existing laws or create new ones.
Laws alone aren't enough to keep AI in check, though. Regulation needs to come from multiple directions, including education and self-regulation by the companies themselves. Financial organizations investing in AI also bear responsibility: they should question where a company's data comes from and which self-regulation measures it has in place. The same goes for insurance companies, investors, and those in procurement, particularly in the public sector.
» I would want more laws saying you’ve got to do a risk assessment, an impact assessment and understand what the tool is likely to do. «
— Ravit Dotan
The E.U. AI Act is a good example of an AI-specific law. It categorizes AI applications into different risk levels and assigns different obligations to each. It also mandates impact assessments for emerging technologies that could have far-reaching effects, including AI. Beyond that, it would be beneficial to have laws requiring companies to be transparent about what they measure to track fairness, environmental impact, explainability, controllability, and job displacement.
Recommendations for Using Generative AI Tools in Business
Companies need to define policies for the use of generative AI tools. Without them, they expose themselves to compliance issues, as well as risks to their trade secrets and operational efficiency. These policies should cover when to use such tools, how to use them, and how to do so safely.
When using AI tools like ChatGPT, it's essential to be transparent about whether and how they're used, because of the potential for mistakes. Any data entered into applications based on generative AI can become the provider’s data, raising intellectual property concerns, so companies need policies to handle these matters. In addition, there should be a fact-checking policy to ensure that AI-generated information is accurate before it's relied upon. Lastly, it's crucial that the policy creation process includes the voices of employees. A minimal sketch of what such safeguards could look like in code follows below.
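To make this concrete, here is a minimal sketch, in Python, of how parts of such a policy could be enforced in practice: scanning prompts for confidential data before they leave the company, logging every use for transparency, and labeling output as unverified until it has been fact-checked. Everything here is hypothetical: the patterns, the `check_prompt` and `submit_prompt` helpers, and the `call_llm` placeholder stand in for whatever your provider's API and your legal team's actual rules look like.

```python
import re
import logging
from datetime import datetime, timezone

# Hypothetical patterns a company might treat as confidential; a real policy
# would define these together with legal and security teams.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b(?:confidential|internal only|trade secret)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like numbers
    re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),  # email addresses
]

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-usage")


def check_prompt(prompt: str) -> list[str]:
    """Return the patterns a prompt violates, empty if it is clean."""
    return [p.pattern for p in CONFIDENTIAL_PATTERNS if p.search(prompt)]


def call_llm(prompt: str) -> str:
    """Stand-in for a real generative AI API call."""
    return f"(model response to: {prompt!r})"


def submit_prompt(prompt: str, user: str) -> str | None:
    """Gate a prompt before it leaves the company: block flagged content,
    log every use for transparency, and mark output as unverified."""
    violations = check_prompt(prompt)
    if violations:
        log.warning("Blocked prompt from %s: matched %s", user, violations)
        return None
    # Transparency: record who used the tool and when.
    log.info("Prompt sent by %s at %s", user, datetime.now(timezone.utc).isoformat())
    response = call_llm(prompt)
    # Fact-checking policy: label output so downstream readers verify it.
    return f"[UNVERIFIED AI OUTPUT - fact-check before use]\n{response}"


if __name__ == "__main__":
    print(submit_prompt("Summarize our public press release.", user="alice"))
    print(submit_prompt("Draft an email including jane.doe@example.com", user="bob"))
```

The point is not the specific patterns but the structure: the policy decisions (what counts as confidential, who gets logged, how output is labeled) live in one reviewable place rather than in each employee's head.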
Summary
Our discussion uncovered two significant aspects of AI: the characteristics and implications of generative AI, and the importance of robust regulation of its application. While generative AI makes interactions feel remarkably human, it poses risks such as misinformation and bias, and there is a danger of mistaking chatbots for intelligent, human-like entities. On the regulation front, enforcing existing laws and enacting new, AI-specific ones are both critical. We also discussed the need for responsible investing and procurement practices, alongside company policies that ensure fact-checking and compliance when using generative AI tools.
Can you imagine a scenario where misuse of AI could have serious consequences, and how it could be avoided?
» Watch the latest episodes on YouTube or listen wherever you get your podcasts. «
What’s next?
Appearances
July 10 - Monday Morning Data Chat with Joe Reis & Matt Housley on whether your business should chase generative AI.
Join us for the upcoming episodes of “What’s the BUZZ?”
June 20 - Aurélie Pols, Data Privacy Expert & Advisor, will join as we discuss how leaders can shape their business’ accountability for generative AI.
July 6 - Abi Aryan, Machine Learning Engineer & LLMOps Expert, will share how you can fine-tune and operate large language models in practice.
August 1 - Scott Taylor, aka “The Data Whisperer”, will let us in on how effective storytelling helps you get your AI projects funded.
Follow me on LinkedIn for daily posts about how you can set up & scale your AI program in the enterprise. Activate notifications (🔔) and never miss an update.
Together, let’s turn hype into outcome. 👍🏻
—Andreas