Who's Liable for AI Misinformation With Chatbots Like ChatGPT? | WSJ Tech News Briefing
Introduction
The host introduces the episode and explains that it is part of a series on how artificial intelligence is changing our lives, livelihoods, and culture. She also invites listeners to send in questions or ideas for future episodes.
Liability in AI
This section discusses the issue of liability for content generated by AI tools. It uses an example from Australia, where a mayor is considering suing OpenAI, the maker of ChatGPT, for defamation after the tool made false statements about him.
- New generative AI tools like chatbots and search engines can get things wrong.
- Brian Hood, a mayor in Australia, discovered that ChatGPT was making false statements about him.
- ChatGPT claimed that Hood had been charged with serious criminal offenses and sentenced to jail.
- Hood had actually reported bribery at his former company and helped lead an investigation into one of Australia's biggest banking scandals.
- When Hood contacted local lawyers about this issue, they filed a defamation case against OpenAI.
- OpenAI has removed mentions of Brian Hood from ChatGPT but has not satisfied his demands for a public apology and monetary compensation.
Conclusion
The host concludes by noting that Brian Hood's story is not unique and that others have also been harmed by incorrect information from AI chatbots.
- People who are harmed by incorrect information from AI chatbots may seek legal action or demand corrections and apologies from companies like OpenAI.
Generative AI and Legal Liability
This section discusses the potential legal liability of generative AI programs when they produce incorrect or harmful responses. It also explores the responsibility of online platforms for user-generated content.
Legal Liability of Generative AI Programs
- Hallucinations (confident-sounding but false outputs) can occur in generative AI programs due to incomplete or inaccurate data.
- These programs are designed to create natural-sounding responses, not necessarily accurate ones.
- Responsibility for incorrect or harmful responses varies by country and is a complex issue in the US.
- Section 230 of the Communications Decency Act protects online platforms from being held liable for user-generated content.
- Individuals who post content online are generally responsible for it, not the platform that hosts it.
Limits of Section 230
- The limits of Section 230 are being tested in a Supreme Court case involving Google's subsidiary YouTube and its responsibility for terrorist videos linked to the Paris attacks.
- Experts suggest the outcome could influence how courts treat generative AI programs.
- Generative AI programs occupy a gray area: they train on user-generated content but are designed by companies.
Legal Liability of Search Engines vs. Generative AI Programs
- CDA 230 would cover general search engines like Google since they only provide links to third-party websites.
- The legal liability of generative AI programs depends on how their information is characterized, which is difficult to predict since these programs respond to prompts by users.
The Role of User Prompting in AI Design
In this section, the speakers discuss how user prompting can impact the design and training of AI models. They also explore the role of data sets in training AI models and how lawmakers around the world are working to regulate artificial intelligence.
User Prompting vs. AI Design
- User prompting can influence the output of an AI model.
- Companies behind generative AI build models and decide what information is being put into them, which impacts potential outputs.
- This is fundamentally different from platforms where users post content freely.
Training Data Sets for AI Models
- Most systems are trained on massive data sets scraped from the internet.
- Companies face choices about what to include in these giant data sets, with some obvious exclusions such as child sexual abuse material or violent content.
- Courts will likely have to draw a line on what can be included in these data sets.
Global Regulation of Artificial Intelligence
- Lawmakers around the world are debating ways to regulate artificial intelligence.
- China has already put laws in place around algorithms and generative AI, while the European Union is working on legislation called the "AI Act."
- The Biden Administration is weighing new regulations due to concerns about discrimination and harmful information spreading.
Addressing Risks at the Engineering Level
In this section, Karen Hao explains how GPT works and why it sometimes generates harmful content. She also discusses efforts to address these risks at an engineering level.
How GPT Works
- GPT constructs sentences based on the probability that certain words and phrases appear together.
- It does not look up specific pieces of information; rather, it assembles words based on their frequency of co-occurrence in internet text.
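The mechanism described above can be illustrated with a toy sketch. This is not OpenAI's actual implementation (real models use neural networks over token sequences, and the word counts below are invented for illustration); it only shows the core idea of sampling each next word in proportion to how often continuations were seen in training text.

```python
import random

# Hypothetical co-occurrence counts standing in for what a model
# learns from internet text. A real model's "table" is implicit in
# billions of neural-network parameters.
bigram_counts = {
    "the": {"mayor": 4, "bank": 3, "scandal": 1},
    "mayor": {"was": 5, "of": 3},
}

def next_word(word: str, rng: random.Random) -> str:
    """Sample a continuation in proportion to observed frequency."""
    counts = bigram_counts[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
# The sampled word is statistically plausible, not checked for truth --
# which is why fluent output can still be factually wrong.
print(next_word("the", rng))
```

Because the choice is driven by frequency rather than factual lookup, a fluent but false sentence (such as the claims about Brian Hood) can be generated with high confidence.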
Responsible AI and the Challenges of Generative AI Chatbots
In this section, we learn about responsible AI and the challenges posed by generative AI chatbots. We also learn about how people tend to trust what they see online, regardless of the source.
The Rise of Generative AI Chatbots
- Chaudry worked in the field of responsible AI for five years before starting a consulting firm.
- There is a conversational interactivity to generative AI chatbots that makes it feel like you're talking to a smart researcher.
- The conversational approach of generative AI chatbots is one of their best and worst features.
Trusting What We See Online
- Our reliance on the internet has led more people to believe what they see and read online, regardless of the source.
- Over half of US adults under age 30 say they trust what they read on social media as much as what they see on national news outlets.
- People tend to trust search engines and don't generally go beyond the first two or three hits.
Challenges Posed by Generative AI Chatbots
- It's difficult to tell fact from fiction with generative AI chatbots because the information is disembodied, with no easy way to assess its trustworthiness.
- Large language models that generative AI programs are built on are not immune to hallucinations.
- Companies building AI tools work with ethicists to identify and fix issues after the programs are built.
Steps Taken by Companies
- Google's AI-powered chatbot Bard is not a search engine, but users can verify the content it generates via Google search.
- Microsoft's AI-powered search engine Bing provides linked citations for answers and advises users to check the links to learn more.
- Microsoft explicitly advises users that they may need to verify information themselves.
Techniques Used in Responsible AI
- Prompt hacking means finding ways to get a model to say or do things its safeguards would otherwise explicitly deny.
- Red teaming involves bringing in subject matter experts to push AI on specific issues.
Artificial Intelligence Technology Developments
In this section, the speaker talks about the rapid development of artificial intelligence technology and how there are still many questions to be answered.
AI Technology Development
- The area of artificial intelligence technology is developing quickly.
- There are still many more questions to answer in this field.
Conclusion
In this section, the speaker concludes the podcast episode and thanks the listeners for tuning in.
Wrapping Up
- Thanks for listening to the podcast episode.
- The episode was produced by Julie Chang with editorial support from Philana Patterson and Robert Wall. Melony Roy is the supervising producer, Chris Zinsli is the executive producer, and Michael LaValle mixed this episode.
- For more tech news, check out wsj.com.