AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

AI's Impact on Society: A Researcher's Perspective

Introduction to AI Concerns

  • The speaker, an AI researcher with over a decade of experience, describes receiving an unusual email warning that her work in AI could end humanity.
  • Highlights the dual nature of AI news: positive advancements like medical discoveries and negative incidents such as harmful chatbot suggestions.

Current Issues with AI

  • Emphasizes that while future risks are uncertain, current issues include contributions to climate change and unauthorized use of creative works.
  • Calls for transparency in tracking AI's impacts to foster trustworthiness and sustainability in future models.

Environmental Sustainability of AI

  • Discusses the environmental costs associated with training large language models, citing participation in the BigScience initiative to create Bloom.
  • Reports that training Bloom consumed as much energy as 30 homes use in a year and emitted a significant amount of carbon dioxide; contrasts this with models like GPT-3, whose training emissions are far higher.

Trends in Model Size and Environmental Costs

  • Notes that the "bigger is better" trend toward ever-larger models drives up environmental costs; recent findings show that larger models emit significantly more carbon.
  • Urges focus on tangible impacts rather than hypothetical existential risks, advocating for tools to measure and mitigate these effects.

Tools for Measuring Impact

  • Introduces CodeCarbon, a tool designed to estimate energy consumption and carbon emissions during AI training, promoting informed decision-making regarding model selection.
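
The core estimate that a tool like CodeCarbon automates can be sketched in a few lines: energy consumed (kilowatt-hours) multiplied by the carbon intensity of the local power grid. The power draw and grid-intensity figures below are illustrative assumptions, not measurements from the talk or CodeCarbon's actual instrumentation.

```python
# Back-of-envelope sketch of the estimate CodeCarbon automates:
# energy used (kWh) times the grid's carbon intensity (kg CO2 per kWh).
# All numbers here are illustrative assumptions.

def training_emissions_kg(avg_power_watts: float,
                          hours: float,
                          grid_kg_co2_per_kwh: float) -> float:
    """Estimate the CO2 (kg) emitted by a training run."""
    energy_kwh = avg_power_watts / 1000 * hours
    return energy_kwh * grid_kg_co2_per_kwh

# Example: one 300 W GPU running 24 hours on a grid emitting 0.4 kg CO2/kWh.
print(round(training_emissions_kg(300, 24, 0.4), 2))  # 2.88
```

CodeCarbon itself measures hardware power draw and looks up regional grid intensity automatically; this sketch only shows the arithmetic behind the final number.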

Copyright Issues for Artists

  • Addresses the difficulty artists face in proving unauthorized use of their work in training datasets; introduces Spawning.ai's tool "Have I Been Trained?" for searching those datasets.
  • Shares a personal experience querying a dataset, showing how common names can lead to misrepresentation in generated images; highlights the implications for artists seeking recourse for copyright infringement.
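
The kind of lookup "Have I Been Trained?" performs can be illustrated as a search over a dataset's caption metadata for a name or phrase. The records and field names below are invented for the sketch; the real tool searches indexed web-scale image-text data.

```python
# Illustrative sketch: scan caption metadata for a query string.
# The records here are hypothetical, not real dataset entries.

records = [
    {"url": "http://example.com/img1.jpg", "caption": "A painting by Jane Doe"},
    {"url": "http://example.com/img2.jpg", "caption": "Sunset over the ocean"},
]

def search_captions(query: str, dataset: list) -> list:
    """Return every record whose caption contains the query (case-insensitive)."""
    q = query.lower()
    return [r for r in dataset if q in r["caption"].lower()]

matches = search_captions("Jane Doe", records)
print([m["url"] for m in matches])  # ['http://example.com/img1.jpg']
```

A simple substring match like this also shows why common names cause trouble: the query matches every caption containing the name, regardless of whether it refers to the person searching.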

Collaboration for Ethical Data Use

  • Describes collaboration between Spawning.ai and Hugging Face to establish opt-in/out mechanisms for data usage, emphasizing ethical considerations around using human-created artwork.
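
A minimal sketch of such an opt-out mechanism: before training, drop any sample whose creator has registered an opt-out. The registry contents and sample structure below are hypothetical, not the actual Spawning.ai or Hugging Face implementation.

```python
# Hypothetical opt-out filter applied before training.
# Creator IDs and samples are invented for illustration.

opted_out = {"artist_42", "artist_77"}  # creators who registered an opt-out

samples = [
    {"id": 1, "creator": "artist_42"},
    {"id": 2, "creator": "artist_99"},
]

def filter_training_set(samples: list, opted_out: set) -> list:
    """Keep only samples whose creator has not opted out."""
    return [s for s in samples if s["creator"] not in opted_out]

print([s["id"] for s in filter_training_set(samples, opted_out)])  # [2]
```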

Addressing Bias in AI Models

The Impact of Facial Recognition Systems

  • Common facial recognition systems exhibit significant bias, performing poorly for women of color compared to white men. This disparity can lead to false accusations and wrongful imprisonment when such biased models are used in law enforcement.
  • A notable case is that of Porcha Woodruff, who was wrongfully accused of carjacking while eight months pregnant due to an AI system's incorrect identification.
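
The disparity described above is typically measured by computing accuracy separately for each demographic group and comparing the results. The predictions below are synthetic, not real benchmark data.

```python
# Sketch of a per-group accuracy comparison. The (group, predicted, actual)
# triples are synthetic examples, not real facial-recognition results.

from collections import defaultdict

def per_group_accuracy(examples: list) -> dict:
    """Return accuracy per group from (group, predicted, actual) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in examples:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

examples = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_b", "match", "no_match"), ("group_b", "match", "match"),
]
print(per_group_accuracy(examples))  # {'group_a': 1.0, 'group_b': 0.5}
```

A large gap between groups, as in this toy example, is exactly the kind of disparity audits of commercial facial recognition systems have reported.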

The Challenges of AI Transparency

  • Many AI systems, particularly those involved in image generation, operate as "black boxes," making it difficult even for their creators to understand their decision-making processes.
  • Image generation models often reflect societal biases; for instance, they predominantly depict scientists as white males, failing to represent the diversity present in real-world professions.

Addressing Bias Through Tools

  • The Stable Bias Explorer tool was developed to help users explore biases within image generation models across various professions. It highlights the lack of representation for non-white and non-male individuals in professional imagery.
  • As AI becomes integrated into everyday life—affecting social media, justice systems, and economies—it is crucial that these technologies remain accessible and understandable to all users.
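
In the spirit of the Stable Bias Explorer, one way to quantify representation is to tally how often each demographic label appears among the images a model generates for a profession. The labels and counts below are invented for the sketch, not output from any real model.

```python
# Illustrative representation tally per profession.
# Labels are invented for the sketch, not real model output.

from collections import Counter

generated_labels = {
    "scientist": ["white_man", "white_man", "white_man", "white_woman"],
}

def representation(profession: str) -> dict:
    """Return the share of each demographic label for a profession."""
    labels = generated_labels[profession]
    counts = Counter(labels)
    return {label: count / len(labels) for label, count in counts.items()}

print(representation("scientist"))  # {'white_man': 0.75, 'white_woman': 0.25}
```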

Creating Solutions for AI Governance

  • By developing tools that measure the impact of AI on society, stakeholders can begin addressing issues like bias and sustainability. These insights can guide companies in selecting responsible models and assist legislators in crafting effective regulations.
Channel: TED
Video description

AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.