When AI Can Fake Reality, Who Can You Trust? | Sam Gregory | TED

The Challenge of Differentiating Real from Fake

The speaker discusses the increasing difficulty in distinguishing between AI-generated and human-generated content, particularly with the rise of deepfakes. The harm caused by falsified sexual images is highlighted, along with the growing problem of deceptive and malicious audiovisual AI.

Spotting Real from Fake

  • It is becoming harder to differentiate between real and fake content.
  • Advances in generative AI and deepfake techniques make it easier to create realistic fakes.
  • Falsified sexual images were initially the main concern, but now the problem extends to various contexts worldwide.
  • Advances in generative AI not only make it easier to create fake reality but also lead to dismissing actual reality as possibly faked.

Growing Harm and Challenges

The speaker emphasizes that while audiovisual AI is not the root cause of societal problems, it contributes significantly. Examples are given, such as audio clones in electoral contexts, claims clouding human rights evidence, targeting women with sexual deepfakes, and impersonating news anchors.

Deceptive and Malicious AI

  • Audiovisual AI contributes to societal problems without being their root cause.
  • Audio clones proliferate in electoral contexts.
  • Claims of "is it or isn't it" regarding authenticity cloud human rights evidence.
  • Women are targeted through sexual deepfakes in public and private settings.
  • Synthetic avatars impersonate news anchors.

WITNESS: Protecting Rights with Video Technology

The speaker introduces WITNESS, a human-rights group focused on using video technology for protection and defense. They have coordinated a global effort called "Prepare, Don't Panic" for the past five years. This effort includes a deepfakes rapid-response task force composed of media forensics experts who debunk deepfakes and claims of deepfakes.

WITNESS and "Prepare, Don't Panic"

  • WITNESS is a human-rights group utilizing video technology for protection.
  • The "Prepare, Don't Panic" effort has been ongoing for five years.
  • A deepfakes rapid-response task force is part of this effort.
  • Media forensics experts and companies collaborate to debunk deepfakes and claims of deepfakes.

Deepfake Case Studies

The speaker discusses three recent audio clips received by the task force. In each case, people claimed the clips were deepfaked, and experts analyzed them to assess their authenticity.

Case Studies: Sudan, West Africa, and India

  • Three audio clips from Sudan, West Africa, and India were submitted to the task force.
  • People claimed that these clips were deepfaked.
  • Experts used machine-learning algorithms trained on synthetic speech examples to authenticate the Sudan clip.
  • Challenges analyzing audio from Twitter prevented a definitive conclusion in the West Africa case.
  • Leaked audio of an Indian politician was confirmed to be at least partially real, using a voice model trained on that politician's speech.

Challenges in Discerning Fact from Fiction

The speaker highlights the difficulty in rapidly and conclusively differentiating between true and false content. Examples are given of political leaders targeted by deepfakes, incorporation of fake footage into political ads, and sharing AI-generated imagery as real from crisis zones.

Warning Signs: Fact vs. Fiction

  • Discerning fact from fiction poses significant challenges.
  • Political leaders have been targeted with audio and video deepfakes.
  • Fake footage is incorporated into political ads.
  • AI-generated imagery is shared as real from crisis zones.

Diminishing Baseline of Trustworthy Information

The speaker emphasizes the importance of maintaining a shared baseline of trustworthy information for thriving democracies. The specter of AI is used to manipulate beliefs and deny inconvenient truths.

Trustworthy Information in Democracies

  • Dismissing stories from human rights defenders and journalists is not new.
  • Deceptive shallow fakes have been used to spread confusion and disinformation.
  • Diminishing the baseline of shared, trustworthy information harms democracies.
  • The specter of AI lets people plausibly claim that desired narratives are real and dismiss unwanted truths as fake.

Preventing a Future of Misinformation

The speaker suggests that by taking action now, we can prevent a future dominated by misinformation. Panic plays into the hands of those who abuse fears, and outdated detection skills are insufficient. Structural solutions are needed to discern authenticity, fortify credibility, and develop powerful detection technology.

Taking Action Against Misinformation

  • Panic is not helpful; it benefits governments, corporations, and those spreading confusion.
  • Detection skills and tools should be accessible to journalists, community leaders, and human-rights defenders.
  • Outdated tips for spotting deepfakes are no longer effective.
  • Technical advances erase visible and audible clues that help differentiate real from fake.
  • Big-picture structural solutions are necessary for authenticating content and fortifying critical voices.

Enhancing Detection Skills

The speaker emphasizes the importance of providing detection skills and tools to those who need them. Journalists, community leaders, and human-rights defenders require assistance in identifying glitches or signs of deepfakery in audio recordings.

Empowering Detection Skills

  • Journalists, community leaders, and human-rights defenders lack access to detection skills.
  • Identifying glitches or signs of deepfakery in audio recordings is challenging but crucial.

Detection and Responsibility in the Age of AI

The speaker discusses the challenges of detecting deepfakes and the need for responsible use of AI in media.

Challenges in Deepfake Detection

  • Detection tools often work on specific types of deepfakes, requiring multiple tools to cover different techniques.
  • These tools struggle with low-quality social media content.
  • The reliability of confidence scores is uncertain without knowledge about the underlying technology's effectiveness.
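The two weaknesses listed above (fragility on low-quality media and opaque confidence scores) can be illustrated with a small sketch. The aggregation rule and numbers below are hypothetical, not taken from any real detection tool: the idea is simply that a point-estimate confidence score should widen into an uncertainty range as media quality drops.

```python
# Sketch: combine hypothetical per-technique detector scores, and report
# a wider uncertainty range when the media is low quality (e.g. heavily
# recompressed social media content degrades every detector).

def overall_suspicion(scores, media_quality):
    """Return a (low, high) suspicion range from per-detector scores.

    scores: dict of detector name -> confidence in [0, 1] (hypothetical).
    media_quality: in [0, 1]; lower quality means the point estimate is
    less trustworthy, so the reported range is widened.
    """
    best = max(scores.values())
    margin = (1.0 - media_quality) * 0.3  # arbitrary illustrative widening
    return (max(0.0, best - margin), min(1.0, best + margin))

scores = {"face_swap": 0.2, "voice_clone": 0.7, "lip_sync": 0.4}
print(overall_suspicion(scores, 1.0))  # pristine media: tight estimate
print(overall_suspicion(scores, 0.2))  # degraded media: wide range
```

Note that each detector targets one manipulation technique, which is why several must be run; and without knowing how each underlying model was trained, even the combined score is hard to interpret.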

Limitations of AI Manipulation Detection

  • Tools designed to spot AI manipulation may not detect manual edits.

Accessibility and Availability of Detection Tools

  • Not everyone has access to detection tools, creating a trade-off between security and access.
  • Making detection tools freely available to everyone can render them useless, since deceivers can probe them and develop new techniques that evade detection.
  • Journalists, community leaders, and election officials should have access to these tools as they are the first line of defense against misinformation.

Importance of Media Literacy

  • AI will be pervasive in communication, necessitating a better understanding of what we consume.
  • Content provenance and disclosure are crucial for transparency in AI-generated media.
  • Efforts are being made to add invisible watermarking and cryptographically signed metadata to files, providing information about how AI was used.
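The cryptographically signed metadata mentioned above binds a disclosure statement ("this audio was synthesized with AI") to the exact bytes of a media file, so any tampering is detectable. The following is a minimal standard-library sketch of that idea; real provenance systems such as the C2PA standard use public-key signatures and embed the manifest in the file itself, whereas here an HMAC with a shared key stands in for the signature.

```python
# Sketch of signed provenance metadata: hash the media, attach an AI
# disclosure, and sign the pair. An HMAC stands in for a real signature.
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-secret-key"  # stand-in for a private key

def sign_manifest(media_bytes, ai_disclosure):
    """Bind a disclosure statement to the exact bytes of a media file."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "ai_disclosure": ai_disclosure,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest, signature

def verify_manifest(media_bytes, manifest, signature):
    """Check the signature and that the media bytes are unchanged."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    return manifest["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()

media = b"...video bytes..."
manifest, sig = sign_manifest(media, "audio track synthesized with AI")
print(verify_manifest(media, manifest, sig))         # unmodified: True
print(verify_manifest(media + b"x", manifest, sig))  # tampered: False
```

This also shows why the talk's privacy concern matters: whatever key or identity signs the manifest can reveal who made the media, so the infrastructure must be designed to disclose how AI was used without forcing disclosure of who used it.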

Balancing Privacy and Authenticity

  • Building infrastructure for authenticity must ensure privacy protection and avoid global backlashes.

Pipeline of Responsibility

  • A responsible pipeline is necessary from foundation models through deployment into systems, APIs, apps, and platforms where media is consumed.

Transparency, Accountability, and Liability

  • Governments should ensure transparency, accountability, and liability within the pipeline for AI technologies.

The Consequences of Inaction

  • Without effective detection methods, reliable provenance tracking, and a responsible pipeline, it becomes easier to fake reality or to dismiss reality as potentially faked.

Channel: TED
Video description

We're fast approaching a world where widespread, hyper-realistic deepfakes lead us to dismiss reality, says technologist and human rights advocate Sam Gregory. What happens to democracy when we can't trust what we see? Learn three key steps to protecting our ability to distinguish human from synthetic, and why fortifying our perception of truth is crucial to our AI-infused future.