How to Keep AI Under Control | Max Tegmark | TED

The Dangers of Superintelligence

In this section, the speaker reflects on their previous TED talk warning about the dangers of superintelligence, and observes that events have since unfolded even worse than predicted.

Reflecting on Past Warnings

  • Five years ago, the speaker warned about the dangers of superintelligence.
  • Admits that events have unfolded even worse than they predicted.
  • Expresses surprise that governments have allowed AI companies to race ahead without any meaningful regulation.

Rapid Progress of AI

This section focuses on the rapid progress of artificial intelligence (AI) and its implications for human-level cognitive tasks.

Accelerated Progress

  • Shows an abstract landscape in which elevation represents how difficult each cognitive task is for AI to perform at human level.
  • Highlights how quickly AI has progressed, surpassing many tasks previously considered challenging.
  • States that AI is on track to match human intelligence across all cognitive tasks.

Artificial General Intelligence (AGI)

This section discusses AGI as a goal for companies like OpenAI, Google DeepMind, and Anthropic, and explores the potential timeline for achieving AGI and superintelligence.

Pursuit of AGI and Superintelligence

  • Defines artificial general intelligence (AGI) as the stated goal for companies like OpenAI, Google DeepMind, and Anthropic.
  • Mentions the pursuit of superintelligence, which surpasses human intelligence.
  • Indicates that some experts believe there may only be a few years between AGI and superintelligence.
  • Notes that previous estimates placed AGI decades away, while recent predictions suggest it could arrive within just a few years.

Progress in AI

This section highlights the remarkable progress made in AI, including advancements in robot movements and deepfake technology.

Recent Advancements

  • Compares past robot movements to their current ability to dance.
  • Shows an image produced by Midjourney last year alongside the far more realistic result the same prompt produces this year.
  • Mentions the increasing realism of deepfakes, exemplified by a video featuring a deepfake Tom Cruise.

Language Models and Representations

This section discusses language models' mastery of language and knowledge, as well as their representation of the world.

Language Models and Turing Test

  • States that large language models have arguably mastered language and knowledge well enough to pass the Turing test.
  • Acknowledges skeptics who question whether these models truly understand the world, but mentions recent findings of a literal geographic map of the world recovered from Llama-2's internal representations.
  • Highlights AI's ability to build geometric representations of abstract concepts like truth and falsehood.
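The "geometric representations" claim can be illustrated with a toy linear probe. Everything below is synthetic and purely illustrative: the "activations" are random vectors standing in for a language model's hidden states, and the probe direction is simply the difference of class means (a mass-mean probe). This sketches the probing technique, not the actual Llama-2 finding.

```python
import random

random.seed(0)
DIM = 8

# Assumption for illustration: statements labeled true vs. false produce
# hidden activations clustered around two different centers. Real probes
# use activations extracted from an actual language model.
true_center = [1.0] * DIM
false_center = [-1.0] * DIM

def fake_activation(center):
    """Synthetic stand-in for a model's hidden state."""
    return [c + random.gauss(0, 0.5) for c in center]

true_acts = [fake_activation(true_center) for _ in range(50)]
false_acts = [fake_activation(false_center) for _ in range(50)]

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(DIM)]

# Mass-mean probe: the "truth direction" is the difference of class means.
direction = [t - f for t, f in zip(mean(true_acts), mean(false_acts))]

def probe(activation):
    """Classify by the sign of the projection onto the truth direction."""
    score = sum(a * d for a, d in zip(activation, direction))
    return score > 0

accuracy = (sum(probe(a) for a in true_acts)
            + sum(not probe(a) for a in false_acts)) / 100
print(f"probe accuracy: {accuracy:.2f}")
```

If such a direction separates true from false statements in a real model's activations, that is evidence the model represents the concept geometrically.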

The Default Outcome

This section explores the potential outcome of AI and superintelligence, with a focus on the default scenario predicted by Alan Turing.

The Machines Take Control

  • Quotes Alan Turing's prediction that the default outcome is for machines to take control.
  • Emphasizes that perceiving AI and superintelligence as just another technology may lead to underestimating their potential impact.

Risks and Warnings

This section discusses warnings from prominent figures in the field regarding the risks associated with AI and superintelligence.

Warnings from Experts

  • Mentions OpenAI CEO Sam Altman's warning that it could be "lights out for all of us."
  • Cites Anthropic CEO Dario Amodei, who estimates a 10-25% risk of human extinction from AI.
  • Notes that human extinction from AI has gained mainstream attention, with AGI CEOs and EU officials issuing warnings.

The Inevitability Debate

This section addresses the evolving perception of AGI inevitability and introduces the speaker's intention to discuss positive alternatives.

Changing Perceptions

  • Describes how perceptions have shifted from viewing AGI as decades away to considering it inevitable.
  • Acknowledges the need for optimism despite concerns about AGI.

The Need for AI Safety

This section emphasizes the importance of developing a convincing plan for AI safety and the limitations of current evaluation methods.

Addressing AI Safety

  • Highlights the need for a more comprehensive approach to AI safety beyond evaluating risky behavior.
  • Calls for provably safe AI systems that remain under human control, rather than reliance on guardrails that a superintelligent adversary could circumvent.

Provably Safe AI

In this section, the speaker discusses the concept of provably safe AI and how it can revolutionize various aspects of technology.

Formal Verification and Program Synthesis

  • Formal verification is the field of mathematically proving properties of code.
  • The speaker believes that AI will revolutionize automatic theorem proving and program synthesis.
  • The vision is for humans to write specifications, and for a powerful AI to synthesize the tool together with a proof that it meets the spec.
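As a toy illustration of this spec-then-synthesize workflow (the names below are my own, not from the talk): the human contribution is an executable specification, and any candidate program an AI produces can be checked against it. A real system such as Dafny would prove the spec holds for all inputs; this sketch only samples a few.

```python
from collections import Counter

def is_sorted(ys):
    return all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))

def is_permutation(xs, ys):
    return Counter(xs) == Counter(ys)

def meets_sort_spec(program, samples):
    """Human-written spec: the output must be sorted and be a permutation
    of the input. Checked here over a sample of inputs only."""
    return all(is_sorted(program(list(xs))) and
               is_permutation(xs, program(list(xs)))
               for xs in samples)

def candidate(xs):
    """Stand-in for an AI-synthesized program: insertion sort."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

samples = [[], [3, 1, 2], [5, 5, 1], list(range(10, 0, -1))]
print(meets_sort_spec(candidate, samples))  # True for this candidate
```

The key design point is that the human only needs to get the short specification right; the (possibly opaque) synthesis process is held to account by the check.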

Machine Learning and Verification

  • Machine learning excels at discovering algorithms; once learned, an algorithm can be re-implemented in a different computational architecture that is easier to verify.
  • Verifying a proof is easier than discovering it, so humans only need to understand or trust their proof-checking code.
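The asymmetry between discovering and checking can be illustrated with a classic analogy (my illustration, not the speaker's example): finding a factorization requires search, while verifying a claimed one takes a single multiplication pass plus primality checks. The proof checker is short and simple enough to trust.

```python
def find_factors(n):
    """Discovery: trial division, potentially expensive for large n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def verify_factors(n, factors):
    """Verification: cheap and simple -- multiply the claimed prime
    factors back together and confirm each one is prime."""
    def is_prime(p):
        return p > 1 and all(p % d for d in range(2, int(p**0.5) + 1))
    product = 1
    for f in factors:
        product *= f
    return product == n and all(is_prime(f) for f in factors)

n = 2 * 3 * 3 * 41
fs = find_factors(n)        # the hard, searching step
ok = verify_factors(n, fs)  # the easy, trustworthy step
print(fs, ok)
```

In the same way, humans need not follow how a powerful AI discovered a correctness proof; they only need to understand or trust the small proof-checking program.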

Training AIs to Extract Learned Algorithms

  • If an AI cannot directly create a provably safe tool, another possibility is to train an AI to learn the desired behavior, then use a second AI to extract the learned algorithm and knowledge.
  • This approach aligns with mechanistic interpretability in the field of AI.

Example of Provably Safe System

  • The speaker presents an example where an algorithm for addition is first machine-learned from data using a recurrent neural network.
  • An AI tool is used to distill the learned algorithm into a Python program.
  • The formal verification tool Dafny is then used to prove that this program correctly adds any numbers.
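Dafny proves the distilled program correct for *all* inputs; as a lightweight stand-in, the sketch below pairs a hypothetical distilled addition routine (an explicit carry algorithm of the kind one might extract from a trained network) with property-based spot checks against the spec "output equals the true sum". Sampling is only an illustration of the spec, not a proof.

```python
import random

def distilled_add(a, b):
    """Hypothetical distilled program: digit-by-digit base-10 addition
    with an explicit carry, for non-negative integers. A symbolic
    algorithm like this is far easier to verify than the recurrent
    network it was extracted from."""
    result, place, carry = 0, 1, 0
    while a or b or carry:
        s = a % 10 + b % 10 + carry
        result += (s % 10) * place
        carry = s // 10
        a //= 10
        b //= 10
        place *= 10
    return result

# Spot-check the spec over random inputs (Dafny would prove it universally).
random.seed(1)
for _ in range(1000):
    a, b = random.randint(0, 10**12), random.randint(0, 10**12)
    assert distilled_add(a, b) == a + b
print("spot checks passed")
```

The payoff of distillation is exactly this: the extracted program has a small, legible structure that a verifier can reason about, unlike the weights of the original network.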

Possibility of Provably Safe AI

  • Provably safe AI is possible but requires time and work.
  • It's important to remember that many benefits of AI do not require superintelligence.

The Future with Artificial Intelligence

In this section, the speaker emphasizes the potential of AI and encourages a responsible approach to its development.

Embracing the Future with AI

  • The speaker believes that we can have a long and amazing future with AI.
  • Instead of pausing AI, we should pause the reckless race towards superintelligence.
  • It is important to avoid obsessively training ever-larger models that we don't understand.

The Warning from Ancient Greece

  • The speaker warns against hubris in pursuing artificial intelligence, drawing parallels to the story of Icarus.
  • While AI provides incredible intellectual wings, it's crucial not to obsessively try to fly too close to the sun.

By following these guidelines, we can harness the potential of AI while ensuring responsible development.

Channel: TED
Video description

The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI – which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around.