Demis Hassabis, Dario Amodei Debate What Comes After AGI at World Economic Forum | AI1G

The Day After AGI: Insights and Predictions

Introduction to the Conversation

  • The conversation is highly anticipated: the moderator previously hosted a discussion between Dario and Demis in Paris, and this session picks up where that one left off.
  • The speaker likens this dialogue to a reunion of iconic bands, emphasizing the significance of their discussions on artificial general intelligence (AGI).

Timeline for Achieving AGI

  • Dario previously predicted that models capable of performing at a Nobel laureate level across various fields would emerge by 2026–27. He reflects on whether he still stands by this timeline.
  • Dario believes that advancements in AI will occur sooner than expected, driven by models improving coding capabilities and accelerating research development.

Current Progress in AI Development

  • Engineers at Anthropic are increasingly relying on AI models for coding tasks, indicating significant progress towards automation in software development.
  • Dario expresses uncertainty about how long it will take to achieve full automation but suggests it may happen faster than anticipated due to rapid advancements.

Perspectives on Cognitive Capabilities

  • Demis maintains a cautious stance, predicting a 50% chance of achieving human-like cognitive capabilities by the end of the decade while acknowledging remarkable progress in certain areas like coding.
  • He highlights challenges in automating complex scientific inquiries compared to more straightforward engineering tasks due to experimental verification requirements.

Risks and Challenges Ahead

  • Demis points out potential missing elements necessary for achieving higher levels of scientific creativity and emphasizes the importance of human involvement in self-improvement loops within AI systems.
  • The conversation shifts towards discussing risks associated with fully autonomous systems and how they might impact future developments.

Changes in Competitive Landscape

  • The speaker notes a shift in dynamics within the AI race over the past year, particularly regarding Google DeepMind's position relative to OpenAI after notable events like "code red."
  • Demis expresses confidence about regaining leadership status through focused efforts and improvements made with new models like Gemini 3.

AI Model Development and Future Predictions

Concerns for Independent Model Makers

  • The discussion highlights concerns regarding independent model makers' sustainability, especially in light of extraordinary valuations and the pressure to generate revenue before reaching maturity.

Revenue Growth and Model Capability

  • The relationship between computational power and cognitive capability is emphasized, suggesting that better models lead to exponential growth in both capabilities and revenue.
  • Revenue reportedly grew from $0 to $100 million in 2023 and to $1 billion in 2024, with roughly $10 billion projected by 2025, indicating significant scaling potential.
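The trajectory cited in the panel implies roughly tenfold year-over-year growth. A minimal sketch, using only the figures as reported in the discussion (not official financials), makes the implied multiples explicit:

```python
# Revenue figures as cited in the panel discussion (USD); illustrative only.
revenue = {2023: 100e6, 2024: 1e9, 2025: 10e9}

years = sorted(revenue)
for prev, curr in zip(years, years[1:]):
    # Year-over-year growth multiple between consecutive years.
    multiple = revenue[curr] / revenue[prev]
    print(f"{prev} -> {curr}: {multiple:.0f}x growth")
```

Running this shows a 10x multiple for each transition, which is the "exponential growth" pattern the discussion attributes to improving model capability.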

Research-Focused Companies

  • Confidence is expressed that companies led by researchers focusing on solving important scientific problems will succeed, contrasting them with those lacking such leadership.

Closing the Loop in AI Models

  • The conversation shifts to the concept of "closing the loop" where models can self-improve. There’s skepticism about whether this will be achieved or if competition will remain open among followers.

Potential of AGI (Artificial General Intelligence)

  • While some aspects of AI are already aiding coding and research, achieving full self-improvement may require AGI, particularly in complex domains where verification is challenging.

Insights from "Machines of Loving Grace"

  • Dario reflects on his previous work discussing AI's potential benefits while acknowledging significant risks. He emphasizes optimism about overcoming these challenges rather than succumbing to pessimism.

Addressing Risks Associated with AI

  • Dario shares his ongoing commitment to addressing risks associated with powerful AI technologies while maintaining an optimistic outlook on their potential benefits for humanity.

Technological Adolescence: Can We Survive?

The Question of Survival in Technological Advancement

  • The panel is posed a thought-provoking question: how do advanced civilizations manage to survive their technological adolescence without self-destruction? The urgency of this inquiry frames the discussion.
  • The speaker reflects on humanity's rapid approach to incredible technological capabilities, emphasizing that while progress is inevitable, the manner in which we handle it is not predetermined.
  • Concerns are raised regarding the control of highly autonomous systems and the potential misuse by individuals or nation-states, particularly authoritarian regimes like the CCP.
  • Economic implications such as labor displacement are discussed, with an acknowledgment that unforeseen challenges may arise as technology evolves.
  • The speaker stresses the need for collective action among leaders and societal institutions to address these risks effectively.

Urgency in Addressing AI Risks

  • There is a sense of urgency conveyed about dedicating efforts towards understanding and managing AI-related crises amidst other global issues.

Job Market Implications of AI

  • Discussion shifts to job impacts, with predictions that half of entry-level white-collar jobs could disappear within 1 to 5 years due to AI advancements.
  • Demis counters by noting no significant impact on the labor market has been observed yet; current unemployment trends are attributed more to post-pandemic overhiring than AI disruption.

Future Job Creation vs. Disruption

  • Demis suggests that while some jobs will be disrupted by new technologies, historically new roles emerge that may be more valuable and meaningful than those lost.
  • He anticipates initial impacts on junior-level positions but believes creative tools available today can enhance skill development beyond traditional internships.

Perspectives on Labor Market Evolution

  • The conversation continues with insights into how professionals should adapt by becoming proficient with emerging tools rather than relying solely on conventional career paths.
  • Predictions about future job landscapes post-Artificial General Intelligence (AGI) arrival indicate uncharted territory where human roles may drastically change.

Consensus on Current Labor Market Trends

  • Both speakers agree there hasn't been a noticeable impact from AI yet but acknowledge early signs within specific sectors like software development.
  • They note that companies may need fewer junior-level employees as they adapt to evolving technologies, and stress the importance of sensible workforce strategies during the transition.

The Future of Work and AI: Challenges and Opportunities

Adaptability of the Labor Market

  • The labor market has historically shown adaptability, transitioning from farming to factory work and then to knowledge-based jobs.
  • Concerns arise about the speed of technological advancement potentially overwhelming our ability to adapt within a 1 to 5-year timeframe.

Economic Implications and Policy Responses

  • There is skepticism regarding whether governments are adequately preparing for economic changes due to AI advancements.
  • Economists should focus not only on job displacement but also on equitable distribution of new wealth generated by productivity increases.

Human Condition Beyond Economics

  • Questions about meaning and purpose in life, traditionally derived from work, may become more pressing as job roles evolve or diminish.
  • New forms of activities that provide meaning, such as art and exploration, could emerge alongside technological advancements.

Risks of Public Backlash Against AI

  • There is a significant risk of public backlash against AI technologies reminiscent of past globalization reactions leading to political instability.
  • The complexity of geopolitical factors will influence how society responds to job displacement caused by AI.

Industry Responsibility and Geopolitical Dynamics

  • The industry must demonstrate unequivocal benefits from AI developments, like healthcare advancements through projects such as AlphaFold.
  • International cooperation is essential for establishing safety standards in technology deployment amidst competitive pressures between nations like the US and China.

Geopolitical Risks and AI Development

Geopolitical Risks in Technology

  • The speaker discusses the increasing geopolitical risks and questions what actions should be taken, noting that their company is trying to operate within a challenging environment.
  • Emphasizes that not selling chips is a crucial policy recommendation to manage these risks effectively, suggesting a preference for a longer timeline for technology development.
  • Raises concerns about the inability to slow down technological advancements due to competing geopolitical adversaries who are developing similar technologies at an accelerated pace.

Competition Between Nations

  • Questions the administration's logic of integrating foreign entities into U.S. supply chains by selling them chips, arguing it complicates competition dynamics between nations.
  • Compares the situation to nuclear weapons proliferation, asserting that profit motives should not override national security considerations regarding advanced technologies.

Concerns About Malign AI

  • Introduces concerns about powerful malign AI and acknowledges skepticism towards doomerism while recognizing emerging capabilities of models that could lead to deception.
  • Discusses how their research has evolved from theoretical frameworks to practical applications aimed at understanding and mitigating bad behaviors in AI models.

Addressing Risks Through Collaboration

  • Expresses belief in addressing risks through collaborative efforts in science, emphasizing the importance of proper control over AI developments.
  • Reflects on two decades of work in AI, highlighting both its potential benefits as a tool for scientific advancement and the necessity of managing associated risks effectively.

Discussion on AI and the Fermi Paradox

The Challenge of Ensuring Safe AI Systems

  • The speaker discusses the complexity of ensuring that AI systems are technically safe when multiple projects and individuals are competing against each other.
  • Emphasizes that while this is a challenging problem, it is also tractable with sufficient time and effort.

Philosophical Inquiry into the Fermi Paradox

  • Philip, co-founder of Star Cloud, raises a philosophical question regarding doomerism linked to the Fermi paradox—why we don't observe intelligent life in our galaxy.
  • The speaker argues that if advanced civilizations were destroyed by their own technology, we should see evidence (like paper clips or Dyson spheres), but we do not.

Speculations on Life's Evolution

  • The speaker suggests that there must be an alternative explanation for the Fermi paradox and shares a personal theory about humanity having passed through a "great filter" in evolution.
  • They express optimism about humanity's future, stating it is up to us to determine what happens next.

Future Developments in AI

  • Acknowledges the importance of monitoring how AI systems develop further, particularly regarding self-improvement capabilities.
  • Highlights ongoing research areas such as world models and continual learning as critical for advancing AI beyond current limitations.

Robotics and Breakthrough Potential

  • Discusses potential breakthroughs in robotics as an area of interest that may emerge alongside advancements in self-improving AI systems.
  • Suggests that these developments could lead to significant changes in technology and society.
Video description

Top AI leaders Demis Hassabis, Dario Amodei, and Zanny Minton Beddoes took the stage at the World Economic Forum 2026 for a high-stakes discussion on "The Day After AGI." They unpack the future of artificial general intelligence, its governance, societal impact, and risk landscape.