The LLM Revolution Is Over. The Physical AI Revolution Is Coming Fast
Where Are We on the Path to AGI?
Understanding AGI and Human Intelligence
- The speaker questions the term "AGI" (Artificial General Intelligence), arguing that human intelligence is not truly general, thus making the label a misnomer.
- While machines will eventually surpass human intelligence, significant conceptual breakthroughs are still needed before this can happen.
Misunderstandings in AI Capabilities
- Current AI advancements won't lead to human-level or superintelligence through mere refinements of existing paradigms; a paradigm shift is necessary.
- The limitations of large language models (LLMs) are becoming apparent, particularly their inability to predict consequences of actions, which is essential for intelligent behavior.
The Need for World Models
- Intelligent systems must anticipate outcomes and plan actions effectively; this requires developing world models that LLMs currently lack.
- Learning in the real world differs from current machine learning: humans can master new tasks quickly from little experience, whereas today's autonomous systems require extensive training data.
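The planning loop described above can be sketched in a few lines: a world model maps a (state, action) pair to a predicted next state, and planning becomes a search over action sequences for the one whose predicted outcome lands closest to a goal. This is a toy illustration, not the speaker's actual method; the hand-coded `world_model` stands in for what would, in practice, be a large learned predictor.

```python
from itertools import product
import numpy as np

def world_model(state, action):
    """Toy hand-coded dynamics standing in for a learned predictor:
    the next state is the current state displaced by the action."""
    return state + action

def plan(state, goal, horizon=3):
    """Exhaustively search short action sequences and return the one
    whose predicted end state lands closest to the goal."""
    moves = [np.array([dx, dy]) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    best_seq, best_cost = None, float("inf")
    for seq in product(moves, repeat=horizon):
        s = state
        for a in seq:                       # roll the model forward
            s = world_model(s, a)
        cost = np.linalg.norm(s - goal)     # distance from the goal
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

start, goal = np.array([0.0, 0.0]), np.array([2.0, -1.0])
seq, cost = plan(start, goal)
print([a.tolist() for a in seq], round(cost, 3))
```

Real systems would replace the exhaustive search with gradient-based or sampled optimization, but the structure (predict, score, pick actions) is the same.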
Embedded Assumptions About Intelligence
- Much of the global AI debate relies on flawed assumptions about intelligence being primarily linguistic rather than rooted in understanding complex physical and social realities.
- True intelligence involves comprehending messy real-world data rather than just predicting text sequences.
Future Directions in AI Development
- The next wave of AI will focus on systems capable of understanding high-dimensional, noisy sensory data and building predictive models about their environments.
- This upcoming "physical AI revolution" aims to create controllable systems that can reason and accomplish tasks safely.
Breakthrough Innovations in AI Research
Key Factors Behind Rapid Progress
- Reflecting on his time at Meta, the speaker identifies open research as a crucial factor driving rapid advancements in AI over the past decade.
Importance of Open Research
- Open access to research papers and code has accelerated progress by allowing more contributors to participate in advancing the field.
Concerns Over Industry Trends
- Recent trends show a shift towards closed research practices among major industry labs like Google and Anthropic, which could hinder future innovation.
AI Development and Open Research
The State of AI Research
- The current landscape of AI research shows a divide, with the US lagging behind China in open-source models. Chinese research labs are producing high-quality models that are widely adopted in the research community.
- There is uncertainty regarding the openness of new AI developments from companies like Meta, which could hinder progress in the field.
Advanced Machine Intelligence (AMI)
- AMI aims to create a new generation of AI systems that learn from sensory data such as video and physical interactions, rather than relying solely on language.
- The speaker, AMI's founder, notes that the company builds on a project previously developed at Meta, focused on world models that predict future states from the actions taken.
Learning from Sensory Data
- The approach at AMI promotes a bottom-up research environment where collaboration drives innovation rather than top-down management.
- AMI's goal is to develop systems that can understand and predict outcomes by learning from sensory inputs, enabling them to plan sequences of actions effectively.
Predictive Capabilities
- A significant aspect of AMI's architecture involves creating predictive models that can anticipate changes in the environment based on learned experiences.
- Current prototypes demonstrate self-supervised learning capabilities using unlabeled videos, allowing systems to identify inconsistencies or impossible scenarios within visual data.
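The idea of flagging impossible scenarios through self-supervised prediction can be illustrated with a toy example: a predictor forecasts each frame from the preceding ones, and frames with large prediction error register as "surprising." The constant-velocity `predict_next` below is a hypothetical stand-in for a learned video model, not the actual prototype architecture.

```python
import numpy as np

def predict_next(prev, curr):
    """Constant-velocity predictor standing in for a learned video model."""
    return curr + (curr - prev)

def surprise(frames):
    """Per-step prediction error: large values flag frames that are
    inconsistent with what came before."""
    errs = []
    for t in range(2, len(frames)):
        pred = predict_next(frames[t - 2], frames[t - 1])
        errs.append(float(np.linalg.norm(frames[t] - pred)))
    return errs

# A ball moving smoothly vs. one that "teleports" mid-sequence.
smooth = [np.array([t, 0.0]) for t in range(6)]
impossible = smooth[:3] + [np.array([50.0, 0.0])] + smooth[4:]
print(max(surprise(smooth)), max(surprise(impossible)))
```

No labels are needed: the training signal is simply "predict what comes next," which is what makes the approach self-supervised.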
Complex Systems Modeling
- The architecture used by AMI is non-generative and focuses on extracting information efficiently while making predictions about input data.
- This methodology aims to build phenomenological models for complex systems across various domains, including industrial processes and biological systems.
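A minimal sketch of the non-generative idea, under simplifying assumptions: the encoder maps a high-dimensional noisy input to a small embedding that keeps only its predictable structure, and prediction (and the loss) happen entirely in that embedding space rather than in input space. The `encoder` and `predictor` here are hand-coded stand-ins for what would be jointly learned modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x):
    """Abstraction step: keep only the mean and spread of the signal,
    discarding unpredictable per-sample noise (a stand-in for a
    learned encoder)."""
    return np.array([x.mean(), x.std()])

def predictor(z):
    """Predict the next embedding from the current one. Here the
    'dynamics' are a known drift of +1 in the mean; in a real system
    this map would be learned."""
    return z + np.array([1.0, 0.0])

# Two consecutive noisy 1000-dimensional observations whose mean drifts by 1.
x_t  = 3.0 + rng.normal(0, 0.1, size=1000)
x_t1 = 4.0 + rng.normal(0, 0.1, size=1000)

z_pred = predictor(encoder(x_t))
z_true = encoder(x_t1)
# The error is measured in embedding space, never in input space:
print(float(np.linalg.norm(z_pred - z_true)))
```

The design choice this illustrates: a generative model would have to reproduce all 1000 noisy values and fail on the unpredictable noise, whereas the embedding-space predictor only answers for the structure that is actually predictable.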
Digital Twin Concept
- The discussion touches on digital twins, detailed simulations of physical systems, which become impractical when they attempt to capture every detail of a complex system.
- Understanding a phenomenon requires abstract representations rather than exhaustive detail; generative models, which attempt to reproduce every detail of their inputs, often fail to provide the abstraction needed for effective prediction.
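A concrete illustration of a phenomenological model (this example is not from the talk itself): the ideal gas law predicts pressure from three macroscopic variables, with no need to simulate the ~10^23 individual molecules an exhaustive digital twin would track.

```python
# Phenomenological model: three macroscopic numbers predict pressure,
# no per-molecule simulation required.
R = 8.314  # gas constant, J/(mol*K)

def pressure(n_mol, volume_m3, temp_k):
    """Ideal gas law PV = nRT, solved for P (in pascals)."""
    return n_mol * R * temp_k / volume_m3

# 1 mol at room temperature in 24.5 L is roughly 1 atm (~100 kPa).
print(round(pressure(1.0, 0.0245, 295.0) / 1000, 1), "kPa")
```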
The Future of AI: Openness vs. Proprietary Systems
The Debate on Openness in AI
- The discussion centers around whether openness in AI is a competitive advantage or a public good that needs protection, questioning where the limits of openness should be.
- Historically, platforms have transitioned to open source; the internet's infrastructure evolved from proprietary systems to predominantly open-source solutions like Linux.
- A similar shift is anticipated for AI, especially for countries outside China and the US, emphasizing the need for diverse contributions to create comprehensive AI systems.
Importance of Open Source in AI Development
- Proprietary systems alone cannot produce robust AI; access to multilingual and culturally diverse data is essential for fine-tuning these systems.
- Advocating for a global consortium to train an open-source model that serves as a repository of human knowledge is crucial for ensuring diversity and representation in AI.
Risks Associated with Proprietary Control
- The primary risk of concentrated power in AI lies not in apocalyptic scenarios but rather in how it could undermine democracy and cultural diversity if controlled by a few companies.
- A diverse population of AI assistants is necessary, paralleling the need for diversity in media, which can only be achieved through open-source initiatives.
Identifying Real Risks of AI
Immediate Concerns Over Apocalyptic Narratives
- There’s skepticism about exaggerated fears surrounding AI taking over; more pressing issues include concentration of power among corporations and governments.
- Centralized control poses significant risks as it will dictate our information landscape; building an alternative open infrastructure is essential.
Economic Implications and Job Displacement
- Predictions suggest that while productivity may increase by 6% annually due to AI, this won't lead to mass unemployment because technology adoption depends on people's ability to learn new skills.
AI Alignment: Technical or Political Challenge?
Understanding Alignment Issues
- The concept of alignment often focuses on preventing LLM outputs from being offensive or inappropriate but overlooks broader governance questions regarding whose values are prioritized.
- Current discussions about alignment should consider evolving architectures beyond LLM frameworks, emphasizing objective-driven designs with built-in guardrails during inference.
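One way to picture objective-driven design with inference-time guardrails, sketched here as a hypothetical decision rule (not an implementation from the talk): candidate actions are scored against a task objective, but hard guardrails filter the candidates first, so no task score can override a safety constraint.

```python
def plan_with_guardrails(candidates, task_cost, guardrails):
    """Objective-driven inference sketch: rank candidate actions by a
    task objective, but discard any candidate that violates a hard
    guardrail, regardless of how good its task score is."""
    safe = [a for a in candidates if all(g(a) for g in guardrails)]
    if not safe:
        return None  # refuse rather than act unsafely
    return min(safe, key=task_cost)

# Toy example: pick a speed close to a target, subject to a hard limit.
candidates = [10, 30, 50, 70, 90]
task_cost = lambda speed: abs(speed - 80)   # want to go fast
speed_limit = lambda speed: speed <= 60     # guardrail: hard cap

print(plan_with_guardrails(candidates, task_cost, [speed_limit]))
```

The point of the structure is that the guardrail is enforced at inference time, by construction, rather than hoped for as a property of training data.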
AI Behavior and Human Agency
Concerns About LLM Safety
- The behavior of Large Language Models (LLMs) cannot be guaranteed due to the limited data they are trained on, which is only a small subset of all possible human interactions.
- Projecting future AI systems as LLMs with human-like intelligence raises concerns about potential dangers, but this perspective may be misguided.
AI's Impact on Work
- AI is reshaping work dynamics, often in unexpected ways; there is a need to explore how AI can augment rather than replace human intelligence.
- There are significant transitional costs to integrating AI into the workforce that society may be underestimating, and the question of job loss may itself be framed incorrectly.
Preparing for an AI-Rich Future
- Students must focus on learning fundamental skills that have longevity and will remain relevant despite rapid technological changes. This includes being adaptable to changing job markets.
- A recommendation for students: prioritize courses in foundational subjects like quantum mechanics over more transient topics such as mobile app programming, as these fundamentals provide versatile skills applicable across various fields.
Future Predictions for 2035
Vision of Success and Failure
- By 2035, success would mean AI systems that understand the physical world and, in specific domains, reach intelligence comparable to humans, with computers outperforming humans at many tasks.
- The evolution towards advanced AI will not happen overnight; it requires numerous conceptual breakthroughs documented in research papers that may initially go unnoticed until their significance becomes apparent years later.
The Role of Assistive Technology
- In the next 5 to 10 years, we can expect assistive technologies integrated into daily life through devices like smart glasses or wearables, enhancing our decision-making processes by amplifying our intelligence.
The Future of Intelligence and AI
The Value of Intelligence
- Intelligence is arguably the world's most valuable commodity, and increasing the total amount of intelligence is seen as an intrinsically good goal.
- The relationship between humans and superintelligent systems will resemble that of leaders in various fields (business, academia, politics) who often work with individuals smarter than themselves.
Rapid Advancements in AI
- Five years ago, discussion of AI advances was largely speculative, with significant changes predicted to be as much as 90 years away; recent developments have accelerated far beyond those timelines.
- The next five years are expected to bring even faster changes in technology and society. Preparation for these shifts is crucial for thriving as a species.
Public Perception vs. Technological Progress
- There is a gap between public perception and technological progress: breakthroughs often go unrecognized at first, even by experts, while the public experiences the resulting products as sudden change (e.g., ChatGPT's emergence).