Can AI Match the Human Brain? | Surya Ganguli | TED
What Happened in AI Over the Last Decade?
Overview of AI's Evolution
- The emergence of a new type of intelligence in AI over the last decade is noted, characterized by remarkable capabilities and significant errors.
- Understanding AI requires placing it within the historical context of biological intelligence, starting from our common ancestor 500 million years ago.
Human Intelligence Development
- The evolution of human intelligence produced profound advances in mathematics and physics; humans grasped these complex concepts long before AI tools like ChatGPT existed.
- A multidisciplinary approach combining physics, neuroscience, psychology, and computer science is essential for developing a new science of intelligence.
Addressing Critical Gaps in AI
Data Efficiency
- Current AI systems require vast amounts of data compared to humans; for instance, language models are trained on roughly 1 trillion words, while humans become fluent on far less linguistic input.
- The legacy of evolution supplies relatively little information (the genome carries about 700 megabytes), underscoring the efficiency gap between human learning and AI data requirements.
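As a rough sanity check, the data-budget gap above can be put in numbers. The 1-trillion-word and 700-megabyte figures come from the talk; the ~100 million words a person hears while growing up is a commonly cited rough estimate, not a figure from the talk:

```python
# Back-of-envelope comparison of AI vs. human learning data budgets.
LLM_TRAINING_WORDS = 1e12  # words seen by a large language model (from the talk)
HUMAN_WORDS_HEARD = 1e8    # rough lifetime word exposure (assumption, not from the talk)
GENOME_BYTES = 700e6       # evolutionary "prior" passed down via the genome (from the talk)

ratio = LLM_TRAINING_WORDS / HUMAN_WORDS_HEARD
print(f"LLM sees ~{ratio:,.0f}x more words than a human learner")

# Even counting the genome as extra "training data", it adds well under
# a billion bytes of prior information.
print(f"Genome prior: ~{GENOME_BYTES / 1e6:.0f} MB")
```

Under these assumed figures, the model consumes about four orders of magnitude more linguistic data than a person.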
Scaling Laws and Their Limitations
- Revisiting scaling laws reveals that, under power-law scaling, each successive halving of error multiplies the required training data by a large constant factor, so data needs grow exponentially as error shrinks; this is unsustainable long-term.
- Large random datasets are redundant; creating non-redundant datasets can significantly improve learning efficiency.
New Approaches to Learning
- Developing theories and algorithms for selecting non-redundant training examples can bend unfavorable power-law scaling into much faster exponential decay of error with dataset size.
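The contrast between power-law and exponential error decay can be sketched numerically. The curves below are purely illustrative; the exponents and constants are made-up placeholders, not values from any actual scaling-law study:

```python
import numpy as np

# Illustrative comparison: a power-law error curve (training on random,
# redundant data) vs. an exponential curve (training on carefully
# selected, non-redundant data). Constants are arbitrary placeholders.
ns = np.array([1e3, 1e4, 1e5, 1e6])
power_law = ns ** -0.25            # error ~ N^(-alpha): slow decay
exponential = np.exp(-ns / 2e5)    # error ~ exp(-N/N0): fast decay at scale

for n, p, e in zip(ns, power_law, exponential):
    print(f"N={n:8.0f}  random: {p:.4f}  selected: {e:.4f}")
```

Note the crossover: at small dataset sizes the curves are comparable, but at large N the exponential curve drops far below the power law, which is the qualitative point about data selection.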
Enhancing Machine Learning Techniques
Beyond Traditional Training Methods
- Current training methods for large language models are inefficient; by analogy, we teach children through structured problem-solving rather than random exposure to vast amounts of information.
Energy Efficiency Challenges
Understanding Energy Efficiency in Biological Computation
Biological vs. Computational Approaches
- Biology computes answers just in time, using slow and unreliable intermediate steps, contrasting with traditional computing that relies on energy-intensive bit flips.
- While computers use complex circuits for addition, biological neurons directly add voltage inputs, leveraging the laws of electromagnetism for efficiency.
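A minimal sketch of this "physics does the addition" idea: modeling a neuron's membrane as a leaky integrator whose synaptic inputs are summed passively, rather than through an energy-intensive digital adder circuit. The weights and leak constant here are illustrative, not biophysical values:

```python
# A digital adder flips many transistor states per addition; a neuron's
# membrane sums incoming currents "for free" via physics. This sketch
# models that passive summation as one leaky-integrator time step.
def membrane_update(v, inputs, weights, leak=0.9):
    """Leak the membrane potential, then add the physically summed
    synaptic inputs (a single weighted sum, no logic circuitry)."""
    synaptic_current = sum(w * x for w, x in zip(weights, inputs))
    return leak * v + synaptic_current

v = 0.0
for spikes in [[1, 0, 1], [0, 1, 1], [1, 1, 0]]:
    v = membrane_update(v, spikes, weights=[0.5, 0.3, 0.2])
    print(f"membrane potential: {v:.3f}")
```

The point of the sketch is that the sum happens in one physical step, whereas a binary adder would decompose the same operation into many reliable, energy-costly bit flips.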
Rethinking Technology for AI
- To enhance energy-efficient AI, we must align computational dynamics with physical dynamics and explore fundamental limits on computation speed and accuracy based on energy budgets.
- Research has established lower bounds on error related to energy budgets in sensing computations performed by neurons.
Insights from Neuroscience
- The study found that increased neural activity correlates with elevated ATP levels, suggesting a predictive energy allocation principle within the brain.
- This principle indicates that the brain can anticipate its energy needs and allocate resources accordingly.
Exploring Quantum Neuromorphic Computing
Advancements Beyond Evolution
- By integrating neural algorithms discovered through evolution into quantum hardware, we can surpass evolutionary limitations.
- Neurons can be represented by atoms while synapses are replaced by photons, enabling communication through photon emission and absorption.
Potential Applications
- A quantum associative memory system built from atoms and photons could enhance memory capacity and recall compared to classical systems.
- New types of quantum optimizers can be developed using photons to solve optimization problems more efficiently.
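As a classical point of comparison for such a quantum associative memory, the sketch below implements a standard Hopfield network, which stores patterns in symmetric Hebbian weights and recalls them from corrupted cues; the pattern count and network size are illustrative:

```python
import numpy as np

# Classical baseline for associative memory: a Hopfield network.
# A quantum version built from atoms and photons would aim to exceed
# the capacity and recall quality of this kind of classical system.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # 3 stored memories, 64 "neurons"

# Hebbian storage: sum of outer products with zeroed diagonal.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Iteratively settle the network state from a noisy cue."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Corrupt 10 of 64 bits of the first memory, then recall it.
cue = patterns[0].copy()
flip = rng.choice(64, size=10, replace=False)
cue[flip] *= -1
recovered = recall(cue)
print("overlap with stored memory:", int(recovered @ patterns[0]))
```

With only a few stored patterns, the network is well below its capacity, so the corrupted cue settles back close to the original memory (a perfect overlap would be 64).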
The Role of Explainable AI in Understanding the Brain
Building Accurate Models of Neural Function
- Explainable AI allows scientists to create accurate models of complex systems like the retina while seeking a conceptual understanding rather than merely replicating functions.
Experimenting with Perception
- An experiment showed that retinal neurons respond strongly when a moving object abruptly changes course, violating Newton's first law; this response is crucial for understanding motion perception.
Accelerating Neuroscience Discovery with AI
Digital Twins of the Brain
- The concept of creating digital twins of the brain opens new pathways for accelerating neuroscience discovery using AI.
- A significant project at Stanford aims to build a digital twin of the entire primate visual system to understand its workings.
- This technology allows for bidirectional communication between brains and machines, enabling control over neural activity patterns.
Mind-Machine Communication
- Recent experiments demonstrated that AI can decode images shown to mice from their brain activity, revealing how they perceive the world.
- Researchers achieved the ability to induce specific hallucinations in mice by controlling just 20 neurons in their brains, showcasing direct manipulation of perception.
- The potential for limitless communication between brains and machines could lead to advancements in understanding, curing, and augmenting brain functions.
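A toy version of such decoding: simulate neurons that respond linearly to stimulus features, then fit a ridge-regression decoder that maps neural activity back to the stimulus. All dimensions and noise levels are illustrative and bear no relation to the actual mouse experiments:

```python
import numpy as np

# Simulated decoding of stimuli from neural activity.
rng = np.random.default_rng(1)
n_trials, n_features, n_neurons = 500, 10, 100

stimuli = rng.normal(size=(n_trials, n_features))       # stimulus features per trial
encoding = rng.normal(size=(n_features, n_neurons))     # how neurons encode stimuli
activity = stimuli @ encoding + 0.1 * rng.normal(size=(n_trials, n_neurons))

# Ridge-regression decoder: W = (A^T A + lam * I)^-1 A^T S
lam = 1.0
W = np.linalg.solve(activity.T @ activity + lam * np.eye(n_neurons),
                    activity.T @ stimuli)

decoded = activity @ W
r = np.corrcoef(decoded.ravel(), stimuli.ravel())[0, 1]
print(f"decoding correlation: {r:.3f}")
```

Because the simulated encoding is linear and the noise is small, the linear decoder recovers the stimulus almost perfectly; real neural decoding is far harder, but the read-out direction of mind-machine communication has the same shape.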
The Role of Academia in Intelligence Research
- The pursuit of a unified science of intelligence should be conducted openly and transparently, with findings shared globally.
- Academia is uniquely positioned to explore this field free of corporate pressures such as quarterly earnings targets or restrictions on publishing.