Jeff Dean: AI isn't as smart as you think -- but it could be | TED

Introduction

In this section, Jeff introduces himself and discusses the progress and potential of AI.

Jeff's Background and AI Progress

  • Jeff has been with Google for over 20 years.
  • AI has made tremendous progress in the last decade.
  • AI can help computers see, understand language, and understand speech better than ever before.

Applications of AI

This section highlights some of the great applications enabled by AI capabilities.

Key Applications Enabled by AI

  • Using machine learning to predict flooding and keep people safe.
  • Translating over 100 languages for better communication.
  • Better prediction and diagnosis of diseases for improved treatment.

Key Components of AI Systems

This section discusses two key components that have contributed to the progress in AI systems.

Neural Networks

  • Neural networks are a breakthrough approach to solving difficult problems.
  • They have been around since the 1960s and 70s but have seen significant advancements in the last 15 years.

Computational Power

  • Computational power plays a crucial role in making neural networks effective.
  • The availability of increased computational power in the last 15 years has enabled significant progress in AI.

Challenges and Wrong Approaches in AI

This section addresses some of the challenges and wrong approaches in current AI practices.

Historical Perspective on Building Computers with Intelligence

  • People have been trying to build computers that can see, understand language, and understand speech since the beginning of computing.
  • Early attempts at hand-coding algorithms for these tasks were not successful.

Unexpected Advancement with Neural Networks

  • Neural networks emerged as a breakthrough approach that advanced progress in various problem spaces.
  • Neural networks are loosely inspired by biological neural systems and have been around since the 1960s and 70s.

Learning in Neural Networks

This section explains how neural networks learn and adapt to perform complex tasks.

Learning Process in Neural Networks

  • Neural networks learn by making tiny adjustments to weight values.
  • These adjustments strengthen or weaken the influence of inputs, driving the system towards desired behaviors.
  • Neural networks can be trained to perform complicated tasks like language translation and object recognition.
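The weight-adjustment idea above can be sketched with a toy example: a single artificial neuron learns the mapping y = 2x by repeatedly nudging its weight against the prediction error. The data, learning rate, and loop counts here are illustrative stand-ins, not anything from the talk.

```python
# Minimal sketch of learning by tiny weight adjustments: one neuron,
# one weight, trained to approximate y = 2 * x. All values are toy
# illustrations.

def train_neuron(data, lr=0.1, epochs=50):
    w = 0.0  # start with an uninformative weight
    for _ in range(epochs):
        for x, target in data:
            pred = w * x
            error = pred - target
            # Tiny adjustment: strengthen or weaken the input's influence
            # in proportion to how wrong the prediction was.
            w -= lr * error * x
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_neuron(data)  # w converges toward 2.0
```

Real networks apply the same idea to millions or billions of weights at once, with the adjustments computed by backpropagation rather than this single-weight rule.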

Progress in Computational Power

This section discusses the progress in computational power that has enabled advancements in neural network training.

Initial Expectations vs. Reality

  • In 1990, there was optimism about parallel training of neural networks with increased compute power.
  • However, it took about a million times more computational power than available at that time to achieve impressive results with neural networks.

Successes with Large Neural Networks

This section highlights successes achieved using large neural networks and their ability to learn from data patterns.

Recognition of Objects, Including Cats

  • Training a system with millions of randomly selected frames from YouTube videos resulted in the capability to recognize various objects.
  • The system learned to recognize cats without being explicitly taught what a cat is, solely through patterns in data.

Tailored Hardware for Neural Network Computation

This section discusses the development of hardware specifically designed for efficient neural network computations.

Special Properties of Neural Network Computations

  • Neural network computations are tolerant of reduced precision.
  • Algorithms primarily involve matrix and vector operations.
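Both properties can be illustrated with a small sketch: a matrix-vector product computed with aggressively rounded weights still lands close to the full-precision answer, which is why neural network hardware can trade precision for speed. The matrix values and rounding rule below are made up for illustration.

```python
# Sketch of precision tolerance: compare a matrix-vector product at full
# precision against one whose weights are crudely rounded, a stand-in
# for low-precision hardware arithmetic. Values are illustrative.

def matvec(matrix, vector):
    # The core operation of neural network inference: rows dot the input.
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def quantize(matrix, decimals=1):
    # Crude proxy for reduced-precision storage of the weights.
    return [[round(w, decimals) for w in row] for row in matrix]

W = [[0.123, -0.456], [0.789, 0.321]]
x = [1.0, 2.0]

full = matvec(W, x)            # full-precision result
low = matvec(quantize(W), x)   # low-precision result, only slightly off
```

The small gap between `full` and `low` is the property TPUs exploit: accuracy barely suffers, while low-precision arithmetic is far cheaper in silicon.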

Tensor Processing Units (TPUs)

  • TPUs are custom hardware designed for efficient neural network computation.
  • They excel at the low-precision matrix and vector operations that dominate neural network workloads.

Conclusion

Introduction to DeepMind AlphaGo Matches

The speaker discusses how the DeepMind AlphaGo matches involved competing against racks of TPU cards. They mention that subsequent versions of TPUs have been built and highlight that despite successes, there are still areas for improvement.

Competing with TPUs

  • In the DeepMind AlphaGo matches against Lee Sedol and Ke Jie, they were actually competing against racks of TPU cards.
  • Subsequent versions of TPUs have been developed that are even better and more exciting.

Areas for Improvement

  • Despite the successes achieved, there are still many things that are being done wrong in AI.
  • The speaker will discuss three key things that need to be addressed and how they can be fixed.

Training Neural Networks for Specific Tasks

The speaker explains that most neural networks today are trained for a single task only. They describe the process involved in training a neural network for a specific task and highlight the limitations of this approach.

Single Task Training

  • Most neural networks today are trained to do one thing only.
  • Training a neural network for a particular task is a heavyweight activity involving data curation, selecting network architecture, weight initialization, and computation adjustments.
  • This results in separate models for different tasks, which is inefficient and unlike how humans learn.

Learning from Human Learning

  • Humans learn by building upon existing knowledge when acquiring new skills.
  • Computers should also be able to leverage existing expertise when learning new tasks instead of starting from scratch every time.

Multitask Models for Efficient Learning

The speaker proposes training multitask models capable of performing thousands or millions of different tasks. They explain the advantages of this approach compared to training separate models for each task.

Leveraging Multitask Models

  • Instead of training separate models for each task, multitask models can be trained to perform thousands or millions of different tasks.
  • Each part of the model specializes in different kinds of things, allowing for efficient learning and leveraging existing expertise.
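The structure described above can be sketched as a shared trunk feeding many small task-specific heads, so most of the computation and expertise is reused across tasks. The trunk, heads, and task names below are toy placeholders, not Google's actual architecture.

```python
# Hypothetical sketch of a multitask model: one shared representation
# feeds lightweight per-task heads. Everything here is a toy stand-in.

def shared_trunk(features):
    # Placeholder for a large shared network; here, a fixed transform.
    return [f * 0.5 for f in features]

task_heads = {
    "translate": lambda rep: sum(rep),  # toy task-specific head
    "classify": lambda rep: max(rep),   # another toy head
}

def run_task(task, features):
    rep = shared_trunk(features)  # expensive part, shared by all tasks
    return task_heads[task](rep)  # cheap part, specific to one task
```

Adding a new task means adding one small head while reusing the trunk, which mirrors the talk's point about leveraging existing expertise instead of starting from scratch.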

Faster Adaptation to New Tasks

  • With a multitask model, when a new task arises, the model can leverage its existing knowledge and quickly adapt to perform the new task.
  • This is similar to how humans identify relevant knowledge when confronted with a new problem.

Fusion of Different Modalities

The speaker discusses the limitation of current models that deal with only a single modality of data. They propose building models that can handle multiple modalities simultaneously by fusing them together.

Using Multiple Modalities

  • Humans continuously use all their senses to learn from and interact with the world.
  • Models should also be able to process different modalities such as text, images, and speech simultaneously.
  • By fusing these modalities together, the same response can be triggered regardless of how the information is presented.
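One common way to realize this fusion, sketched loosely here, is to give each modality its own encoder mapping into a shared vector space and then combine the embeddings; downstream logic then sees one representation regardless of how the information arrived. The encoders and the averaging rule are illustrative assumptions, not the talk's specific design.

```python
# Illustrative sketch of modality fusion: per-modality encoders map
# inputs into a shared 2-dimensional space, then the vectors are fused.
# All encoders are toy placeholders.

def encode_text(s):
    return [float(len(s)), 1.0]

def encode_image(pixels):
    return [float(sum(pixels)), 2.0]

def encode_audio(samples):
    return [float(max(samples)), 3.0]

def fuse(vectors):
    # Simple fusion: element-wise average of the modality embeddings.
    n = len(vectors)
    return [sum(vs) / n for vs in zip(*vectors)]

v = fuse([encode_text("hi"), encode_image([1, 2]), encode_audio([0.5])])
```

Because everything lands in the same space, the same downstream response can fire whether a concept arrives as text, pixels, or sound.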

Handling Various Input Data

  • Models should be capable of dealing with various input data types, including nonhuman inputs like genetic sequences or 3D point clouds.
  • The goal is to create models that are flexible and adaptable across different types of input data.

Sparse Activation in Models

The speaker highlights the difference between current dense models and how our brains work. They propose using sparse activation in high-capacity models to call upon relevant parts based on specific tasks.

Dense vs Sparse Models

  • Current AI models are dense, meaning the entire model is activated for every task or example.
  • Our brains work differently; different parts are activated depending on the task at hand.

Sparse Activation for Efficiency

  • High-capacity models with sparse activation allow calling upon specific parts for different tasks.
  • During training, the model can learn which parts are good at specific tasks and use them accordingly.
  • This approach enables having a high-capacity model that is efficient by only activating relevant parts.
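A minimal sketch of the routing idea: a router scores a set of "expert" sub-networks and runs only the top-scoring ones, leaving the rest of the model idle. The experts, the hint-based scoring rule, and `top_k` are toy assumptions; in a real sparse model the router is learned during training.

```python
# Toy sketch of sparse activation: only the expert(s) chosen by the
# router run for a given input; the rest of the model stays idle.

experts = {
    "vision": lambda x: x * 2,     # toy expert sub-networks
    "language": lambda x: x + 10,
    "speech": lambda x: x - 1,
}

def route(task_hint, x, top_k=1):
    # Toy router: score experts by a hint match; real models learn this.
    scores = {name: (1.0 if name == task_hint else 0.0) for name in experts}
    chosen = sorted(scores, key=scores.get, reverse=True)[:top_k]
    # Only the chosen experts execute; cost scales with top_k, not with
    # the total number of experts.
    return sum(experts[name](x) for name in chosen)
```

This is why a sparsely activated model can have enormous capacity while each individual example remains cheap to process.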

Building General-Purpose Models

The speaker summarizes the three improvements discussed: training general-purpose models, handling multiple modalities, and using sparse activation. They emphasize that these improvements will lead to more powerful AI systems.

Improvements for Powerful AI Systems

  • Instead of training thousands of separate models, train a handful of general-purpose models capable of performing thousands or millions of tasks.
  • Handle all modalities simultaneously by fusing them together in the model.
  • Use sparse activation in high-capacity models to call upon relevant parts for specific tasks.

Responsible AI and Ethical Considerations

The speaker acknowledges the importance of responsible AI and ethical considerations. They mention Google's AI principles and the need to ensure fairness, interpretability, privacy, and security when building powerful AI systems.

Responsible AI

  • It is crucial to ensure that powerful AI systems benefit everyone and are developed with fairness, interpretability, privacy, and security in mind.
  • Thoughtful collection of representative data from diverse communities worldwide is essential.

Google's AI Principles

  • In 2018, Google published a set of AI principles guiding their research and product development in this space.
  • These principles help address complex questions about how to responsibly use AI in society.
  • The principles are continuously updated as more knowledge is gained.

The Importance of General-Purpose Intelligent Systems

In this section, the speaker discusses the significance of transitioning from single-purpose systems to general-purpose intelligent systems. These systems have a deeper understanding of the world and can help solve complex problems faced by humanity.

  • General-purpose intelligent systems are crucial for addressing major challenges such as disease diagnosis, engineering better medicines, advancing educational systems, and tackling issues like climate change and clean energy solutions.
  • The development of these systems requires multidisciplinary expertise from people worldwide.
  • Computing advancements in the past have helped millions of people understand the world better, and AI has the potential to benefit billions of people.

Exciting Times Ahead

The speaker expresses enthusiasm about the current era and highlights the potential impact of AI on various aspects of life.

  • We live in exciting times where AI has the potential to make significant advancements.
  • The speaker concludes with gratitude for being part of this transformative period.

Moving Beyond Pattern Recognition

This section explores how AI is evolving beyond pattern recognition and working with richer-layered concepts.

  • Traditional machine learning focused on getting computers to recognize patterns, sometimes better than humans can.
  • Modern AI aims to go beyond raw pattern matching toward richer, layered concepts, such as the many attributes that together make up a leopard.
  • This shift opens up possibilities for new applications in various fields.

Generalizing Tasks with Few Examples

The speaker discusses the challenge of generalizing tasks in AI and introduces an approach that allows machines to learn new tasks with relatively few examples.

  • The grand challenge in AI is enabling machines to generalize from known tasks to new ones effortlessly.
  • Training separate models for each task requires a large amount of data, effectively trying to learn everything from scratch.
  • By building systems that can perform thousands or millions of tasks, machines can be taught new tasks with only a few examples.

Self-Supervised Learning and Few Examples

This section delves deeper into the concept of self-supervised learning and its potential impact on AI.

  • Self-supervised learning allows machines to learn with relatively few examples.
  • The hope is to develop systems where providing just five examples of a new task enables the machine to learn and perform it.
  • This approach reduces the dependency on massive amounts of training data.
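One simple way such few-example learning can work, sketched here as an assumption rather than the talk's method, is nearest-neighbor matching in a pretrained embedding space: a handful of labeled examples of a new task is enough to classify fresh inputs. The character-count "embedding" below is a trivial stand-in for a real learned representation.

```python
# Hedged sketch of few-shot classification: embed inputs, then label a
# query by its nearest labeled example. The embedding is a toy stand-in.

def embed(text):
    # Toy embedding: character-frequency vector over a tiny alphabet.
    return [text.count(c) for c in "abcdefghijklmnopqrstuvwxyz"]

def classify(query, few_shot_examples):
    # few_shot_examples: list of (text, label) pairs, e.g. five per task.
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    q = embed(query)
    return min(few_shot_examples, key=lambda ex: dist(embed(ex[0]), q))[1]
```

With a strong pretrained embedding doing the heavy lifting, the "training" for a new task reduces to storing those few examples.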

Responsible Application of AI

The speaker emphasizes the importance of responsible application and careful consideration when using AI.

  • The consequences of AI depend on how it is applied. It can be a powerful force for good or have negative consequences if not used thoughtfully.
  • Having a set of principles helps guide the ethical use and development of AI applications.

Concerns about Biased Learning

This section addresses concerns about biased learning in AI systems and questions whether these principles are genuinely upheld.

  • There has been controversy regarding biased learning in AI systems, particularly related to Google.
  • While not addressing specific cases, the speaker assures that they are committed to upholding principles and have numerous researchers working on related topics.
  • Openly publishing research papers demonstrates their commitment to advancing fairness, interpretability, and safety in machine learning models.

Balancing Real World Data with Desired Values

This section explores how real-world data is used in training machine-learning models while ensuring alignment with desired values.

  • The challenge lies in using real-world data to train models that reflect the values we want, rather than solely reflecting the existing world.
  • Collaboration between research groups and commercial teams at Google allows for a balance between commercial interests and advancing the state of the art.
  • Openly publishing research papers is essential for making progress in developing safe and responsible AI models.

Maximizing Values for the World

The speaker addresses concerns about maximizing values for the world and avoiding undue influence from commercial interests.

  • Concerns arise regarding whether AI development prioritizes maximizing profitability or serves broader societal goals.
  • The speaker emphasizes collaboration with various groups within Google while maintaining a clear separation between commercial interests and research objectives.
  • Openly publishing research papers demonstrates their commitment to advancing AI for the benefit of all.
Channel: TED
Video description

What is AI, really? Jeff Dean, the head of Google's AI efforts, explains the underlying technology that enables artificial intelligence to do all sorts of things, from understanding language to diagnosing disease -- and presents a roadmap for building better, more responsible systems that have a deeper understanding of the world. (Followed by a Q&A with head of TED Chris Anderson)