How to Get Inside the "Brain" of AI | Alona Fyshe | TED
Introduction
The speaker introduces the topic of understanding and interpreting the world around us, and how we sometimes attribute more intelligence to a system than is actually there.
People's Perception
- People are constantly trying to understand and interpret the world around them.
- Sometimes people attribute more intelligence than might actually be there.
Examples of Misinterpretation
- The speaker shares an example of mistaking a bunched-up black sweater for one of their black cats.
- Clever Hans, a horse that appeared to do arithmetic, was not actually doing math: he had learned to watch the people in the room for cues that told him when to tap his hoof, which is how he "communicated" his answers.
AI Understanding Language
The speaker discusses whether AI understands language or if we are having our own Clever Hans moment.
Advancements in AI Models
- AI models today are much better than those from five years ago. It is remarkable how much progress has been made.
Chinese Room Argument
- The philosopher John Searle proposed the Chinese room argument to illustrate that computers may never truly understand language.
- In the Chinese room, a person who does not understand Chinese follows written instructions that tell them how to respond in Chinese to any sentence written in Chinese.
- Succeeding at this task does not show that the person actually knows Chinese.
Do AIs Understand Us?
- When we speak with AIs like ChatGPT, it looks like they understand us because we're feeding in English sentences and getting English sentences back.
- However, under the hood, these models are just following a set of instructions.
How to Know if AI Understands Us
- To know if AI understands us, we need to compare it with someone who actually speaks the language.
- The Chinese room argument can be used to illustrate this point. When a person who actually speaks Chinese receives a piece of paper with something written in Chinese, they can read it without any problem. But when an imposter receives it, they have to consult their set of instructions to work out a response.
Understanding AI: Scratching the Surface
In this section, the speaker introduces the concept of scratch pads and how they can be used to understand how humans and AI process information.
Scratch Pads in Brains
- The speaker explains that scratch pads are like little notebooks inside our brains where we write down everything we need to remember for a task.
- Even if two people have the same input and output for a task, their scratch pads may look completely different.
- To determine if AI truly understands language like humans do, we need to see its scratch pad and compare it to that of someone who actually understands language.
- Brain imaging techniques such as fMRI or EEG can provide snapshots of the brain's scratch pad while reading.
Scratch Pads in AI
- Neural networks are commonly used in AIs, and each neuron computes a number when fed with a word. These numbers tell us something about how the neural network is processing language.
- All these numbers together give us an idea of what the neural network's scratch pad looks like.
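The idea above can be sketched in code. This is a minimal toy illustration, not the speaker's actual setup: a tiny two-layer network with random weights (a real language model's weights would be learned from text), where the hidden-layer activations for each word play the role of the network's "scratch pad".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with one-hot input vectors (stand-ins for real word inputs).
vocab = ["cat", "sweater", "horse", "math"]
X = np.eye(len(vocab))

# A tiny two-layer network with random weights; in a real language model
# these weights would be learned from large amounts of text.
W1 = rng.normal(size=(len(vocab), 8))
W2 = rng.normal(size=(8, len(vocab)))

def scratch_pad(x):
    """Hidden-layer activations: the network's 'scratch pad' for one word."""
    return np.tanh(x @ W1)

# One activation vector per word; together, these numbers give a picture
# of what the network's internal scratch pad looks like.
pads = {w: scratch_pad(X[i]) for i, w in enumerate(vocab)}
print(pads["cat"].shape)  # each word gets an 8-number scratch pad
```

Each word's scratch pad is just a vector of numbers, which is what makes the comparison with brain recordings in the next section possible.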
Comparing Scratch Pads
- Researchers train a new model to predict the brain's scratch pad for a particular word from the neural network's scratch pad for that word. If the two had nothing in common, this prediction task would be impossible.
- 75% of the time, the predicted brain scratch pad for a word was more similar to the true brain scratch pad for that word than to the scratch pad for another randomly chosen word.
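The prediction task above can be sketched as follows. This is a hypothetical illustration with synthetic data, not the researchers' actual pipeline: network and "brain" scratch pads are simulated as linearly related vectors plus noise, a ridge regression learns the mapping, and a pairwise (2-vs-2 style) comparison checks whether predictions land closer to the right word than to a different word. Chance level is 50%.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, nn_dim, brain_dim = 60, 16, 10

# Synthetic stand-ins: network scratch pads and brain scratch pads that
# share a linear relationship plus noise (real data would come from a
# language model's activations and fMRI/EEG recordings).
nn_pads = rng.normal(size=(n_words, nn_dim))
true_map = rng.normal(size=(nn_dim, brain_dim))
brain_pads = nn_pads @ true_map + 0.5 * rng.normal(size=(n_words, brain_dim))

train, test = slice(0, 50), slice(50, 60)

# Ridge regression: learn to predict the brain pad from the network pad.
lam = 1.0
A = nn_pads[train]
W = np.linalg.solve(A.T @ A + lam * np.eye(nn_dim), A.T @ brain_pads[train])
pred = nn_pads[test] @ W

# Pairwise test: is the prediction for word i closer to its own true brain
# pad than to another held-out word's pad?
hits = total = 0
truth = brain_pads[test]
for i in range(len(pred)):
    for j in range(i + 1, len(pred)):
        right = np.linalg.norm(pred[i] - truth[i]) + np.linalg.norm(pred[j] - truth[j])
        wrong = np.linalg.norm(pred[i] - truth[j]) + np.linalg.norm(pred[j] - truth[i])
        hits += right < wrong
        total += 1
print(f"pairwise accuracy: {hits / total:.2f}")
```

If the two kinds of scratch pad shared nothing, accuracy would hover around 50%; an above-chance score like the 75% reported in the talk is evidence of shared structure.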
Is AI an Imposter?
In this section, the speaker discusses whether or not AI is an imposter and how it can be tested.
Testing AI
- Researchers have come up with a scratch pad prediction task to test whether AI is doing something similar to what the brain does.
- The results of this task suggest that while AI is not exactly like the brain, there are similarities between them.
Imposter Syndrome
- Even if AI generates plausible dialogue and answers questions as expected, it may still be an imposter of sorts.
- To truly understand if AI understands language like humans do, we need to know what it's doing.
Can AI Understand Language Like We Do?
In this talk, the speaker explores whether artificial intelligence (AI) can understand language like humans do. The speaker discusses scratch pad prediction tasks and how they show above-chance accuracy in neural networks, but the underlying correlations are still weak. The speaker also notes that neural networks lack the same structure and complexity as the human brain and have never experienced the world like humans have.
Scratch Pad Prediction Tasks
- Neural networks and AI do not understand language like humans do.
- Scratch pad prediction tasks show above-chance accuracy, but underlying correlations are still weak.
- Neural networks lack the same structure and complexity as the human brain.
- Neural networks have never experienced the world like humans have.
Understanding Language
- Can a neural network that has never experienced the world really understand language about it?
- As neural networks get more accurate, they start to use their scratch pad in a way that becomes more brain-like.
- AI is not doing exactly what the brain is doing, but it's not completely random either.
Getting Inside of AI
- To know if AI really understands language like we do, we need to get inside of it and compare its processes to those of humans.
- Looking only at input and output can be misleading; we need to see what's happening inside of AI to truly understand it.
Conclusion
- Humans will continue to look for meaning and interpret the world around us.
- It's what's inside that counts when it comes to understanding AI.