The History of Artificial Intelligence [Documentary]

Introduction to AI

This video is an introductory compilation of some of the best documentaries on AI. It covers the ancient origins of artificial intelligence, modern computational AI, machine learning algorithms, and science-fiction treatments of the subject.

Evolution of AI

  • The field of AI has evolved over time with prominent groups and figures responsible for its development.
  • Appreciating the advances in technology we often take for granted today requires understanding how the field of AI has evolved.

Can Machines Think?

  • The question of whether machines can think is a hard one to answer, since we know so little about thought processes or the information that makes them up.
  • There are many things which machines can do today which if done by human beings would be considered thinking. However, until machines produce genuinely new things, it's difficult to say they think.

Dreaming About Thinking Machines

  • The concept of a thinking machine has been man's dream for centuries and his nightmare as well.
  • The exploitation of this dream was largely in the hands of fiction writers and colleagues in the motion picture industry until recently.

Learning Processes

  • Children learn through copying others' actions but make mistakes at first before getting them right.
  • How children learn letters is still not fully understood.

There were several parts where music played without any relevant content. These parts were not included in the notes.

Can Computers Learn?

In this section, the host introduces the idea of computers being able to learn and compares it to a child's ability to learn. The computer is shown learning the alphabet.

Learning the Alphabet

  • The computer is shown a letter and asked to identify it.
  • The computer correctly identifies a "W" after some difficulty.
  • The computer learns by comparing letters it has seen before with new ones.
  • The computer's success rate improves as it gains more information about each letter.
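One simple way the learning-by-comparison described above can work is nearest-neighbour template matching: store every labelled example seen so far and classify a new letter by the closest stored match. A minimal sketch under that assumption (the 3×3 bitmaps and labels below are illustrative, not taken from the film):

```python
def hamming(a, b):
    """Count the pixels on which two equal-length bitmaps disagree."""
    return sum(p != q for p, q in zip(a, b))

class LetterLearner:
    """Stores labelled bitmaps and identifies a new bitmap by its
    nearest stored example; more examples improve reliability."""

    def __init__(self):
        self.examples = []                    # (bitmap, label) pairs

    def learn(self, bitmap, label):
        self.examples.append((bitmap, label))

    def identify(self, bitmap):
        if not self.examples:
            return None                       # nothing learned yet
        nearest, label = min(self.examples,
                             key=lambda ex: hamming(bitmap, ex[0]))
        return label

# 3x3 pixel caricatures of "V" and "W" (rows flattened left to right)
V = (1, 0, 1,  1, 0, 1,  0, 1, 0)
W = (1, 0, 1,  1, 1, 1,  1, 0, 1)
learner = LetterLearner()
learner.learn(V, "V")
learner.learn(W, "W")
noisy_W = (0, 0, 1,  1, 1, 1,  1, 0, 1)       # W with one corner pixel lost
print(learner.identify(noisy_W))              # -> W
```

As more examples of each letter are stored, the nearest match becomes more dependable, mirroring the improving success rate described above.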

Can Machines Think?

This section explores whether machines can think and discusses ongoing research in various fields related to this question.

Understanding Logical Problems

  • Researchers are studying logical problems, such as the cannibals and missionaries puzzle, to better understand how machines process information.
  • A specific example of this type of problem is presented, and viewers are encouraged to try solving it themselves.
  • Viewers are given instructions on how to approach solving the problem.
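The cannibals-and-missionaries puzzle mentioned above is a natural fit for the kind of exhaustive state-space search a machine can perform. A sketch of a breadth-first solver (the state encoding and move set are my own choices, not the program shown in the film):

```python
from collections import deque

def solve(total_m=3, total_c=3, boat_cap=2):
    """Breadth-first search over (missionaries, cannibals, boat) counts
    on the starting bank. Returns the list of crossings as (m, c) loads."""
    start = (total_m, total_c, 1)             # everyone, and the boat, start here
    goal = (0, 0, 0)
    moves = [(m, c) for m in range(boat_cap + 1)
                    for c in range(boat_cap + 1)
                    if 1 <= m + c <= boat_cap]

    def safe(m, c):
        # Cannibals may never outnumber missionaries on either bank
        # (a bank with no missionaries at all is fine).
        other_m, other_c = total_m - m, total_c - c
        return (m == 0 or m >= c) and (other_m == 0 or other_m >= other_c)

    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        (m, c, boat), path = frontier.popleft()
        if (m, c, boat) == goal:
            return path
        for dm, dc in moves:
            # The boat carries people away from whichever bank it is on.
            nm, nc = (m - dm, c - dc) if boat else (m + dm, c + dc)
            state = (nm, nc, 1 - boat)
            if (0 <= nm <= total_m and 0 <= nc <= total_c
                    and safe(nm, nc) and state not in seen):
                seen.add(state)
                frontier.append((state, path + [(dm, dc)]))
    return None

print(len(solve()))   # -> 11
```

Breadth-first search guarantees the first solution found uses the fewest crossings; for the classic three-and-three puzzle that is 11 trips.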

Swimming and Problem Solving

In this section, the characters discuss swimming and problem-solving. They also talk about Barbara's solution to a problem.

Swimming

  • None of the characters are swimmers.
  • One of the characters offers to bring a missionary over to help with swimming.

Problem Solving

  • The characters discuss Barbara's difficulty in solving a problem.
  • Barbara finds the correct solution, which is recorded by Professor Simon.
  • The computer finds the same solution as Barbara.

How Computers Work

In this section, the characters discuss how computers work and their capabilities.

Computer Capabilities

  • The machine tries things that seem most likely based on probabilities or reasonableness programmed into it.
  • Computers can do many things, but we are just beginning to understand their capabilities.
  • There is film available that illustrates some of these capabilities.

Playing Checkers Against a Computer

  • A man plays checkers against a computer.
  • Dr. Arthur Samuel programmed the computer to play checkers so he could study machine learning.

Programming Computers

  • You can make adders, multipliers, and other mathematical operations using building blocks in computers.
  • Programming involves laying out a series of steps for the computer to follow.
  • Once a machine has learned how to do something, it can print out the instructions on paper.
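The "building blocks" idea can be made concrete: a binary adder really is assembled from a handful of elementary logical operations. A sketch in Python rather than actual circuitry:

```python
def xor(a, b):
    # Exclusive OR expressed with AND / OR / NOT building blocks
    return (a or b) and not (a and b)

def half_adder(a, b):
    # Adds two bits: returns (sum, carry)
    return int(xor(a, b)), int(a and b)

def full_adder(a, b, cin):
    # Chains two half adders to handle a carry-in bit
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, int(c1 or c2)

def ripple_add(x, y, width=8):
    # A ripple-carry adder: full adders wired in series, one per bit
    carry, total = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= bit << i
    return total

print(ripple_add(13, 29))   # -> 42
```

Multipliers and the rest follow the same pattern: larger networks composed of the same elementary blocks.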

Comparing Computers and Nervous Systems

In this section, the characters discuss similarities and differences between computers and nervous systems.

Similarities

  • Both systems use electrical signals.

Differences

  • Neurophysiologists think there are more differences than similarities between computers and living nervous systems.

Conclusion

The video concludes with a discussion of what we can learn from studying computers.

Studying Thought Processes

  • We can learn about thought processes by studying how computers solve problems.
  • However, it is dangerous to assume that the nervous system works like a computer.

Programming vs Instinct

In this section, Dr. Wiesner and Mr. Wayne discuss whether men are programmed or born with certain instincts.

Men's Programming

  • Men have built-in programming.
  • Programming can be hereditary or learned through experience.

Animal Programming

  • Animals start life with a large part of their nervous system knowing what to do.
  • Ducks raised in isolation were used in an experiment showing that they could still differentiate between a goose and a hawk.
  • Animals seem to start life with more built-in information than previously suspected.

Instinct vs Programming

  • Instinct is the word used for programming determined by heredity.
  • Research on frogs at MIT shows that some animals are born with more built-in information than previously thought.

Frog's Eye Reports Specific Information

In this section, Professor Lettvin discusses how the frog's eye reports only specific information to the brain.

Frog's Eye Reporting

  • The frog's eye reports only very specific information to the brain.
  • The frog only sees things that move.

Experiment on Frogs

  • Professor Lettvin puts a target in front of the frog's eye and uses a magnet to move a small metal disc around it until he finds the point at which a particular fiber is looking.
  • The fiber reacts every time something moves, which could explain why the frog didn't eat the dead flies.

Frog Eye and Human Vision

The frog's eye only reports specific things to the brain related to its survival. Professor Lettvin's theories suggest that people are also born with certain information built into their nervous system.

Frog Eye Fibers

  • The fibers from the frog's eye only report specific things to the brain related to its survival.
  • One group of fibers looks for sharp edges, while another group is a bug detector.
  • Professor Lettvin's theories are not yet accepted by all in the scientific community.

Human Instinct

  • People are born with a certain amount of information built into their nervous system.
  • A demonstration shows that children rely exclusively on their eyes to form concepts of the world around them.
  • Children have preconceived notions about the world around them, some of which are wrong and need correction as they grow up.

Seeing and Believing

  • Our eyes tend to see only those attributes of objects which our nervous system is designed and programmed to see.
  • These experiments indicate that seeing isn't believing but believing is seeing.

Illusions

  • An illusion demonstrates how our eyes tell us anything longer is closer, even when it's not true.
  • Covering an object can help our brains correct illusions.

What Do You See This Time?

The professor asks the student what they see.

  • The professor asks the student what they see.
  • The student responds that they see nothing.

Rules and Perception

The professor explains how perception can be influenced by rules.

Tube Bending

  • The professor shows the student a tube bending back on itself.
  • He explains that this is because the student assumed it was made of rubber.
  • He tells the student to assume it's made of steel instead, and it will cut through the window instead of bending.

Programmed Reactions

  • The professor explains that everyone is somewhat programmed to react similarly to a machine.
  • He says that he is born with certain rules built into him.
  • He compares this to machines having rules built into them by man.

Computer Creativity

The professor discusses whether computers can do anything original or creative.

Writing a Play

  • The professor asks if writing a television Western would be considered original.
  • He then asks if a computer could write a play.
  • He says that computers can write pretty good plays, and they will see one written by the computer soon.

Computer Playlet

  • A playlet written by a computer is shown being printed out.

Artificial Intelligence in Playwriting

Doug Ross from MIT explains how artificial intelligence can be used in playwriting.

Rule-Obedient Behavior

  • Doug Ross explains that intelligent behavior is rule-obeying behavior.
  • He says that they are trying to show how a computer can be made to do creative work in the type of play their program is designed to write.

Programming Rules

  • The human playwright already knows things that must be taught to the computer by programming.
  • For example, if the gun is in the robber's hand and he is in the corner, the human knows immediately that the gun is also in the corner.
  • They must make the computer able to keep track of all these things.

Reasonable Alternatives

  • The computer will choose reasonable alternatives for the sheriff depending on whether or not he can see the robber and vice versa.
  • They have given the computer rules for determining reasonable behavior and modifying those rules.
  • For example, they have an inebriation factor which controls the actions of the robber depending on how much he has had to drink.

The Capabilities of Computers

In this section, the speaker discusses how computers can write scripts and create different plays every time. He also talks about the limitations of computers.

Writing Scripts with Computers

  • Storing a script in a computer's memory would print out the same play every time.
  • However, when a computer writes a script, it creates a different play every time.

Limitations of Computers

  • Making a program for a computer involves trial and error.
  • The capabilities of computers depend on how much we are able to find out about learning processes.
  • There are many things that computers do not know and sometimes they just do not work.

Studying Information Processing Systems

In this section, the speaker talks about studying information processing systems through experiments with humans and recording signals from the brain.

Recording Signals from the Brain

  • A series of rapid clicks is put into the subject's ear while recording signals coming from electrodes in their headset.
  • The peaks and valleys seen in the recording are a result of the clicks that the person hears.
  • These techniques are being used to determine deafness in children.

Studying Logical Processes in Giant Computers

  • Lincoln Laboratory is using huge installations like the TX-2 to find out how signals move between neurons in the brain.
  • The TX-2 contains about 2.5 million memory units, which is only a tiny fraction of the elements housed in the human brain.
  • Belmont Farley studies the behavior of signals in networks by making a wave travel through the TX-2.

The Future of Machines and Thinking

In this section, the speakers discuss the possibility of machines being able to think like humans. They also talk about the potential impact of this development on society.

Can Machines Think?

  • Scientists have different opinions on whether machines can truly think.
  • Some believe that machines can be programmed to behave intelligently but cannot produce anything truly new or creative.
  • Others are convinced that machines will eventually be able to think and behave like humans.

Direct and Indirect Effects

  • The direct effects of machine intelligence include using them for various tasks that would otherwise require human labor.
  • Indirect effects include learning from working with computers, which could help solve problems in fields such as mental health, social issues, and economics.

The Second Industrial Revolution

  • Professor Norbert Wiener believes that we are currently living through the Second Industrial Revolution, where computers assist human minds in ways previously impossible.
  • As time goes on, we will find more ways to use computers to do things that our unaided minds cannot accomplish alone.

Simulating Human Brain on a Computing Machine

  • When discussing simulating the human brain on a computing machine, it is important to distinguish between past accomplishments and future goals.
  • While past accomplishments have been impressive, there is still much work to be done in order to achieve true machine intelligence.

The Future of Machine Learning

In this section, Dr. Weizenbaum discusses the three main components of a machine capable of learning by experience and forming inductive and deductive thought.

Components of a Machine Capable of Learning

  • Sense organs similar to the human eye or ear would allow the machine to take cognizance of events in its environment.
  • A large general-purpose flexible computer program would enable the machine to learn from experience, form concepts, and perform logic.
  • Output devices similar to the human hand would allow the machine to make use of its cognitive processes to affect the environment.

Challenges and Excitement in Modern Scientific Work

In this section, Dr. Weizenbaum acknowledges that while he is excited about progress being made in machine learning, he also recognizes that it poses challenges.

Challenges Posed by Machine Learning

  • The problems posed by computers are no different than those posed by other products of technology.
  • It will take wisdom on our part to manage these challenges, but if we do so successfully, we can create a better world.

Solving Problems Through Trial and Error

In this section, Claude Shannon demonstrates how an electrically controlled mouse can solve problems through trial and error means.

Electrically Controlled Mouse Solves Problems Through Trial and Error

  • The electrically controlled mouse can solve problems through trial and error means.
  • The mouse can remember the solution to a problem and use that information to solve the same problem in the future.
  • The mouse's ability to solve problems and remember solutions involves a certain level of mental activity, which is akin to that of a brain.
  • While the mouse itself is too small to contain even a small computing machine, it serves as an example of how such machines could work.

Telephone Relays and Computing Machines

In this section, Claude Shannon discusses how telephone relays can be used to build computing machines capable of solving mathematical problems.

Telephone Relays Used for Computing Machines

  • Telephone relays can be used to build computing machines capable of solving mathematical problems in minutes.
  • Bell Labs uses knowledge gained from telephone relays to build gun director equipment for the Armed Forces.
  • The electrically controlled mouse is an example of intelligent behavior that can adapt to changes.

How the Mouse Solves the Maze

The section explains how the mouse, a bar magnet mounted on three wheels, is moved around the maze. An electromagnet beneath the maze, driven by a pair of motors, can move in two different directions and drags the mouse with it. The position of the mouse is sensed by reed switches located under different squares of the maze.

Mechanism for Moving the Mouse

  • The mouse is moved by an electromagnet that can move in two different directions, driven by a pair of motors.
  • The position of the mouse is sensed by reed switches located under different squares of the maze. When the mouse enters one of these squares, it closes the corresponding switch, which signals the electromagnet to move to a position underneath that square.
  • Once the relay circuit takes over control of the electromagnet, it moves and thereby moves the mouse.

Method for Exploring Mazes

  • When exploring a maze, Theseus rotates his trial direction for any square in a clockwise manner (north, east, south, west) until he is able to escape from that square. He also takes account of what he learned on his last visit to that square and of the direction he entered from. The topology of mazes guarantees that this method will eventually solve any solvable maze.
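The clockwise rule can be sketched in code. This is a reconstruction of the rule as described, not Shannon's relay circuitry, and the wall encoding is my own:

```python
# Directions in the clockwise order the text describes.
DIRS = {"N": (0, -1), "E": (1, 0), "S": (0, 1), "W": (-1, 0)}
ORDER = ["N", "E", "S", "W"]

def solve_maze(walls, start, goal, max_steps=10_000):
    """Explore until the goal square is reached. Each square remembers
    the direction last taken out of it; on a revisit the mouse resumes
    its clockwise scan one step past that memory.
    `walls` maps (cell, direction) -> True where a wall blocks the way."""
    memory = {}                        # cell -> index into ORDER of last exit
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        first = (memory.get(pos, -1) + 1) % 4
        for k in range(4):
            d = ORDER[(first + k) % 4]
            if not walls.get((pos, d), False):
                memory[pos] = (first + k) % 4
                dx, dy = DIRS[d]
                pos = (pos[0] + dx, pos[1] + dy)
                path.append(pos)
                break
        else:
            return None                # a square with no exits
    return None

# A 3x3 demonstration maze: outer walls plus one internal wall.
walls = {}
for x in range(3):
    walls[((x, 0), "N")] = True
    walls[((x, 2), "S")] = True
for y in range(3):
    walls[((0, y), "W")] = True
    walls[((2, y), "E")] = True
walls[((1, 0), "S")] = True            # wall between (1, 0) and (1, 1)
walls[((1, 1), "N")] = True

path = solve_maze(walls, (0, 0), (2, 2))
```

Because every square remembers its last exit, repeated visits cycle through the remaining directions, so the mouse eventually probes every open passage it can reach.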

Do Computers Really Think?

This section explores whether computers really think or not.

Definition of "Think"

  • To answer whether computers really think or not requires first defining what "think" means. One definition is "to call to mind; remember." Electronic computers are good at storing information and recalling it, which makes their operation automatic.

Memory Devices

  • Memory devices have been around since man learned to use substitutes or symbols to represent things he wanted to remember. For example, using pebbles to represent cows owned instead of counting the herd again. Mechanical memories use all kinds of symbols such as lines carved in marble or strings on fingers. Modern computers use fast magnetic memory devices such as tapes or disks stacked like jukebox records or tiny cores woven together like Indian beads.

Thinking Machines

This video explores the capabilities of computers and how they relate to human thinking. It discusses logic, visualization, memory, recognition, language translation, feelings, and creativity.

Logic

  • Computers are capable of logical thought.
  • Playing chess involves a high degree of logic.
  • A computer at MIT has been programmed to play a respectable game of chess.
  • Logic is a predictable series of facts or events.
  • Elementary logic circuits are the basic building blocks that form complex logic networks we call computers.

Visualization

  • Computers can produce pictures on cathode ray tubes by processing abstract data.
  • They are good at simulating designs or systems.

Memory

  • Computers have good memory capabilities.
  • They can form mental images but lack imagination.

Recognition

  • Computers are limited to recognizing simple, well-defined patterns, such as post office zip code digits, and even then only when the digits are cleanly printed and properly positioned.
  • Teaching a computer to generalize recognition is difficult.

Language Translation

  • Mechanical translation between languages is problematic due to the lack of an absolute one-to-one correspondence between words in different languages.

Feelings

  • Computers do not have feelings but can be programmed to simulate human emotion.

Creativity

  • Creativity is still considered a uniquely human capability.
  • A computer has been used to produce animated pictures and films.

The Computer and Its Usefulness

In this section, the speaker talks about how computers are created by humans and how they are useful tools.

Creation of Computers

  • The computer is an electronic hardware created by man.
  • Programs created by humans make the computer a useful tool.

Usefulness of Computers

  • A computer can perform billions of correct mathematical operations without making a mistake.
  • Computers are efficient and productive tools.

Can Computers Think?

In this section, the speaker discusses whether computers can think like humans or not.

Similarities between Human Thought and Computer Processes

  • Some processes carried out by computers are similar to human thought.

Artificial Intelligence

  • Artificial intelligence is a field that explores the idea of machines being able to think like humans.
  • MIT was one of the institutions that explored artificial intelligence in its early days.

Early Successes in Artificial Intelligence

  • Marvin Minsky and John McCarthy set up a department at MIT to explore artificial intelligence.
  • One of their students, Jim Slagle, programmed a computer to solve problems in freshman calculus with great success.

Locating Mental Activities in the Mind

In this section, the speaker talks about how mental activities are located in an abstract realm called the mind.

Thinking as a Mysterious Activity

  • Thinking intelligent thoughts is a mysterious activity.

Locating Mental Activities in the Mind

  • Philosophers tend to locate mental activities in an abstract realm called the mind.

Artificial Intelligence and the Mind

The pioneers of artificial intelligence drew an analogy between minds and computers: if a brain can be a mind, then so can a computer, with the mind as symbolic software and the brain as the hardware it runs on.

The Analogy between Brain and Computer

  • The pioneers of artificial intelligence believed that if a brain can be a mind, then so can a computer.
  • They viewed the mind as something different from the brain and saw the mind as a symbolic processing entity while the brain was hardware.
  • The analogy between software running on hardware in computers and minds running on brains was made.

Blindly Copying Nature's Way of Doing Things

  • Blindly copying nature's way of doing things wasn't always successful.
  • Attempts at artificial flight based on how birds fly had been disastrous.

Machines Can Think

  • Hubert Dreyfus, a philosopher at MIT, was convinced that machines could think.
  • He did not mean that machines would behave like humans or that we would have difficulty distinguishing between humans and robots. However, he believed that computers would do things we consider thinking.

Challenges in Building an AI System

Building an AI system is challenging because it requires teaching computers to recognize objects in various forms.

Stacking Blocks with an AI System

  • Scientists at MIT built an AI system with grippers for hands and TV cameras for eyes to stack blocks.
  • It turned out to be more difficult than expected because recognizing blocks is complicated. Blocks have different shapes, shadows, surfaces, and sometimes things written on them.
  • The program had some strange ideas about what happened to blocks when you let them go. For example, it did not know that if you let go of something, it will fall due to gravity.

Learning from Mistakes

  • It took several years for the robot to learn how to build a tower of blocks because it did not know about gravity or other things that every two-year-old child knows.

The Challenges of Computer Vision

This section discusses the challenges of computer vision and how it is more complex than human vision.

Computer Vision Challenges

  • After processing, the computer sorts out four major regions: top of the cup, body of the cup, hole in the handle, and an irregular region which is a shadow.
  • Moving and seeing at the same time is challenging for computers. Researchers tried to get a cart connected to a massive computing engine to cross a space avoiding objects in its path. Each meter of travel was accompanied by 15 minutes of computation.
  • A four-year-old child can detect objects and avoid collisions effortlessly because the brain comes equipped with wonderful circuitry for looking at the world. For the computer, by contrast, the researchers' hierarchy of programs made the task anything but effortless.

Turing Test

  • Alan Turing proposed that machines would one day think, and devised a test, now called the Turing test, that a machine must pass before it could be considered truly intelligent.
  • In the Turing test, you communicate via a screen with an entity somewhere else, which may be a person or a computer program, and must determine from its use of language whether you are talking to a human or a machine.

Language Use

  • Joseph Weizenbaum's program Eliza sought to use language convincingly by applying tricks such as turning the user's statements back into questions.
  • Eliza understands nothing about the meaning behind words, but it responds pointedly when emotionally important words like mother, father, or dream are mentioned.
  • Understanding spoken or written sentences is vastly more complex than solving calculus problems because it depends not so much on English as on your knowledge of what people want and don't want.
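The tricks described above fit in a few lines of code. This toy is vastly simpler than Weizenbaum's actual program; the keyword table and reflection rules here are illustrative only:

```python
import re

# Pronoun reflections turn the user's statement into the machine's question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Emotionally loaded keywords get canned, therapist-style responses.
KEYWORDS = {
    "mother": "Tell me more about your mother.",
    "father": "Tell me more about your father.",
    "dream": "What does that dream suggest to you?",
}

def eliza_reply(text):
    """Minimal Eliza-style responder: no understanding, only keyword
    spotting plus reflecting the statement back as a question."""
    lower = text.lower()
    for word, response in KEYWORDS.items():
        if word in lower:
            return response
    reflected = [REFLECTIONS.get(w, w) for w in re.findall(r"[a-z']+", lower)]
    return "Why do you say: " + " ".join(reflected) + "?"

print(eliza_reply("My mother frightens me"))  # -> Tell me more about your mother.
print(eliza_reply("I am sad"))                # -> Why do you say: you are sad?
```

The program never models what the words mean; the illusion of conversation comes entirely from pattern substitution.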

The Complexity of Language

This section discusses the complexity of language and how it is more complex than solving calculus problems.

Understanding Sentences

  • In a simple everyday sentence like "Mary saw the bicycle in the store window she wanted," the thing she wanted might be the bicycle, the store window, or the store. Which it is depends not so much on English as on your knowledge of what people want and don't want.
  • If you add more information, such as "she saw the bicycle through the store window, looked at it longingly, and pressed her nose up against it," then "it" probably refers to the store window rather than the bicycle; resolving that requires knowledge of human anatomy and emotions.

Electronic Translation

  • One of the first non-numerical applications of computers was electronic translation from Russian into English. The researchers, however, hadn't reckoned with ambiguity when they set out to use computers to translate languages.

Introduction

The video introduces the topic of artificial intelligence and its potential to replace human translators. It also highlights the challenges that computers face in understanding language.

The End of Human Translators?

  • Computers are becoming faster and more efficient, but can they replace human translators?
  • Translating scientific and technical material may be possible for computers, but it is not as easy as it seems.

Understanding Language

  • Humans have a vast amount of common knowledge that allows them to understand each other despite different languages and traditions.
  • For computers to understand language, they would need to know what humans know about goals, beliefs, sensitivities, and fears.
  • Language is full of ambiguity, making it difficult for computers to understand context and meaning.

Challenges in Artificial Intelligence

The video discusses the challenges that artificial intelligence faces in learning language and performing tasks that come naturally to humans.

Easy vs Hard Tasks

  • Calculus can be done with just a few hundred pieces of program code, but tasks like recognizing faces or walking are still difficult for robots.
  • AI has faced many failures over the years, leading some people to believe that it is doomed.

Machine Learning

  • Researchers have been working on machine learning techniques to help computers recognize patterns and learn from data.
  • One successful project was Terry Winograd's program SHRDLU, which could use English intelligently within a micro world of simulated blocks. That success, however, was limited by the narrow range of topics the micro world could support.

The Future of AI

  • Despite the challenges, some researchers believe that AI still has a bright future. Edward Feigenbaum realized that while microworlds might not be very large, they might be large enough to be useful in capturing the intelligence displayed by experts and specialists.

Expert Systems and Brittleness

In this section, the speaker discusses how expert systems are limited in their ability to function outside their narrow field of knowledge, a limitation known as brittleness. The speaker compares this to the condition of the idiot savant and emphasizes that capturing general human intelligence will require studying children, whose knowledge is broad but shallow.

Expert Systems and Narrow Knowledge

  • Geology, medicine, and other sciences have areas where deep but narrow knowledge is used.
  • Computers can achieve expert behavior in useful but narrow areas with a few hundred or thousand pieces of knowledge.
  • Human experts know many things outside their specialty, while expert systems are hopeless outside their field of knowledge.

Brittleness of Expert Systems

  • Expert systems are brittle when they meet new situations.
  • An expert system for blood disease analysis is brilliant at deciding which blood disease a patient has based on objective tests but cannot answer questions about germs or patients' preferences.
  • An expert system for approving automobile loans granted a loan to someone who put down that they had 20 years of experience on the same job even though they also put down that they were only 19 years old.
  • An expert system for skin disease diagnosis diagnosed measles when asked about a car.

Idiot Savants and General Human Intelligence

  • The brittleness of expert systems has been likened to the idiot savant: a person brilliantly gifted in one small area but backward in every other sense.
  • A deep but narrow mind will always break when it meets new situations.
  • General human intelligence creates a broad model of the world, enabling us to cope with all kinds of situations.
  • To capture general human intelligence in a computer program, we have to study children, who excel in broad but shallow knowledge.

Following Simple Stories

In this section, the speaker discusses how language researchers were hard at work trying to get computers to follow simple stories as children do. They discovered that the problem wasn't what the story said but rather the huge number of things left unsaid because they were too obvious to be worth saying.

The Problem with Simple Stories

  • Language researchers were trying to get computers to follow simple stories as children do.
  • The problem wasn't what the story said but rather the huge number of things left unsaid because they were too obvious to be worth saying.

Example Story

  • An example story was about Jack's birthday and buying him a kite.
  • The story presupposes a vast amount of knowledge, such as assuming they are going to a birthday party and why they were buying a kite.

Building Frames and Scripts for AI

In this section, the speaker discusses the idea of giving computers context by building frames or scripts for situations they might encounter. However, this approach faces challenges when it comes to common sense knowledge.

Challenges with Common Sense Knowledge

  • The challenge of storing general background knowledge that doesn't belong in specific frames or scripts.
  • The difficulty of defining a strict rule for when someone wants another item just like one they already have.
  • The problem of understanding everything else being equal in common sense knowledge.

Common Sense Knowledge and Machine Learning

  • Common sense knowledge is the vast database of intuitive knowledge shared by everyone.
  • Scientists have long been interested in machine learning, but computers start from such a low level that they struggle to learn quickly.
  • To overcome this challenge, researchers attempted to feed computers millions of pieces of common sense knowledge through projects like Cyc.

The Ultimate Test: Building an Artificial Mind

  • The Cyc project aimed to input the kind of obvious knowledge that encyclopedias leave out, in order to build an artificial mind capable of understanding language and learning on its own.

The Acquisition of Common Sense Knowledge

In this section, the speaker discusses how common sense knowledge is acquired through experiences and skills.

Acquiring Common Sense Knowledge

  • Children acquire common sense knowledge through playing with blocks, sand, and water.
  • Common sense knowledge consists of a huge number of special cases that have tuned the neurons to recognize similar patterns and trigger appropriate actions or expectations.
  • People who never experience much of the world can still acquire and use language with common sense.
  • Oliver Sacks' collection of neurological cases includes a patient named Madeleine who was born blind and unable to move her limbs but could use language with common sense despite having limited experiences.

Building an Artificial Brain

  • The human brain is not like a computer; it is made from billions of neurons, each connected to thousands of others.
  • Scientists pursued the idea of building an artificial brain in the 1950s to imitate how the brain's network of neurons learned from experiences by recording them as the strength of connections between neurons.
  • Scientists built working perceptrons (artificial brains) in the 1950s and 60s to explore how they learn, such as recognizing differences between males and females based on facial features and hair outline.
  • This approach to machine intelligence virtually died out but underwent a revival in the late 70s when AI problems seemed insurmountable.
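The perceptron idea — learning recorded as connection strengths, nudged after each mistake — can be sketched in a few lines. This is a generic modern illustration, not a reconstruction of the 1950s machines; the training task (logical AND) is chosen only because it is small and linearly separable.

```python
# Minimal perceptron sketch: the weights are the "connection
# strengths", and each training mistake nudges them toward the
# correct answer.

def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Strengthen or weaken each connection in proportion to
            # its input and the direction of the mistake.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy linearly separable task: logical AND of two inputs.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable distinctions — one of the limitations that contributed to the approach dying out before its later revival.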

Neural Networks and Common Sense

The video discusses how neural networks work and their limitations in terms of common sense. It also explores the challenges of training neural networks to recognize patterns and navigate the world.

How Neural Networks Work

  • In one demonstration, a vehicle is driven by a neural network that has learned by itself how to keep the vehicle on the road.
  • The network has been trained to imitate a person driving, learning from watching a person drive along about a 500-meter stretch of road.
  • The network has learned to key on the position of the white line and edge of the road to determine where the road is and hence in what direction it should steer.

Limitations of Neural Networks

  • While neural networks are appealing, they have limitations in terms of common sense. They have problems that conventional software doesn't have.
  • Researchers don't yet know how nets learn, which matters because what they may be learning may not be what researchers think they're learning.
  • Early neural networks contained some big surprises. For example, when researchers took pictures of tanks hidden behind trees and trees without any tanks behind them, they trained a connectionist net to distinguish between pictures with tanks and those without. However, when given new pictures taken on different days, it failed completely.

Training Neural Networks

  • To train a network called NETtalk to associate patterns of letters with their sounds when read aloud, researchers started it off making random attempts at pronouncing phrases like "grandmother's house."
  • After each attempt, the phonetic difference between the guess and right pronunciation was sent back through the network so that the net could adjust its connection strengths.
  • The net slowly improved and eventually learned to associate letters with sounds, but it still understands nothing about language.
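The training loop just described — guess, measure the error against the right answer, send it back through the connections — can be sketched with a single-layer network and the delta rule. The real system used a hidden layer and backpropagation, and the letter and "sound" encodings below are invented purely for illustration.

```python
# Toy error-driven training: map a one-hot letter code to a target
# "sound" vector, adjusting connection strengths by the error after
# each guess (delta rule; the real system backpropagated through a
# hidden layer).

# Invented encodings: 3 letters, each with a 2-number "sound".
letters = {"a": [1, 0, 0], "b": [0, 1, 0], "c": [0, 0, 1]}
sounds  = {"a": [0.9, 0.1], "b": [0.1, 0.9], "c": [0.5, 0.5]}

# weights[i][j]: connection strength from input i to output j.
weights = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
lr = 0.5

def forward(x):
    return [sum(weights[i][j] * x[i] for i in range(3)) for j in range(2)]

for _ in range(100):                    # repeated attempts
    for letter, target in sounds.items():
        x = letters[letter]
        guess = forward(x)
        error = [t - g for t, g in zip(target, guess)]  # the "phonetic difference"
        for i in range(3):              # send the error back through
            for j in range(2):          # the connections
                weights[i][j] += lr * error[j] * x[i]

print(forward(letters["a"]))            # converges toward [0.9, 0.1]
```

As the section notes, a net trained this way maps letters to sounds without understanding anything about language: the knowledge is entirely in the numbers.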

Conclusion

  • Today's neural networks are very small, and attempts to make them bigger than a few hundred neurons backfire because the training time explodes.
  • The brain has managed to solve problems by being a collection of many special-purpose machines rather than one big general-purpose machine.

Practical Applications of Artificial Intelligence

In this section, the speaker discusses the practical applications of artificial intelligence and how they differ from the original quest for general-purpose intelligence based on common sense.

Specialized AI Applications

  • AI has produced specialized applications such as chess-playing programs, expert systems, rudimentary robots in hospitals, computers that read books to the blind, and limited translation capabilities.
  • These practical applications have appropriated the name "artificial intelligence," but they have little to do with the original quest for general-purpose intelligence based on common sense.

The Quest for General-Purpose Intelligence

  • None of these specialized AI applications could pass Alan Turing's test proposed in 1950.
  • Doug Lenat's Cyc project began in 1984 with the goal of building a mind that knows enough to understand language and learn the way humans do. The project is still ongoing.
  • The chances of success were originally estimated at about 10%, but are now estimated at over 60% after various obstacles were overcome along the way.
  • Researchers have dedicated their lives to formalizing, studying, and codifying topics such as time, space, causality, belief, emotion, and rationality.

Building a Mind

  • The team working on Cyc call themselves "cyclists" and are trying to capture the world piece by piece, in order to build a mind that knows enough to understand language and learn the way humans do.
  • Cyc was having some success handling ambiguous phrases such as "Mary read Melville," recognizing that "Melville" here stands for a book by Melville.
  • Building a mind is painstaking work. Cyc's mind has to be filled with all the details of what a nurse does, including taking temperatures and giving medicines.
  • Cyc sees the world in a fairly novel way and comes up with new generalizations by looking for inconsistencies in its database.

Entity with No Body

  • Even though Cyc runs on computers, it has no body at all. It is just software, a pure mind.
  • One can only wonder what an entity with no body makes of all the knowledge it has acquired about a world it has never directly experienced.

The Potential of Artificial Intelligence

Dr. John McCarthy discusses the potential of artificial intelligence and its ability to capture a great deal of intelligence in a disembodied way. He believes that if AI can understand stories that four-year-old children can understand, it would be a significant achievement. If successful, AI could pass the Turing test and power machine learning programs to learn things unknown to humanity at present.

Understanding Stories

  • Dr. McCarthy believes that if AI can understand stories that four-year-old children can understand, it would be a significant achievement.
  • This would show that a great deal of intelligence could be captured in a totally disembodied way.

Potential Payoff

  • The potential payoff for successful AI development is passing the Turing test and powering machine learning programs to learn things unknown to humanity at present.
  • Using AI as an intelligence amplifier could allow us to do things in a few decades that people today cannot dream of doing.

History and Growth of Artificial Intelligence

Dr. John McCarthy reflects on his personal experience with artificial intelligence since he began working on it in 1956. He notes that progress has been slower than he hoped due to difficult conceptual problems, but he has devoted his life's work to tackling these problems.

Personal Reflection

  • Dr. McCarthy began working on artificial intelligence in 1956.
  • He notes progress has been slower than he hoped due to difficult conceptual problems.
  • His life's work has been devoted to tackling these problems.

Formal Models of Intelligence

  • Dr. McCarthy's work has focused on providing the underpinnings for formal models of intelligence that would be equivalent to human intelligence.
  • One part of this problem is developing a language in which we can express the facts and reasoning about the common-sense world necessary for intelligent behavior.

Early Progress

  • In the early years of AI, there was some striking progress made on difficult problems.

The Challenges of Computer Intelligence

In this section, the speaker discusses the challenges of computer intelligence and how it has been approached from both a biological and computer science perspective.

Approaches to Computer Intelligence

  • Speech recognition and solving difficult mathematical problems have been challenging for computer intelligence.
  • Two approaches to computer intelligence are imitating the nervous system or imitating human psychology.
  • The computer science approach is more successful in dealing with the common sense world.

Psychology and Computer Science

  • Computers have had a profound effect on psychology by allowing researchers to understand how information is processed.
  • Behaviorism was a reaction to 19th-century philosophy that went too far in its efforts to be scientific by saying that only externally observable things were subjects for science.
  • Consciousness offers some conceptual difficulties when it comes to programming computers, but there has been progress in developing self-consciousness.

Philosophical Issues

  • There are deep philosophical issues surrounding the development of self-consciousness in computers.
  • Alan Turing developed the famous Turing test, which suggested that if a computer could imitate a human being so well that you couldn't tell whether you were communicating with a real human or not, then you might as well say that the computer was conscious.

Behavioral Criteria and Consciousness

In this section, the speakers discuss the concept of consciousness and whether machines can possess it. They also talk about how consciousness is viewed as a system composed of many components.

Behavioral Criteria for Thinking

  • Some philosophers accept behavioral criteria to determine if something is thinking or not.
  • Others argue that if a machine is only doing what it was programmed to do, then it cannot be considered as truly thinking.

Components of Consciousness

  • Consciousness is viewed as consisting of many components such as memory, emotions, and attention.
  • Some people believe that consciousness is nothing more than the sum of its parts.
  • The mind can be seen as a structure composed of parts interacting in a specialized way.

Machines vs Systems

  • A machine isn't just the sum of its parts; the parts have to be connected and interacting in specified ways.
  • The mind can be seen as more of a system than just a heap of parts.

Human Intuition and AI Optimism

In this section, the speakers discuss human intuition and spirituality. They also talk about AI optimism and how machines are expected to realize aspects of human consciousness that have not been realized yet.

Human Intuition

  • Some people believe that humans have intuition, spirituality, and something that transcends mechanistic aspects.
  • This view has been in retreat for several hundred years due to discoveries about human physiology and psychology.

AI Optimism

  • There are aspects of human consciousness that have not been realized in machines or computer programs yet.
  • However, optimists about AI expect to get there eventually.

Chess Program Example

  • In 1968, the speaker made a wager with chess player David Levy that within ten years computers would be able to beat him at chess.
  • In 1978, a state-of-the-art program came close to winning the bet for the machines, but Levy won the match two games to the machine's one.
  • Current programs could probably beat Levy, who was a graduate student in computer science at the time and never became a grandmaster.

Common-Sense Reasoning

In this section, the speakers discuss how humans use mental shortcuts and rules of thumb to solve problems. They also talk about how formalized logic can enable machines to work in that fashion.

Mental Shortcuts

  • Humans use mental shortcuts and rules of thumb to solve problems rather than brute intellectual force.
  • The collection of problems on which computer brute force can be applied is limited.

Formalized Logic

  • The central problem of artificial intelligence involves how to express the knowledge about the world necessary for intelligent behavior.
  • Mathematical logic has been pursued as a tool for this purpose.
  • However, difficult problems remain in realizing this approach.

Non-Monotonic Reasoning

In this section, the speaker explains non-monotonic reasoning and how it differs from ordinary logic. He gives an example of how human reasoning is not always monotonic and how built-in assumptions affect our understanding of context.

Non-Monotonic Reasoning

  • Ordinary logic has the property that if you can draw a certain conclusion from some premises, then adding more premises still lets you draw that conclusion.
  • Human reasoning doesn't always have that property; this is what we call non-monotonicity.
  • An example of non-monotonic logic is when someone says they have a bird and want a birdcage built for it. If no other information is given, one would assume the bird can fly and build a cage with a top. However, if later on, it's revealed that the bird is actually a penguin, then there's no need for a top on the cage.
  • Our understanding of context relies heavily on implicit built-in assumptions and conventions in language. This makes non-monotonicity only part of the problem with context.

Using Non-Monotonic Reasoning in Computers

  • Non-monotonic reasoning can be used as a mathematical tool to build into computers an awareness of context and implicit assumptions in language.
  • Instead of trying to account for all possible exceptions to rules (e.g., birds that cannot fly), assuming something until evidence proves otherwise may be more efficient.
  • The hardest part about achieving artificial intelligence may not be building machines to do things but rather explaining why people didn't think of it 200 years ago. Our ability to observe our own mental processes has historically been limited, and we've only recently begun to formalize non-monotonic reasoning.
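The bird-cage example can be sketched as default reasoning: conclude "flies" from "bird" unless contrary evidence is known, and retract the conclusion when such evidence arrives. This is a crude hypothetical stand-in for formal non-monotonic logics such as circumscription or default logic; the predicate names are invented.

```python
# A crude sketch of non-monotonic (default) reasoning: a conclusion
# drawn from a default can be withdrawn when a new premise arrives,
# which ordinary monotonic logic never allows.

def flies(facts):
    """Default rule: a bird flies unless something we know blocks it."""
    if "bird" not in facts:
        return False
    blockers = {"penguin", "ostrich", "broken_wing"}
    return not (facts & blockers)

def needs_cage_top(facts):
    """Build a top on the cage only if we conclude the bird can fly."""
    return flies(facts)

facts = {"bird"}
print(needs_cage_top(facts))   # True: by default, assume the bird flies

facts.add("penguin")           # a new premise arrives...
print(needs_cage_top(facts))   # False: the earlier conclusion is retracted
```

Note how the design matches the efficiency point above: instead of enumerating every exception up front, the rule assumes flight until evidence proves otherwise.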

Socrates and the Future of Artificial Intelligence

In this section, John McCarthy discusses Socrates' interest in demonstrating people's ignorance and how it relates to the field of artificial intelligence. He also talks about the conceptual breakthroughs that need to be made for computers to carry out processes like humans.

Socrates' Interest in Demonstrating Ignorance

  • McCarthy mentions that while people were competent at what they did, they couldn't explain how they did it.
  • He draws a parallel between this and Socrates' interest in demonstrating people's ignorance.
  • McCarthy questions whether computers can carry out processes like humans.

Conceptual Breakthroughs Needed for AI

  • McCarthy believes that there are conceptual breakthroughs that need to be made for computers to carry out human-like processes.
  • He acknowledges that it could take anywhere from a few decades to several centuries before we have computer programs as intelligent as humans.
  • McCarthy bets on 50 years being a likely timeframe, but admits he doesn't know for sure.

The Project of Artificial Intelligence

  • McCarthy sees the project of artificial intelligence as one of the most difficult tasks facing humankind.
  • He believes that replicating human intelligence is not philosophically impossible, despite critics who argue otherwise.
  • Ultimately, he sees the project as an attempt to "know thyself" - a noble pursuit urged by Socrates.
Video description

Visit Our Parent Company EarthOne ➤ https://earthone.io/ This video is the culmination of documentaries that cover the history and origins of computing-based artificial intelligence.

Chapters:
00:00 Intro
0:44 The Thinking Machine
52:22 In Their Own Worlds (Claude Shannon)
59:26 The Thinking Machines
1:13:47 The Machine That Changed The World
2:07:42 John McCarthy Interview
