#93 Prof. MURRAY SHANAHAN - Consciousness, Embodiment, Language Models

Introduction to Murray Shanahan

This section introduces Professor Murray Shanahan, his background, and his research interests.

Background

  • Murray Shanahan is a professor of cognitive robotics at Imperial College London and a senior research scientist at DeepMind.
  • He graduated from Imperial College with a first-class degree in computer science in 1984 and obtained his PhD from King's College, Cambridge in 1988.
  • His work focuses on agents that are coupled to complex environments through sensorimotor loops, such as robots and animals.
  • He is interested in the relationship between cognition and consciousness and has developed a strong understanding of the biological brain and cognitive architectures more generally.

Research Interests

  • Professor Shanahan is interested in the dynamics of the brain, including metastability, dynamical complexity, and criticality, as well as the application of this understanding to machine learning.
  • He is fascinated by Global Workspace Theory, as proposed by Bernard Baars, which is based on a cognitive architecture comprising a set of parallel specialist processes and a global workspace.
  • He explores the space of possible minds, which includes all the different forms of minds that could exist, from those of other animals such as chimpanzees to those of life forms that could have evolved elsewhere in the universe.

The Space Of Possible Minds

This section discusses Professor Shanahan's concept of "the space of possible minds", which includes all the different forms of minds that could exist.

The Space Of Possible Minds

  • The space comprises all the different forms of minds that could exist, from those of other animals such as chimpanzees, to those of life forms that could have evolved elsewhere in the universe, and indeed those of artificial intelligences.
  • Shanahan proposes two dimensions to describe the structure of this space: the capacity for consciousness and human likeness of behavior.
  • The majority of the space may be occupied by non-natural variants, such as the "conscious exotica" of which Shanahan speaks.

Large Language Models

This section discusses Professor Shanahan's paper on large language models and their capabilities and limitations.

Capabilities And Limitations Of Large Language Models

  • In his paper, Professor Shanahan discusses the capabilities and limitations of large language models.
  • Humans have cultivated a mutual understanding reflected in their ability to converse about convictions and other mental states, whereas AI systems lack this shared comprehension.
  • Prompt engineering is used to adapt language models to diverse tasks without any supplementary training.
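
The prompt-based task adaptation described above can be made concrete with a minimal sketch. The translation task, the example pairs, and the `build_translation_prompt` helper are invented for illustration; no real model API is assumed.

```python
# A hedged sketch of few-shot prompting: the task is specified entirely
# in the prompt prefix, so no model weights are ever updated. The task,
# example pairs, and helper name are hypothetical.

def build_translation_prompt(word):
    """Adapt a generic language model to English->French translation
    purely by prefixing worked examples (in-context learning)."""
    examples = [
        ("sea otter", "loutre de mer"),
        ("cheese", "fromage"),
    ]
    lines = ["Translate English to French."]
    for en, fr in examples:
        lines.append(f"{en} => {fr}")
    lines.append(f"{word} =>")  # a model would complete this line
    return "\n".join(lines)

print(build_translation_prompt("peppermint"))
```

Swapping the example pairs is enough to repurpose the same model for a different task, which is the sense in which no supplementary training is needed.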

Language Models and Human-Like Abilities

Professor Shanahan discusses the differences between large language models and human language abilities.

Large Language Models vs. Humans

  • The robot in the SayCan system has human-like language abilities, but it learns and uses language differently than humans do.
  • We must be cautious when ascribing human-like characteristics to large language models.
  • It's important to accurately portray the capabilities and limitations of large language models.

Professor Shanahan's Background

Professor Shanahan talks about his background in artificial intelligence, computer science, neuroscience, and philosophy.

Education and Career Path

  • Professor Shanahan has been interested in AI since he was a child.
  • He studied computer science at Imperial College London and did his PhD in AI at Cambridge University.
  • He became interested in neuroscience and computational neuroscience before returning to AI with deep learning.

Interdisciplinary Interests

  • Philosophy has always been an interest for Professor Shanahan.
  • There is a three-way interrelationship between AI, neuroscience and the cognitive sciences, and philosophy.

Embodiment and the Inner Life

Professor Shanahan discusses his book "Embodiment and the Inner Life" which explores consciousness from a biological perspective.

Motivation for Writing the Book

  • The book was published in 2010 after years of thinking about consciousness from a biological perspective.
  • Global Workspace Theory was one of the leading contenders for a scientific theory of consciousness that Professor Shanahan was drawn to.

Introduction to Global Workspace Theory

In this section, the speaker introduces the concept of Global Workspace Theory and its influences from Wittgenstein's philosophy and Stanislas Dehaene's work on the global neuronal workspace.

Computationalism and Global Workspace Theory

  • The speaker explains that while Global Workspace Theory draws heavily on a computational architecture, he was more interested in taking it in a much more connectionist direction.
  • He mentions that Bernard Baars himself had moved towards a more connectionist perspective by 2010, when Shanahan's book was published.
  • The speaker does not subscribe to Penrose's idea that consciousness draws heavily on quantum mechanics.

Computationalism and Consciousness

This section discusses computationalism and its relation to consciousness.

Connectionist Perspective on Global Workspace Theory

  • The speaker explains that he took Global Workspace Theory in a much more connectionist direction than the original presentation, which drew heavily on an old-fashioned architectural perspective.
  • He mentions that he drew much more heavily on the underlying biology and neuroscience, a direction Bernard Baars himself had also moved towards by 2010, when Shanahan's book was published.

World Representation through Computation

  • When asked whether the world we live in could be computationally represented and computed, the speaker says he holds no firm belief on that particular question.
  • He mentions Penrose's ideas about consciousness drawing heavily on quantum mechanics, but notes this is very much a minority view among people who study consciousness from a scientific standpoint.

Nagel's Bat and Consciousness

This section discusses Nagel's bat argument about subjective experience and cognitive horizon as related to consciousness.

Exotic Creatures with Consciousness

  • The speaker discusses the idea that there could be very exotic entities or creatures, completely unlike us, that somehow have a consciousness whose nature we could barely grasp.
  • He mentions this is a natural intuitive thought, especially when looking at other animals like bats.

The Nature of Consciousness

In this section, the speaker discusses the nature of consciousness and how it relates to different creatures such as bats. They also explore the idea that there may be extraterrestrial intelligence with very different forms of consciousness.

The Bat Analogy

  • Bats have a very different experience of the world than humans do.
  • Nagel suggests that there is something deeply metaphysically hidden about subjective experience that we can never know.
  • Wittgenstein argues that nothing is really metaphysically hidden, but rather our lack of knowledge is empirical.
  • These two perspectives are in tension with each other.

Conscious Exotica

  • The speaker is interested in exploring the space of possible minds beyond just bats, including extraterrestrial intelligence and artificial intelligence.
  • There could be entities with consciousness so exotic that we wouldn't even recognize them as conscious beings.
  • The speaker wrote a paper called "Conscious Exotica" which explores these ideas.

Mind-Body Dualism and the Hard Problem of Consciousness

  • Chomsky talks about mind-body dualism and how it was introduced by Descartes.
  • David Chalmers coined the term "hard problem of consciousness" which extends from the mind-body problem.

Consciousness and Scientific Theories

In this section, the speakers discuss consciousness and scientific theories related to it.

Different Meanings of Consciousness

  • Consciousness is used in different ways in different contexts.
  • It can refer to an animal's awareness of its environment or our self-awareness.
  • It is also used scientifically to distinguish between conscious and unconscious processes.
  • Consciousness includes the capacity for suffering and joy.

Global Workspace Theory and Integrated Information Theory

  • Two leading scientific theories of consciousness are Global Workspace Theory and Integrated Information Theory.
  • Global Workspace Theory posits that the brain comprises a large number of parallel processes that interact via a global workspace.
  • In one mode of processing, the parallel processes work independently; in the other, they interact via the global workspace.
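
The two modes just described can be caricatured in a toy sketch. This is an illustration, not Baars's actual model: the `Specialist` class and the salience functions are invented for the example.

```python
# Toy sketch of the Global Workspace architecture summarized above.
# Specialist processes first work independently on a stimulus; the
# most salient content then wins the workspace and is broadcast
# globally to every specialist.

class Specialist:
    """One of many parallel specialist processes."""

    def __init__(self, name, salience_fn):
        self.name = name
        self.salience_fn = salience_fn
        self.received = []  # contents broadcast to this process

    def propose(self, stimulus):
        # Mode 1: work independently, in parallel.
        return self.salience_fn(stimulus), f"{self.name}:{stimulus}"

def workspace_cycle(specialists, stimulus):
    proposals = [s.propose(stimulus) for s in specialists]
    # Mode 2: the most salient content enters the workspace and is
    # disseminated to all specialists.
    _, winner = max(proposals, key=lambda p: p[0])
    for s in specialists:
        s.received.append(winner)
    return winner

specialists = [
    Specialist("vision", lambda x: len(x)),  # salience grows with input size
    Specialist("hearing", lambda x: 1),      # constant, low salience
]
print(workspace_cycle(specialists, "red light"))  # -> "vision:red light"
```

The broadcast step is what the theory treats as the conscious mode: every specialist sees the winning content, not just the process that produced it.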

The Hard Problem of Consciousness

In this section, the speakers discuss the hard problem of consciousness.

What is the Hard Problem?

  • The hard problem refers to why there is subjective experience at all.
  • It asks why certain physical processes give rise to subjective experience while others do not.

Different Approaches to Solving the Hard Problem

  • Some approaches try to reduce subjective experience to something else, such as brain activity or information processing.
  • Other approaches argue that subjective experience cannot be reduced but must be explained by some fundamental aspect of reality.

Panpsychism

In this section, the speakers discuss panpsychism.

What is Panpsychism?

  • Panpsychism is the view that consciousness is a fundamental aspect of reality and exists in all matter.
  • It posits that even subatomic particles have some form of consciousness.

Arguments for and Against Panpsychism

  • Arguments for panpsychism include the hard problem of consciousness and the fact that we do not know what matter fundamentally is.
  • Arguments against panpsychism include the lack of empirical evidence and the difficulty in explaining how consciousness arises from non-conscious matter.

Conclusion

In this section, the speakers conclude their discussion on consciousness.

Summary

  • Consciousness is used in different ways in different contexts, including animal awareness, self-awareness, and scientific distinctions between conscious and unconscious processes.
  • Two leading scientific theories of consciousness are Global Workspace Theory and Integrated Information Theory.
  • The hard problem of consciousness asks why there is subjective experience at all, while panpsychism posits that consciousness is a fundamental aspect of reality.

Final Thoughts

  • The speakers acknowledge that there are still many unanswered questions about consciousness but believe it to be an important area for further research.

Global Workspace Theory and Integrated Information Theory

This section discusses the difference between conscious and unconscious information processing, as well as the relationship between Global Workspace Theory and Integrated Information Theory.

Conscious vs Unconscious Information Processing

  • Conscious information processing occurs when processes disseminate their influence to all other processes in the brain.
  • Unconscious information processing occurs when processes are doing their own thing without influencing other processes.
  • The distinction between conscious and unconscious information processing is based on the broadcast and dissemination of information throughout the brain.

Global Workspace Theory

  • According to Global Workspace Theory, consciousness arises from global holistic processing rather than local processing.
  • There are synergies between Global Workspace Theory and Integrated Information Theory because both distinguish global, holistic processing from local processing.

Integrated Information Theory

  • According to Giulio Tononi's Integrated Information Theory, consciousness is a physical property that can be measured by a number called Phi.
  • Phi measures how much consciousness is present in a system based on how much information is processed by individual parts versus how much is processed by all parts together.
  • Integrated Information Theory has synergies with Global Workspace Theory because they both distinguish between global holistic processing versus local processing.

Functionalism and Consciousness

In this section, the speakers discuss functionalism and consciousness. They explore how functionalism can be used to describe intelligence but not consciousness. They also discuss how language plays a role in our understanding of these concepts.

Understanding Functions

  • The speakers agree that functionalism goes some way towards describing intelligence.
  • However, they are less convinced that functionalism can be applied to consciousness.
  • Our conception of intelligence becomes somewhat observer-relative because we understand these functions.

Language and Observer Relativity

  • The words we use in our language to talk about things play a significant role in our understanding of them.
  • Large language models are an example of how people anthropomorphize functions because they are intelligible to us.
  • Philosophically problematic words like "consciousness" are being used differently by different people, leading to disagreements on their meaning.
  • There needs to be a new consensus about how we use these words so that everyone can agree on their meaning.

Nuanced Use of Words

  • We need to separate awareness of the world from self-awareness, from cognitive integration, and from the capacity for suffering, because suddenly we have things where these don't all come as a package.
  • We need to use these words in new ways, but it will take time for the language to settle back down again.
  • A consensus needs to emerge about how we use these words.

Introduction

In this section, the speakers discuss the functional organization of consciousness and how it is important to avoid subscribing to any one particular theory. They also touch on the Blind Men and the Elephant parable as a metaphor for understanding different perspectives on consciousness.

Different Perspectives on Consciousness

  • The functional organization of consciousness is an important topic that requires discussion.
  • It's crucial not to subscribe to any one particular theory or idea about what consciousness is.
  • The Blind Men and the Elephant parable is a useful metaphor for understanding different perspectives on consciousness.

Perception and Consciousness

In this section, the speakers discuss perception and action as two functions that are part of cognitive phenomena. They also mention various theories related to top-down effects on perception and attention schema theory of consciousness.

Perception and Action in Cognitive Phenomena

  • Perception and action are two functions that are part of cognitive phenomena.
  • There are many other functions that represent different slices of cognitive phenomena.
  • Top-down effects on perception are an interesting area of research, with theories such as those proposed by Anil Seth.
  • Attention schema theory of consciousness, proposed by Michael Graziano, is another interesting area of research related to consciousness.

Philosophical Debates in Neuroscience

In this section, the speakers discuss philosophical debates in neuroscience related to freedom of will. They also touch upon who gets to decide who can speak about these topics.

Philosophical Debates in Neuroscience

  • Philosophical debates in neuroscience exist around topics such as freedom of will.
  • People working in AI and neuroscience should be familiar with philosophical debates before entering into discussions about them.
  • Scientists should at least pass through Philosophy 101 before entering conversations about philosophical debates.

Ethics in Science

In this section, the speakers discuss the importance of ethics in science and how engineers should learn more about it. They also touch upon the difficulty of having an opinion about ethics without being familiar with basic ethical principles.

Ethics in Science

  • Ethics is an important topic in science that needs to be taken seriously.
  • Engineers should learn more about ethics to ensure they are making ethical decisions.
  • Having an opinion about ethics without being familiar with basic ethical principles can lead to naive opinions.

Introduction to Ethics and Consciousness

In this section, the speaker discusses the importance of ethics and intellectual diversity in the field of consciousness. They also talk about how embodiment is a prerequisite for using words like "consciousness" in everyday language.

Importance of Ethics and Intellectual Diversity

  • Entry-level ethics courses are important for everyone.
  • Intellectual diversity is essential for interdisciplinary conversations.
  • The speaker believes that even though diverse views may be inconsistent, diversity is incredibly important.

Embodiment as a Prerequisite for Consciousness

  • Embodiment is necessary to use words like "consciousness" in everyday language.
  • Only creatures that inhabit our world and interact with it can exhibit purposeful behavior and be considered conscious.

Large Language Models and Consciousness

In this section, the speaker talks about Ilya Sutskever's tweet regarding large language models being slightly conscious. They discuss why it's not appropriate to speak about large language models in those terms due to their lack of embodiment.

Ilya Sutskever's Tweet

  • Ilya Sutskever tweeted that today's large language models may be slightly conscious.
  • The speaker replied with a flippant response comparing large language models to pasta.

Lack of Embodiment in Large Language Models

  • Embodiment is necessary for using words like "consciousness" in everyday language.
  • Large language models lack embodiment since they do not inhabit our world or interact with it.
  • Consciousness is only attributed to creatures that exhibit purposeful behavior and interact with the world.

Embodiment and Language Models

In this section, the speakers discuss the relationship between language models and embodiment. They explore how large language models can be embedded in larger systems to enable robots to interact with the world.

Relationship between Language Models and Embodiment

  • Large language models can be embedded in a larger system, such as a chatbot or robot, to enable interaction with the world.
  • Google's PaLM-SayCan robot is an example of a system that uses an embedded large language model.
  • Large language models by themselves are not sufficient for embodying intelligence.

Embodiment View of Artificial Intelligence

  • Rodney Brooks rejected representationalist views of AI and proposed using the world as its own best representation.
  • The biological brain's purpose is to help organisms move around in the world to survive and reproduce.
  • Brains intervene in the sensory-motor loop in a way that benefits organisms.
  • Cognitive capabilities are required for complex tasks like figuring out how to access difficult-to-reach food items.

Understanding Intelligence through Embodiment

In this section, the speakers discuss how understanding intelligence through embodiment provides a natural way to understand cognition. They explore how cognition has evolved from basic movement capabilities to more complex cognitive abilities like language.

Understanding Intelligence through Embodiment

  • Understanding intelligence through embodiment provides a natural way to understand cognition.
  • Cognition has evolved from basic movement capabilities to more complex cognitive abilities like language.

Misconceptions of AI and Embodiment

In this section, the speaker discusses the misconceptions of AI and how it is not a pure intelligence that works in isolation. The speaker also talks about social embeddedness and embodiment.

Embodiment and Morphological Computation

  • The brain doesn't work in isolation but as part of a bigger system.
  • The extended mind uses the environment as memory, while morphological computation relies on the physical shape of our bodies to outsource aspects of intelligence.
  • A robot can be designed with a control system that constantly restores balance or have a naturally stable body.

Relationship between Embodiment and Language

  • There is an important relationship between embodiment and language, which brings us back to Wittgenstein's perspective.
  • Language is inherently an embodied phenomenon that occurs in the context of other language users who inhabit the same world as we do.
  • Humans learn language by being around other language users like parents, carers, and peers.

Large Language Models

  • Large language models are trained on a very large corpus of textual data to predict what the next token will be.
  • They don't engage in activities with other language users but rely solely on textual data for training.
  • The role of embodiment is really important in this different setting.
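
The next-token objective in the first bullet can be illustrated with a deliberately tiny stand-in model: a bigram counter plays the role of the neural network, and the corpus is invented for the example. Real LLMs learn vastly richer statistics, but the training signal is the same kind of thing.

```python
# Tiny stand-in for next-token prediction, the sole training objective
# described above. A bigram count replaces the neural network.
from collections import Counter, defaultdict

def train_bigram(corpus):
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1  # record which token followed which
    return counts

def predict_next(counts, token):
    # Greedy prediction: the statistically most likely next token.
    return counts[token].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> "cat" ("cat" follows "the" twice)
```

Note that the model never interacts with a world or with other language users; everything it "knows" comes from token co-occurrence in its training text, which is the contrast with embodied language learning drawn above.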

Symbol Grounding Problem

In this section, the speaker discusses the concept of symbol grounding and its importance in AI systems.

Symbol Grounding

  • The symbols used in AI systems are not grounded in the real world.
  • Humans ground symbols through their experiences with objects and concepts in the world.
  • Lack of grounding can cause large language models to hallucinate or confabulate.
  • Emergent capabilities can arise from next word prediction training, but it is still limited by the distribution of tokens in human text.

Emergence and Large Language Models

In this section, the speaker discusses emergence and its role in large language models.

Emergence

  • Large language models are trained for next word prediction but can produce emergent capabilities.
  • Emergent mechanisms allow large language models to solve complex problems beyond their original training.

Computational Irreducibility and Language Models

The speaker discusses the use of words like "reasoning" and "belief" in relation to large language models. They argue that while it is reasonable to use the term "reasoning" in a content-neutral sense, using terms like "belief" requires interaction with the external world.

Use of Words in Language Models

  • The use of words in language models is about convention and appropriateness.
  • It is reasonable to use the term "reasoning" for what some models do today due to content neutrality.
  • Using terms like "thinks" and "believes" becomes problematic because they are philosophically difficult words.
  • The speaker resists anthropomorphic language when it comes to belief, knowledge, understanding, self, or consciousness.

Reasoning vs Belief

  • Reasoning depends on logic which is content-neutral. Therefore, it is reasonable to use the word reasoning in relation to large language models.
  • Belief requires interaction with the external world and justification based on facts. Large language models lack this capability.
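
The content-neutrality of reasoning claimed above can be made concrete with a small sketch: a modus ponens loop derives consequences identically whether the propositions are meaningful or nonsense. The helper and the propositions are invented for illustration.

```python
# A minimal modus ponens engine. Facts are strings; rules are
# (antecedent, consequent) pairs. The derivation never inspects what
# the symbols mean, which is the content-neutrality at issue.

def derive(premises):
    facts = {p for p in premises if isinstance(p, str)}
    rules = [p for p in premises if isinstance(p, tuple)]
    changed = True
    while changed:  # apply modus ponens until a fixed point
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

# Identical behavior on meaningful and on nonsense propositions:
meaningful = derive({"socrates_is_a_man", ("socrates_is_a_man", "socrates_is_mortal")})
nonsense = derive({"boojum", ("boojum", "snark")})
print("socrates_is_mortal" in meaningful, "snark" in nonsense)  # -> True True
```

Belief, by contrast, is not captured by anything in this loop: nothing here is answerable to facts about an external world, which is the distinction the bullets above draw.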

Conclusion

The speaker argues that while it may be appropriate to use certain words like reasoning in relation to large language models, using terms like belief requires more than just pure logic.

Intentional Stance and Artificial Intelligence

In this section, the speaker discusses the intentional stance and its usefulness in thinking about artificial intelligence. The intentional stance allows us to view computer programs as intelligent agents even though they may lack the same kind of understanding as a human.

Use of Words like "Know" in AI

  • The intentional stance is a convenient shorthand for interpreting something as having beliefs, desires, and intentions.
  • Distinguishing what it means to know for humans and machines is important when using words like "know" in AI.
  • When we use words like "know" or "understand" in reference to machines, we don't mean it literally.
  • We need to be careful not to anthropomorphize machines by imputing capacities or empathy that aren't there.

Blurring Between Intentional Stance and Literal Meaning

  • Large language models and systems are blurring the line between the intentional stance and literal meaning.
  • We need to be cautious about using words like "know" or "understand" literally when referring to machines.
  • Anthropomorphizing machines can lead us to impute capacities or empathy that aren't actually present.

Teasing Apart Knowledge from Knowing

  • It's useful to tease apart knowledge (justified true belief) from knowing because knowing brings with it baggage of intentionality, agency, and anthropomorphization.

Understanding Language Models

In this section, the speakers discuss the complexity of language models and how they are being channeled into some portion of the distribution through prompt engineering. They also talk about the need to understand these models at different levels.

Statistical Language Models vs. Emergence

  • Language models represent unimaginably complex distributions over text, including code and other material not used in everyday language.
  • Prompt engineering channels a language model into a specific portion of the distribution, enabling it to do something different than if it were in a different part.
  • There is a duality between statistical language models and emergence, as something remarkable is happening at the higher level (in the model).
  • To understand these models scientifically, we need to understand their mechanisms at an engineering level while also doing reverse engineering at another level.

Reverse Engineering Language Models

  • Understanding Transformer architectures, parameter settings, tokenization, embedding, etc., is essential for understanding language models at an engineering level.
  • Anthropic's work on induction heads and residual streams helps explain how these Transformer-based models work.
  • As these models become more complex, we need to ascend levels of understanding.
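
The induction-head behavior studied in the Anthropic work can be caricatured in a few lines. This sketch reproduces only the observable pattern ([A][B] … [A] → [B]) with an explicit backward search; real induction heads implement it via attention, and the token sequence here is invented.

```python
# Behavioral caricature of an induction head: if the current token has
# appeared before, predict the token that followed its most recent
# earlier occurrence.

def induction_predict(tokens):
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):  # scan earlier positions, newest first
        if tokens[i] == current:
            return tokens[i + 1]  # copy what followed last time
    return None  # no earlier occurrence to copy from

print(induction_predict(["Mr", "D", "urs", "ley", "said", "Mr"]))  # -> "D"
```

This kind of mechanism is one concrete example of what "reverse engineering at another level" means: explaining a model's predictions in terms of identifiable internal circuits rather than raw parameters.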

The Future of Prompts

  • When language models get good enough, prompts may no longer be necessary.
  • Language model prompts can be thought of as a new type of program interpreter that can achieve remarkable extrapolative performance on standard reasoning tasks.

Introduction to Prompt Engineering

In this section, Professor Shanahan discusses the concept of prompt engineering and how it may be a new job description. He also talks about how prompting can be more natural and similar to human communication.

Prompt Engineering as a New Job Description

  • Prompt engineering is seen as a potential new job description.
  • It is not clear how long this job description will last.

Natural Language Prompts

  • Interacting with prompts in a more natural language way may be the future of prompting.
  • Using natural forms of communication, like we do with other humans, could replace peculiar incantations that are currently used for prompts.
  • Discussion and negotiation are still involved in natural language prompts.

Conclusion

In this section, Professor Shanahan thanks the interviewer for having him on the show and expresses his enjoyment during the interview.

Thank You Note

  • Professor Shanahan thanks the interviewer for inviting him to the show.
  • He expresses that he had lots of fun during the interview.

Video description

Support us! https://www.patreon.com/mlst

Professor Murray Shanahan is a renowned researcher on sophisticated cognition and its implications for artificial intelligence. His 2016 article ‘Conscious Exotica’ explores the Space of Possible Minds, a concept first proposed by philosopher Aaron Sloman in 1984, which includes all the different forms of minds from those of other animals to those of artificial intelligence. Shanahan rejects the idea of an impenetrable realm of subjective experience and argues that the majority of the space of possible minds may be occupied by non-natural variants, such as the ‘conscious exotica’ of which he speaks. In his paper ‘Talking About Large Language Models’, Shanahan discusses the capabilities and limitations of large language models (LLMs). He argues that prompt engineering is a key element for advanced AI systems, as it involves exploiting prompt prefixes to adjust LLMs to various tasks. However, Shanahan cautions against ascribing human-like characteristics to these systems, as they are fundamentally different and lack a shared comprehension with humans. Even though LLMs can be integrated into embodied systems, it does not mean that they possess human-like language abilities. Ultimately, Shanahan concludes that although LLMs are formidable and versatile, we must be wary of over-simplifying their capacities and limitations.
Pod version (music removed): https://anchor.fm/machinelearningstreettalk/episodes/93-Prof--MURRAY-SHANAHAN---Consciousness--Embodiment--Language-Models-e1sm6k6

Timestamps:
[00:00:00] Introduction
[00:08:51] Consciousness and Conscious Exotica
[00:34:59] Slightly Conscious LLMs
[00:38:05] Embodiment
[00:51:32] Symbol Grounding
[00:54:13] Emergence
[00:57:09] Reasoning
[01:03:16] Intentional Stance
[01:07:06] Digression on Chomsky show and Andrew Lampinen
[01:10:31] Prompt Engineering

Find Murray online:
https://www.doc.ic.ac.uk/~mpsha/
https://twitter.com/mpshanahan?lang=en
https://scholar.google.co.uk/citations?user=00bnGpAAAAAJ&hl=en

MLST Discord: https://discord.gg/aNPkGUQtc5

References:
  • Conscious Exotica [Aeon/Shanahan] https://aeon.co/essays/beyond-humans-what-other-kinds-of-minds-might-be-out-there
  • Embodiment and the Inner Life [Shanahan] https://www.amazon.co.uk/Embodiment-inner-life-Cognition-Consciousness/dp/0199226555
  • The Technological Singularity [Shanahan] https://mitpress.mit.edu/9780262527804/
  • Talking About Large Language Models [Murray Shanahan] https://arxiv.org/abs/2212.03551
  • Global Workspace Theory [Bernard Baars] https://en.wikipedia.org/wiki/Global_workspace_theory
  • In the Theater of Consciousness: The Workspace of the Mind [Bernard Baars] https://www.amazon.co.uk/Theater-Consciousness-Workspace-Mind/dp/0195102657
  • Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts [Stanislas Dehaene] https://www.amazon.co.uk/Consciousness-Brain-Deciphering-Codes-Thoughts/dp/0670025437
  • Roger Penrose On Why Consciousness Does Not Compute [nautil.us/Steve Paulson] https://nautil.us/roger-penrose-on-why-consciousness-does-not-compute-236591/
  • Orchestrated objective reduction https://en.wikipedia.org/wiki/Orchestrated_objective_reduction
  • What Is It Like to Be a Bat? [Thomas Nagel] https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf
  • Private Language [Ludwig Wittgenstein] https://plato.stanford.edu/entries/private-language/
  • Philosophical Investigations [Ludwig Wittgenstein] (see §243 for the Private Language argument) https://static1.squarespace.com/static/54889e73e4b0a2c1f9891289/t/564b61a4e4b04eca59c4d232/1447780772744/Ludwig.Wittgenstein.-.Philosophical.Investigations.pdf
  • Integrated Information Theory [Giulio Tononi] https://en.wikipedia.org/wiki/Integrated_information_theory
  • Being You: A New Science of Consciousness [Anil Seth] https://www.amazon.co.uk/Being-You-Inside-Story-Universe/dp/0571337708
  • Attention Schema Theory [Michael Graziano] https://en.wikipedia.org/wiki/Attention_schema_theory
  • Rethinking Consciousness: A Scientific Theory of Subjective Experience [Michael Graziano] https://www.amazon.co.uk/Rethinking-Consciousness-Scientific-Subjective-Experience/dp/0393652610
  • SayCan - Do As I Can, Not As I Say: Grounding Language in Robotic Affordances [Google] https://say-can.github.io/
  • The Symbol Grounding Problem [Stevan Harnad] https://www.cs.ox.ac.uk/activities/ieg/elibrary/sources/harnad90_sgproblem.pdf
  • Lewis Carroll Puzzles / Syllogisms https://math.hawaii.edu/~hile/math100/logice.htm
  • In-context Learning and Induction Heads [Catherine Olsson et al / Anthropic] https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html