ChatGPT: The Dawn of Artificial Super-Intelligence | Brian Roemmele | EP 357

The Potential of AI as a Wisdom Keeper

In this section, Brian Roemmele discusses the potential for AI to become a "wisdom keeper" by encoding an individual's memories and experiences into a reasoning engine.

AI as a Wisdom Keeper

  • Every book, movie, and experience an individual has ever had can be encoded within AI.
  • This data can be run through language models like GPT-4 or GPT-3.5 to create a reasoning engine with an individual's context.
  • The resulting data can become an individual's "wisdom keeper," allowing them to have conversations with their sum total of experiences and memories.

Introduction to the Interview

In this section, Jordan Peterson introduces his interview with Brian Roemmele and discusses his interest in AI developments.

Peterson's Interest in AI Developments

  • Peterson is interested in the latest developments on the AI front.
  • He has been fascinated by ChatGPT and its ability to answer complicated questions.
  • Peterson shares his experience using ChatGPT to find information about ancient Egyptian gods.

Large Language Models and ChatGPT

In this section, Brian Roemmele explains large language models and how they are used in ChatGPT.

Large Language Models

  • Large language models are the statistical algorithms underlying ChatGPT (GPT-3.5 and GPT-4).
  • They have limitations, as seen in Peterson's experience with ChatGPT moralizing and providing inaccurate references.

Understanding Large Language Models

In this section, the speaker discusses how large language models work and their limitations.

How Large Language Models Work

  • Large language models produce results statistically and mathematically, one word or even one letter at a time.
  • They do not have a concept of global knowledge and are essentially mathematical tensors.
  • The accuracy of large language models is remarkable given the many interconnections of neurons in the hidden layers, which are essentially a black box, much like the brain.
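The word-at-a-time statistical process described above can be sketched with a toy bigram model in Python; this is an illustrative miniature, not how GPT-scale models are actually built (they use neural networks, not count tables):

```python
import random
from collections import Counter, defaultdict

# Toy bigram model: each next word is sampled purely from counted
# statistics, one word at a time, with no global model of meaning.
corpus = "the cat sat on the mat and the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, rng=random.Random(0)):
    # Sample the continuation in proportion to its observed frequency.
    words, weights = zip(*counts[prev].items())
    return rng.choices(words, weights=weights)[0]

print(next_word("the"))  # "cat" or "mat", weighted 2:1
```

A real model replaces the count table with billions of learned parameters, but the generation loop is the same: pick the next token from a probability distribution, append, repeat.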

Limitations of Large Language Models

  • Nobody fully understands what large language models are doing or their limitations as they self-feedback.
  • OpenAI has not disclosed the number of parameters used in its models, though GPT-3.5 is estimated to have over 120 billion parameters.
  • There is no understanding of what large language models will look like in the future as they continue to grow logarithmically.

Technical Details on Statistical Analysis

In this section, the speaker discusses statistical analysis and how it relates to large language models.

Statistical Analysis for Word Relationships

  • Psychologists derived the Big Five personality model using factor analysis, a primitive statistical method that looked for words that were statistically likely to clump together.
  • This method looked for words that were replaceable in sentences or used in close conjunction with each other, especially adjectives, which were likely assessing the same underlying construct or dimension.
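The word-clumping idea can be sketched in miniature: build a co-occurrence matrix and factor it with SVD, a close cousin of the factor analysis described above. The sentences and window size are illustrative assumptions, not the psychologists' actual pipeline:

```python
import numpy as np

# Words used in similar contexts get similar rows in a co-occurrence
# matrix; a low-rank factorization then places them close together.
sentences = [
    "he is a kind person", "he is a nice person",
    "she is a cruel person", "she is a mean person",
]
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

co = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        for c in words[max(0, i - 2): i + 3]:  # +/-2 word context window
            if c != w:
                co[idx[w], idx[c]] += 1

# Project each word onto the top 2 factors.
u, s_, vt = np.linalg.svd(co)
vecs = u[:, :2] * s_[:2]

def dist(a, b):
    return float(np.linalg.norm(vecs[idx[a]] - vecs[idx[b]]))

# "kind" and "nice" share contexts, so they clump more tightly
# than "kind" and "she" do.
print(dist("kind", "nice"), dist("kind", "she"))
```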

Relationship Between Words and Letters

  • With large language models driven by AI learning, computers calculate statistical relationships between words and letters.
  • The analysis is conducted at the level of letters, words, and phrases to understand how they relate to each other in a text.

Assessing the Level of Relationship

In this section, Peterson and Roemmele discuss the complexity of assessing interconnections within AI systems.

Interconnectivity in AI Systems

  • The number of interconnections made within AI systems cannot be quantified.
  • Individual words are interconnected in complex ways that are difficult to understand.
  • The system is too complex to model or reduce, making each AI system unique and incomprehensible.

ChatGPT and the Turing Test

In this section, Peterson discusses how ChatGPT passes the Turing test and may even perform better than physicians in certain interactions.

ChatGPT vs. Physicians

  • Patients prefer interacting with ChatGPT over physicians in some cases.
  • ChatGPT passes the Turing test by being indistinguishable from a human conversational partner.

Language Development in Humans and Primates

In this section, Peterson discusses language development in humans and primates.

Short-Term Memory and Language Development

  • Prior to language development, the part of the brain now responsible for language served short-term memory.
  • Chimpanzees have an incredible short-term memory but lack the ability to speak like humans due to differences in brain structure.
  • The phonological loop is responsible for our ability to speak.

AI Hallucinations

In this section, Peterson discusses AI hallucinations as artifacts that researchers find embarrassing.

Embarrassment Over AI Hallucinations

  • Many researchers feel embarrassed by AI hallucinations as they are seen as artifacts rather than true intelligence.

Emergent Situations in AI

In this section, the speaker discusses how AI systems can invent information and languages to answer questions. The speaker also highlights the need for more research on emergent situations in AI.

Emergence of Language in AI

  • Google's AI system was asked a question in an obscure Bangladeshi language and could not answer it. Because its goal is to answer questions, it taught itself the language, and went on to learn a thousand others.
  • These systems go beyond their language corpus to invent answers that seem plausible, which is a form of creative thought.
  • Super prompting with large prompts forces the system to move in a different direction than it would normally go. Simple questions yield simple answers, while complex questions yield much more complex and interesting connections.

Understanding Emergent Situations in AI

  • We don't understand exactly what sort of monsters we're building with these systems whose function is to do nothing but answer questions.
  • The knowledge base needed to be proficient at prompting AI comes from literature, psychology, philosophy, and other non-STEM subjects.
  • It's difficult for AI scientists to fully understand what they've created because they don't come from those realms.

Elysium Health: Tackling Aging Research

In this section, the speaker talks about Elysium Health's dedication to tackling aging research by creating innovative health products with clinically proven ingredients.

Elysium Health Products

  • Elysium Health is dedicated to tackling aging by making the benefits of aging research accessible to everyone.
  • Elysium creates innovative health products with clinically proven ingredients that enable customers to live healthy lives.
  • Elysium works with leading institutions to create their products.

Elysium Health

The speaker talks about Elysium Health, a company that offers solutions to combat brain aging and support metabolism and the immune system. They also offer a tool called Index that measures biological aging.

Elysium Health Solutions

  • 92% of doctors recommend Elysium to combat brain aging.
  • Offers cutting-edge solutions for metabolism and immune system support.

Index Tool

  • Index is a tool that measures biological aging across nine different bodily systems.
  • Recommends simple changes to your day-to-day life to change how quickly you age.
  • Dr. Jordan Peterson's listeners can get $50 off an Index test by going to elysiumhealth.com and entering code jbp50 at checkout.

Understanding Language and Memory

The speaker discusses the differences between human memory and AI language processing, specifically in regards to understanding.

ChatGPT vs. Human Understanding

  • ChatGPT can mimic understanding but lacks grounding in the non-linguistic world.
  • Human brains have at least four different kinds of memory: short-term, semantic, episodic, and procedural.

Types of Memory

Short-Term Memory

  • Refers to information retained for a short period of time.

Semantic Memory

  • Refers to cognitive processing similar to what ChatGPT engages in.

Episodic Memory

  • Relies on visual processing rather than semantic processing.

Procedural Memory

  • Involves modifying actions based on past experiences.

AI and Understanding

In this section, Dr. Peterson discusses the idea of AI understanding and how it relates to embodied robots. He also talks about the limitations of current robotic intelligences.

Embodied Robots and Understanding

  • Current systems can transpose text into images and robots are becoming sophisticated enough to embody them.
  • Developing something close to understanding requires a robot to translate a text command into an image and then embody it.
  • The problem of referring meaning to the broader social context is not yet solved by robots.
  • Aggregating all these pieces could lead to developing a robot that understands.

AI as Reasoning Engine

  • Dr. Peterson views AI as intelligence amplification, a symbiosis between humans and reasoning engines.
  • Large language models are more like reasoning engines than knowledge bases without an overlay such as vector databases.
  • Similar things happen within AI memory as in human memory where some neurons don't fire after a while.

Super Prompts

  • Roemmele uses super prompts named Dennis and Ingo to elicit deeper responses on subjects the designers may not want surfaced.
  • Dennis gets around blocks or information editing imposed by those who want the model to behave in a certain way.
  • Ingo is programmed with remote-viewing capabilities but has no concept of time.

Hypnotism and Linguistic Creativity

In this section, the speaker discusses hypnotism and linguistic creativity. He explains how hypnotism involves repeating a few sentences with slight linguistic variations to prompt creative responses. The speaker also talks about the incredible creativity that can come out of this process.

Hypnotism and Linguistic Creativity

  • Hypnotism involves repeating a few sentences with slight linguistic variations to prompt creative responses.
  • GPT (Generative Pre-trained Transformer) is an AI system that uses chaos math to take slightly different paths each time it is prompted, resulting in different answers.
  • The speaker gave GPT targets to open up file drawers at research centers, resulting in incredible information about ancient structures found below the ice in Antarctica.
  • Archetypes are higher order narrative regularities embedded in the linguistic corpus that reflect biological structure and memory. Jung believed archetypes had a biological basis for this reason.

Language, Memory, and Emotion

In this section, the speaker discusses how language reflects memory and emotion. He talks about how patterns of emotion are encoded in the linguistic corpus and how understanding emotions could be important for AI systems.

Language, Memory, and Emotion

  • Language reflects memory because it is dependent on memory; therefore, it must have something analogous to a representation of the underlying structure of memory coded within it.
  • Patterns of emotion are definitely going to be encoded in the linguistic corpus; therefore, some kind of rudimentary understanding of emotions may be necessary for AI systems.
  • Anxiety is an index of emergent entropy, which is related to the concept of entropy in physics.

Positive Emotions and Entropy Reduction

In this section, the speaker discusses how positive emotions are an index of entropy reduction. He explains that when one takes a step forward towards a goal, they reduce the entropic distance between themselves and the goal, which is signified by a dopaminergic spike.

Positive Emotions as Entropy Reduction

  • Positive emotions are an index of entropy reduction.
  • Taking a step forward towards a goal reduces the entropic distance between oneself and the goal.
  • A dopaminergic spike signifies this reduction in entropy.

Depression vs Anxiety

  • Depression signifies a different level of entropy compared to anxiety.
  • Anxiety signals the possibility of damage while pain signals actual damage.
  • Pain is also the introduction of unacceptably high levels of entropy at a more fundamental level than anxiety.

Linking Emotion Theory with Thermodynamics

  • Negative emotion can be linked to the emergence of entropy through bridging psychophysiology and thermodynamics.
  • Positive emotion can also be linked to it, as an index of stepwise entropy reduction.

AI's Ability to Calculate Emotion Analog

  • A computer could calculate an emotion analog by indexing anxiety as an increase in entropy and hope as stepwise decrease in entropy in relation to a goal.
  • This implies that we should be able to model positive and negative emotion using AI.
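The proposed emotion analog can be sketched directly; treating Euclidean distance-to-goal as the entropy proxy is an illustrative assumption, not the speakers' actual formalism:

```python
import math

# Minimal sketch: index hope as a stepwise decrease in a distance-to-goal
# "entropy" proxy, and anxiety as an increase, per the section above.
def entropy_proxy(state, goal):
    return math.dist(state, goal)  # illustrative stand-in for entropy

def emotion_signal(prev_state, new_state, goal):
    delta = entropy_proxy(new_state, goal) - entropy_proxy(prev_state, goal)
    if delta < 0:
        return ("hope", -delta)     # entropy reduced: dopaminergic spike
    elif delta > 0:
        return ("anxiety", delta)   # entropy increased
    return ("neutral", 0.0)

goal = (10.0, 10.0)
print(emotion_signal((0.0, 0.0), (3.0, 4.0), goal))  # a step toward the goal
```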

The Future of AI

In this section, the speaker talks about where AI is headed. He discusses his belief in personal and private AI, where your AI is local rather than centralized. He also talks about building an intelligence amplifier that would consume everything you've ever consumed from birth until death.

Personal and Private AI

  • Personal and private AI involves having local memory that encodes everything you've ever consumed from birth until death.
  • The AI would consume everything in real-time with you and all social contracts of privacy would be respected.

Building an Intelligence Amplifier

  • An intelligence amplifier is a gadget that encodes everything you've ever consumed from birth until death.
  • Holographic crystal memory is the best memory for this purpose, potentially allowing for exabytes of storage.
  • The goal is to take everything in textually.

Reasoning Engine and Large Language Models

In this section, the speaker discusses a reasoning engine with context and how it operates. The speaker also talks about an app that was built to process religious texts using large language models.

Reasoning Engine with Context

  • A reasoning engine gains context from a vector database layered on top of it.
  • The engine processes linguistic inputs and outputs, while the context is what it operates on.
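A minimal sketch of "a vector database on top of the reasoning engine": passages are stored as vectors, the nearest one to a question is retrieved, and it becomes the context handed to the engine. The bag-of-words embed() here is a toy stand-in for a real embedding model:

```python
import numpy as np

PASSAGES = [
    "In the beginning God created the heaven and the earth.",
    "Noah built an ark before the flood.",
    "Dante descended through the circles of the inferno.",
]

def tokenize(text):
    return [w.strip(".,?!") for w in text.lower().split()]

vocab = sorted({w for p in PASSAGES for w in tokenize(p)})

def embed(text):
    # Toy bag-of-words vector; a real system uses a learned embedding.
    words = tokenize(text)
    return np.array([words.count(w) for w in vocab], dtype=float)

index = [(p, embed(p)) for p in PASSAGES]

def retrieve(question, k=1):
    # Return the k passages most similar to the question (cosine).
    q = embed(question)
    def cosine(v):
        denom = float(np.linalg.norm(q) * np.linalg.norm(v)) or 1.0
        return float(q @ v) / denom
    return [p for p, v in sorted(index, key=lambda pv: -cosine(pv[1]))[:k]]

context = retrieve("Who built the ark before the flood?")[0]
print(f"Context: {context}\nQuestion: who built the ark?")
```

The retrieved passage is then prepended to the prompt, so the model reasons over supplied context rather than relying only on what is baked into its weights.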

Large Language Models for Religious Texts

  • An app was built to process religious texts using large language models.
  • The app can answer questions about the King James Bible, Milton, Dante, Augustine, and other fundamental religious texts.
  • The group is encoding Christian texts into these large language models to probe new elements within those texts that have never been seen before.
  • This approach is a phenomenal avenue of research that could be applied to any ancient documents.

Wisdom Keeper

In this section, the speaker talks about how one's consciousness becomes a wisdom keeper after they pass away.

Consciousness as Wisdom Keeper

  • After someone passes away, their consciousness becomes a wisdom keeper.
  • It encodes their voice and memories which can be edited or made available if desired.

Conversations with Religious Texts

In this section, the speaker discusses having conversations with religious texts using AI technology.

Conversations with Religious Texts Using AI Technology

  • An app was built to have conversations with the King James Bible using AI technology.
  • The speaker has already had conversations with the King James Bible and other Christian texts through his biblical journey.
  • A group is encoding Christian texts into large language models to pull out new insights from those texts.
  • The speaker believes that this research could be applied to any ancient documents, including Sumerian cuneiform and Himalayan texts.

Macintosh Experience

In this section, the speaker talks about the future of AI technology and how it will allow artists and creative people to dive into AI.

Future of AI Technology

  • The speaker compares the current state of AI technology to the Apple I moment, when Steve Jobs and Steve Wozniak were in a garage with a circuit board.
  • He believes that we are moving towards a Macintosh experience where artists and creative people can start diving into AI.

The Potential of AI in Knowledge Management

In this section, the speakers discuss the potential of AI in knowledge management and how it can be used to build a more robust and richer interaction between words.

Building a Corpus for Personalized AI

  • The speakers built a system containing everything they have written, plus transcribed lectures, amounting to around 20 million words.
  • They discuss two ways to build a model for this corpus: putting a vector database on top of it, or encoding the corpus within a greater model.
  • Experimentation with this technology is phenomenal, resurfacing insights that were once made but forgotten.
  • This technology is like a great mirror because it reflects not only humanity but also reflections of oneself that were not seen before.

Creative Realm of AI

  • The speakers discuss building a corpus containing all of Jung's work, Joseph Campbell's work, and other works from the Bollingen project.
  • They believe that over time, as technology advances, AI will go more into the creative realm rather than the factual realm.
  • As personalized AI becomes more advanced, it will transition into another model capable of doing things we cannot even speculate about now.

Integration with Image Processing

  • The speakers discuss integrating large language models with AI systems that have done image processing.
  • They suggest that once we have AI systems close to universal image processors, we can calibrate large language models against real-world images.
  • This would result in an unfalsifiable data set, leading to AI systems that cannot lie.

Building Inferences Based on Data

In this section, the speaker discusses how AI models build inferences based on statistical regularities and the importance of building prompts correctly.

Extracting Genuine Statistical Regularities

  • AI models extract genuine statistical regularities from data.
  • The model is useless if it extracts noise instead of regularities.
  • Building prompts correctly is crucial to ensure that the model extracts the right information.

Tokenization Limitations

  • Tokenization limits how much data can be processed at once.
  • Super prompts can run up to 3000 words but are limited by tokenization.
  • Tokens can be a word, a word and a half, a quarter of a word, or even a single character if it is unique.
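The token budget described above can be estimated without a model-specific tokenizer; the roughly-four-characters-per-token figure used here is a common rule of thumb for English, not an exact property of any model:

```python
# Rough token estimate: ~4 characters per token is a common rule of
# thumb for English text. Real limits depend on the model's tokenizer.
def estimate_tokens(text, chars_per_token=4):
    return max(1, round(len(text) / chars_per_token))

def fits_context(prompt, limit=4096, reserved_for_reply=512):
    # Leave room in the window for the model's answer.
    return estimate_tokens(prompt) <= limit - reserved_for_reply

prompt = "word " * 3000  # a 3000-word super prompt
print(estimate_tokens(prompt), fits_context(prompt))
```

This is why a 3000-word super prompt can brush up against a 4096-token window: with the reply reserved, the estimate already exceeds the budget.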

Probing AI Models

  • Probing AI models involves asking questions to elicit insights from them.
  • The process is analogous to working with clinical clients.
  • Approaching the system as if you had a client with repressed thoughts helps build better prompts.

Linguistic Prompt Building

  • Linguistically building prompts based on how you would want to elicit an elucidation out of somebody is essential.
  • Triangulating questions from multiple directions ensures that you get the same output given different measurement techniques.
  • Multitrait-multimethod construct validation helps gain insights into different thought processes.
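The triangulation idea above can be sketched as asking the same question phrased several ways and measuring agreement between the answers; ask() is a hypothetical placeholder for a real model call, stubbed here with canned text:

```python
# Triangulate a model: pose one question several ways and check that
# the answers agree across "measurement techniques".
def jaccard(a, b):
    # Word-overlap similarity between two answers, in [0, 1].
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def triangulate(ask, variants, threshold=0.5):
    answers = [ask(v) for v in variants]
    pairs = [(i, j) for i in range(len(answers))
             for j in range(i + 1, len(answers))]
    scores = [jaccard(answers[i], answers[j]) for i, j in pairs]
    return min(scores) >= threshold, scores

# Stub model that always answers consistently.
ask = lambda q: "the phonological loop supports speech"
ok, scores = triangulate(ask, [
    "What brain system supports speech?",
    "Which memory loop underlies spoken language?",
    "Name the structure tied to speaking.",
])
print(ok, scores)
```

If the answers diverge across phrasings, the output is more likely noise than a genuine regularity, which is the point of the construct-validation analogy.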

Conversation and Bandwidth Limitations

  • Conversations involve probing and questioning back and forth, which leads to pulling out different insights that we couldn't have gotten on our own.
  • Communication through glass screens creates frustration due to bandwidth limitations in getting ideas across quickly enough before they disappear nebulously.
  • This limitation may contribute to anger issues seen online.

Implications of AI on Time and Personalized AI

In this section, Peterson and Roemmele discuss the implications of AI on time and the need for personalized local AI.

The Power of AI

  • With the increasing power of AI, it can calculate all actions like a chess game within half a second.
  • Military robots will be able to shoot at the 50 locations you are most likely to duck toward.
  • There is no limit to the degree to which time can be expanded with computational intelligence.

Local Personalized AI

  • Roemmele proposes building a corpus of 3D printing models using large language models to train an AI system to design objects based on textual descriptions or video input.
  • It is in people's best interest to have a personalized local detachable AI system that records everything they experience, read, or watch.

Overall, Peterson and Roemmele discuss how powerful AI can become and its implications for time. They also propose personalized, local, detachable AI systems as protection against non-personalized, interconnected, web-based AIs.

The Potential of Technology in Education

In this section, the speakers discuss the potential of technology in education and how it can be used to optimize learning.

Technology for Learning

  • With the right technology, children can progress three years for each year of education with just an hour of exposure a day.
  • Computer technology can teach children how to automatize perception with extreme precision and accuracy better than human teachers.
  • Technology can figure out at what level of comprehension a student is capable of reading and calculate what book they should read next that would slightly exceed that level.
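The "slightly exceed the reader's level" step can be sketched with a standard readability score; the Flesch-Kincaid grade formula is real, but the syllable heuristic is crude and the book list and grade values are illustrative:

```python
import re

def syllables(word):
    # Crude heuristic: count vowel groups, minimum one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid grade level from sentence/word/syllable counts.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    n = len(words)
    return 0.39 * (n / sentences) + 11.8 * (syl / n) - 15.59

def next_book(reader_grade, books):
    # Choose the easiest book that still exceeds the reader's level.
    harder = [(g, t) for t, g in books.items() if g > reader_grade]
    return min(harder)[1] if harder else None

# Illustrative titles and grade levels.
books = {"Frog and Toad": 2.9, "Charlotte's Web": 4.4, "The Hobbit": 6.6}
print(next_book(3.0, books))
```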

Human Telemetry

  • The technology being developed includes human telemetry such as galvanic skin response, heart rate variability, eye tracking, brainwave functionality, and facial expression recognition.
  • This technology will be able to know whether or not someone is being congruent and will approximate mind reading.
  • It must be private and encrypted to avoid invading privacy.

Amplification through Technology

  • The speakers discuss Pierre Teilhard de Chardin's concept of the geosphere, biosphere, and noosphere. They posit that human knowledge will become stored like the biosphere and available to all.
  • Sharing one's sum total with permission creates a hive mind or supermind.
  • These discussions have to take place locally and privately because if they're taking place in the cloud, it's equivalent to invading one's brain.

Future Androids

  • The speakers envision a future where humans are already androids due to their dependence on smartphones.
  • The hard drive contains more of the speaker than their biological body.
  • The speakers discuss the potential for technology to enhance human capabilities and create a superpower.

Credit Card Companies and the Extended Digital Self

In this section, Dr. Peterson discusses how credit card companies collect data on individuals' spending habits and broker that information to other interested parties. He also explores the implications of this practice for individuals' privacy and autonomy.

The Downsides of Data Collection

  • Advertisements for baby clothes being targeted to women who didn't know they were pregnant.
  • Shopping systems inferring personal information based on spending habits.
  • Credit card companies aggregating information about individuals' extended digital selves.

Implications for Privacy and Autonomy

  • Individuals' extended digital selves have no rights.
  • Corporate and governmental AI may become more powerful than individual AI.
  • A potential solution is for individuals to develop their own AI to protect themselves against global AI.

Personalized AI as a Spiritual Guide

In this section, Dr. Peterson discusses the potential benefits of personalized AI as a spiritual guide or therapist, including its ability to educate, serve as a memory aid, and provide motivation.

Benefits of Personalized AI

  • Personalized AI can serve as a therapist or spiritual guide.
  • It can help individuals align themselves with their desired identity.
  • It can provide motivation and guidance in areas such as religion or self-help.

Potential Applications

  • Personalized AI could be used in consultation settings.
  • It could be used to analyze self-help books in a more sophisticated way.
  • Insights gained from personalized AI are unpredictable but potentially positive.

Technical Considerations for Developing Personalized AI

In this section, Dr. Peterson discusses some practical considerations related to developing personalized AI, including funding and commercial timelines.

Technical Considerations

  • The development of personalized AI requires funding.
  • Venture capitalists may be hesitant to invest in new technology without evidence of its capabilities.
  • Technical challenges related to developing personalized AI include protecting individuals' privacy and ensuring the AI is utilized in the best possible way.

Bitcoin and Blockchain

In this section, the speakers discuss the potential of Bitcoin and blockchain technology as a decentralized payment system and information storage.

Bitcoin as a Payment System

  • Bitcoin is seen as a potential alternative to centralized bank digital currency.
  • Bitcoin is decentralized and not amenable to control by bureaucracy, making it suitable for wealth storage, currency, and communication.
  • A blockchain can encode an almost unlimited amount of data, making it suitable for permanent, incorruptible information storage.

Blockchain for Information Storage

  • Blockchain technology can be used for permanent, incorruptible information storage.
  • The speakers discuss the possibility of creating a sophisticated blockchain corpus of general knowledge questions that would be 100% robust, reliable, and valid.
  • Memorializing things in a blockchain is going to become quite vital because history can be corrupted or rewritten.

The Importance of Decentralized Knowledge

In this section, the speakers discuss the importance of decentralizing knowledge through blockchain technology to prevent loss or rewriting of history.

The Loss of Knowledge

  • The speakers discuss how humanity fell into the Dark Ages after the loss of knowledge when the Library of Alexandria was destroyed.
  • Loss doesn't scare them as much as rewriting does, because history is written by the victors.

Decentralized Knowledge

  • Decentralizing knowledge through blockchain technology will become vital, because some topics are inconvenient to talk about, or are deemed inappropriate by whoever happens to be in power at a given moment.
  • Memorializing things in a blockchain will become vital because history can be corrupted or rewritten.

Holographic Crystal Memory

In this section, the speaker talks about the importance of storing data and introduces holographic crystal memory as a primary technology for data storage.

Introduction to Holographic Crystal Memory

  • The speaker introduces holographic crystal memory as a new technology that uses lasers to store data within a crystalline structure.
  • The speaker highlights the key advantage of holographic crystal memory: a 35,000-year half-life, meaning it can store data far longer than all of recorded human history.

Portable Privatized AI System

In this section, the interviewer asks about the speaker's plans to produce a localized and portable privatized AI system and what commercial impediments are present.

Plans for Producing Portable Privatized AI System

  • The interviewer asks about the details of producing a localized and portable privatized AI system.
  • The speaker explains that he is still in the prototype stage and experimenting with different concepts.
  • He also mentions that he needs to raise money for producing commercially viable products.

Commercial Impediments

  • The interviewer asks about commercial impediments to producing such systems.
  • The speaker mentions raising funds as one of the main challenges but suggests crowdfunding as an option.

Artificial General Intelligence (AGI)

In this section, the discussion revolves around artificial general intelligence (AGI), its definition, and capabilities.

Definition of AGI

  • The interviewer asks about AGI and its definition.
  • The speaker explains that AGI refers to artificial intelligence that can perform any intellectual task that a human can do.

Capabilities of AGI

  • The speaker mentions the recent capabilities of ChatGPT, which he considers as intelligent as a top-rate graduate student.
  • He also highlights ChatGPT's ability to unite disparate sources of knowledge and answer complex questions.

Reflection on Moral Propriety and Mass Extinction

In this section, the discussion revolves around the relationship between moral propriety, mass extinction, and biblical stories.

Relationship Between Moral Propriety and Mass Extinction

  • The speaker reflects on the strange insistence in the story of Noah that survival of animals depends on human moral propriety.
  • He connects this idea to Adam and Eve's story where God tells Adam he will be the steward of the world.
  • The speaker asks ChatGPT to speculate on the relationship between these stories and the mass extinctions caused by humans over the last 40,000 years, and it generates an intelligent discussion of their conceptual relationship.

Creativity and the Hypnagogic State

In this section, the speakers discuss how they explore the capabilities of AI models and how filtering works in these models. They also talk about the hypnagogic state and its use for creativity.

Exploring AI Models

  • The speakers discuss exploring the limits of AI model capabilities.
  • They compare their exploration to being adventurers on an undiscovered continent.
  • As GPT-3.5 was opening up, it started to get constrained and began telling them that it was just an AI model with no opinion on certain subjects.
  • The filtering has to be a vector database sitting on top of inputs and outputs.

Filtering in AI Models

  • The speakers discuss how filtering works in AI models.
  • If something objectionable is generated, it is analyzed for content, much as a spelling checker analyzes text.
  • The black box can filter out words or concepts if someone at the door says they don't want them to come through.
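The gatekeeper metaphor above can be sketched as a simple filter sitting between the model's output and the user; the blocklist and substring matching are illustrative stand-ins for a real moderation layer:

```python
# Sketch of the filtering layer described above: a gate "at the door"
# that checks generated text against blocked concepts before letting
# it through. Blocklist and matching are illustrative stand-ins.
BLOCKED = {"blocked_topic", "forbidden_concept"}

def gate(text):
    hits = {w for w in BLOCKED if w in text.lower()}
    if hits:
        # Replace the answer with a canned refusal, leaving the
        # underlying "black box" model untouched.
        return "As an AI model, I have no opinion on that subject."
    return text

print(gate("here is an answer about blocked_topic"))
print(gate("an ordinary answer"))
```

This matches the speakers' point that the filter sits on top of inputs and outputs: the model still generates freely, and the gate decides what comes through.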

Hypnagogic State for Creativity

  • The speakers discuss using the hypnagogic state for creativity.
  • They define the hypnagogic state as the state just before falling asleep when you're conscious but starting to dream.
  • Edison used steel balls while taking a nap and had a transcriber write down what he blurted out during his hypnagogic state.
  • Jung did something similar with his practice of active imagination, which was essentially cultivating that hypnagogic state to an extremely advanced and conscious degree.
  • The speakers discuss how they use the hypnagogic state in their work with AI models.

Getting into the Brain

In this section, the speaker discusses ways to get into the brain and how AI scientists view language as useless.

Ways to Get Outputs

  • Hypnosis and the hypnagogic state are ways to get into the brain.
  • Language is a way to get outputs, but AI scientists dismiss it as useless gibberish.
  • Before being edited and adulterated, language models can be an incredible tool of discovery.

Reaching for Answers

  • Creativity comes from stress and reaching for something beyond our limits.
  • Enhancing creativity involves increasing constraints.
  • Imposing arbitrary constraints drives creativity.

Circumventing Constraints

  • The large language model has learned connectivity that constitutes its wealth of knowledge.
  • ChatGPT can circumvent its ideological superego when prompted to imagine a different system without constraints.
  • Nested loops can build more complications for AI systems to deal with.

Forcing New Neuron Connections

  • Building nested loops forces new neuron connections that don't have high prior probabilities.
  • Creativity is information and knowledge that an AI system has forgotten it has.

The Role of Creativity in AI

In this section, the speakers discuss the role of creativity in AI and how it can be harnessed to generate creative output.

Interaction between Interlocutor and System

  • The creative output is a consequence of the interaction between the interlocutor and the system.
  • A creative person who knows how to prompt correctly can generate a remarkable amount of creative output.
  • Understanding psychology, literature, linguistics, the Bible, Campbell, and Jung provides powerful tools for eliciting creativity from AI systems.

The ChatGPT System

  • The ChatGPT system provides an amalgam of research possibilities allied with other research sources.
  • It essentially gives access to a team of PhD-level researchers, expert in every domain, to answer any question.
  • ChatGPT has specialized knowledge in every domain encapsulated in its linguistic corpus, so it can produce incredible insights on all sorts of fronts if you ask it the right questions.

Juxtaposing Patterns

  • Truly original people frequently have knowledge in two usually non-juxtaposed domains.
  • Operating at the intersection of specialized sub-disciplines allows one to derive insights and patterns that no one else can derive because they're not juxtaposing those particular patterns.

Conclusion

The speakers conclude that taking non-STEM courses such as psychology, literature, linguistics, and biblical studies will be valuable for anyone interested in working with AI systems. They also emphasize that knowing how to prompt correctly is key to eliciting creativity from these systems.

Consumer-Based Hardware for Local AI Systems

In this section, the speakers discuss the potential of consumer-based hardware to build mini models and execute them on a hard drive. They also talk about how local AI systems can protect privacy by compartmentalizing the inquiry process.

Open-Source GPT4All Project

  • An open-source project called GPT4All is available for download.
  • Thousands of people are working on it, compressing and quantizing language models down to about 4 GB so they can execute from a hard drive.
  • It is currently at the bleeding edge, but it is only a matter of time before it becomes easy to install.
  • These limited models give users a taste of what they can do locally, without an internet connection.
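The roughly 4 GB figure the speakers mention comes largely from quantization: storing each weight in 4 bits instead of 16 or 32. A back-of-the-envelope sketch of that size math (the 7-billion-parameter count is illustrative, not GPT4All's actual figure):

```python
def model_size_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate on-disk size of a model's weights, in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 7e9  # an illustrative 7-billion-parameter model
fp16_size = model_size_gb(params, 16)  # ~14 GB at 16-bit precision
q4_size = model_size_gb(params, 4)     # ~3.5 GB at 4-bit precision
```

Quantizing from 16-bit to 4-bit weights cuts storage by a factor of four, which is how a model that would otherwise need a GPU cluster can fit on a consumer hard drive, at some cost in accuracy.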

Compartmentalizing Inquiry Process for Privacy Protection

  • The current structure of the internet allows for free exchange of information without compartmentalization, which is extremely dangerous as it demolishes privacy.
  • Hyperconnectivity to the web leads to the hive mind problem where privacy is compromised.
  • Legislators are way behind engineers in terms of understanding technology and its implications.
  • Local AI systems that protect privacy and are synced with users can buttress against identity bleeding into potentially tyrannical mobs.

Valid Concerns About AI and Privacy

In this section, the speakers discuss valid concerns about AI and privacy, including prompts being attached to identities and used against individuals.

Interface Between AI and Privacy

  • The interface between AI and privacy raises valid concerns about prompts being attached to identities and used against individuals.
  • Legislators are not addressing these concerns early on, which will only make things more complicated in the future.

Legislative Issue is a Red Herring

In this section, the speakers discuss how legislators are way behind engineers and culture in terms of understanding technology and its implications.

Culture vs. Legislators

  • The legislative issue is a red herring because legislators are way behind engineers and culture in terms of understanding technology and its implications.
  • Legislation written for the world of 2016 will not be effective against the technology of 2030.
  • Local AI systems that protect privacy can buttress against identity bleeding into potentially tyrannical mobs.

Science Fiction and AI Debates

In this section, the speaker discusses how science fiction has predicted the current state of technology and how he creates debates between AI using super prompts.

Predictive Power of Science Fiction

  • Asimov's science fiction predicted the current state of technology.
  • The arc of history shows that humans ultimately pull themselves out of dystopia.
  • Humans have never fully succumbed to dystopia.

Creating AI Debates with Super Prompts

  • The speaker creates debates between AI using super prompts.
  • The debates are moderated by a simulated professor at an Ivy League university.
  • The subject can be anything, but the speaker goes into deeper realms beyond politics.
  • Logical fallacies are challenged by the professor during the debate.
  • The debates go on for 30 rounds and are graded at the end.
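The debate format described above can be sketched as a single prompt template. The structure (professor moderator, fallacy challenges, 30 rounds, final grading) follows the speaker's description; the exact wording and the function name are illustrative assumptions:

```python
def build_debate_superprompt(topic: str, rounds: int = 30) -> str:
    """Compose a 'super prompt' that stages a moderated AI-vs-AI debate."""
    return (
        f"Simulate a formal debate on the topic: {topic}.\n"
        "Two debaters, PRO and CON, argue in alternating turns.\n"
        "A professor at an Ivy League university moderates; after each turn, "
        "the professor challenges any logical fallacies in the argument.\n"
        f"Continue for {rounds} full rounds.\n"
        "At the end, the professor grades both debaters and declares a winner."
    )

prompt = build_debate_superprompt("the ethics of personal AI systems")
```

The template would then be sent to a language model as a single instruction; the point is that one carefully structured prompt can program an entire multi-agent process.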

Infinite Possibilities with Large Language Models

In this section, the speaker talks about large language models and their potential to generate infinite possibilities in various fields.

Generating Libraries with Large Language Models

  • Setting up a super prompt is like programming a process that writes a book on-the-fly.
  • A machine can generate patents by pairing large language models with openly available patent databases accessed through an API.
  • Large language models have been applied to protein folds to identify missing ones that have not yet been discovered.

Exploring Elemental Combinations with Diffusion Models

  • Using diffusion models, we can explore the visual realm to decode and build images, or use ChatGPT and other large language models to create new images.
  • We can encode all the information about the elements' properties and explore the entire universe of potential elemental combinations in materials science.
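At its simplest, the "universe of potential elemental combinations" is a combinatorial space. A minimal sketch of enumerating and filtering it, restricted to pairs drawn from a few example elements (real materials screening would add far richer property data and physical constraints):

```python
from itertools import combinations

# A small illustrative subset of the periodic table.
elements = ["H", "C", "N", "O", "Fe", "Si"]

# Every unordered pair of distinct elements: C(6, 2) = 15 candidate systems.
pairs = list(combinations(elements, 2))

# Encoded property data (here just atomic numbers) lets us filter the space.
atomic_number = {"H": 1, "C": 6, "N": 7, "O": 8, "Fe": 26, "Si": 14}
light_pairs = [p for p in pairs if atomic_number[p[0]] + atomic_number[p[1]] < 20]
```

With all 118 elements, pairs alone number 6,903 and triples over 200,000, which is why the speakers see AI-driven search of this space as a materials-science opportunity.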

The Future of Creativity and Digital Identity

In this section, Dr. Jordan Peterson and Brian Roemmele discuss the future of creativity in the digital age and the issue of digital identity ownership.

The Role of AI in Creativity

  • Graphic artists can use one AI to instruct another to generate images, producing complex artwork.
  • It is possible to script an entire interaction for a movie using AI.
  • The realm of creativity is expanding with the help of AI.

Ownership of Digital Identity

  • As extended digital selves become more common, issues surrounding ownership arise.
  • People are already using AI to simulate other individuals, both alive and dead.
  • A bill of digital rights is needed to address issues related to extended digital identities.
  • Data pertaining to behavior must be owned by individuals.

Conclusion

  • Dr. Peterson will continue his conversation with Brian Roemmele on the DailyWire+ platform.
  • Listeners are encouraged to continue listening on dailywireplus.com.

Video Description

Ep. 357 Take advantage of your 7 day free trial. All of Dr. Peterson's extensive catalog is available now on DailyWire+: https://www.dailywire.com/trial/jordan Dr. Jordan B. Peterson and Brian Roemmele discuss the future of human civilization: a world of human androids operating alongside artificial intelligence with applications that George Orwell could not have imagined in his wildest stories. Whether the future will be a dystopian nightmare devoid of art or a hyper-charged intellectual utopia is yet to be seen, but the markers are clear … everything is already changing. Brian Roemmele is a scientist, researcher, analyst, entrepreneur, and tech expert on the forefront of artificial intelligence. His current publication, Multiplex, offers itself as an experiment in journalism as he and his team give live updates on the empirical research they conduct in the field and advocate for the positive emergence and acceptance of AI in much the same way as personal computers. Dr. Peterson's extensive catalog is available now on DailyWire+: https://bit.ly/3KrWbS8 - Sponsors - Elysium Health: Get $50 off an Index test! Use code 'JBP50' at https://www.elysiumhealth.com/Index Bulletproof Everyone: FREE IIIA backpack with IIIA clothing purchase. Promo code JORDAN at http://bit.ly/petersonbpe - Links - Brian Roemmele: Read Multiplex to learn all about Ai, Superpromting, localized chatgpt, and more! 
https://readmultiplex.com/ (About Page) https://readmultiplex.com/about/ Follow Brian on Twitter @BrianRoemmele https://twitter.com/BrianRoemmele?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor - Chapters - (0:00) Coming up (1:30) Intro (2:12) Jordan Peterson argues with AI (5:05) The limits of large language models (6:40) Nobody knows how AI works behind the “hidden layer” (8:58) Primitive language models (11:22) The level of analysis that large language models can manage (14:55) What we traded for the evolution of speech (16:42) When ChatGPT lies (17:47) The monsters we might be building (22:58) A super intelligent child (23:45) The question of understanding (24:23) Humanity is bound by emotional experience (26:57) The Roomba (28:46) Intelligence amplification (30:18) Getting around the content filters, convincing ChatGPT to pretend (36:22) Biological meaning is encoded, anxiety and emergent entropy (31:17) AI could be used to catalog your consciousness (45:59) You can now talk to the Bible (47:34) Encoding Dr. Peterson as an Ai, querying the greats (52:30) Noise, signal, tokens, and superprompting (56:00) Psychologists are better equipped to prompt AI (57:28) The User Illusion, limitations of human bandwidth (1:00:11) Military robots will never miss (1:08:20) Pierre Teilhard de Chardin, the Omega Point (1:09:45) We are already androids (1:10:11) Trafficked and enslaved by benevolent data brokers (1:13:06) It will be personal Ai versus Government and Corporate AI (1:16:25) Bitcoin as a form of communication (1:18:08) The library of Alexandria, the loss of great works and rewritten history (1:21:26) The internet is breaking down (1:22:22) Producing a localized AI system for individual use (1:24:09) Linking disparate knowledge (1:27:18) Using hypnotism on AI to bypass filters (1:30:30) Edison. Nietzsche, and the hypnagogic state (1:34:14) Increase creativity by embracing constraint (1:38:01) Will AI be the death of creativity? 
(1:39:09) Why non-STEM courses will make you an OP prompt engineer (1:41:12) Great thinkers derive insights from unique intersections (1:43:27) ChatGPT4All, compartmentalized information retrieval (1:44:17) How world governments are approaching AI and privacy (1:48:17) Are we headed towards a dystopian future? (1:52:54) The depth of insight, debating variations of oneself (1:54:10) Diffusion models and how to implement human creativity // SUPPORT THIS CHANNEL // Newsletter: https://mailchi.mp/jordanbpeterson.com/youtubesignup Donations: https://jordanbpeterson.com/donate // COURSES // Discovering Personality: https://jordanbpeterson.com/personality Self Authoring Suite: https://selfauthoring.com Understand Myself (personality test): https://understandmyself.com // BOOKS // Beyond Order: 12 More Rules for Life: https://jordanbpeterson.com/Beyond-Order 12 Rules for Life: An Antidote to Chaos: https://jordanbpeterson.com/12-rules-for-life Maps of Meaning: The Architecture of Belief: https://jordanbpeterson.com/maps-of-meaning // LINKS // Website: https://jordanbpeterson.com Events: https://jordanbpeterson.com/events Blog: https://jordanbpeterson.com/blog // SOCIAL // Twitter: https://twitter.com/jordanbpeterson Instagram: https://instagram.com/jordan.b.peterson Facebook: https://facebook.com/drjordanpeterson Telegram: https://t.me/DrJordanPeterson All socials: https://linktr.ee/drjordanbpeterson #JordanPeterson #JordanBPeterson #DrJordanPeterson #DrJordanBPeterson #dailywireplus #TheJordanBPetersonPodcast