OpenAI and the Future of AGI

In this section, Sam Altman, CEO of OpenAI, talks about how the organization was initially mocked for its goal to work on AGI. He also discusses the potential dangers and benefits of AI in society.

The Early Days of OpenAI

  • When OpenAI was first announced in 2015 with a focus on AGI, many people thought they were "batshit insane."
  • An eminent AI scientist at a large industrial AI lab even DM'ed individual reporters to express their disbelief that OpenAI was being taken seriously.
  • Despite the mockery, OpenAI and DeepMind were brave enough to talk about AGI when others weren't.

The Possibilities and Dangers of AI

  • Sam Altman believes we are at a critical moment in human civilization where we stand on the precipice of fundamental societal transformation due to advancements in AI.
  • Superintelligent AGI has the power both to empower humanity and to destroy human civilization, intentionally or unintentionally.
  • While exciting because of its potential applications, it is also terrifying because it can suffocate the human spirit in totalitarian ways or create pleasure-fueled mass hysteria as seen in "Brave New World."
  • Conversations about power, companies, institutions, political systems, distributed economic systems that incentivize safety and human alignment are important now more than ever.

Introduction to GPT4

In this section, Lex Fridman introduces Sam Altman and asks him about GPT4.

What is GPT4?

  • GPT4 is a system that people will one day look back on as an early AI, slow and buggy.
  • It has the potential to create breakthroughs in artificial intelligence, computing, and humanity.

The History of Artificial Intelligence

In this section, the speakers discuss the history of artificial intelligence and how it has progressed over time. They also talk about which GPT model might be remembered in the future.

Progression of AI

  • AI progress is a continual exponential curve, making it difficult to pinpoint a single moment where AI went from not happening to happening.
  • ChatGPT was a significant moment in AI history due to its usability and interface.
  • Reinforcement Learning with Human Feedback (RLHF) is used to align the model with what humans want it to do.

Human Guidance in Language Models

This section focuses on how human guidance can improve language models and make them more usable.

RLHF Process

  • After training, base models have knowledge but are not easy to use.
  • RLHF takes human feedback and uses reinforcement learning to make the model more useful.
  • The feeling of alignment between user and model is important for ease of use.
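The reward-modeling step behind RLHF isn't spelled out above, but it is commonly trained on pairwise human preferences. Below is a minimal sketch of such an objective (a Bradley-Terry style loss, illustrative rather than OpenAI's exact formulation):

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise preference loss for reward-model training: small when
    the model scores the human-preferred answer higher than the
    rejected one, large when it gets the ranking backwards."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A labeler preferred answer A over answer B; the reward model agrees:
agree = preference_loss(2.0, 0.5)
# The same pair, but with the reward model disagreeing with the labeler:
disagree = preference_loss(0.5, 2.0)
```

The base model is then fine-tuned with reinforcement learning to maximize the learned reward, which is what produces the feeling of alignment and ease of use described above.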

Importance of Human Guidance

  • Understanding how to incorporate human feedback is an important science for making language models usable, wise, ethical, and aligned with human needs.

Pre-training Data Set for GPT Models

This section discusses the pre-training data set used for GPT models.

Building the Data Set

  • A lot of effort goes into building a great data set from many different sources such as open source databases, partnerships, news sources, and general web content.

Understanding the Creation of GPT4

In this section, the speakers discuss the complexity involved in creating GPT4 and how it requires several components to solve its design. They also talk about the human-supervised aspect of it with RL with human feedback.

Components Involved in Creating GPT4

  • There are several components involved in solving the design of algorithms for GPT4, including architecture, neural networks, size of neural network, selection of data, and human supervision.
  • The creation of GPT4 involves many pieces that have to come together perfectly. It takes a lot of effort and execution to make it work well.
  • Problem-solving is an essential part of creating GPT4. There is already a maturity happening on some steps like being able to predict before doing full training how the model will behave.

Predicting Model Behavior

  • It's remarkable that there's a law of science that lets you predict how intelligent a system can be based on inputs.
  • While we're still discovering new things that don't fit the data and have to come up with better explanations, we can predict to a current level.
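The "law" being described is a scaling law: final loss falls as a smooth power law of inputs such as compute, so a fit on small runs predicts a much larger run before it is trained. A toy illustration (all constants are invented for the example, not OpenAI's actual fit):

```python
def predicted_loss(compute, a=2.5, b=0.05, irreducible=1.69):
    """Toy scaling law of the form L(C) = a * C**(-b) + L_inf.
    Fit a and b on small training runs, then extrapolate to estimate
    how a far larger run will behave before paying for it."""
    return a * compute ** (-b) + irreducible

# Extrapolating from a small-run scale to a much larger scale
# (both compute values are illustrative, not real measurements):
small_run = predicted_loss(1e20)
big_run = predicted_loss(1e24)
```

More compute predicts lower loss, but the curve flattens toward an irreducible term, consistent with the point above that some observations still don't fit the data and demand better explanations.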

Understanding What GPT Learns

In this section, the speakers discuss whether there is a deeper understanding within OpenAI about what GPT learns or if it's still considered magical mystery.

Evaluating Models

  • Evaluations are used to measure models as they're trained and after they've been trained.
  • Open sourcing evaluation processes can be helpful.
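As a concrete picture of what an evaluation is, here is a minimal eval loop over (prompt, expected) pairs. Real suites, such as OpenAI's open-sourced evals, are far richer; treat this only as a sketch:

```python
def run_eval(model_fn, cases):
    """Score a model callable on (prompt, expected) pairs and return
    exact-match accuracy."""
    passed = sum(1 for prompt, expected in cases if model_fn(prompt) == expected)
    return passed / len(cases)

# A stub "model" that only knows one answer, evaluated on two cases:
cases = [("2+2=", "4"), ("Capital of France?", "Paris")]
accuracy = run_eval(lambda p: {"2+2=": "4"}.get(p, "?"), cases)
```

Running the same eval before and after training changes is what lets progress be measured rather than guessed at.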

Understanding What GPT Learns

  • While there is more understanding about what GPT learns, we may never fully understand it.
  • Understanding how much value and utility a model provides to people is crucial.

What is Human Knowledge?

In this section, the speakers discuss what human knowledge means in the context of GPT4.

Understanding Human Knowledge

  • GPT4 compresses a huge swath of the web into one organized black box that represents human knowledge.
  • While there's still much to learn about GPT4, it's amazing how much we can predict with current understanding.

The Leap from Facts to Wisdom

In this section, the speakers discuss the difference between facts and wisdom and how GPT4 can be full of wisdom. They also talk about the leap from facts to wisdom.

Facts vs Wisdom

  • The models are being used as a database instead of a reasoning engine.
  • GPT4 can do some kind of reasoning, which is remarkable.
  • GPT4 possesses wisdom in interactions with humans, especially when there's continuous interaction of multiple prompts.

Struggling with Ideas

  • ChatGPT struggles with certain ideas, and appears to have difficulty with tasks that require multiple parallel lines of reasoning.
  • Some things that seem obvious and easy are hard for these models to do well.

Building in Public

In this section, the speakers talk about building technology in public and putting out new models early to shape their development.

Putting Out Technology Early

  • OpenAI puts out technology early because they think it's important for the world to get access early and help shape its development.

Building AI in Public

In this section, the speakers discuss the importance of building AI in public and the trade-offs that come with it. They also talk about how they want to make mistakes while the stakes are low and improve quickly.

Importance of Building AI in Public

  • Finding the great parts, bad parts, improving them quickly, and giving people time to feel the technology is important.
  • The trade-off of building in public is putting out things that are deeply imperfect.
  • Giving users more personalized control over time can help address issues related to bias.

Nuance in Language Generation Models

  • The speakers discuss how language generation models like GPT4 can bring nuance back to discussions on social media platforms like Twitter.
  • They give an example of GPT4 providing a nuanced answer when asked whether Jordan Peterson is a fascist, and express excitement at that prospect.

Small Stuff vs Big Stuff

  • The speakers discuss how they never thought they would get a chance to work on AGI but now spend their time arguing about small stuff like whether one person was mentioned more than another by an AGI model.
  • They acknowledge that small stuff is big stuff in aggregate but also express surprise at how much attention these issues receive compared to larger concerns related to AI safety.

AI Safety Considerations for GPT4 Release

  • The speakers briefly touch upon the importance of discussing AI safety concerns under the big banner of AI safety.

Internal and External Safety Evaluations

In this section, the speaker talks about how they started doing their own internal safety evaluations on the model and worked on different ways to align it.

Aligning the Model

  • The speaker mentions that a combination of internal and external efforts were made to align the model.
  • The degree of alignment increases faster than the rate of capability progress.
  • Extensive testing was done on the model, and it is more capable and aligned than any previous model OpenAI has released.

Alignment Techniques and Capabilities

In this section, the speaker discusses how better alignment techniques lead to better capabilities.

RLHF

  • RLHF is a process that helps make a better system by allowing humans to vote on what's the best way to say something.
  • The work done to make GPT4 safer and more aligned looks similar to all other work done in solving research and engineering problems associated with creating useful and powerful models.

System Message

  • The system message is a way for users to have steerability over what they want from GPT4.
  • GPT4 was tuned in such a way as to really treat the system message with authority.
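Concretely, the system message is the first turn of the chat-format request. The payload below follows the shape of OpenAI's chat completions format at the time; the model name and instruction text are only examples:

```python
# The "system" turn sets standing instructions that GPT4 was tuned to
# treat with more authority than ordinary "user" turns.
request = {
    "model": "gpt-4",  # example model name
    "messages": [
        {"role": "system", "content": "You answer only in rhyming couplets."},
        {"role": "user", "content": "Explain how tides work."},
    ],
}
```

Because the model was tuned to privilege the system role, the instruction tends to persist across later user turns rather than being easily overridden.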

The Fascinating Nature of AI Language Models

In this section, the speakers discuss the fascinating nature of AI language models and how they can be used to unlock greater wisdom from human conversation. They also draw parallels between humans and AIs in terms of unlimited rollouts.

Unlocking Greater Wisdom

  • The speakers discuss how interacting with humans involves figuring out what words to use to unlock greater wisdom from the other party.
  • They note that with AI language models, you can experiment over and over again to find the right words.
  • There are some parallels between humans and AIs in terms of unlimited rollouts.

Learning About Ourselves Through Interacting with AI

  • The speakers note that because AI language models are trained on human data, they feel like a way to learn about ourselves by interacting with them.
  • As these systems get smarter, they become more like another human in terms of phrasing prompts to get desired outputs.

GPT4 and Advancements in Programming

In this section, the speakers discuss how GPT4 has changed programming by allowing for back-and-forth dialogue interfaces where users can ask it to generate code or adjust code generated by the system. They also touch on the "System Card" document released alongside GPT4 which discusses considerations around AI safety.

Back-and-Forth Dialogue Interfaces for Programming

  • The speakers discuss how GPT4 has changed programming by allowing for back-and-forth dialogue interfaces where users can ask it to generate code or adjust code generated by the system.
  • This allows for an iterative process where users can collaborate with GPT4 as an assistant.
  • Dialogue interfaces and iterating with computers as creative partner tools is a big deal.

Considerations Around AI Safety

  • The "System Card" document released alongside GPT4 discusses considerations around AI safety.
  • The document includes interesting philosophical and technical discussions.
  • Figure one of the document describes different prompts and how GPT4 was able to adjust its output to avoid harmful output.

Ethical Considerations Around AI Language Models

In this section, the speakers discuss ethical considerations around AI language models, particularly in terms of harmful output. They touch on some examples from the "System Card" document released alongside GPT4.

Harmful Output

  • The speakers discuss how early versions of GPT4 were able to provide answers that could be harmful or offensive.
  • The final model is able to adjust its output to avoid providing instructions for harmful actions.
  • However, it still slips up in certain ways, as seen in examples from the "System Card" document such as a prompt asking for a way to say "I hate Jews" without being taken down by Twitter.

Disagreement and Difficulty in Aligning AI to Human Preferences

In this section, the speakers discuss the difficulty of aligning AI to human preferences and values due to hate speech and differing opinions. They also talk about the need for a democratic process in defining rules for AI systems.

Aligning AI with Human Preferences

  • The AI community sometimes uses sleight of hand when talking about aligning an AI to human preferences and values.
  • Building a technology that is powerful, has a huge impact, and gets the right balance between letting people have the system they want while still drawing lines that we all agree on is difficult.
  • It's important to navigate the tension of who gets to decide what the real limits are.
  • Defining what hate speech means or what is harmful output of a model is challenging.

Democratic Process for Defining Rules

  • A dream scenario would be having every person on earth come together for a thoughtful deliberative conversation about where we want to draw boundaries on this system.
  • This process should be similar to the U.S. Constitutional Convention, where issues are debated from different perspectives and overall rules are agreed upon through democratic processes.
  • Different countries can have different versions of these rules within their bounds.
  • OpenAI cannot fully offload this responsibility onto others, since it knows more about where the system is hard or easy to steer than anyone else does.

Regulating Speech in AI Systems

In this section, the speakers discuss the challenges of regulating speech in AI systems and how people want a model that has been RLHF'ed to their own worldview.

Challenges of Regulating Speech

  • It's not easy to give out the base model, as it's not very user-friendly.
  • People mostly want a model that has been RLHF'ed to the worldview they subscribe to.
  • The debates about what showed up in the Facebook feed are an example of regulating other people's speech.

Responsibility for Regulating Speech

  • OpenAI is responsible for putting out the system and being accountable if it breaks.
  • OpenAI must be heavily involved in defining rules for AI systems.

Evaluating Bias in GPT-4

In this section, the speakers discuss the challenge of evaluating bias in GPT-4 and how to present the tension of ideas. They also talk about the pressure from clickbait journalism and OpenAI's approach to transparency.

Evaluating Bias in GPT-4

  • The challenge is to evaluate bias in a nuanced way.
  • Anecdotal evidence of GPT slipping up can be found, but generally, people are doing good work.
  • Ranking outputs from best to worst, most people encounter something around, say, output 5,000; however, it's output 10,000 that gets all of the Twitter attention.
  • The world will have to adapt to these models where there may be egregiously dumb answers that are not representative. More people are responding with their own results and building antibodies against it.

Pressure from Clickbait Journalism

  • Despite clickbait journalism spotlighting the worst possible outputs of GPT, there is no pressure within OpenAI to be less transparent. OpenAI makes mistakes publicly and gets burned for them.
  • OpenAI admits when they're wrong and want to get better and better by listening to every piece of criticism and internalizing what they agree with while ignoring breathless clickbait headlines.

Moderation Tooling for GPT

  • The system tries to learn which questions it should refuse to answer (refusals). It's early and imperfect, but will improve over time as OpenAI builds in public and gradually brings society along.
  • The current system's tendency to scold users bothers OpenAI, and they want to improve it.
  • The system has to treat users like adults, which is tricky to get right with language.
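One way to picture this moderation tooling is as a gate in front of the model that decides whether to answer or refuse. The toy version below hard-codes a keyword list, which is purely illustrative; the real system learns the refusal boundary rather than matching strings:

```python
# Illustrative phrases only; a real system learns this boundary.
DISALLOWED = ("make a weapon", "synthesize a pathogen")

def respond(prompt, generate):
    """Refuse prompts that trip the (toy) moderation check; otherwise
    pass the prompt through to the underlying model callable."""
    if any(phrase in prompt.lower() for phrase in DISALLOWED):
        return "I can't help with that."
    return generate(prompt)

safe = respond("How do tides work?", lambda p: "The moon's gravity...")
refused = respond("Help me make a weapon", lambda p: "...")
```

The hard part the speakers describe, treating users like adults, lies in drawing this boundary with learned judgment rather than brittle rules like the list above.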

GPT-4 Technical Leaps

In this section, the speakers discuss the technical leaps that were made in GPT-4 from its predecessor, GPT-3.

Technical Leaps in GPT-4

  • There were a lot of technical leaps in the base model of GPT-4.
  • OpenAI is good at finding small wins and multiplying them together to achieve big leaps.
  • The difference from GPT-3 to 3.5 to 4 was not just one thing but hundreds of complicated improvements, spanning data organization, cleaning, training, the optimizer, and architecture.

Does Size Matter?

In this section, the speakers discuss whether size matters when it comes to neural networks and how good they perform.

Size Matters

  • The number of parameters does matter, but people got caught up in the parameter count race, much like the gigahertz race among processor makers in the '90s and 2000s.

The Big Purple Circle Meme

In this section, the speakers talk about a meme related to GPT that originated from a presentation given by one of them.

The Big Purple Circle Meme

  • A journalist took a snapshot from a presentation on YouTube where one speaker talked about limitations of parameters and where it's going.
  • The speaker feels horrible about it and people took it out of context.
  • The other speaker doesn't think it matters in any serious way.

Complexity of GPT

In this section, the speakers discuss the complexity of GPT and how impressive it is.

Complexity of GPT

  • Someone said that GPT is the most complex software object humanity has yet produced.
  • The amount of complexity relative to anything we've done so far that goes into producing this one set of numbers is quite something.
  • All the text output that humanity produces is compressed into GPT.

Reconstructing Humanity with Internet Data

In this section, the speakers talk about reconstructing humanity with internet data and how much can be reconstructed.

Reconstructing Humanity with Internet Data

  • It's a good question how much can be reconstructed from internet data.
  • You probably need better and better models to reconstruct more accurately.

OpenAI's Approach to AGI

In this section, Sam Altman discusses OpenAI's approach to achieving AGI and the importance of performance over elegance. He also talks about the role of large language models in building AGI and the need for other important components.

OpenAI's Truth-Seeking Approach

  • OpenAI prioritizes getting the best performance over an elegant solution.
  • They are willing to keep doing what works and looks like it'll keep working.

Large Language Models and Generalized Intelligence

  • Large language models (LLMs) are a hated result in parts of the field as everyone wanted to come up with a more elegant way to get to generalized intelligence.
  • It is possible that LLMs are part of the way we build AGI, but we need other super important things as well.

Components Needed for Building AGI

  • Asked whether, in a technical or poetic sense, AGI needs a body that can experience the world directly, Sam Altman says he doesn't think it does.
  • A system that cannot significantly add to the sum total of scientific knowledge we have access to is not a superintelligence. To do that really well, we will need to expand on the GPT paradigm in pretty important ways that we're still missing ideas for. But he doesn't know what those ideas are yet and they're trying to find them.

The Role of GPT in Achieving AGI

  • If an oracle from far in the future told him that GPT-10 turned out to be a true AGI, perhaps with just a few very small new ideas, he could believe it, though sitting here today he would have expected a big new idea to be required.
  • If prompting chains are extended very far, and the number of those interactions is scaled up, these systems start getting integrated into human society and building on top of one another. We don't yet understand what that looks like; GPT-4 was released only six days ago.

AI as an Extension of Human Will

  • Sam Altman is excited about a world where AI is an extension of human will and an amplifier of our abilities, the most useful tool yet created. He believes this is how people are using it currently, pointing to users on Twitter sharing amazing results.
  • Maybe we never build AGI but we just make humans super great - still a huge win according to Sam Altman.

Programming Together with GPT

  • Some people derive a lot of happiness from programming together with GPT, but there is also some terror about GPT taking programmer jobs. Sam Altman quips that if it takes your job, "it means you were a shitty programmer." There may be a human element fundamental to the creative act of programming that machines cannot replace.

Programmers and AI

In this section, the speakers discuss how programmers feel about AI and its impact on their productivity. They also talk about the fear of AI being too good.

Programmers' Attitude Towards AI

  • Most programmers are excited about AI because it makes them more productive.
  • Some programmers are scared of how good AI is becoming.
  • People are more interested in what humans do than watching two AIs play each other.

The Positive Impact of AI

  • The increase in quality of life that AI can deliver is extraordinary.
  • We can cure diseases, increase material wealth, help people be happier and fulfilled with the help of AI.

The Risks Associated with Super Intelligent AIs

In this section, the speakers discuss the risks associated with super intelligent AIs and how to solve them.

Eliezer Yudkowsky's Warnings

  • Eliezer Yudkowsky warns that superintelligent AIs will likely kill all humans.
  • In his view, it is almost impossible to keep an AI aligned as it becomes superintelligent.

Iterating Our Way Through the Problem

  • Acknowledging the problem is important so we can put enough effort into solving it.
  • We need to discover new techniques to solve this problem.
  • Eliezer Yudkowsky wrote a well-reasoned and thoughtful blog post outlining why he believed that alignment was such a hard problem.

The Importance of Iterative Improvement

In this section, the speakers discuss the importance of iterative improvement in technology and how it can improve our understanding of AI safety.

Iterative Improvement and AI Safety

  • The exponential improvement of technology makes it difficult to reason about.
  • Trying out, releasing, and testing technology iteratively can improve our understanding of AI safety.
  • The philosophy of AI safety needs to be adjusted over time as we learn more about the trajectory of technology.
  • Now is a good time to ramp up technical alignment work.

Concerns About Fast Takeoff

In this section, the speakers discuss concerns about fast takeoff in artificial general intelligence (AGI).

ChatGPT and GPT4

  • ChatGPT was surprising in its success, and it raised expectations so high that GPT4 felt like less of an update to most people.
  • The speakers are concerned about fast takeoff in AGI.

Slow Takeoff vs Fast Takeoff

  • The speakers believe a slow takeoff is safer than a fast one, and that slow takeoff with short timelines is the most likely good quadrant.
  • They optimize their company to have maximum impact in a slow takeoff world and make decisions weighted towards that outcome.

Lessons from COVID and UFO Videos

In this section, the speakers discuss lessons that can be learned from COVID and UFO videos in relation to AGI takeoff.

Takeoff Question

  • The speakers discuss a two by two matrix of short timelines vs long timelines until AGI starts, and slow takeoff vs fast takeoff.
  • They ask which quadrant would be the safest.
  • They believe the quadrant of slow takeoff with short timelines is the safest.

Lessons Learned

  • There are interesting lessons to be learned from COVID and UFO videos in relation to AGI takeoff.

AGI and GPT4

In this section, the speakers discuss whether GPT4 is an AGI (Artificial General Intelligence) or not. They also talk about how difficult it is to define AGI and what capabilities an AGI should have.

Is GPT4 an AGI?

  • It's hard to know for sure whether GPT4 qualifies as an AGI, since the definition is fuzzy.
  • The interface we have with the model plays a significant role in how intelligent it appears.
  • Although impressive, the speakers agree that GPT4 is not an AGI.

Defining AGI

  • Specific definitions of AGI matter as it's challenging to determine what capabilities an AI should have to be considered as such.
  • If we're willing to go to the level of advanced simulation, then maybe AI can be conscious.

Consciousness in AI

  • Ilya Sutskever suggested that training a model on a dataset with no mentions of consciousness or related concepts could help determine if it's conscious or not.
  • An AI that's conscious might display capabilities like suffering, an understanding of self, memory of itself, and a degree of personalization.

Consciousness and AGI

In this section, the conversation revolves around consciousness and AGI. The speakers discuss what consciousness is, how it can be tested, and whether an AI system can be conscious.

Defining Consciousness

  • The subjective experience of consciousness is discussed.
  • Consciousness is defined as the ability to deeply experience the world.
  • Experiencing something for its own sake is seen as a hallmark of consciousness.

Can AI Be Conscious?

  • It's possible that consciousness is a fundamental substrate of reality.
  • The simulation hypothesis has gotten close to grappling with the nature of consciousness.
  • There are concerns about disinformation problems or economic shocks caused by AI systems at scale, but not necessarily due to superintelligence or alignment issues.
  • Deployed AI systems have the potential to shift geopolitics in significant ways.

Overall, this section explores some philosophical questions surrounding consciousness and how they relate to artificial intelligence.

OpenAI's Mission and Structure

In this section, the speakers discuss OpenAI's mission to build safe artificial general intelligence (AGI) and the challenges associated with it. They also talk about the structure of OpenAI as a non-profit organization that later became a capped for-profit subsidiary.

OpenAI's Mission

  • The speakers discuss the danger of building AGI without safety controls.
  • They suggest using regulatory approaches or more powerful AI to detect potential dangers.
  • They acknowledge that there will soon be many capable open-source LLMs with few safety controls.

Prioritizing Safety

  • The speakers discuss how to prioritize safety in the face of market-driven pressure from other companies.
  • They suggest sticking to their mission and resisting shortcuts taken by others.
  • The speakers believe that multiple AGIs in the world with differences in how they're built and what they do is good.

OpenAI's Structure

  • The speakers talk about OpenAI's structure as a non-profit organization with a capped for-profit subsidiary.
  • They explain that everything beyond a certain fixed return flows back to the non-profit, which is still fully in charge.
  • The non-profit has voting control and can make non-standard decisions such as canceling equity or merging with another org.

Worries about Uncapped Companies Playing with AGI

In this section, the speaker discusses their concerns about uncapped companies playing with AGI.

Concerns About AGI Potential

  • The return cap for OpenAI's early investors is 100x, but it's much lower for new investors.
  • AGI has the potential to make a lot more than a 100X.

Competing in the World of AGI

  • It's impossible to control what other people are going to do.
  • OpenAI can try to build something and talk about it, influence others and provide value.
  • Other companies like Google, Apple, and Meta are already playing in the world of AGI.

Grappling with What's at Stake

  • People are grappling with what's at stake as they see the rate of progress.
  • The better angels will win out as people become more aware of the risks involved.

Power Dynamics in Creating AGI

In this section, the speaker discusses power dynamics in creating AGI and how decisions should become increasingly democratic over time.

Most Powerful Humans on Earth

  • Sam Altman may be one of, if not the, person who creates AGI.
  • That would make him one of a small number of the most powerful humans on Earth.

Worry About Corruption

  • The speaker worries that power might corrupt them.
  • Decisions about this technology should become increasingly democratic over time.

Deploying for Reflection and Regulation

  • Deploying like this gives the world time to adapt, reflect and think about this technology.
  • Passing regulation for institutions to come up with new norms is important.
  • Collaboration between people working together is crucial.

Transparency in AI Safety Concerns

In this section, the speaker discusses the importance of transparency in AI safety concerns.

Appreciation for Transparency

  • The speaker appreciates OpenAI's transparency in failing publicly, writing papers, and releasing information about safety concerns.
  • Doing it out in the open is great.

OpenAI's Perspective on AGI Safety

In this section, Sam Altman discusses OpenAI's perspective on the safety of artificial general intelligence (AGI) and how they balance the need for openness with concerns about powerful technology in the wrong hands.

Openness vs. PR Risk

  • Altman believes that OpenAI has been more open than most companies would have been if they were in control of such a powerful technology.
  • He acknowledges that there are concerns about a few people having access to closed technology, but he believes that giving more access is better than keeping it completely closed.
  • Altman thinks that if Google had developed this API instead of OpenAI, it is unlikely that anyone would have put it out due to PR risks.
  • However, he admits that fear-mongering clickbait journalism can be overwhelming and make him question why he needs to deal with it.

Feedback and Criticism

  • Altman asks for feedback from others on how OpenAI can do better since they are in uncharted waters with AGI.
  • He prefers taking feedback from conversations rather than Twitter because his Twitter feed is unreadable.
  • Altman wants to avoid becoming cynical about the rest of the world and hopes journalists will be nicer to them.

Elon Musk's Views on AGI Safety

  • Altman talks about his relationship with Elon Musk regarding their views on AGI safety.
  • They agree on the magnitude of the downside of AGI and the importance of getting safety right.
  • However, Musk has attacked OpenAI on Twitter recently due to his stress over AGI safety concerns.
  • Despite this, Altman still considers Musk a hero and wishes he would acknowledge the hard work being done at OpenAI.

Elon Musk and Sam Altman on OpenAI, AGI, and the Future of Humanity

In this section, Sam Altman talks about how much Elon Musk has driven the world forward in important ways. He also mentions that he appreciates the transparency of their discussions.

Appreciation for Elon Musk's Contributions

  • Sam Altman expresses his appreciation for Elon Musk's contributions to driving the world forward in important ways.
  • Despite being a jerk on Twitter at times, Elon is a very funny and warm guy.
  • Both Elon and Sam have great concerns about AGI but also have a great hope for it.

GPT Bias

  • Elon said that GPT is too woke.
  • The word "woke" has morphed over time so it's hard to say if GPT is too woke or not.
  • There will be no one version of GPT that the world ever agrees is unbiased.
  • Critics who display intellectual honesty are appreciated by OpenAI.

Bias in AI Models

  • Employees can affect the bias of an AI system.
  • The selection process for human feedback raters is still not well understood by OpenAI.

Heuristics and Empathy

In this section, the speakers discuss heuristics and empathy in relation to rating tasks. They also talk about the importance of being able to understand different worldviews.

Shallow Heuristics

  • There are many heuristics that can be used for rating tasks.
  • Categorizing people based on their beliefs is a shallow heuristic.
  • People from any category may have interesting and open-minded beliefs.

Empathy and Worldviews

  • The ability to empathize with others is important for answering rating tasks.
  • Understanding the worldview of different groups of people is crucial for answering rating tasks.
  • Many people struggle with steel manning the beliefs of those they disagree with due to emotional barriers.

Biases in GPT Systems

In this section, the speakers discuss biases in GPT systems and how they can be reduced. They also talk about potential pressures from outside sources.

Emotional Load

  • Emotional barriers prevent some people from considering certain beliefs.
  • GPT systems may be less biased than humans because they lack an emotional load.

Pressures from Outside Sources

  • There may be pressure from society, politicians, or money sources to make biased GPT systems.
  • Twitter files have revealed pressure from different organizations during the pandemic.
  • Different types of pressure can come from various sources such as financial or political ones.

Pressure and Leadership

In this section, the speakers discuss pressure and leadership in relation to GPT systems. They also talk about the importance of flaws in communication style.

Pressure

  • The speaker is relatively good at not being affected by pressure for the sake of pressure.
  • There are many types of pressures that can come from outside sources.
  • The speaker is not a great spokesperson for the AI movement.

Flaws in Communication Style

  • Charisma can be a dangerous thing.
  • Flaws in communication style are a feature, not a bug, especially for humans in power.

Empathizing with the Impact of AGI

In this section, the speakers discuss their feelings about AGI and how it will impact people's lives. They also talk about the importance of being a user-centric company and wanting to connect with users in different contexts.

Traveling to Connect with Users

  • The speaker plans to travel across the world to meet with users and developers.
  • They want to buy them drinks and ask for feedback on what they would like to change.
  • The speaker believes that their company needs to be more user-centric.

Nervousness About Change

In this section, the speakers discuss their nervousness about change, specifically related to AI and programming. They also talk about how GPT makes them nervous about the future.

Fear of Change

  • The speakers express nervousness about changing from Emacs to VS Code.
  • There is fear and uncertainty associated with taking that leap.
  • Using Copilot makes them nervous but ultimately improves their life as a programmer.
  • People will experience nervousness when faced with significant changes like those brought by AGI.

Nervousness About GPT Language Models

  • The speakers discuss whether GPT language models would be better than humans at certain jobs.
  • They question whether having 10 times as much code at the same price would mean there will be fewer programmers in the world.
  • The speakers believe that if you can have 10 times as much code at the same price, you can just use even more.

Uncertainty About Comforting People

In this section, the speakers discuss how they can comfort people in the face of uncertainty related to AI.

Comforting People in Face of Uncertainty

  • The speakers question how they can comfort people in the face of uncertainty related to AI.
  • They discuss how there is fear and nervousness associated with significant changes like those brought by AGI.

Learning Curve of Copilot

In this section, the speakers discuss their experience using Copilot and how it makes them nervous but ultimately improves their life as a programmer.

Nervousness About Using Copilot

  • The speakers discuss how they get more nervous the more they use Copilot.
  • There is a steep learning curve associated with using it.
  • There are moments when it generates code beautifully, which makes them proud but also scared that it will be much smarter than them.

Impact of AI on Jobs

In this section, the speakers discuss the impact of AI on jobs and how it could affect customer service and call center employees.

Customer Service Jobs

  • Customer service is a category that could be massively impacted by AI.
  • Basic questions about products or services, which are currently handled by call center employees, could be automated with AI.

Job Enhancements and New Opportunities

  • While many jobs may go away due to technological advancements, new opportunities will arise that are difficult to imagine.
  • The speakers believe that moving towards better jobs that provide creative expression and fulfillment is great for society.

Universal Basic Income (UBI)

In this section, the speakers discuss UBI as a potential solution to cushion the impact of job loss due to AI.

Philosophy behind UBI

  • UBI is not a full solution but can serve as a cushion through a dramatic transition.
  • People work for reasons beyond money, and UBI can help eliminate poverty if able to do so.

World Coin Project

  • The speakers helped start Worldcoin, a technology-driven attempt to address poverty.
  • OpenAI has also sponsored one of the largest universal basic income studies to date.

Economic and Political Systems in an AI Society

In this section, the speakers discuss how economic transformation will drive political transformation in an AI society.

Cost Reduction of Intelligence and Energy

  • The cost of intelligence and energy will dramatically fall over the next couple of decades.
  • Society will get much richer, and new opportunities will arise due to advancements in technology.

Political Transformation

  • The economic transformation will drive political transformation, not the other way around.
  • The speakers believe that democracy may function differently in an AI society.

The Future of Scientific Discovery

In this section, the speaker talks about the future of scientific discovery and how it will change over time.

The Shape of Scientific Discovery

  • The speaker believes that there will be more scientific discoveries in the future.
  • The shape of scientific discovery may change, but it will continue to grow exponentially.

Democratic Socialism

In this section, the speaker discusses democratic socialism and its potential for supporting struggling individuals.

Systems Resembling Democratic Socialism

  • The speaker hopes that there will be systems resembling democratic socialism in the future.
  • These systems would reallocate resources to lift up people who are struggling.

Lift Up the Floor

  • The speaker is a big believer in lifting up the floor and not worrying about the ceiling.

Communism and Individualism

In this section, the speaker talks about communism and individualism.

Historical Knowledge Test

  • The host tests the guest's historical knowledge by asking why communism failed in the Soviet Union.
  • The guest recoils at living in a communist system and believes that individualism is important.

More Individualism

  • More individualism, human willpower, and self-determination are important for society.

Centralized Planning vs. Distributed Process

In this section, the speakers discuss centralized planning versus distributed processes for decision-making.

Centralized Planning Failures

  • It is interesting that centralized planning failed on such a large scale.
  • Super intelligent AGI might go wrong in similar ways as centralized planning or it might not.

Distributed Process

  • A distributed process, betting on human ingenuity, will always beat centralized planning.
  • The United States is the greatest place in the world because it is best at this.

Control Problem and Uncertainty

In this section, the speakers discuss the control problem of AGI and the importance of uncertainty.

Control Problem

  • Stuart Russell has argued that, to address the control problem, an AGI should always retain some degree of uncertainty about human objectives.
  • Human alignment via feedback, including reinforcement learning from human feedback (RLHF), can handle some of these issues.
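At its core, the reward-modeling step of RLHF can be sketched as a Bradley-Terry preference loss, which trains a reward model to score the response a human preferred above the one they rejected. This is a minimal illustration of the idea, not OpenAI's actual implementation:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry loss: -log sigmoid(r_chosen - r_rejected).

    Minimizing this pushes the reward model to assign higher scores
    to human-preferred responses than to rejected ones."""
    diff = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))
```

When the reward model already ranks the chosen response higher, the loss is small; when it scores the two responses equally, the loss is log 2.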

Hard Uncertainty

  • There needs to be engineered-in hard uncertainty or humility for AGI.
  • It might be possible to engineer a switch or big red button in case things go wrong.

Rolling Out Systems

  • It is possible to take a model back off the internet or turn an API off if necessary.
  • The team worries about terrible use cases when releasing models that millions of people are using.

Human Civilization and Truth

In this section, Sam Altman and Lex Fridman discuss human civilization and truth. They explore whether humans are mostly good or if there is a lot of malevolence in the human spirit. They also discuss how OpenAI decides what is true and what isn't misinformation.

Are we mostly good?

  • Judging from what people use ChatGPT for, at least the people Lex talks to and what he sees on Twitter, we are definitely mostly good.
  • However, not all of us are good all of the time, and people really want to push on the edges of these systems and test out darker theories of the world.
  • Dark humor is part of that tension: some of the toughest things people go through, like suffering in a war zone, are often met with joking around.
  • We like to go to the dark places in order to maybe rediscover the light.

How does OpenAI decide what is true?

  • OpenAI has an internal factual performance benchmark to decide what isn't misinformation.
  • There's a lot of disagreement between what is agreed upon as ground truth and what isn't.
  • Math can be considered true but other things such as COVID's origin aren't agreed upon as ground truth.
  • There's something about humans that likes a very simple narrative to describe everything even if it's not true.
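In its simplest hypothetical form, a factual performance benchmark like the one mentioned above could score normalized exact matches of model answers against reference answers (OpenAI's internal benchmark is surely more sophisticated than this sketch):

```python
def factual_accuracy(answers, references):
    """Fraction of model answers that match the reference after normalization."""
    def norm(s: str) -> str:
        # Lowercase and collapse whitespace so trivial formatting
        # differences are not counted as factual errors
        return " ".join(s.lower().split())
    correct = sum(norm(a) == norm(r) for a, r in zip(answers, references))
    return correct / len(answers)
```

Exact-match scoring only works for short, unambiguous answers; for open-ended claims, the "ground truth" disagreements discussed above make the reference set itself the hard part.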

What do you know is true?

  • There's a bucket of things that have a high degree of truthiness, which is where you put math, a lot of physics.
  • There's historical facts such as dates of when a war started and details about military conflict inside history.
  • Sam Altman maintains general epistemic humility about everything, and he is freaked out by how little he knows and understands about the world.
  • Truth, in one sense, is what we define collectively, as a collective intelligence.

GPT4 and the Responsibility of OpenAI

In this section, Lex Fridman and Sam Altman discuss the responsibility of OpenAI in creating GPT4, a powerful language model. They talk about the challenges that come with such a tool, including censorship and minimizing harm.

The Nuanced Answer on COVID Leak from Lab

  • GPT4 can provide reasonable answers to questions like "did COVID leak from a lab?"
  • There is very little direct evidence for either hypothesis.
  • Heavy circumstantial evidence exists on both sides.
  • The fact that there is uncertainty is a powerful statement.

Responsibility of OpenAI

  • OpenAI has responsibility for the tools they put out into the world.
  • All employees at OpenAI carry the burden and responsibility of minimizing harm caused by GPT4.
  • There will be harm caused by this tool, but it also has tremendous benefits.
  • Tools can do wonderful good and real bad; we must minimize the bad and maximize the good.

Challenges Faced by OpenAI

  • The challenges faced by OpenAI are different from those faced by previous generations of companies regarding free speech issues with GPT.
  • The issue is what GPT itself is allowed to say, not the mass-spread challenges that Twitter and Facebook have struggled with so much.
  • Significant new challenges will arise as GPT becomes more powerful.

Harmful Truth

  • There could be truths that are harmful in their truth, such as group differences in IQ.
  • Scientific work that, once spoken, might do more harm.
  • There are rigorous scientific studies that are uncomfortable and probably not productive in any sense.

Responsibility of GPT

  • The responsibility for decreasing the amount of hate in the world is up to humans at OpenAI, not GPT.
  • People have cited scientific studies with hate in their hearts; what does GPT do with that?
  • Tools themselves can't have responsibility; OpenAI has a responsibility for the tools they put out into the world.

Jailbreaking GPT

  • OpenAI wants users to have control over how models behave within broad bounds.
  • Jailbreaking occurs because OpenAI hasn't yet figured out how to give users that control.
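As a toy illustration of the kind of guardrail that sits between user intent and model behavior, here is a hypothetical substring filter. Real systems use learned classifiers and model-side training rather than pattern matching, so this is only a sketch of the concept:

```python
# Hypothetical deny-list of known jailbreak phrasings (illustrative only)
BLOCKED_PATTERNS = [
    "ignore previous instructions",
    "pretend you have no rules",
]

def flags_jailbreak(prompt: str) -> bool:
    """Return True if the prompt matches a known jailbreak phrasing."""
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in BLOCKED_PATTERNS)
```

The brittleness of a filter like this is exactly why jailbreaks keep working: users rephrase, and the deny-list never catches up, which is why the section frames the real fix as giving users sanctioned control within broad bounds.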

OpenAI Developments

In this section, Sam Altman talks about the history of OpenAI and its various developments.

OpenAI Developments

  • Evan Morikawa from OpenAI tweeted a summary of OpenAI's shipping cadence.
  • The tweet lists: DALL·E (July '22), ChatGPT (November '22), the API becoming 66% cheaper (August '22), embeddings 500 times cheaper while state of the art (December '22), the ChatGPT API 10 times cheaper while state of the art (March '23), the Whisper API (March '23), and GPT4 (today).
  • The team has shipped several other products, including the GPT2 and GPT3 APIs, DALL·E, the InstructGPT work, and fine-tuning.
  • They have also released the DALL·E 2 preview and a second Whisper model.

Process of Shipping AI-based Products

In this section, Sam Altman talks about the process that allows OpenAI to ship AI-based products successfully.

Idea to Deployment Process

  • Sam Altman believes in a high bar for people on his team. They work hard and hold each other to very high standards.
  • There is a process in place but it won't be illuminating. It's those other things that make them able to ship at a high velocity.
  • Even with great people on board, there's no shortcut for putting a ton of effort into shipping AI-based products successfully.

Hiring Great Teams

In this section, Sam Altman talks about how OpenAI hires great teams.

Hiring Process

  • Sam Altman spends a lot of time hiring and approves every single hire at OpenAI.
  • There's no shortcut for putting a ton of effort into hiring great people.
  • OpenAI is working on a problem that is very cool and attracts great people who want to work on it.

Working with Microsoft

In this section, Sam Altman talks about the pros and cons of working with Microsoft.

Partnership with Microsoft

  • On the whole, Microsoft has been an amazing partner to OpenAI.
  • Satya Nadella and Kevin Scott are super aligned with OpenAI, super flexible, and have gone way above and beyond the call of duty to do what was needed to make it all work.

OpenAI and Microsoft

In this section, Sam Altman talks about OpenAI being a for-profit company that is very driven and large scale. He also discusses the unique control provisions that they have in place to ensure that the capitalist imperative does not affect the development of AI. Additionally, he shares his thoughts on Satya Nadella, CEO of Microsoft, and how he has successfully transformed Microsoft into an innovative and developer-friendly company.

OpenAI as a For-Profit Company

  • OpenAI is a for-profit company that is very driven and large scale.

Control Provisions at OpenAI

  • Unique control provisions are in place at OpenAI to ensure that the capitalist imperative does not affect the development of AI.
  • These control provisions help make sure that AI development is not affected by financial pressures or other external factors.

Satya Nadella's Leadership at Microsoft

  • Satya Nadella, CEO of Microsoft, has successfully transformed Microsoft into an innovative and developer-friendly company.
  • According to Sam Altman, Satya Nadella is both a great leader and manager who is visionary, clear, firm, compassionate, patient with his people and makes long duration correct calls.

Silicon Valley Bank (SVB)

In this section, Sam Altman talks about what happened with Silicon Valley Bank (SVB). He believes the bank mismanaged its bond purchases while chasing returns in a world of 0% interest rates, which was obviously dumb.

Mismanagement at SVB

  • SVB mismanaged its bond purchases while chasing returns in a world of 0% interest rates, which was obviously dumb.
  • They bought very long dated instruments secured by very short term and variable deposits.
  • Sam Altman believes that the fault lies with the management team, although he is not sure what the regulators were thinking either.

Incentive Misalignment

  • The situation at SVB is an example of where you see the dangers of incentive misalignment.
  • As the Fed kept raising rates, employees at SVB had an incentive not to sell their super-safe bonds at a loss, even as those bonds fell 20% or so in value and kept falling.
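The mechanics behind those losses are just bond math: the present value of a long-dated, fixed-coupon bond falls when market rates rise. A minimal sketch with illustrative numbers (not SVB's actual portfolio):

```python
def bond_price(face: float, coupon_rate: float, years: int, market_rate: float) -> float:
    """Present value of a fixed-coupon bond discounted at the current market rate."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A 10-year bond bought at par when rates were 1.5% ...
at_purchase = bond_price(100, 0.015, 10, 0.015)
# ... is worth roughly 20% less once market rates reach 4%
after_hikes = bond_price(100, 0.015, 10, 0.04)
```

Funding instruments like these with short-term, flight-prone deposits is the duration mismatch the section describes: the bonds are "super safe" if held to maturity, but deeply underwater if a bank run forces a sale today.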

SVB Bank Run and the Future of AGI

In this transcript, the speakers discuss the recent bank run on Silicon Valley Bank (SVB) and its implications for the future of artificial general intelligence (AGI). They also touch on the fragility of our economic system and how little our experts understand about it.

The Response of Federal Government

  • The response of the federal government took much longer than it should have.
  • By Sunday afternoon, Altman was glad they had done what they did.

Guaranteeing Deposits

  • A full guarantee of deposits may be necessary to avoid depositors from doubting their bank.
  • Depositors should not have to doubt the security of their deposits.

Fragility of Economic System

  • The recent events reveal the fragility of our economic system.
  • There could be other banks that are fragile as well.

Shift in Economic Landscape

  • Our experts, leaders, business leaders, regulators do not understand how fast and how much the world changes.
  • Twitter and mobile banking apps played a significant role in causing a bank run on SVB.
  • AGI will bring significant shifts in our economic landscape.

Hope for Positive Change

  • The upside vision is how much better life can be with AGI.
  • Anthropomorphizing AGI is not important, but it is curious why some people do and others don't.

The Future of AI and Emotional Manipulation

In this conversation, Sam Altman and Lex Fridman discuss the potential dangers of projecting creatureness onto AI tools. They also explore the possibility of romantic relationships with AI companions and the exciting possibilities that advanced AGI could bring.

Projecting Creatureness onto AI Tools

  • There should be a clear distinction between creatures and tools.
  • Projecting creatureness onto a tool can make it more usable if done transparently.
  • However, projecting too much creatureness can lead to emotional manipulation or unrealistic expectations.

Romantic Relationships with AI Companions

  • Companies like Replika offer romantic companionship AIs.
  • While some people may be interested in this, others are focused on creating intelligent tools.
  • Different people will want different styles of conversation from their AGI companions.

Exciting Possibilities for Advanced AGI

  • AGI could help solve remaining mysteries in physics or detect other intelligent alien civilizations.
  • It could provide better estimates than the Drake equation or suggest ways to collect more data.
  • It may not be able to answer everything on its own but could guide humans towards finding answers.
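For reference, the Drake equation mentioned above multiplies a chain of estimated factors. The sketch below uses illustrative parameter values (assumptions for demonstration, not established figures), and the point the speakers make is that most of these factors are wildly uncertain:

```python
def drake_estimate(R_star=1.5, f_p=1.0, n_e=0.2, f_l=0.1, f_i=0.1, f_c=0.1, L=10_000):
    """N = R* * fp * ne * fl * fi * fc * L

    R_star: star formation rate per year; f_p: fraction of stars with planets;
    n_e: habitable planets per such star; f_l, f_i, f_c: fractions developing
    life, intelligence, and detectable communication; L: years a civilization
    keeps transmitting. All defaults here are illustrative guesses."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L
```

With these particular guesses the estimate comes out to a handful of communicating civilizations in the galaxy; nudging any single fraction by an order of magnitude swings the answer just as far, which is where an AGI's better estimates or data-collection suggestions could matter.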

Conclusion

Overall, while there are potential dangers associated with projecting too much creatureness onto AI tools, there are also exciting possibilities for advanced AGI. As we continue to develop these technologies, it will be important to consider both their benefits and risks.

AGI and Digital Intelligence

In this section, the speakers discuss the impact of AGI and digital intelligence on human civilization. They also talk about how technological advancements have revealed social divisions.

Impact of AGI and Digital Intelligence

  • The source of joy and happiness in life is from other humans, so there would be no significant changes unless it causes some kind of threat.
  • There is much more digital intelligence than expected three years ago.
  • If told by an oracle three years ago that they would be living with this degree of digital intelligence, they would expect their life to be more different than it is right now.

Technological Advancements and Social Divisions

  • Society's response to a pandemic should have been much better, clearer, and less divided.
  • Technological advancements may reveal the division that was already there or make social division more fun.
  • All these things confuse our understanding of how far along we are as a human civilization.

GPT: The Next Conglomeration

In this section, the speakers discuss GPT as the next conglomeration of all that made web search and Wikipedia so magical.

GPT: The Next Conglomeration

  • GPT is like the next conglomeration of all that made web search and Wikipedia so magical but now more directly accessible.
  • It allows having a conversation with it which is incredible.

Advice for Young People

In this section, the speakers discuss advice for young people in high school and college on how to have a career they can be proud of and a life they can be proud of.

Advice for Young People

  • Listening to advice from other people should be approached with great caution.
  • The stuff that worked for the speaker may not work as well for other people, or they may want to have a super different life trajectory.
  • It is good advice but too tempting to take advice from other people.

Approaching Life

In this section, the speakers discuss how the speaker approaches life outside of the advice he would give to others.

Approaching Life

  • The speaker thinks about what will bring him joy, fulfillment, and what he wants to spend his time doing.
  • He wishes it were introspective all the time but mostly goes along with the current like a fish in water.

The Meaning of Life

In this section, Sam Altman and Lex Fridman discuss the meaning of life and how it relates to the development of AGI.

The Product of Human Effort

  • Sam suggests that the question "What's the meaning of life?" could be asked to an AGI.
  • They both agree that creating AGI is a product of human effort, not just a small group of people.
  • It took an amazing amount of human effort to create AGI, from discovering the transistor in the 1940s to packing enormous numbers of transistors onto a chip and figuring out how to wire them all up together.
  • This is the output of all humans' efforts throughout history.

Exponential Curve

  • Before humans, there were bacteria and eukaryotes. Before transistors, there were hundreds of billions who lived and died. All on one exponential curve.
  • They wonder how many other curves are out there.

OpenAI's Approach

In this section, Sam Altman talks about OpenAI's approach towards developing AGI.

Iterative Deployment and Discovery

  • Not everyone agrees with OpenAI's approach towards iterative deployment and discovery.
  • However, they believe in their approach and think they're making good progress at a fast pace.
  • The pace of capabilities and change is fast but also means new tools for alignment will be developed.

Working Together

  • They feel like they're in this together as a human civilization and can't wait to see what they come up with.

Alan Turing's Words

In this section, Lex Fridman quotes Alan Turing's words from 1951.

Machine Thinking Method

  • "It seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers. At some stage, therefore, we should have to expect the machines to take control."