StrictlyVC in conversation with Sam Altman, part two (OpenAI)

Introduction

In this section, the speaker introduces the topic of AI and reflects on how people were surprised by the success of ChatGPT and DALL-E.

People's Surprise with ChatGPT and DALL-E

  • The speaker is unsure why people were so surprised by ChatGPT and DALL-E.
  • The speaker believes that dialogue is the best way for people to interact with AI models.
  • The speaker notes that there was a belief that AI would first come for physical labor before moving on to cognitive labor. However, this has not been the case.
  • The speaker believes that society should gradually adapt to new technology rather than dropping a super powerful AGI all at once.

Releasing Technology Responsibly

In this section, the speaker discusses their approach to releasing technology responsibly.

Releasing Technology Gradually

  • The speaker believes in starting societal changes early when stakes are still relatively low rather than waiting until it's too late.
  • The speaker notes that society can update to massive changes faster than they thought but still believes in releasing technology slowly and gradually improving it over time.
  • The release date for GPT-4 is uncertain, as they want to ensure it can be released safely and responsibly.

The GPT-4 Rumor Mill

In this section, the speaker talks about the rumors surrounding GPT-4, which have been circulating for six months. They also express their frustration with people's obsession with speculation.

The GPT-4 Rumor Mill

  • The speaker expresses their frustration with the ongoing rumors surrounding GPT-4.
  • People are begging to be disappointed as the hype continues to grow.
  • It has been going on for six months at this volume.
  • The speaker wonders where all these rumors come from and why people don't have better things to speculate on.

Monetizing AI

In this section, the speaker talks about monetizing AI and how they hope that access to AI will become democratized in the future. They also discuss competition and capitalism in relation to offering the best service at the lowest price.

Soft Promise to Investors

  • The speaker talks about a soft promise made to investors that once they built a generally intelligent system, they would ask it to figure out a way to generate an investment return.
  • This was before announcing their partnership with Microsoft.
  • Although deeply imperfect, they put it out into the world via an API and other people figured out ways of using it.
  • However, they haven't quite figured out how to monetize it yet.

Licensing Model

  • Currently, licensing is done mostly with startups.
  • Early on, there are concerns about commoditization and competition from other companies like Google.
  • However, the speaker hopes that access to AI will become democratized in the future through competition.

Democratizing Access To AI

  • The speaker hopes that access to AI will become democratized in the future.
  • They believe that having several AGIs in the world will allow for multiple viewpoints and prevent any one entity from becoming too powerful.
  • The cost of intelligence and energy will trend down as it gets commoditized, leading to a surplus of access to these systems.
  • Governance of the systems will eventually benefit everyone.

Capitalism And Competition

  • The speaker deeply believes in capitalism and competition to offer the best service at the lowest price.
  • However, they acknowledge that this may not be great from a business standpoint.

Introduction

In this section, the speaker talks about their vision for AI and how it should be designed to serve individuals. They also discuss their partnership with Microsoft.

Designing AI to Serve Individuals

  • The speaker believes that AI should be designed to serve individuals and do things that align with their beliefs.
  • They prefer a system where users have control over how the AI operates.
  • The speaker thinks that AGI is important and wants to build products and services in service of that goal.

Partnership with Microsoft

  • The speaker has a positive view of Microsoft as a values-aligned company.
  • They believe that Microsoft is good at building large supercomputers and infrastructure, while they are good at research.
  • The partnership has been great so far.

Google's Approach to AI

In this section, the interviewer asks the speaker about Google's approach to AI and its impact on society.

Google's Decision Not to Launch Imperfect AI

  • Google has decided not to launch imperfect AI because it could harm their reputation.
  • The interviewer wonders whether Google will be held accountable for this decision when it eventually launches something.

Suspension of Responsible AI Organization Employee

  • An employee from Google's responsible AI organization was suspended after claiming that the chatbot he was working on had become sentient.
  • The speaker does not know enough about the situation to comment but thinks that ChatGPT is amazing.

Concerns About AGI in Education

In this section, the interviewer asks the speaker about concerns regarding AGI in education.

Educators' Concerns About AGI

  • Educators are concerned about how AGI will impact education and what skills students will need in the future.
  • The New York City public school system recently restricted access to ChatGPT due to concerns about its impact on students.
  • The speaker understands why educators are concerned and wants to build AGI that is useful to people.

Introduction

In this section, the speaker talks about how generated text is something we all need to adapt to and that it's an evolving world. He also suggests that relying long-term on tools that detect GPT-generated text may not be possible.

Adapting to Generated Text

  • The speaker believes that generated text is something we all need to adapt to.
  • Teachers are nervous about the impact of generated text on homework, but some see it as an unbelievable personal tutor for each kid.
  • The speaker has used ChatGPT to learn things himself and found it much more compelling than other ways he has learned things in the past.

Limitations of GPT-Like Systems

  • The speaker doesn't think society can or should rely long-term on tools that detect GPT-generated text.
  • It's impossible to make such detection perfect, and people will figure out how much of the text they have to change to evade it.

Ethical Considerations for Language Models

In this section, the speaker discusses whether language models should adopt a common code of principles and whether they should be regulated.

Society Should Regulate Wide Bounds

  • Society should regulate what the wide bounds are for language models.
  • There are asterisks on free speech rules, and society has decided that free speech isn't quite absolute. Similarly, society will decide that the freedom of language model outputs isn't quite absolute either.

Individual Users Should Have Liberty

  • Individual users should have a huge amount of liberty to decide how they want their experience with language models.
  • Beyond what the law requires, responsibility for deciding what speech is acceptable or distasteful should be left to individual users and groups, not to one company or government.

Video Capabilities

In this section, the speaker talks about the possibility of video capabilities in language models.

Uncertainty About When It Will Come

  • The speaker cannot make a confident prediction about when video capabilities will come.
  • Video capabilities are a legitimate research project that people are interested in.

Best and Worst Case Scenarios for AI

In this section, Sam discusses his best and worst-case scenarios for AI.

Best Case Scenario

  • Sam believes that the best-case scenario for AI is so unbelievably good that it's hard to imagine. He thinks that progress in discovering new knowledge with these systems could be made much faster than humanity has done so far.
  • He imagines a future where we launch probes out to the whole universe, find out everything going on out there, have unbelievable abundance, and systems that can help us resolve deadlocks and improve all aspects of reality.

Worst Case Scenario

  • Sam is more worried about an accidental misuse case in the short term where someone gets a super powerful AI system. He thinks it's impossible to overstate the importance of AI safety and alignment work.
  • He believes that traditional AI safety thinkers reveal more about themselves than they mean to when they talk about what they think AGI will be like. None of the sound bite easy answers work.

How Far Away is AGI?

In this section, Sam talks about how far away he thinks AGI is.

  • The closer we get, the harder it is for him to answer, because he thinks it's going to be much blurrier and much more of a gradual transition than people think.
  • He believes that people are going to have hugely different opinions about when we declare victory on the AGI thing.

San Francisco and Silicon Valley

In this section, Sam shares his thoughts on San Francisco and Silicon Valley.

  • Sam loves San Francisco but thinks it's a real shame that we put up with treating people poorly and continue to elect leaders who don't fix the problem.
  • Unlike other tech people who blame tech companies for the current state of San Francisco, he believes it is a solvable problem, and that other cities have managed to do better.

Expectations and Hype

In this section, the speaker talks about their expectations for the reaction to Chat GPT and whether they prefer less hype.

Expectations

  • The speaker expected an order of magnitude less hype and an order of magnitude fewer users.
  • They worried people would get a false impression of how good these technologies are, because they are impressive but not robust.

Hype

  • The speaker thinks that less hype is probably better as a general rule.
  • At the same time, critics who dismiss the technology because of its weaknesses are equally wrong.

Use Cases for Chat GPT

In this section, the speaker talks about how they use Chat GPT.

Use Cases

  • The speaker uses Chat GPT to summarize super long emails.
  • They also use it for translation and learning things quickly.

AGI's Impact on Google and UBI

In this section, the speaker talks about AGI's impact on Google and their thoughts on Universal Basic Income (UBI).

AGI's Impact on Google

  • Whenever someone talks about a technology being the end of some other giant company, it's usually wrong.
  • There will be a change for search that will probably come at some point, but not as dramatically as people think in the short term.

UBI

  • The speaker thinks UBI is good and important but very far from sufficient.
  • It is an enabling technology but not a plan for society.
  • As AGI participates more in the economy, wealth and resources should be distributed much more than we have.

Preparing for AGI

In this section, the speaker talks about what people should be preparing for as AGI becomes more prevalent.

Preparing for AGI

  • People should focus on resilience, adaptability, and the ability to learn new things quickly.
  • Creativity will still be important, although it will be aided by AGI.
  • Just as memorizing facts became less important once Google came along, some skills will matter less in an AGI world.

Future of the Workplace

In this section, Sam Altman discusses his thoughts on the future of the workplace for tech workers. He believes that people will do different things and there won't be one answer. Some people will want to work fully in-person, while others will prefer remote work. Many people will likely opt for a hybrid approach.

Hybrid Work

  • Sam Altman is a fan of going to the office a few days a week and working at home a day or two a week.
  • Companies that are the wrong kind of hybrid can make it difficult for employees who are not physically present to participate in meetings.
  • Sam Altman believes that many important companies of this decade will still be pretty heavily in-person.

AI Safety and Autonomous Vehicles

In this section, Sam Altman talks about safety issues related to new technologies, particularly narrow vertical AI like autonomous vehicles. He believes we have learned how to do good safety engineering over time, but AGI safety is different and requires its own set of safety processes and standards.

Safety Issues with New Technologies

  • We have learned how to build safe systems and processes over time.
  • AGI safety is different because the stakes are so high and irreversible situations are easy to imagine.

Best Time to Start a Company

In this section, Sam Altman discusses why he thinks now is one of the best times to start a company despite capital being tough to raise. He believes everything else is much easier than before, including hiring talent and rising above noise thresholds.

Why Now Is A Good Time To Start A Company

  • Capital is still reasonable at the seed stage.
  • It's easier to concentrate talent and rise above noise thresholds.
  • Sam Altman would rather have a hard time raising capital but an easier time doing everything else.

AI for Verticals

In this section, Sam Altman talks about his interest in AI for verticals and shares a story about Jasper, a customer of OpenAI that uses AI language models.

Interest in AI for Verticals

  • Sam Altman would go do AI for some vertical if he were starting out now.
  • Jasper is a customer of OpenAI that relies on their AI language models.

GPT-3 and AI Startups

In this section, the speaker discusses the impact of GPT-3 being available for free on AI startups. He also shares his thoughts on how to build a successful AI startup.

Building a Successful AI Startup

  • To build a successful AI startup, it's important to differentiate by building deep relationships with customers, creating a product they love, and having some sort of moat.
  • OpenAI views themselves as a platform company but will likely do something to show people what is possible with their models.
  • The speaker believes that the key to building a successful AI startup is to have deep value on top of the fundamental language model.

Impact of GPT-3 Being Available for Free

  • The availability of GPT-3 for free is causing problems for some AI startups.
  • The speaker believes that far more new value will be created in the next few years than there are startups that should simply stop what they're doing. He thinks this is going to be an amazing year for AI startups.

Video description

OpenAI cofounder and CEO Sam Altman sat down for a wide-ranging interview with us late last week, answering questions about some of his most ambitious personal investments, as well as about the future of OpenAI. This second clip is focused exclusively on artificial intelligence, including how much of what OpenAI is developing Altman thinks should be regulated, whether he's worried about the commodification of AI, his thoughts about Alphabet's reluctance to release its own powerful AI, and worst- and best-case scenarios as we move toward a future where AI is ever-more central to our lives. There was much to discuss (and he was generous to stay and talk about it). You can find the first part of our sit-down -- focused in part on Helion Energy, a nuclear fusion company that has become Altman's second-biggest project -- here: https://youtu.be/57OU18cogJI