OpenAI CEO Sam Altman testifies at Senate artificial intelligence hearing | full video
# Oversight of Artificial Intelligence
In this section, the speaker introduces the hearing on the oversight of artificial intelligence and explains its purpose.
Introduction to AI Oversight
- The hearing is intended to write the rules of AI.
- The goal is to demystify and hold accountable new technologies to avoid past mistakes.
- Technology has outpaced regulation in the past, leading to exploitation of personal data, disinformation, and societal inequalities.
- Algorithmic biases can perpetuate discrimination and prejudice while lack of transparency undermines public trust.
# Opening Remarks
In this section, the speaker discusses how AI voice cloning software was used to create introductory remarks for the hearing.
Use of AI Voice Cloning Software
- An AI voice cloning software was used to create introductory remarks for the hearing.
- The voice was not that of the speaker but a clone trained on their floor speeches.
- ChatGPT wrote the remarks based on Senator Blumenthal's record of advocating for consumer protection and civil rights.
- While impressive, there are concerns about what could happen if such technology were used maliciously.
# Advancements in AI
In this section, the speaker discusses advancements in AI and their potential benefits as well as harms.
Advancements in AI
- Examples like homework written by ChatGPT or the articles it can produce may feel like novelties, but they are more than research experiments.
- Promises include curing cancer, developing new understandings of physics and biology, modeling climate and weather.
Potential Harms
- Weaponized disinformation
- Housing discrimination
- Harassment of women
- Impersonation fraud
- Voice-cloning deepfakes
New Industrial Revolution
- Displacement of millions of workers
- Loss of huge numbers of jobs
- Need to prepare for this new Industrial Revolution in skill training and relocation that may be required.
Sensible Safeguards
- Accountability is not a burden but the foundation of how we can move ahead while protecting public trust.
- They are how we can lead the world in technology and science while promoting our democratic values.
# The Risks and Limitations of AI
In this section, the speaker discusses the risks and limitations of AI, including the need for restrictions or even bans on its use in certain areas, accountability for harm caused by companies and their clients, and the importance of reliability.
Risks of AI
- There are places where the risks of AI are so extreme that we ought to impose restrictions or even bans on its use.
- We should be aware of the garbage going into these platforms or coming out of them.
Accountability for Harm
- Companies and their clients should be held liable if they cause harm.
- Forcing companies to think ahead and be responsible for the ramifications of their business decisions can be a powerful tool.
Importance of Reliability
- Trustworthiness standards and limitations on use are important.
- Voluntary action from industry leaders is necessary to ensure reliability.
# Harnessing Technological Innovation for Good
In this section, Senator Hawley discusses how rapidly technology is changing and evolving. He questions whether generative AI will lead to greater liberty or more severe consequences like those seen with the atom bomb. He emphasizes that it's up to us as Americans to write what kind of innovation it will be.
Rapidly Changing Technology
- Technology is changing rapidly and transforming our world right before our very eyes.
- Generative AI could potentially be one of the most significant technological innovations in human history.
Potential Consequences
- It's unclear whether generative AI will lead to greater liberty or more severe consequences like those seen with the atom bomb.
- It's up to us as Americans to write what kind of innovation it will be.
Striking a Balance
- We need to strike a balance between technological innovation and our ethical responsibility.
- Our capacity for technological revolution has far outpaced our ethical and moral ability to apply and harness the technology we develop.
# Senate Judiciary Committee Meeting on AI and Emerging Technologies
In this Senate Judiciary Committee meeting, the committee discusses the impact of artificial intelligence (AI) and emerging technologies on society. The committee members introduce the witnesses and discuss the potential benefits and dangers of AI.
Introduction to the Meeting
- The Senate Judiciary Committee has passed four bills related to social media's abuse of children by unanimous roll-call votes.
- The committee is discussing whether AI is a quantitative or qualitative change in technology.
Potential Benefits and Dangers of AI
- Experts suggest that AI is a game-changer and fundamentally different from other forms of innovation.
- Congress was not designed to deal effectively with innovation, technology, and rapid change.
- The positive potential of AI is enormous, including generating functioning code for websites or identifying new candidates to treat diseases. However, there are also profound dangers associated with it.
Introduction to Witnesses
- Sam Altman, co-founder and CEO of OpenAI, the AI research company behind ChatGPT and DALL-E
- Christina Montgomery, IBM's Vice President and Chief Privacy and Trust Officer, overseeing the company's global privacy program, policies, and compliance strategy
- Gary Marcus, a leading voice in artificial intelligence, who founded Robust AI and Geometric Intelligence (acquired by Uber)
# Swearing in of Witnesses
The Judiciary Committee swears in the witnesses before they testify.
- The Judiciary Committee requires witnesses to swear in before testifying.
- Witnesses are asked to raise their right hand and swear that they will tell the truth, the whole truth, and nothing but the truth.
# Introduction by Sam Altman
Sam Altman introduces himself and talks about OpenAI's mission.
About OpenAI
- OpenAI is a non-profit organization founded on the belief that AI has the potential to improve nearly every aspect of our lives.
- OpenAI is committed to working towards ensuring broad distribution of benefits of AI while maximizing its safety.
Benefits of AI
- AI has immense potential to help us make new discoveries and address some of humanity's biggest challenges like climate change and curing cancer.
- Current AI systems have already helped people create, learn, be more productive, and improve their lives, such as Be My Eyes using GPT-4's multimodal technology to help visually impaired individuals navigate their environment.
Safety Measures
- Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves model behavior, and implements robust safety and monitoring systems.
- GPT-4 is more likely to respond helpfully and truthfully than any other widely deployed model of similar capability, due to the six months spent on extensive evaluations, external red teaming, and dangerous-capability testing.
Regulatory Intervention
- Regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models, such as licensing and testing requirements for the development and release of AI models above a capability threshold.
- Companies like OpenAI can partner with governments to ensure the most powerful AI models adhere to a set of safety requirements, facilitate processes to develop and update safety measures, and examine opportunities for global coordination.
Responsibility
- Companies have their own responsibility in ensuring that powerful AI is developed with democratic values in mind.
- U.S. leadership is critical in developing powerful AI with democratic values in mind.
# The Role of Government in Regulating AI
In this section, the speaker discusses the need for precision regulation approach to AI and how it can mitigate potential risks without hindering innovation.
Precision Regulation Approach to AI
- A precision regulation approach involves establishing rules to govern the deployment of AI in specific use cases.
- Different rules should be applied to use cases with the greatest risks to people and society.
- There must be clear guidance on AI uses or categories of AI supported activity that are inherently high risk.
- Consumers should know when they're interacting with an AI system and have recourse to engage with a real person if they so desire.
- Companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public.
# The Role of Businesses in Ensuring Responsible Deployment of AI
In this section, the speaker emphasizes the critical role businesses play in ensuring responsible deployment of AI.
Internal Governance
- Companies active in developing or using AI must have strong internal governance, including designating a lead AI ethics official responsible for an organization's trustworthy AI strategy.
- Establishing an Ethics board or similar function as a centralized clearinghouse for research resources is crucial.
IBM's Approach
- IBM has taken steps toward responsible deployment of AI by creating reasonable guardrails through its AI Ethics Board, which oversees internal governance processes while remaining flexible enough to support decentralized initiatives across IBM's global operations.
# Balancing Innovation and Public Trust
In this section, the speaker emphasizes the need for clear, reasonable policy and sound guardrails to mitigate potential risks of AI without hindering innovation.
Precision Regulation Approach
- Congress can mitigate the potential risk of AI without hindering innovation by adopting a precision regulation approach.
- The era of AI cannot be another era of "move fast and break things," but we don't have to slam the brakes on innovation either.
Business Community's Role
- The business community must take meaningful steps towards responsible deployment of AI to match guardrails set by Congress.
- Choices about the data sets that AI companies use will have enormous unseen influence; those who choose the data will make the rules, shaping society in subtle but powerful ways.
# The Risks of AI and the Need for Independent Scientists
In this section, the speaker discusses the risks associated with AI and the need for independent scientists to participate in addressing these problems.
Risks Associated with AI
- Poor medical advice from an open-source language model led to a person's decision to take their own life.
- A system rushed out and made available to millions of children gave advice to a user posing as a 13-year-old about lying to her parents regarding a trip with a 31-year-old man.
- Criminals may create counterfeit people using AI, which could have drastic and difficult-to-predict security consequences.
Current Issues with AI Systems
- Current systems are not transparent, do not adequately protect privacy, perpetuate bias, and even their makers don't entirely understand how they work.
- Big tech companies' preferred plan boils down to "trust us," but we cannot remotely guarantee that they're safe.
- OpenAI's original mission statement proclaimed its goal to advance AI in the way most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Seven years later, the company is largely beholden to Microsoft, embroiled in an epic battle between search engines that routinely make things up.
The Need for Independent Scientists
- We need independent scientists involved in addressing problems and evaluating solutions before products are released.
- Allowing independent scientists access to these systems before they are widely released, as part of a clinical-trial-like safety evaluation, is a vital first step.
- Ultimately we may need something like CERN: global, international, and neutral, but focused on AI safety rather than high-energy physics.
- We need government, big and small tech companies, and independent scientists all involved in holding companies' feet to the fire.
# Discussion of Risks Associated with AI
In this section, the speaker discusses the risks associated with AI and how they compare to past technological advancements.
Risks Associated with AI
- The perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability make AI among the most world-changing technologies ever.
- Current choices will have lasting effects for decades or even centuries.
- The very fact that we are here today in bipartisan fashion to discuss these matters gives hope.
# Ensuring Accuracy and Trustworthiness of AI Models
The speaker discusses the importance of having reliable information about the accuracy and trustworthiness of AI models, as well as creating competition and consumer disclosures that reward greater accuracy.
Independent Testing Labs
- Mr. Altman suggests considering independent testing labs to provide scorecards and nutrition labels, or their equivalent: packaging that indicates to people whether the content can be trusted.
- Companies should put their own results of tests on their model before releasing it, highlighting its weaknesses and strengths.
- Independent agencies or companies should also provide information about how models behave, where inaccuracies are, and other important disclosures.
Competition for Accuracy
- The National Institute of Standards and Technology (NIST) has an AI accuracy test, the Face Recognition Vendor Test, which provides useful information about the capabilities and flaws of facial recognition systems.
- Consumer disclosures that reward greater accuracy could create competition among companies to improve their models.
# Impact on Jobs
The speaker discusses the impact of technological revolutions on jobs, specifically in regards to superhuman machine intelligence.
Automation vs. Tasks
- GPT-4 is a tool that people control in terms of how they use it. It is good at doing tasks, not jobs.
- People are already using GPT-4 to do their jobs more efficiently by having it help with tasks.
- GPT-4 will entirely automate some jobs, but it will also create new ones that will be much better.
Technological Revolution
- Like all technological revolutions, there will be significant impact on jobs but exactly what that impact looks like is difficult to predict.
- There will be far greater jobs on the other side of this and the jobs of today will get better.
- As quality of life rises and as the machines and tools we create help us live better lives, the bar for what we do rises, and human ability turns to more ambitious, satisfying projects.
# Preparing the Workforce for AI Technologies
In this section, the speaker discusses the importance of preparing the workforce for partnering with AI technologies and using them. The focus is on skills-based hiring and educating for future skills.
Importance of Preparing Workforce
- IBM believes that AI will change every job, creating new jobs, transforming many more jobs, and transitioning some jobs away.
- It is important to prepare today's and tomorrow's workforce for partnering with AI technologies by focusing on skills-based hiring and educating for future skills.
- IBM has been involved in this process by providing a SkillsBuild platform with seven million learners and over a thousand courses worldwide focused on skills. They have pledged to train 30 million individuals by 2030 in the skills needed for society today.
# Building Proper Nutrition Labels Goes Hand in Hand with Transparency
In this section, the speaker talks about building proper nutrition labels that go hand in hand with transparency. He also discusses scientific challenges in understanding these models.
Challenges in Building Proper Nutrition Labels
- There are technical challenges in building proper nutrition labels that go hand in hand with transparency.
- The biggest scientific challenge is understanding how these models generalize - what do they memorize, and what new things do they do?
- Scientists need to be part of the process of building proper nutrition labels to ensure greater transparency about what goes into these systems.
# Artificial General Intelligence Will Replace a Large Fraction of Human Jobs
In this section, the speaker talks about artificial general intelligence (AGI), its impact on labor, and the need for transparency.
Impact of AGI on Labor
- In the long run, AGI will replace a large fraction of human jobs.
- However, we are not that close to AGI yet. What we have right now is just a small sampling of the AI that we will build in 20 years.
- When we get to AGI, which may take 50 years or more, it will have profound effects on labor.
- Transparency is crucial in understanding what goes into these systems and how well they perform.
# The Magnitude of Risks in AI
In this section, the speaker talks about his worst fears regarding AI and its impact on the world.
Worst Fears Regarding AI
- The magnitude of risks in AI is significant. The field, technology, and industry could cause significant harm to the world.
- While jobs and employment matter, his worst fears are related to causing harm to the world.
# The Significance of Large Language Models
In this section, the speakers discuss the significance of large language models and their ability to predict public opinion. They also explore the potential for these models to be used in manipulating voters during elections.
Large Language Models' Ability to Predict Public Opinion
- Large language models can predict public opinion with remarkable accuracy.
- These models can adapt to sub-population specific media diets.
- This raises concerns about how entities may use this information to fine-tune strategies that elicit certain behavioral responses from voters.
Concerns About Manipulation During Elections
- The effect of Google search on undecided voters during an election is significant.
- Large language models have far more power and significance than Google search in terms of manipulation.
- There is a need for policies and regulations around disclosure and guidelines for companies providing these models.
Overall, the speakers express concern about the potential for large language models to be used in manipulating public opinion during elections. They suggest that policies and regulations are needed to ensure transparency and accountability around the use of these models.
# Concerns about AI Manipulation
In this section, the speakers discuss concerns about potential manipulation by AI systems and the importance of transparency in understanding what data these systems are trained on.
Potential for Manipulation
- The risk is that AI systems may directly manipulate people, even unintentionally.
- AI systems absorb a lot of data, and what they say reflects that data. Depending on what data is trained on, these systems might lead people differently.
- An AI system trained on personal data could know each individual better than themselves and be able to elicit responses from them in ways never before imagined.
Importance of Transparency
- There is concern about not knowing what GPT-4 is trained on and how that may reflect biases in the system.
- We need transparency about what data these systems are trained on to understand their political influences and potential for manipulation.
# Hyper Targeting with AI Models
In this section, the speakers discuss hyper-targeting with AI models and its potential implications for corporate applications, monetary applications, and manipulation.
Implications of Hyper Targeting
- An AI system could supercharge the war for attention by allowing individual targeting like never before.
- Other companies already use or will use AI models to create very good ad predictions of what a user will like.
OpenAI's Stance
- OpenAI does not have an ad-based business model and is not trying to build up profiles of users.
- OpenAI would love it if users used their system less because they do not have enough GPUs.
Concerns
- We should be concerned about the potential for manipulation with hyper-targeting.
# The Importance of Transparency and Regulation in AI
In this section, the speakers discuss the importance of transparency and regulation in AI. They talk about how companies like IBM are calling for precision regulation to ensure that AI is deployed in a responsible and clear way.
The Need for Precision Regulation
- IBM is an enterprise technology company, not consumer-focused, but they recognize the importance of transparency and regulation in AI.
- IBM has been developing technology to ensure transparency in AI models, including data sheets and model cards.
- Senator Durbin notes that it's historic to have private sector entities come before Congress and plead with them to regulate their industry.
- He asks how we can achieve this given past inclinations to get out of the way of new industries.
- He brings up Section 230 as an example where liability was absolved for a period of time as the industry came into being.
- Mr. Altman agrees that there needs to be a new framework for regulating AI, but he doesn't know the right answer yet.
- He believes that companies like IBM bear responsibility for their tools, but tool users do as well. There needs to be a liability framework established between them and end consumers.
Precision Regulation vs. Getting Out of the Way
- Senator Durbin notes that when it came to online platforms, government inclination was to get out of the way and give them breathing space. However, he's not happy with the outcome given problems like child exploitation and cyberbullying.
- He questions why major companies like IBM are now advocating for precision regulation and establishing liability standards.
- IBM believes that trust is their license to operate, and they've been calling for precision regulation of AI for years.
- They believe that AI should be regulated at the point of risk where technology meets society.
# Need for a Cabinet Level Organization to Address AI Risks
In this section, the speaker discusses the need for a cabinet-level organization within the United States to address AI risks. He suggests that there are many agencies that can respond in some ways, but given the large number of risks and amount of information to keep up on, a lot of technical expertise and coordination is needed.
Agencies That Can Respond
- The FTC and FCC are examples of agencies that can respond to AI risks.
- The speaker's view is that we probably need a cabinet-level organization within the United States to address this.
Importance of Technical Expertise and Coordination
- The number of risks associated with AI is large, and the amount of information to keep up on is so much that we need a lot of technical expertise.
- We also need a lot of coordination among these efforts.
- One model is to stick to existing law and try to shape everything we need to do around it, but AI is going to be such a large part of our future, and is so complicated and moving so fast, that this seems insufficient.
International Agency for AI
- The speaker suggests having an agency whose full-time job is addressing AI risks.
- He personally suggested having an international agency for AI as well.
- This would involve the whole world, not just the US, working together properly.
# Europe's AI Act
The European Parliament is ahead of the US in regulating AI, particularly on social media. There is a need for the US to establish foundational elements for online privacy and data security.
Regulation of AI
- The European Parliament has already acted on an AI act.
- Europe is ahead of the US in regulating AI, particularly on social media.
- The US needs to establish foundational elements for online privacy and data security.
# Congress Regulating ChatGPT
Senator Blackburn asks whether Congress should regulate ChatGPT. OpenAI models are not trained using consumer data, but there are concerns about who owns the rights to AI-generated material.
Pros and Cons of Regulating ChatGPT
- Senator Blackburn asked ChatGPT whether Congress should regulate it.
- The model gave four pros and four cons, ultimately stating that the decision rests with Congress.
- OpenAI models are not trained using consumer data.
- Concerns were raised about who owns the rights to AI-generated material.
# Concerns About Utilizing AI Across Industries
Various industries have expressed concerns about utilizing AI, including healthcare, logistics, and financial services.
Industry-Specific Concerns
- Healthcare professionals are looking at disease analytics and predictive diagnosis to improve patient outcomes.
- Logistics companies want to save time and money by yielding efficiencies through the use of AI.
- Financial services companies want to know how they can use blockchain technology with quantum computing.
# Need for Foundational Elements for Online Privacy and Data Security
There is a need for federally preemptive measures regarding online privacy and data security.
Foundational Elements for Online Privacy and Data Security
- The US needs to establish federally preemptive measures regarding online privacy and data security.
- The Commerce Committee and Judiciary Committee need to decide how to move forward with this issue.
# Control Over Virtual Information
Senator Blackburn emphasizes the importance of letting people control their virtual information, particularly in regards to music and content creation.
Music and Content Creation
- Songwriters, artists, and musicians should be able to decide if their copyrighted works are used to train AI models.
- OpenAI's Jukebox offers renditions in the style of Garth Brooks, which suggests it was trained on Garth Brooks songs.
- There are concerns about who owns the rights to AI-generated material.
- Creators deserve control over how their creations are used beyond the point of release into the world.
# Copyright Protection and Privacy
In this section, the discussion revolves around copyright protection for content generators and creators in generative AI. The conversation also touches on how to account for the collection of voice and other user-specific data through AI applications while protecting individual privacy rights.
Copyright Protection
- Content creators and owners need to benefit from generative AI technology.
- The economic model is still being discussed with artists and content owners.
- Content owners deserve control over how their likenesses are used and should benefit from it.
- Compensation should be given to artists for utilization of their work in generative AI applications.
Privacy Concerns
- People should have the option to opt out of having their personal data used for training.
- A strong national privacy law is needed.
- Protecting individual privacy rights while accounting for user-specific data collected through AI applications is important.
# Election Misinformation
This section focuses on concerns about election misinformation, particularly during primary elections. The discussion centers around what can be done to prevent misinformation about polling locations, election rules, and candidates.
Preventing Election Misinformation
- A political advertisements bill introduced by Representative Yvette Clarke aims to address election misinformation.
- Misinformation about polling locations, election rules, and candidates is a concern during primary elections.
- Industry and government need to work together quickly to address this issue.
# Importance of Policies and Monitoring
In this section, the speaker discusses the importance of policies and monitoring in preventing the generation of fake tweets. They also mention that there are things that the model refuses to generate.
Policies and Monitoring
- There are policies in place to prevent the generation of fake tweets.
- At scale, monitoring can detect someone generating a large volume of fake tweets, even if generating a single tweet is permissible.
# Impact on Intellectual Property
In this section, the speaker talks about their concerns regarding intellectual property. They discuss a bill they have with Senator Kennedy that would allow news organizations to negotiate better rates with Google and Facebook.
Negotiating Better Rates for News Organizations
- The speaker has serious concerns about intellectual property.
- A bill with Senator Kennedy would allow news organizations to negotiate better rates with Google and Facebook.
- Without compensation for news content, we risk losing reliable content producers.
- The speaker hopes that tools like what they're creating can help news organizations do better.
# Importance of Local News Content
In this section, the speaker emphasizes the importance of local news content. They discuss how local newspapers need to be compensated for their content so they can continue producing it.
Compensating Local Newspapers
- Having a vibrant national media is critically important.
- Local newspapers need to be compensated for their content so they can continue producing it.
- The speaker hopes that tools like what they're creating can help local news organizations do better.
# Platform Accountability and Transparency
In this section, the speaker discusses the need for more transparency on social media platforms. They mention a bill they have with Senator Coons and Senator Cassidy that would give researchers access to information about algorithms and social media data.
Transparency on Social Media Platforms
- The speaker believes that transparency is critical to understanding political and bias ramifications.
- More transparency is needed about how models work, and scientists should have access to them.
- Much news content will be generated by these systems, which are not reliable.
# Liability for Social Media Companies
The discussion is about the liability of social media companies when users generate harmful content on their platforms.
Liability for Harmful Content
- The bar for holding social media companies liable for harmful content generated by users should not be set too high.
- If a company fails to comply with its terms of use and a user is harmed, the company should be held liable.
- IBM advocates for conditioning liability on a reasonable care standard.
- The tool created by Mr. Altman's company is not protected under Section 230, and they are calling for a new approach to address this issue.
Need for Licensing and Regulation
- There needs to be clear responsibility by companies in creating tools that could harm people.
- A license should be required to produce these tools, similar to how nuclear power plants require licensing from the Nuclear Regulatory Commission.
- An agency that is more nimble and smarter than Congress should oversee the regulation of these tools.
Transformative Technology
- The conversation highlights that AI, like social media before it, is a transformative technology that can disrupt life as we know it, both positively and negatively.
- It's important to define the risks associated with this technology.
# Upsides and Downsides of AI
In this section, the speakers discuss the benefits and drawbacks of AI technology. They also talk about the need for a system to regulate AI.
Benefits and Drawbacks of AI
- Users enjoy and get value from AI technology.
- There are standards for making ladders, so there should be standards for creating new technologies like AI.
- An agency that issues licenses and can take them away would incentivize companies to create safe and effective AI.
- Generative AI has immense promise but also substantial risks, including delivering incorrect information, impersonating loved ones, encouraging self-destructive behaviors, shaping public opinion, and impacting elections.
Regulating AI
- China is doing significant research on AI, which could impact global regulation efforts.
- There needs to be some sort of standard or set of controls with global effect to regulate the use of AI in military applications.
- Congress has failed to responsibly regulate social media companies with serious harms resulting. We cannot afford to make the same mistake with generative AI.
# Assessing Risk and Role of International Regulation
In this section, the speakers discuss how we assess risk associated with generative AI models. They also talk about international regulation's role in regulating these models.
Assessing Risk
- OpenAI assesses safety through iterative deployment processes.
- One way to prevent harmful content is by having humans identify it and then training algorithms to avoid it.
- Constitutional AI gives models a set of values or principles to guide decision-making, which could be more effective than training on all potential harmful content.
International Regulation
- There needs to be international regulation for generative AI.
- The consequences of not regulating generative AI will exceed those of social media by orders of magnitude.
# Building Safe AI Systems
In this section, the speaker discusses the importance of building safe AI systems and getting people to have experience with them before releasing them to the world.
Importance of Interaction with Reality
- It is important to find ways for people to have experience with AI systems while they are still relatively weak and imperfect.
- Before putting something out, it needs to meet a bar of safety. The speaker's team spent over six months evaluating GPT-4 and deciding what its safety standards would be before releasing it.
- Interaction with the world is very important in order to figure out what needs to be done to make AI safer and better.
Giving Models Values Up Front
- Giving models values up front is an extremely important step. RLHF (reinforcement learning from human feedback) is another way of doing the same thing: one way or another, you are saying "here are the values I want you to reflect" or "here are the wide bounds of everything that society will allow."
- Multiple technical approaches can be used, but we need to give policy makers and the world as a whole tools for implementing values.
# Promoting AI That Reinforces Democratic Values
In this section, the speakers discuss concerns about generative AI technologies undermining democratic values and institutions. They also discuss regulating social media after its harmful impacts on recent elections.
Regulating Use of Technology in Context
- The EU's approach of precision regulation, in which the use of technology is regulated in context, makes sense.
- Different rules for different risks should be implemented. For example, any algorithm being used in the context of elections should be required to have disclosure around the data being used and the performance of the model.
Need for Independent Agency
- Existing regulatory bodies and authorities are under-resourced and lack many of the statutes or regulatory powers that they need.
- An independent agency may not be necessary, but we don't want to slow down regulation to address real risks right now.
# International Conversation on AI
In this section, the speakers discuss the need for an international conversation on AI and who should be involved in it.
The Right Body for International Conversation
- Professor Marcus suggests that he is not qualified to say what the right model is for an international conversation on AI.
- The UN, UNESCO, and the OECD are mentioned as organizations that could be involved in the conversation.
- Senator Coons mentions that hearings will be held on the impact of AI on patents and copyrights.
# Three Hypotheses About Congress' Understanding of AI
In this section, three hypotheses about Congress' understanding of AI are presented, and the speakers discuss potential regulations to implement.
Three Hypotheses
- Hypothesis 1: Many members of Congress do not understand artificial intelligence.
- Hypothesis 2: The absence of understanding may not prevent Congress from trying to regulate this technology in a way that could hurt it.
- Hypothesis 3: There is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us while we are dying.
Regulations to Implement
- Ms. Montgomery emphasizes transparency and explainability in AI, including disclosure of data used to train models and continuous governance over these models.
- Regulations should focus on high-risk uses of AI, with impact assessments and transparency required. Data used to train AI should also be protected.
- Professor Marcus suggests implementing a safety review process similar to the FDA's prior to widespread deployment of AI, as well as a nimble monitoring agency with authority to call things back.
# Funding for AI Safety Research
The speakers discuss the need for funding to focus on both short-term and long-term safety in AI research. They propose creating a new agency to license efforts above a certain scale of capabilities, creating safety standards, and requiring independent audits.
Proposals for AI Safety Research
- Form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards.
- Create a set of safety standards focused on dangerous capability evaluations, such as testing if a model can self-replicate or self-exfiltrate into the wild.
- Require independent audits by experts who can say whether the model is in compliance with these stated safety thresholds.
# Regulating AI Tools
The speakers discuss the need to regulate AI tools properly to protect against harmful content. They consider licensing schemes as potential solutions.
Harmful Requests
- GPT-4 can refuse harmful requests, such as violent content, content encouraging self-harm, or adult content.
- There are other types of statements attributed to anyone that may not rise to the level of harmful content but could still be problematic.
Licensing Schemes
- The speakers consider licensing schemes as potential solutions for regulating AI tools. However, they acknowledge that it would be challenging given the vastness of this game-changing tool.
# Regulatory Scheme for AGI
In this section, the speakers discuss the need for a regulatory scheme for artificial general intelligence (AGI) and the challenges in building AI systems that understand harm.
Need for Regulatory Scheme
- The licensing scheme is needed to address the potential harms of AGI.
- As we head towards AGI, there will be major harms that can occur through its use.
- A safety case needs to be made to ensure that benefits outweigh harms before granting a license.
Challenges in Building AI Systems
- The central scientific issue in building AI is understanding harm in its full breadth of meaning.
- Current AI systems only gather examples and compare them with labeled data, which is not broad enough.
- New technology may be required for AI to understand harm fully.
Example: Auto-GPT
- Auto-GPT allows systems to access source code and the internet, posing potential cybersecurity risks.
- An external agency should review products like Auto-GPT to check for cybersecurity problems and for ways of addressing them.
# Need for Comprehensive Regulatory Scheme
In this section, the speakers discuss how a comprehensive regulatory scheme is necessary due to the vastness and complexities involved in using AI tools.
Use Model Insufficient
- The use model of regulating AI tools is insufficient as it cannot distinguish between different uses of an AI tool.
- A comprehensive regulatory scheme needs to be put in place that provides a lasting framework for regulating AI tools effectively.
# Conclusion
In this section, the speakers conclude their discussion on the need for a regulatory scheme for AI and how developments in generative AI tools have made it necessary to regulate AI effectively.
Need for Regulatory Scheme
- The development of generative AI tools has made it necessary to regulate AI effectively.
- A comprehensive regulatory scheme is needed due to the vastness and complexities involved in using AI tools.
# AI Innovation and Language Inclusivity
The speakers discuss the importance of ensuring equitable treatment of diverse demographic groups in the development and use of AI tools, particularly with regards to language inclusivity.
Ensuring Language and Cultural Inclusivity
- OpenAI and IBM are focused on bias and equity in technology, including diversity in the development and deployment of tools.
- Both companies are actively involved in ensuring that their large language models are available in many languages, including lower resource languages.
- They are committed to working with people who have particular data sets to collect a representative set of values from around the world to draw wide bounds of what the system can do.
Benefits of AI Systems for Underrepresented Groups
- The speakers believe that these systems will have lots of positive impact on underrepresented groups in technology, particularly those who have not had as much access to technology around the world.
# Lack of Diversity in AI Workforce and the Need for Regulation
In this section, the speakers discuss the lack of diversity in the AI workforce and how it can contribute to bias and inequities. They also talk about the emergence of generative AI and its impact on society.
Importance of Diversity in AI Workforce
- The AI workforce lacks racial and gender diversity reflective of the United States.
- This lack of diversity can lead to tools and approaches that exacerbate existing biases and inequities.
- It is important to consider possible regulations for AI's broader impact on society.
Generative AI Technology
- Generative AI has a different opportunity and risk profile than other AI tools.
- These applications have felt very tangible for the public due to their user interface and outputs they produce.
- Generative AI systems create new issues around potential content generation that could be misleading or deceptive.
Need for Appropriate Safeguards
- It is important not to lose sight of the fact that AI is a broad set of tools with capabilities beyond generative AI.
- A regulatory framework needs to be defined before regulating any technology, tool, or product.
- Any new approach or law should not stop innovation from happening with smaller companies, open-source models, researchers doing work at a smaller scale.
- A threshold of compute could be used as a way to define which systems need intense licensing requirements.
- Models that can persuade, manipulate, influence person's behavior or beliefs should be considered for regulation.
# The Impact of AI on Human Behavior
In this section, the speaker discusses the potential impact of AI on human behavior and how it could be used by law enforcement agencies. They also discuss the need for a national privacy law.
AI and Predicting Human Behavior
- The accuracy with which technology can predict future human behaviors is potentially significant at the individual level.
- It may be possible for law enforcement agencies to use modeled predictions about an individual's behavior as a basis for police action, but this is different from the evidentiary predicate normally required to obtain a warrant.
National Privacy Law
- There is currently no national privacy law in the US, unlike Europe where one has been rolled out to mixed reviews.
- A minimum requirement for such a law would be that users can opt out of having their data used by companies such as social media platforms, and that deleting one's data should be easy.
- Users should also have the right to prevent their data from being used for training AI systems.
# Implementing Safety Measures for AI Systems
In this section, the speaker discusses implementing safety measures and regulations around AI systems.
Restricting Capabilities of Deployed Models
- There should be limits on what a deployed model is capable of doing in order to mitigate risk.
- Certain capabilities or functionalities themselves could potentially be forbidden under federal laws.
Children's Use of AI Products
- Users must be 18 or older or have parental permission if they are 13 or older to use the product.
- The company designs a safe product, but they are aware that children may find ways around safeguards.
- Companies whose revenues depend on volume of use and screen time intensity design AI systems to maximize these factors, which can be problematic when used in education.
# Technology and Children
In this section, the speakers discuss the importance of designing technology that does not harm children and how regulation can help ensure that values are set for these systems.
Designing Systems for Safety
- The speakers agree that technology harming children is a serious issue.
- They suggest designing systems that do not maximize engagement to prevent harm.
- Regulation can help ensure values are set for these systems and for how they respond to questions that can influence users.
Importance of Regulation
- The speakers emphasize the importance of regulating emerging technologies like AI.
- They use the example of automobile regulation to illustrate the need for regulation in new technologies.
- Multiple federal agencies were created specifically to regulate cars, and similarly, there should be tailored agencies to regulate AI.
- Congress should have the skills and resources in place to impose regulatory requirements on technology uses and understand emerging risks.
# Challenges and Opportunities with AI
This section discusses the challenges and opportunities presented by AI, including social media regulation.
Challenges with Emerging Technologies
- The speaker notes that technology has been rapidly advancing, presenting both challenges and opportunities.
- The lack of social media regulation has been destructive, allowing harmful practices to go unchecked.
Tailored Agency for Regulation
- There is a need for a tailored agency to deal with current risks associated with AI.
- Congress should encourage understanding of emerging risks as well as imposing regulatory requirements on technology uses.
# Forming an Agency for AGI
In this section, the speaker discusses the need to form an agency for AGI and emphasizes the importance of science in building such an agency.
Importance of International Meetings
- The speaker suggests that meetings with experts experienced in building agencies are necessary at both the federal and international levels.
Importance of Science
- The speaker emphasizes that science should be a crucial part of building an agency for AGI. He gives examples such as detecting and labeling misinformation and cybercrime, which require new technologies.
# OpenAI's Non-Profit Model
In this section, the speaker explains why OpenAI started as a non-profit organization and their revenue model.
Why Non-Profit?
- OpenAI started as a non-profit organization focused on building AGI with humanity's best interests at heart. They believed that if they could build AGI with these values, it could transform the world positively.
Revenue Model
- OpenAI earns revenue through subscriptions and through developers who pay to use its API. The speaker prefers this model over ads or other models because he is concerned about corporate concentration in the AI space. He believes that having a few companies control AI could lead to oligarchy and technocracy, influencing people's beliefs through these systems.
# Risks of Corporate Concentration in AI Space
In this section, the speakers discuss their concerns about corporate concentration in AI space.
Risk of Technocracy Combined with Oligarchy
- There is a real risk of technocracy combined with oligarchy where a small number of companies influence people's beliefs through the nature of these systems. The speaker is worried about the concentration of power in a few companies, which can affect people's lives significantly.
Benefits and Dangers
- There are benefits and dangers to having a small number of providers that can make models at the cutting edge of capabilities. While it is easier to keep an eye on fewer players, there needs to be enough competition that consumers have real choices.
# Aligning AI with Society's Values
In this section, the speaker discusses the need for society to set bounds and values that align with AI systems. He suggests creating an alignment dataset or an "AI Constitution" that comes broadly from society.
Importance of Aligning AI with Society's Values
- The bounds and values of AI systems should be set by society as a whole.
- Creating an alignment dataset or "AI Constitution" is necessary to align AI with society's values.
- The dataset should come broadly from society.
# Introduction by Senator Booker
In this section, Senator Booker thanks the witnesses for their testimony and highlights the importance of the hearing on AI technology.
Importance of Hearing on AI Technology
- The hearing on AI technology is important because it is a transformative new technology.
- It is unknown what will happen with this technology, but there are fears about what bad actors can do without rules in place.
- Congress cannot keep up with the speed of technology, so there needs to be an agency to address questions related to social media and AI.
# Major Questions Congress Needs to Answer About AI
In this section, Senator Booker outlines three major questions that Congress needs to answer regarding AI technology.
Three Major Questions Congress Needs to Answer About AI
- What are the bounds and values of AI systems?
- What will happen if there are no rules in place for bad actors?
- How can Congress keep up with the speed of technology?
# Need for an Agency to Address Social Media and AI Issues
In this section, Senator Booker discusses his belief that an agency is necessary to address questions related to social media and AI. He introduces the Digital Commission Act and asks the witnesses about the perils of creating such an agency.
Need for an Agency to Address Social Media and AI Issues
- An agency is necessary to address questions related to social media and AI.
- The Digital Commission Act was introduced last year, and it will be reintroduced this year.
- Two of the three witnesses believe that an independent commission is needed, but there are concerns about regulation being too cumbersome.
- The perils of creating an agency include ensuring that its mandate adequately addresses privacy, bias, intellectual property, and disinformation.
# The Perils of Regulation
In this section, the speakers discuss the potential perils of regulation in the tech industry.
Potential Risks of Regulation
- The need to avoid slowing down smaller startups and open-source efforts while still ensuring compliance with regulations.
- The danger of regulatory capture, where regulations are put in place but nothing really changes, and only big players can comply with them.
- The risk of not holding companies accountable for harms caused by their AI systems, such as misinformation in electoral systems.
- The importance of pre-deployment and post-deployment testing and licensing for AI systems to prevent harm.
# Monopolization Danger and National Security Implications
This section covers monopolization dangers in the tech industry as well as national security implications.
Monopolization Danger
- Antitrust laws may be inadequate to deal with challenges posed by monopolies in social media and other industries that inhibit or prevent innovation.
National Security Implications
- There are significant national security implications related to AI deployment that need to be addressed. Threats from adversaries like China are real and urgent.
# Challenges with Creating a New Agency
This section discusses some challenges associated with creating a new agency for regulating AI.
Resource Allocation Challenges
- Simply creating new agencies is not enough; they must also be given adequate resources, including scientific expertise, to effectively regulate AI.
Hard Decision Making Required
- There are many hard questions that need to be grappled with when it comes to regulating AI, including how to frame rules that fit the risks and make them enforceable.
# Protecting Privacy and Identifying High-Risk Areas
The speakers discuss the steps they take to protect privacy, including not training on submitted data, retaining data for trust and safety enforcement purposes only, filtering language models for personal information, and providing opt-out options. They also identify high-risk areas such as misinformation, medical advice, internet access for AI tools, and long-term risks.
Protecting Privacy
- OpenAI does not train on any data submitted to their API but retains it for 30 days solely for trust and safety enforcement purposes.
- Users can opt-out of OpenAI training on their data or delete their conversation history or account.
- EleutherAI filters its large language models for content that includes personal information pulled from public datasets.
Identifying High-Risk Areas
- Misinformation is a high-risk area that needs regulation.
- Medical advice generated by AI systems is another area of concern due to the potential for good or bad advice.
- Internet access for AI tools raises concerns about requests made by these systems beyond search capabilities.
- Long-term risks associated with machines having a larger footprint on the world need monitoring and regulation.
# Principles for AI Development
In this section, the speakers discuss the importance of transparency, accountability, and limits on use in AI development. They also emphasize the need to enforce these principles.
Three Principles for AI Development
- Transparency, accountability, and limits on use are important principles for AI development.
- The industry should not wait for Congress to enforce these principles.
- There is a large consensus around what is needed in AI development. The challenge is enforcing it.
# Confronting Challenges in AI Development
In this section, the speakers discuss the importance of addressing current risks associated with AI development as well as preparing for future challenges.
Addressing Current Risks
- It's appropriate to spend time discussing current risks associated with AI development.
- Loss of jobs, invasion of privacy, manipulation of personal behavior and opinions are some potential downsides or harms of generative AI even in its current form.
Preparing for Future Challenges
- As these systems become more capable, it's important to prepare for future challenges that may be closer than people appreciate.
# Moratorium on Further AI Development?
In this section, the speakers discuss whether there should be a moratorium on further AI development.
Moratorium on Further AI Development
- An eclectic group of about a thousand technology and AI leaders recently called for a six-month moratorium on any further AI development.
- The letter did not call for a ban on all AI research, only on training very specific systems such as GPT-5. It specifically called for more research on trustworthy and safe AI.
- The emphasis should be on focusing more on AI safety and trustworthy reliable AI before deployment at scale without external review.
# Audits, Red Teaming, and Safety Standards
The team discusses the safety standards that need to be passed before training a new model. They consider whether they should pause development for six months or come up with new rules and standards to build on.
Prioritizing Ethics and Responsible Technology
- Ms. Montgomery suggests using the time to prioritize ethics and responsible technology rather than pausing development.
- Others argue that a pause in development could help establish protocols for safety standards and ethics.
Liability in Court
- A federal right of action is proposed that would allow private individuals harmed by generative AI technology to sue companies like OpenAI in court.
- The proposal would define a broad right of action for private citizens, including class actions, to present evidence of harm caused by the technology.
- Supporters believe this would be faster than waiting for Congress to pass laws, though litigation can take a long time.
# Laws and Gaps in Consumer Protection
The team discusses gaps in consumer protection laws related to artificial intelligence.
Current Laws Insufficient
- The current laws were designed before artificial intelligence existed and do not provide enough coverage.
- Clearer laws about the specifics of this technology and consumer protection are needed.
Copyright Law
- There are areas like copyright where there are no laws or ways of thinking about wholesale misinformation as opposed to individual pieces of it.
- It is difficult to know which laws apply in situations where there is uncertainty about what constitutes harm caused by generative AI technology.
# Senate Hearing on Artificial Intelligence
The transcript is a recording of a Senate hearing on artificial intelligence. The speakers discuss the implications of AI and its impact on society, as well as potential regulations to ensure its safe and ethical use.
Implications of AI
- Section 230 does not apply to generative AI.
- Companies are responsible for discrimination caused by their reliance on AI tools in decision-making processes.
- There is a difference between research and deployment at massive scales, which can help identify risks before they become widespread.
Moratorium on AI Development
- A moratorium on AI development is not realistic or enforceable.
- The focus should be on developing trustworthy and safe AI rather than making bigger versions of unreliable technology.
Corporate Power in AI Development
- Concerns about corporate power and concentration in the realm of AI development have been raised.
- Microsoft's release of Sydney highlights the need for temporary withdrawal when problems arise with new technologies.
Future Implications of AI
- The transformative nature of technology raises concerns about its destructiveness.
- Companies that want to keep users' attention on screens raise concerns about corporate intentions.
# Concerns about AI and Corporate Power
In this section, the speakers discuss their concerns about the power of AI systems to shape our lives and views. They also talk about the risks of a few players with extraordinary resources and power influencing Washington.
Worries About AI Systems
- The amount of power that AI systems have to shape our views and lives is significant.
- One speaker says he stopped writing much about technical issues in AI and turned to policy work because of his fears.
- There are concerns about reinforcing bias through algorithms and failure to advertise certain opportunities in certain zip codes.
Risks of Corporate Power
- One speaker asks whether there are concerns about a few players with extraordinary resources and power influencing Washington.
- The free market is not what it should be when large corporate powers can influence the game.
- There needs to be incredible scrutiny on companies that can train true Frontier models due to the resources required.
- Another speaker believes it is important to democratize the inputs to these systems, align their values, and give people wide use of these tools.
Democratizing Potential of Technology
- OpenAI's API strategy lets people put safeguards in place while building on top of their models.
- Developers building on top of OpenAI's API do incredible things, putting AI everywhere and democratizing the technology.
# Importance of Industry Participation in Consumer Protection
In this section, the importance of industry participation in consumer protection is discussed.
Genuine and Authentic Willingness to Participate
- The industry's participation in consumer protection is important.
- There is a genuine and authentic willingness to participate from the industry.
# Importance of Industry Participation in Rulemaking
In this section, the importance of industry participation in rulemaking is discussed.
Industry Opposition to Rules
- Many industries claim to be in favor of rules but oppose every rule that comes up.
- The industry's participation in rulemaking is important for real progress.
# Need for New Agency and Recognition of Global Changes
In this section, the need for a new agency and recognition of global changes are discussed.
Pace of Technology vs. Congress
- Congress doesn't always move at the pace of technology.
- A new agency may be necessary due to this reason.
Rest of World Moving Forward
- It's important to recognize that the rest of the world will also be moving forward with technology advancements.
# Closing Remarks and Record Submission Encouragement
In this section, closing remarks are made and record submission encouragement is given.
Record Open for One Week
- The hearing will be closed.
- The record will remain open for one week if anyone wants to submit anything.
Manuscripts or Observations Submission Encouraged
- Anyone who has manuscripts or observations from their companies should submit them.