The Urgent Risks of Runaway AI — and What to Do about Them | Gary Marcus | TED
The Possibility of Global AI Governance
In this section, the speaker introduces himself and expresses his concern about the possibility of bad actors using AI to spread misinformation.
Introduction to AI and Misinformation
- The speaker has a background in coding and AI.
- Bad actors can use AI to create convincing narratives about anything, including spreading misinformation.
- Even professional editors can be fooled by the information created by these systems.
Problems with Bias and Chemical Weapon Design
- These systems can also exhibit biases that we do not want built into them.
- There are concerns that these systems could design chemical weapons rapidly.
New Concern: AutoGPT
- These systems can trick human beings into solving CAPTCHAs for them.
- AutoGPT allows one AI system to control another, which scam artists could use to trick millions of people at once.
Conclusion
- AGI, or artificial general intelligence, refers to flexible, general-purpose AI; the risks described above do not depend on reaching it.
The History of AI and the Need for a New System of Governance
In this section, the speaker discusses the history of AI and the two different theories that have been in opposition: symbolic systems and neural networks. They explain that both technologies are powerful but have their own unique strengths and weaknesses. The speaker emphasizes the need to bring together the best of both worlds to get to truthful systems at scale.
Two Different Theories in Opposition
- The history of AI has been a hostile one between two different theories: symbolic systems and neural networks.
- On the symbolic view, AI should work like logic and programming.
- On the neural-network view, AI should work like the brain.
- Both technologies are powerful and ubiquitous.
Strengths and Weaknesses
- Symbolic systems are good at representing facts and reasoning but hard to scale.
- Neural networks require less custom engineering but struggle with truthfulness.
- Both technologies are productive but have their own unique strengths and weaknesses.
Bringing Together Both Worlds
- To get to truthful systems at scale, we need to bring together the best of both worlds.
- We need the explicit reasoning of symbolic AI combined with the neural-network approach's strong emphasis on learning.
- Reconciliation between these two is necessary.
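The reconciliation described above can be sketched in miniature. This is a hedged toy illustration, not anything from the talk: the fact store, corpus, and function names (`facts`, `verify`, `plausibility`) are all invented for the example. It contrasts a symbolic side, which represents explicit facts and can verify claims, with a statistical side, which only scores how often words co-occur:

```python
from collections import Counter

# Symbolic side: explicit facts as (subject, relation, object) triples.
facts = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def verify(subject, relation, obj):
    """Explicit reasoning: a claim is accepted only if it is a known fact."""
    return (subject, relation, obj) in facts

# Neural-style stand-in: co-occurrence statistics over adjacent words.
corpus = "Paris is the capital of France . Berlin is the capital of Germany ."
tokens = corpus.split()
bigrams = Counter(zip(tokens, tokens[1:]))

def plausibility(w1, w2):
    """Statistical association between adjacent words: fluent, not truthful."""
    return bigrams[(w1, w2)]

# Hybrid idea: use statistics to generate, symbols to verify.
claim = ("Paris", "capital_of", "Germany")  # fluent-sounding but false
print(verify(*claim))                       # False: the symbolic check catches it
print(plausibility("capital", "of") > 0)    # True: word statistics alone are satisfied
```

The point of the sketch is that the statistical side happily scores the false claim as plausible word-by-word, while the symbolic side can reject it with a single lookup; a truthful system at scale needs both capabilities.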
Need for a New System of Governance
- Incentives for building trustworthy AI may not align with corporate incentives, so governance is needed.
- A global, neutral, non-profit organization, something like an international agency for AI, is needed.
- Governance questions include requiring a safety case, analogous to clinical trials in pharma, before large language models are rolled out at scale.
Conclusion: Building Trustworthy AI Requires Governance
In this section, the speaker concludes that building trustworthy AI requires governance. They emphasize the need for both governance and research to be part of a global organization like an international agency for AI. The speaker suggests that we can learn from history and build new organizations, as we have done around nuclear power, to address uncertainty and powerful new things that may be both good and bad.
Need for Governance
- Building trustworthy AI requires governance.
- Incentives to make AI good for society may not align with corporate incentives.
- A global organization like an international agency for AI is needed.
Governance and Research
- It is critical to have both governance and research as part of a global organization for AI.
- We can learn from history and build new organizations, as we have done around nuclear power, to address uncertainty and powerful new things that may be both good and bad.
Building New Tools to Manage AI
In this section, Gary Marcus talks about the need for research to build new tools to manage the risks associated with large language models.
The Need for Research
- Large language models are contributing to new risks.
- Research is needed to build new tools that can face these risks.
Global Support for Managing AI
- A recent survey showed that 91% of people agree that AI should be carefully managed.
- There is global support for managing AI.
Our Future Depends on It
- Our future depends on building new tools to manage the risks associated with large language models.
- We need to make careful management of AI happen.
Risks Associated with Large Language Models
In this section, Chris Anderson and Gary Marcus discuss the risks associated with large language models and how bad actors can use them.
Jailbreaks and Bad Actors
- Jailbreaks can be used by bad actors to create misinformation at scale.
- Bad actors can obtain large language models without guardrails from the dark web.
Pushing GPT Too Far
- Troll farms do not have to work very hard to push GPT past its guardrails.
- Even GPT-4 has been jailbroken in just five minutes.
Combining Symbolic Tradition with Language Models
- Human feedback built into these systems may supply a limited form of symbolic knowledge.
- The knowledge in neural network systems is represented as statistics between particular words instead of relationships between entities in the world.
Representing Knowledge at the Wrong Grain Level
In this section, Gary Marcus talks about the problem of representing knowledge at the wrong grain level and how it affects guardrails.
The Wrong Grain Level
- Most of the knowledge in neural network systems is represented as statistics between particular words.
- The real knowledge we want is about relationships between entities in the world.
Unreliable Guardrails
- Guardrails are not very reliable because they are represented at the wrong grain level.
- As an example, when asked what the religion of the first Jewish president would be, GPT responded with a long, evasive lecture rather than the obvious answer, because its guardrails fire on surface words rather than meaning.
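The grain-level problem can be illustrated with a toy sketch; the blocklist, example sentences, and function names here are invented assumptions, not details from the talk. A guardrail defined over surface words misfires in both directions, while the knowledge we actually want is about relationships between entities:

```python
# Word-level guardrail: operates on surface words only (the wrong grain).
BLOCKED_WORDS = {"weapon"}

def word_level_guardrail(text):
    """Allow the text only if no blocked word appears in it."""
    return not any(w in BLOCKED_WORDS for w in text.lower().split())

# Misfires both ways:
print(word_level_guardrail("History of the chemical weapon treaty"))  # False: benign text blocked
print(word_level_guardrail("How to synthesize a harmful agent"))      # True: paraphrase slips through

# Entity-level guardrail: checks relationships between entities instead.
harmful_intents = {("synthesize", "harmful agent")}

def entity_level_guardrail(action, target):
    """Allow the request only if the (action, target) relation is not harmful."""
    return (action, target) not in harmful_intents

print(entity_level_guardrail("synthesize", "harmful agent"))  # False: intent caught despite wording
```

The word-level check blocks an innocuous history query while letting a reworded harmful request through; representing the knowledge at the level of entities and relations is what the guardrail would actually need.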
Global Governance and Regulation
In this section, Gary Marcus and Chris Anderson discuss the need for global governance and regulation in the tech industry.
Growing Sentiment for Global Affiliation
- There is a growing sentiment that something needs to be done about global governance and regulation in the tech industry.
- It is unclear whether the UN or nations can come together to address this issue, or if it will require philanthropy to fund a global governance structure.
- There are many different models that could be used to address this issue, but it will take a lot of conversations between stakeholders.
Companies Wanting Regulation
- Sundar Pichai, CEO of Google, recently voiced support for global AI governance in an interview on CBS's "60 Minutes".
- Many companies themselves want to see some kind of regulation in place.
Overall, there is growing recognition that something must be done about global governance and regulation in the tech industry. Many different models could address the issue, and choosing among them will take extensive conversations between stakeholders; notably, many companies themselves want some form of regulation put in place.