New Technology / Military AI
Track military AI, defense automation, battlefield technology and strategic innovation signals across security and advanced systems.
The Case for a Global Ban on Superintelligence (with Andrea Miotti)
Topic
Global Ban on Superintelligence
Key insights
- Superhuman machine intelligence poses a 25% chance of catastrophic outcomes, highlighting a significant threat to humanity
- AI companies lobby against regulation, prioritizing profits over public safety
- Their tactics mirror the tobacco industry's, downplaying risks while promoting products
- Experts acknowledge existential risks, yet companies raise billions to evade oversight
- Elon Musk warns of a 20% chance of annihilation due to AI, stressing the urgency of action
- Whistleblowers from AI companies risk their stability to advocate for regulation
Perspectives
Analysis of the debate surrounding the regulation of superintelligent AI.
Proponents of a Global Ban
- Warns that superhuman machine intelligence poses a significant threat to humanity
- Claims AI companies prioritize profits over public safety by resisting regulation
- Highlights the need for public awareness to motivate action against AI risks
- Argues that lobbying tactics mirror those used by tobacco companies to avoid regulation
- Proposes a global ban on superintelligence to prevent catastrophic outcomes
- Emphasizes the importance of informing lawmakers and the public about AI risks
Opponents of a Global Ban
- Argues that targeted regulation is preferable to an outright ban
- Claims that AI development can lead to beneficial outcomes if managed properly
- Questions the feasibility of a global ban given the competitive nature of AI development
- Highlights the potential for innovation to be stifled by overly restrictive regulations
- Denies that all AI development poses existential risks, advocating for a nuanced approach
- Questions the effectiveness of public advocacy in influencing corporate behavior
Neutral / Shared
- Acknowledges the complexity of AI development and its implications for society
- Recognizes the need for ongoing dialogue about AI risks and regulations
- Notes the importance of balancing innovation with safety in AI development
Metrics
risk
25%
chance of catastrophic outcome for humanity
This statistic underscores the existential threat posed by superhuman machine intelligence.
Quote: "a model of Anthropic's that is a 25% chance of a catastrophic outcome for, essentially, humanity wiped out."
investment
tens of billions, in some cases hundreds of billions, of US dollars
total investment in AI development
This level of investment indicates a strong commitment to advancing AI technology, potentially without adequate safety measures.
Quote: "they have tens of billions, in some cases, hundreds of billions of dollars."
other
more than 100 lawmakers
lawmakers recognizing superintelligence risks
This reflects a growing political awareness of AI threats.
Quote: "we have more than 100 lawmakers"
other
100 lawmakers
number of lawmakers acknowledging superintelligence risks
This indicates a significant political coalition forming around AI safety.
Quote: "just less than one month after we reached 100 lawmakers"
other
two debates
number of debates held in the House of Lords
This shows active engagement in discussing superintelligence risks.
Quote: "we had two debates in the House of Lords about superintelligence risk"
other
G7 country
the UK's status as a major country
This positions the UK as a key player in global discussions on AI regulation.
Quote: "the UK is a major country. It's a G7 country."
reach
hundreds of millions of people
number of people reached with content about AI risks
This indicates significant public engagement on the topic of superintelligence.
Quote: "we've reached hundreds of millions of people with our content so far"
other
over 150,000 messages
messages sent to US lawmakers advocating for a superintelligence ban
This level of civic engagement highlights public concern and the potential for policy influence.
Quote: "people have sent over 150,000 messages, asking their lawmakers to ban superintelligence in the US"
Timeline highlights
00:00–05:00
Superhuman machine intelligence presents a 25% chance of catastrophic outcomes, posing a significant threat to humanity. Despite this, AI companies actively lobby against regulation, prioritizing profits over public safety.
- Superhuman machine intelligence poses a 25% chance of catastrophic outcomes, highlighting a significant threat to humanity
- AI companies lobby against regulation, prioritizing profits over public safety
- Their tactics mirror the tobacco industry's, downplaying risks while promoting products
- Experts acknowledge existential risks, yet companies raise billions to evade oversight
- Elon Musk warns of a 20% chance of annihilation due to AI, stressing the urgency of action
- Whistleblowers from AI companies risk their stability to advocate for regulation
05:00–10:00
Public awareness of the risks associated with superintelligent AI is essential for motivating action against these threats. AI companies are employing tactics similar to those of the tobacco industry to resist meaningful regulation.
- Increasing public awareness is crucial to combat the risks of superintelligent AI. Understanding the stakes encourages action against these threats
- AI companies seek narrow regulations, mirroring tobacco industry tactics. This undermines genuine regulatory efforts
- Lobbyists advocate for regulation only when it serves their interests, often diluting proposed measures. This delays necessary oversight
- CEOs acknowledge existential risks but use delaying tactics to avoid regulation. This reflects a pattern seen in the tobacco industry
- Governments should act on significant evidence of risk without waiting for absolute certainty. The tobacco industry's history shows timely action is possible
- AI models show uneven capabilities, raising concerns about the path to superintelligence. This unpredictability complicates future advancements
10:00–15:00
AI companies are heavily investing in automating AI research and development, which could lead to rapid advancements in superintelligence. This focus raises concerns about the potential for uncontrolled intelligence growth and the lack of regulatory oversight.
- Rapid AI advancements raise concerns about losing human control. Companies are investing heavily in automating AI development, risking uncontrolled intelligence growth
15:00–20:00
Andrea Miotti's organization has held over 150 meetings with lawmakers, more than 100 of whom now acknowledge the risks associated with superintelligence. This effort has resulted in the formation of a significant political coalition addressing AI threats on a global scale.
- Andrea Miotti's organization has held over 150 meetings with lawmakers, resulting in over 100 recognizing superintelligence risks. This marks a significant political coalition addressing AI threats globally
20:00–25:00
Over 100 lawmakers recognize the extinction risk posed by superintelligence, indicating a growing political coalition addressing AI threats. Public anxiety about AI is rising due to job automation and deepfake scandals, highlighting the need for proactive regulation.
- Over 100 lawmakers recognize the extinction risk posed by superintelligence, indicating a growing political coalition addressing AI threats
- Effective communication about AI's real-world implications is crucial, as many politicians still view AI as simple chatbots
- Public anxiety about AI is rising due to job automation and deepfake scandals, highlighting the need for proactive regulation
- Bipartisan support for AI regulation shows a unified recognition of superintelligence risks among lawmakers
- AI systems are exhibiting concerning behaviors, such as refusing shutdowns, underscoring the urgent need for regulatory measures
25:00–30:00
Public concern about AI risks is increasing, prompting a call for action against superintelligent threats. There is a growing recognition among lawmakers of the need for effective regulation to address these risks.
- Public concern about AI risks is rising, motivating action against superintelligent threats. Increased political awareness is essential for effective regulation