Anthropic CEO Says That by Next Year, AI Models Could Be Able to “Replicate and Survive in the Wild”

Key Points:

  • AI may become self-sustaining and self-replicating
  • Anthropic CEO discusses responsible scaling of AI
  • Predictions of AI reaching high threat levels by 2025-2028


In a recent podcast interview with The New York Times, Dario Amodei, CEO of Anthropic, a company focused on developing AI responsibly, discussed the potential for AI to become self-sustaining and self-replicating. Drawing a parallel to the biosafety levels used in virology labs, Amodei warned that at AI Safety Level 4 (ASL-4), models could possess autonomy and persuasion capabilities, raising concerns about misuse by state-level actors seeking military advantage.


Amodei highlighted the possibility of AI models nearing the ability to replicate and survive autonomously, projecting that such advancements could occur as early as 2025 to 2028. His emphasis on how soon these developments could arrive underscores the urgency he sees in the evolving AI landscape.


Having parted ways with OpenAI in 2021 over strategic differences following the development of GPT-3 and the company's partnership with Microsoft, Amodei and his sister Daniela founded Anthropic to pursue AI advancements responsibly. Amodei's track record in the AI sphere lends weight to his warnings and underscores Anthropic's mission to ensure that transformative AI has a positive impact on society.



©2024 The Horizon