A Closer Look at Strategic Affairs and the AI Factor
April 18, 2025

Context

  • The emergence of artificial intelligence (AI) as a powerful and rapidly evolving technology has sparked global conversations about its potential risks, especially in the context of strategic affairs.
  • In particular, speculation about the development of artificial general intelligence (AGI), AI capable of surpassing human cognitive abilities and solving problems beyond its training, has intensified concerns about a looming AI arms race.
  • A recent high-profile paper by Eric Schmidt, Dan Hendrycks, and Alexandr Wang attempts to address these concerns by proposing mechanisms to manage AI’s potential threats.
  • While their contribution is valuable for initiating a necessary conversation, some of their arguments, particularly those drawing analogies with nuclear deterrence, warrant critical scrutiny.

Flawed Analogies: AI vs. Nuclear Weapons

  • One of the central proposals in Schmidt, Hendrycks, and Wang’s paper is the concept of Mutual Assured AI Malfunction (MAIM), inspired by the Cold War-era doctrine of Mutual Assured Destruction (MAD).
  • MAD posits that the use of nuclear weapons by one power would lead to devastating retaliation, ensuring mutual annihilation and thereby deterring initial aggression.
  • MAIM, by analogy, is framed as a deterrent against the development or deployment of rogue super-intelligent AI, enforced by the threat of reciprocal technological sabotage.
  • However, this analogy is fundamentally flawed. Nuclear deterrence relies on the certainty of devastating physical retaliation, a logic that does not translate to the digital, decentralised, and often opaque nature of AI development.
  • Unlike nuclear weapons, AI systems are not confined to centralised or easily identifiable physical infrastructure.
  • AI projects are often distributed across networks of developers and institutions, making the idea of pre-emptively “destroying” them problematic and potentially escalatory.
  • Moreover, relying on sabotage or pre-emptive strikes as a deterrent raises ethical and legal questions, and may justify aggressive state behaviour based on speculative threats.

The Challenge of Non-Proliferation and Surveillance

  • The paper also proposes controlling the distribution of AI chips in the same way that nuclear materials, such as enriched uranium, are regulated.
  • While the intent is to limit access to computing power necessary for training powerful AI models, this approach oversimplifies the nature of AI development.
  • Unlike nuclear materials, AI models do not require continued physical inputs once trained; a finished model’s weights can be copied and run on widely available hardware.
  • Moreover, the hardware required for training is becoming increasingly commodified, making strict supply chain control difficult to implement effectively.
  • The suggestion that states can pre-emptively dismantle rogue AI initiatives assumes near-perfect surveillance and intelligence capabilities, an assumption far removed from current reality.
  • Even the most sophisticated intelligence networks struggle to detect clandestine technological projects.
  • The risk of escalation through misidentification or miscalculation cannot be ignored, further weakening the feasibility and safety of strategies like MAIM or pre-emptive sabotage.

Speculative Risks and Oversight Dynamics

  • The authors also speculate that worst-case scenarios, such as AI-powered bioweapons and cyberattacks, are becoming inevitable.
  • While these are valid areas of concern, there is currently insufficient evidence to suggest that such uses are imminent or that AI meets the threshold of being considered a weapon of mass destruction.
  • The narrative of inevitability can lead to premature or disproportionate policy responses, driven more by fear than grounded risk assessment.
  • Furthermore, the assumption that AI development will be primarily state-driven overlooks the dominant role of the private sector in AI innovation.
  • In today’s landscape, governments often rely on private companies for access to cutting-edge AI technologies.
  • This dynamic complicates states’ ability to fully control or regulate AI in the manner of nuclear programmes, which are traditionally state-run and tightly secured.

The Way Forward

  • The Need for Better Frameworks
    • Given these complexities, historical analogies to nuclear deterrence offer limited value for formulating AI policy.
    • Instead, more appropriate frameworks are required—ones that reflect AI’s unique characteristics as a dual-use, general-purpose technology.
    • One such alternative is the General-Purpose Technology (GPT) framework, which considers technologies that have broad applications across sectors and significantly impact a state's economic and strategic capabilities.
    • Although AI has the potential to evolve into a true GPT, current limitations, such as the still-narrow practical utility of large language models (LLMs), suggest that it has not yet achieved the diffusion or generality that the definition requires.
  • Expansion of Scholarship
    • What is urgently needed is an expansion of rigorous scholarship that examines AI within the context of strategic affairs.
    • Such work must avoid sensationalism and instead prioritise practical, well-grounded policy recommendations.
    • Only through sustained research, interdisciplinary collaboration, and informed debate can policymakers develop strategies that are both realistic and responsive to the evolving nature of AI technology.

Conclusion

  • The strategic implications of AI, particularly the possibility of AGI, are too significant to be guided by flawed historical analogies or speculative thinking.
  • While efforts to draw attention to AI's risks are important, proposals like MAIM and chip-based non-proliferation must be critically evaluated for feasibility, ethics, and long-term consequences.
  • As AI continues to evolve, so too must the frameworks used to understand and regulate it.
  • Future policymaking must be rooted in nuanced understanding, tailored approaches, and robust intellectual foundations that reflect the distinctive nature of AI rather than retrofitting outdated strategic doctrines.