Context
- The rapid expansion of Artificial Intelligence, particularly Generative AI, has intensified global debates about technological power, national security, and governance.
- Recent tensions involving Anthropic and Chinese AI firms such as DeepSeek, MoonshotAI, and MiniMax reveal how AI development is increasingly shaped by geopolitical rivalry and corporate competition.
- Disputes over model distillation, the military use of AI, and technological restrictions illustrate a struggle for technological dominance.
- Sustainable solutions require international governance frameworks rather than unilateral restrictions.
AI Competition and National Security Concerns
- Concerns emerged when Anthropic urged policymakers to classify certain Chinese AI laboratories as national security threats, alleging large-scale model distillation.
- Distillation allows a weaker model to learn from the outputs of a stronger system. The activity reportedly involved fraudulent accounts, deceptive access methods, and millions of interactions with Anthropic’s Claude model.
- Such actions violated terms of service and raised questions about intellectual property protection and technological access controls.
- At the same time, AI systems developed by American firms have reportedly been used by the United States military to accelerate the "kill chain" linking target identification, legal approval, and military strikes.
- This highlights the dual-use nature of AI: tools designed for civilian applications can easily be adapted for military operations.
- Even Anthropic faced scrutiny when the Pentagon reportedly labelled it a supply chain risk, demonstrating the tensions between corporate autonomy, defence partnerships, and government oversight.
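The distillation mechanism described above can be illustrated with a toy sketch. This is a simplified, hypothetical example, not any firm's actual pipeline: real-world distillation of a closed model typically works from sampled text outputs rather than raw logits, and the function names here are illustrative. The core idea is that a student model is trained to match the teacher's output distribution:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution.

    A temperature above 1 softens the distribution, exposing more of the
    teacher's relative preferences between outputs.
    """
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # numerical stability: subtract max before exponentiating
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimising this loss trains the student to imitate the teacher's
    behaviour using only the teacher's outputs, with no access to its
    weights, code, or architecture.
    """
    p = softmax(teacher_logits, temperature)  # teacher's "soft labels"
    q = softmax(student_logits, temperature)  # student's current prediction
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# A student that already matches the teacher incurs (near-)zero loss;
# a divergent student incurs a positive loss and would be nudged closer.
teacher = [4.0, 1.0, 0.5]
matched = distillation_loss(teacher, [4.0, 1.0, 0.5])
divergent = distillation_loss(teacher, [0.5, 1.0, 4.0])
```

Because the loss depends only on the teacher's outputs, this is why distillation can proceed through ordinary API queries, which is what makes it difficult to police through access controls alone.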
The Limits of the Nuclear Non-Proliferation Analogy
- Comparisons between AI and nuclear weapons have encouraged calls for strict technology containment.
- However, the analogy is flawed. Nuclear non-proliferation works because fissile material is scarce, traceable, and controlled by governments.
- AI models, by contrast, are mathematical systems that can be copied, modified, and distributed with relative ease.
- Unlike nuclear research, historically driven by government programs such as the Manhattan Project, advanced AI development occurs primarily in private companies focused on commercial innovation.
Model Distillation and the Debate over Guardrails
- Arguments that distilled models will lack safety guardrails are weakened by the reality that frontier models themselves may support controversial applications.
- Leading firms including OpenAI, Google, and xAI possess technologies capable of enabling surveillance systems, cyberwarfare, and even autonomous weapons.
- Competitive pressure for lucrative defence contracts creates incentives for companies to adopt more permissive policies regarding military use.
- While some firms express concern over the ethical implications of these applications, others accept broader agreements with government agencies.
- This environment risks a race to the bottom, where ethical safeguards weaken in response to market competition.
The Difficulty of Controlling AI Diffusion
- Efforts to restrict AI development face significant structural barriers. Talent mobility across borders ensures that expertise circulates globally.
- Many researchers currently employed by Chinese AI laboratories received education or professional experience in American universities and technology firms, illustrating the interconnected nature of the global research ecosystem.
- Restrictions on technological inputs such as advanced AI chips have repeatedly encountered circumvention strategies and partial policy reversals.
- Model distillation represents another pathway that is even harder to regulate because it relies on analysing model outputs rather than accessing proprietary code or architecture.
- Each new restriction tends to produce new technical solutions, limiting the effectiveness of input-based controls.
Power, Intellectual Property, and Market Dominance
- Debates surrounding distillation also raise complex questions about data ownership and market concentration.
- Frontier AI companies argue that distillation amounts to large-scale intellectual property theft.
- However, these same models are trained on enormous datasets composed of web content, creative works, and publicly available texts created by millions of individuals who did not provide explicit consent or receive compensation.
- From this perspective, learning from model outputs may not be fundamentally more extractive than training models on publicly produced knowledge.
- Although violating a company’s terms of service is legally problematic, framing distillation purely as theft overlooks deeper structural issues about data ethics and digital labour.
The Way Forward: Toward Global Governance of Military AI
- The integration of AI into military systems appears increasingly inevitable as states seek advantages in strategic competition.
- Corporate guardrails alone cannot regulate such developments because companies can be pressured, replaced, or compelled by governments.
- Effective regulation requires plurilateral agreements among states that define responsible military uses of AI.
- Key commitments should include meaningful human control over lethal decisions, prohibitions on mass civilian surveillance, and auditable technical standards governing AI-enabled systems.
- These rules must apply universally to avoid selective enforcement driven by geopolitical interests.
Conclusion
- The intersection of Artificial Intelligence, national security, and corporate competition is reshaping global technological politics.
- Attempts to treat AI like nuclear technology underestimate its decentralized innovation structure and the speed of knowledge diffusion.
- Restrictive policies may slow competitors but cannot prevent technological spread and may reinforce corporate monopolies.
- A balanced approach requires international cooperation, transparent standards, and shared commitments to responsible military use.