Arshad Mehmood Awan
Every technological revolution reaches a pivotal moment when the question shifts from what a machine can do to what it should be allowed to do. That moment has arrived for artificial intelligence in warfare. Today, the world watches two global powers confront this question in ways that are starkly divergent—and profoundly consequential.
China’s Ministry of Defence issued a stark warning this week. Spokesperson Jiang Bin explicitly criticized the United States for moving toward a military future that humanity has long envisioned and feared. He invoked the imagery of The Terminator—the 1984 film in which autonomous machines seize control of military systems and turn against humanity. A scenario once dismissed as fiction, Jiang Bin warned, is no longer implausible.
The warning directly targeted the Trump administration’s rapid integration of AI startups into the U.S. military with minimal restrictions. The Pentagon recently confirmed that Grok, the system built by Elon Musk’s company xAI, has been cleared for use in classified environments—technology developed by a private firm now operating in the most sensitive military systems, without any publicly stated limitations. The implications are immediate, not hypothetical.
Yet the more significant story is not Grok itself, but Anthropic, and the consequences it faced for ethical restraint.
Anthropic’s Claude model had become the Pentagon’s most widely deployed frontier AI system, and the only advanced AI operating on classified networks. This reflected deep institutional reliance: the military trusted Claude with sensitive information and integrated it into operational planning. Then the company drew a line. It refused to permit its AI to be used for mass surveillance or fully autonomous weapons, citing ethical principles and responsible AI practices that emphasize meaningful human oversight in matters of life and death.
The U.S. administration’s response was swift and severe. President Trump ordered all federal agencies to cease using Anthropic’s technology. Pentagon leadership declared the company a supply-chain risk, barring contractors and partners from commercial engagement. Compliance, not conscience, had become the operative requirement.
This sequence is revealing. Every major power is exploring military AI, but the U.S. is now actively penalizing firms that insist on ethical guardrails. The demand is not for responsible AI—it is for unconditional subservience. This marks a profound and dangerous shift in the governance of emerging technologies.
China’s warnings merit careful attention rather than dismissal. Jiang Bin highlighted four alarming developments: the unrestricted use of AI in military operations, employing AI to violate national sovereignty, permitting AI to influence decisions about war, and granting algorithms authority over life and death. Each of these is already under consideration or implementation in the United States, without clear ethical frameworks, legal accountability, or international agreement.
The question of algorithmic decision-making in life-and-death scenarios is not rhetorical; it is the central moral dilemma of our time. Unlike human soldiers, whose decisions are embedded in chains of command, conscience, and legal responsibility, autonomous systems lack clear accountability. When an algorithm makes a lethal decision, no existing legal or institutional framework can reliably assign responsibility. This is not a minor flaw—it represents a potential collapse of the principle of accountability in warfare.
The “Terminator scenario” is not about sentient robots turning on humanity. The real threat is far more immediate: autonomous systems operating at speeds beyond human comprehension, making irreversible decisions across complex, multi-domain environments, with errors and escalations unfolding faster than any human can intervene. Accidents have started wars before—but never at machine speed, across multiple theaters, beyond the reach of human control.
Pakistan and China have called for restraint in the face of accelerating U.S. military AI deployment. This is not a call for naïveté, but for preemptive international agreement on the limits of AI in warfare. Certain thresholds, once crossed, cannot be reversed.
Anthropic understood this risk—and paid a steep price for it. The international community should take note: the tension between ethical conscience and governmental demand for unconditional compliance will determine not only the future of artificial intelligence but the future of warfare, accountability, and ultimately, human survival.