Are military AIs more ethical than biological soldiers?
By Peter Byrne, Project Censored
In 1757, British philosopher David Hume observed:
There is a universal tendency among mankind to think of all things as being like themselves, and to transfer to every object the qualities they are familiarly acquainted with. … We find human faces in the moon, armies in the clouds; and, if not corrected by experience and reflection, our natural tendency leads us to ascribe malice or good-will to everything that hurts or pleases us.

In 2024, more than two and a half centuries later, military AI scholar James Johnson drew on Hume’s insight, observing:
AI technology is already being infused into military machines… designed to look and act like humans. … Anthropomorphic terms like “ethical,” “intelligent,” and “responsible” in the context of machines can lead to false tropes implying that inanimate AI agents are more capable of moral reasoning, compassion, empathy, and mercy—and thus might act more ethically and humanely in warfare than humans.
This is not science fiction
In April, Secretary of Defense Pete Hegseth, the tattooed Christian Nationalist, ordered the armed forces to project “lethal force … at an accelerated pace” in preparation for attacking China with AI-enabled weapons. In accord with the Sinophobic demands of Trump-bribing AI industrialists, Hegseth is instructing the historically unauditable necro-bureaucracy he sits atop to spend untold billions of dollars procuring AI-driven weapons systems, despite the host of structural technological flaws afflicting such instruments of death, as Military AI Watch has extensively reported this year.
In his April 2025 memo to senior Pentagon leadership, Hegseth unilaterally ordered the armed forces to expeditiously “enable AI-driven command and control at Theater, Corps, and Division headquarters by 2027.” The controversial theocrat’s edict is supercharging the profit-seeking agendas of Silicon Valley militarists and revolving-door Beltway AI “experts” who endorse waging war remotely with inherently insecure software platforms.
Concurrent with Hegseth’s order, Government Accountability Office auditors released a report highlighting the irrationality of integrating military AI into command structures. Ignoring informed concerns raised by its own experts, the Pentagon is operationalizing AI systems marketed by their developers as capable of autonomously fighting “smart” battles at light speed.
Prototypes of human-AI hybrids, operating in a globalized “kill web” touted as capable of annihilating human threats identified by predictive algorithms, are now all the rage.
To advance the use of artificial intelligence in military command and control systems, and to create designs for human-AI combat teams and AI-augmented soldiers, the Pentagon has awarded $500 million to the Applied Research Laboratory for Intelligence and Security (ARLIS) at the University of Maryland in College Park.