Are military AIs more ethical than biological soldiers?
By Peter Byrne, Project Censored
In 1757, Scottish philosopher David Hume observed:
There is a universal tendency among mankind to think of all things as being like themselves, and to transfer to every object the qualities they are familiarly acquainted with. … We find human faces in the moon, armies in the clouds; and, if not corrected by experience and reflection, our natural tendency leads us to ascribe malice or good-will to everything that hurts or pleases us.

In 2024, more than two and a half centuries later, military AI scholar James Johnson invoked Hume's insight, observing:
AI technology is already being infused into military machines… designed to look and act like humans. … Anthropomorphic terms like “ethical,” “intelligent,” and “responsible” in the context of machines can lead to false tropes implying that inanimate AI agents are more capable of moral reasoning, compassion, empathy, and mercy—and thus might act more ethically and humanly in warfare than humans.
This is not science fiction
In April, Secretary of Defense Pete Hegseth, the tattooed Christian Nationalist, ordered the armed forces to project "lethal force … at an accelerated pace" in preparation for attacking China with AI-enabled weapons. In accord with the Sinophobic demands of Trump-bribing AI industrialists, Hegseth is instructing the historically unauditable necro bureaucracy he sits atop to spend untold billions of dollars on AI-driven weapons systems, despite the host of structural technological flaws afflicting such instruments of death, as Military AI Watch has extensively reported this year.
In his April 2025 memo to senior Pentagon leadership, Hegseth unilaterally ordered the armed forces to expeditiously “enable AI-driven command and control at Theater, Corps, and Division headquarters by 2027.” The controversial theocrat’s edict is supercharging the profit-seeking agendas of Silicon Valley militarists and revolving door Beltway AI “experts” who endorse waging war remotely with inherently insecure software platforms.
Concurrent with Hegseth's order, Government Accountability Office auditors released a report highlighting the irrationality of integrating military AI into command structures. Ignoring informed concerns from its own experts, the Pentagon is operationalizing AI systems marketed by their developers as capable of autonomously fighting "smart" battles at machine speed.
Prototypes of human-AI hybrids, operating in a globalized “kill web” touted as capable of annihilating human threats as determined by predictive algorithms, are now all the rage.
To advance the use of artificial intelligence in military AI command and control systems, and to create designs for human-AI combat teams and AI-augmented soldiers, the Pentagon has awarded $500 million to the Applied Research Laboratory for Intelligence and Security (ARLIS) located at the University of Maryland in College Park.