Are military AIs more ethical than biological soldiers?
By Peter Byrne, Project Censored
In 1757, British philosopher David Hume observed:
There is a universal tendency among mankind to think of all things as being like themselves, and to transfer to every object the qualities they are familiarly acquainted with. … We find human faces in the moon, armies in the clouds; and, if not corrected by experience and reflection, our natural tendency leads us to ascribe malice or good-will to everything that hurts or pleases us.

In 2024, more than two and a half centuries later, military AI scholar James Johnson referenced Hume’s insight, observing,
AI technology is already being infused into military machines… designed to look and act like humans. … Anthropomorphic terms like “ethical,” “intelligent,” and “responsible” in the context of machines can lead to false tropes implying that inanimate AI agents are more capable of moral reasoning, compassion, empathy, and mercy—and thus might act more ethically and humanly in warfare than humans.
This is not science fiction
In April, Secretary of Defense Pete Hegseth, the tattooed Christian Nationalist, ordered the armed forces to project “lethal force … at an accelerated pace” in preparation for attacking China with AI-enabled weapons. In accord with the Sinophobic demands of Trump-bribing AI industrialists, Hegseth is instructing the historically unauditable necro bureaucracy he sits atop to spend untold billions of dollars on AI-driven weapons systems, despite the host of structural technological flaws afflicting such instruments of death, as Military AI Watch has extensively reported this year.
In his April 2025 memo to senior Pentagon leadership, Hegseth unilaterally ordered the armed forces to expeditiously “enable AI-driven command and control at Theater, Corps, and Division headquarters by 2027.” The controversial theocrat’s edict is supercharging the profit-seeking agendas of Silicon Valley militarists and revolving door Beltway AI “experts” who endorse waging war remotely with inherently insecure software platforms.
Concurrent with Hegseth’s order, Government Accountability Office auditors released a report highlighting the irrationality of integrating military AI into command structures. Ignoring informed concerns raised by its own experts, the Pentagon is operationalizing AI systems marketed by their developers as capable of autonomously fighting “smart” battles at light speed.
Prototypes of human-AI hybrids, operating in a globalized “kill web” touted as capable of annihilating human threats as determined by predictive algorithms, are now all the rage.
To advance the use of artificial intelligence in military AI command and control systems, and to create designs for human-AI combat teams and AI-augmented soldiers, the Pentagon has awarded $500 million to the Applied Research Laboratory for Intelligence and Security (ARLIS) located at the University of Maryland in College Park.