Are military AIs more ethical than biological soldiers?
By Peter Byrne, Project Censored
In 1757, British philosopher David Hume observed:
There is a universal tendency among mankind to think of all things as being like themselves, and to transfer to every object the qualities they are familiarly acquainted with. … We find human faces in the moon, armies in the clouds; and, if not corrected by experience and reflection, our natural tendency leads us to ascribe malice or good-will to everything that hurts or pleases us.

In 2024, more than two and a half centuries later, military AI scholar James Johnson referenced Hume’s insight, observing:
AI technology is already being infused into military machines… designed to look and act like humans. … Anthropomorphic terms like “ethical,” “intelligent,” and “responsible” in the context of machines can lead to false tropes implying that inanimate AI agents are more capable of moral reasoning, compassion, empathy, and mercy—and thus might act more ethically and humanly in warfare than humans.
This is not science fiction
In April, Secretary of Defense Pete Hegseth, the tattooed Christian Nationalist, ordered the armed forces to project “lethal force … at an accelerated pace” in preparation for attacking China with AI-enabled weapons. In accord with the Sinophobic demands of Trump-bribing AI industrialists, Hegseth is instructing the historically unauditable necro-bureaucracy he sits atop to spend untold billions of dollars on AI-driven weapons systems, despite the host of structural technological flaws afflicting such instruments of death, as Military AI Watch has extensively reported this year.
In his April 2025 memo to senior Pentagon leadership, Hegseth unilaterally ordered the armed forces to expeditiously “enable AI-driven command and control at Theater, Corps, and Division headquarters by 2027.” The controversial theocrat’s edict is supercharging the profit-seeking agendas of Silicon Valley militarists and revolving-door Beltway AI “experts” who endorse waging war remotely with inherently insecure software platforms.
Concurrent with Hegseth’s order, Government Accountability Office auditors released a report highlighting the irrationality of integrating military AI into command structures. Ignoring informed concerns raised by its own experts, the Pentagon is operationalizing AI systems marketed by their developers as capable of autonomously fighting “smart” battles at light speed.
Prototypes of human-AI hybrids, operating in a globalized “kill web” touted as capable of annihilating humans identified as threats by predictive algorithms, are now all the rage.
To advance the use of artificial intelligence in military command and control systems, and to create designs for human-AI combat teams and AI-augmented soldiers, the Pentagon has awarded $500 million to the Applied Research Laboratory for Intelligence and Security (ARLIS) at the University of Maryland in College Park.