Are military AIs more ethical than biological soldiers?
By Peter Byrne, Project Censored
In 1757, Scottish philosopher David Hume observed:
There is a universal tendency among mankind to think of all things as being like themselves, and to transfer to every object the qualities they are familiarly acquainted with. … We find human faces in the moon, armies in the clouds; and, if not corrected by experience and reflection, our natural tendency leads us to ascribe malice or good-will to everything that hurts or pleases us.

In 2024, more than two and a half centuries later, military AI scholar James Johnson referenced Hume’s insight, observing:
AI technology is already being infused into military machines… designed to look and act like humans. … Anthropomorphic terms like “ethical,” “intelligent,” and “responsible” in the context of machines can lead to false tropes implying that inanimate AI agents are more capable of moral reasoning, compassion, empathy, and mercy—and thus might act more ethically and humanely in warfare than humans.
This is not science fiction
In April, Secretary of Defense Pete Hegseth, the tattooed Christian Nationalist, ordered the armed forces to project “lethal force … at an accelerated pace” in preparation for attacking China with AI-enabled weapons. In accord with the Sinophobic demands of Trump-bribing AI industrialists, Hegseth is instructing the historically unauditable necro bureaucracy he sits atop to spend untold billions of dollars on AI-driven weapons systems, despite the host of structural technological flaws afflicting such instruments of death, as Military AI Watch has extensively reported this year.
In his April 2025 memo to senior Pentagon leadership, Hegseth unilaterally ordered the armed forces to expeditiously “enable AI-driven command and control at Theater, Corps, and Division headquarters by 2027.” The controversial theocrat’s edict is supercharging the profit-seeking agendas of Silicon Valley militarists and revolving door Beltway AI “experts” who endorse waging war remotely with inherently insecure software platforms.
Concurrent with Hegseth’s order, Government Accountability Office auditors released a report highlighting the irrationality of integrating military AI into command structures. Ignoring the informed concerns of its own experts, the Pentagon is operationalizing AI systems marketed by their developers as capable of autonomously fighting “smart” battles at light speed.
Prototypes of human-AI hybrids, operating in a globalized “kill web” touted as capable of annihilating human threats as determined by predictive algorithms, are now all the rage.
To advance the use of artificial intelligence in military AI command and control systems, and to create designs for human-AI combat teams and AI-augmented soldiers, the Pentagon has awarded $500 million to the Applied Research Laboratory for Intelligence and Security (ARLIS) located at the University of Maryland in College Park.