DOD sought to weaponize AI against Americans
MesoscaleNews exists to explain the forces shaping our future — from climate systems to political systems to the rapidly accelerating world of artificial intelligence.
Reporting like this takes time and research, and it’s only possible because readers support the work directly.
If you want more independent investigations like this, please consider upgrading to a paid subscription.
The U.S. Department of Defense asked an artificial-intelligence company to remove safeguards preventing its software from being used for mass surveillance and autonomous weapons inside the United States.
The company said no.
Anthropic — one of the world’s largest AI developers — confirmed that its contracts with the Pentagon explicitly prohibit two uses of its models: mass domestic surveillance and fully autonomous weapons systems. When the Department of Defense asked the company to remove those guardrails, Anthropic refused.
Anthropic CEO Dario Amodei explained the decision bluntly, saying the company could not "in good conscience" agree to changes that might enable those applications.
That refusal matters.
Because stripped of bureaucratic language, the request amounts to something stark:
The United States government asked a private AI developer to supply software capable of population-scale surveillance and autonomous targeting, for use against US citizens.
I warned that this moment was coming two years ago.
In an earlier analysis, I wrote about the rise of automated targeting systems, describing how modern military AI is increasingly capable of producing algorithmically generated lists of potential targets — what are effectively machine-generated kill lists.
Investigations into modern AI-assisted warfare have already revealed systems capable of ingesting vast amounts of surveillance data and rapidly identifying potential targets. Once those systems exist, the distance between surveillance and violence collapses.
This was the exact plot of Captain America: The Winter Soldier.
Anthropic’s guardrails are designed to prevent exactly that outcome.
Company leaders have argued that frontier AI systems should not be used to power tools capable of identifying and targeting people without meaningful human oversight.
The Pentagon has been pushing multiple AI companies to allow their models to be used for “all lawful purposes,” including weapons development and intelligence operations, but Anthropic has resisted those terms.
The company’s position reflects a growing alarm among researchers studying autonomous weapons.
Human Rights Watch has warned that systems capable of selecting and engaging targets without meaningful human control raise “ethical, moral, legal, accountability, and security concerns.”
Autonomous weapons are typically defined as systems that, once activated, “select and engage targets without further human intervention.”
If a machine selects a target and kills someone, who is responsible?
The engineer who wrote the code?
The commander who deployed the system?
The government that authorized it?
Or the algorithm itself?
Legal scholars refer to this dilemma as the “responsibility gap” — the possibility that lethal decisions could be delegated to machines in ways that leave no clear human accountable for the outcome.
That gap is one of the central reasons many academics and technologists have signed international pledges stating that “the decision to take a human life should never be delegated to a machine.”
Even experts who believe autonomous weapons are inevitable warn that the technology could fundamentally change the nature of warfare.
Paul Scharre, a former Pentagon official and leading scholar on autonomous weapons, has argued that machines can process information at extraordinary speed, but they cannot determine what humans value or what moral choices should be made.
Artificial intelligence is extraordinarily good at classification.
It can sort people into categories, detect patterns in enormous datasets, and identify associations across surveillance systems.
That capability is what makes AI so useful, and it’s also what makes it dangerous.
AI can analyze the communications, movements, and social networks of millions of people.
And once those systems begin generating lists of “persons of interest,” the infrastructure for automated targeting is already halfway built.
Amodei has also warned about how easily such systems could enable authoritarian surveillance.
As Amodei explained when describing the risks of AI-driven monitoring, governments could theoretically deploy cameras everywhere and record conversations across entire populations — something that is technologically impractical today but could become feasible with AI systems capable of transcribing and analyzing everything.
This is why researchers studying AI governance warn that the most immediate danger from artificial intelligence is not some distant super-intelligence but the integration of AI into existing systems of state power — intelligence agencies, policing infrastructures, and military targeting systems.
Recent academic work on autonomous weapons warns that such systems introduce profound technical risks, including unpredictable behavior, opaque decision-making, and the possibility that algorithms could act in ways human operators cannot fully control.
Other researchers warn that as AI systems become embedded in military and surveillance infrastructures, they risk amplifying authoritarian forms of control by enabling unprecedented monitoring and automation of enforcement.
Anthropic’s refusal to remove its guardrails therefore represents more than a dispute over a government contract.
It is one of the first major confrontations over the limits of military AI.
For now, the company has drawn a line.
But that line is already under pressure.
Defense officials have threatened to terminate Anthropic’s contract or take other measures if the company refuses to comply.
Meanwhile, other AI firms appear more willing to accommodate military demands.
The incentives are enormous.
Governments want faster targeting, faster intelligence analysis, and weapons that can operate without risking soldiers. Companies want lucrative defense contracts.
History shows what happens when powerful technologies converge with powerful institutions.
The surveillance states of the twentieth century relied on paper files, informants, and primitive computers.
Artificial intelligence would give those same systems something far more powerful: a digital nervous system capable of watching entire populations.
And if the guardrails disappear, a fascist regime in power could use AI to decide which kinds of people are its enemies, with fully autonomous weapons standing behind that decision.