[Tech Thoughts] Uniting against removing AI safeguards for military purposes
2026-03-01 - 05:04
In the United States, a deadline looms for Anthropic that carries much weight in today’s divided world. Anthropic is refusing to remove safeguards that prevent its technology from being used to target weapons autonomously and to conduct surveillance in the US. It has until 5:01 pm on Friday, February 26 (February 27, Philippine time), to accede or face the wrath of the US government.

The US Department of War is threatening to invoke the Defense Production Act, a law that would force Anthropic to tailor its model to suit the military’s needs; otherwise, it will label Anthropic a “supply chain risk,” which could hurt the company financially, as it would be treated like a US adversary.

Here’s why Anthropic CEO Dario Amodei and the employees of some competing companies are joining together in saying no to the Department of War.

The reasoning of Anthropic’s Dario Amodei

Anthropic CEO Dario Amodei released a statement on February 26 outlining what the company has already done and why it intends to say “no” to the Department of War. While he said Anthropic has “worked proactively to deploy our models to the Department of War and the intelligence community,” the company still drew the line at autonomous weaponry and domestic surveillance, and admitted that AI can “undermine, rather than defend, democratic values.”

Amodei said that while Anthropic supports the use of its AI for foreign intelligence and counterintelligence operations, using those AI systems for mass domestic surveillance would be “incompatible with democratic values.”

Amodei added that applicable laws have not caught up with the current and growing capabilities of artificial intelligence. Current laws allow the government to surveil, to some extent, people’s movements, web browsing, and associations using public sources without getting a warrant, a practice that has already drawn bipartisan opposition in the US Congress, he said.
“Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life — automatically and at massive scale,” Amodei warned.

As regards autonomous weaponry, Amodei said such weapons would need human oversight, as fully autonomous weapons cannot be relied upon to have the critical judgment of a human soldier. He explained, “They need to be deployed with proper guardrails, which don’t exist today.”

Finding common ground

In a petition released Friday, Google and OpenAI employees joined together to say they do not support what the Department of War wants, even as Elon Musk’s xAI signed an agreement allowing the military to use its model, Grok, in classified systems.

According to the petition’s 261 signatories: “They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.”

The signatories urged the leaders of their respective companies to “put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”

Stand united

Pentagon spokesperson Sean Parnell said on X that the Pentagon has no interest in using AI to conduct mass surveillance of Americans, nor does it want to use AI to develop autonomous weapons that operate without human involvement. He added that all the Pentagon wants is to “use Anthropic’s model for all lawful purposes.”

The problem is that Trump’s government has little regard for the formalities of law, and has shown it is willing to use lawfare to make people, and companies, bend to its will.
While AI can be helpful in crunching numbers and data at scale to help people make better decisions, leaving everything, from collating the information of US citizens en masse to gunning people down, to the autonomy of an AI seems bound to hurt democracy and pave the way for harsher means of imposing order.

I may not support AI in its entirety, but given AI’s limitations, I support those more knowledgeable about AI than I am who want to impose safeguards and protections against AI abuse, especially as it relates to something as volatile as war. – Rappler.com