
Lawmaker Introduces Measure to Restrict Military Artificial Intelligence Tech
AP Photo/Leo Correa

Sen. Elissa Slotkin (D-MI) has introduced a bill that would regulate the Pentagon’s use of artificial intelligence technology.

The rise of AI has sparked national debate over its use in several different areas. But when it comes to military use, the national conversation has intensified amid concerns that the technology could be misused.

From NBC News:

The bill seeks to codify two existing Defense Department guidelines into law: that AI cannot autonomously decide to kill a target and that the technology cannot be used to help the military conduct mass surveillance on Americans. It would also ban the use of the technology for launching or detonating a nuclear weapon.

“We’re unhealthy as a political system, and so we focus more on things like Greenland than we do on the use of AI in matters of legal force. And it’s our responsibility to legislate this,” Slotkin told NBC News.

The first two tenets of the bill were at the center of the U.S. military’s acrimonious split with AI giant Anthropic in recent weeks. While the Pentagon has insisted that it already regards mass surveillance of Americans as illegal and that its policy mandates that a human be responsible for lethal decisions, Anthropic worried that loopholes could allow such surveillance anyway and that future administrations could revoke those guidelines.

The feud boiled over into President Donald Trump's decreeing that all federal agencies have six months to stop using Anthropic models and Defense Secretary Pete Hegseth's declaring the company a supply chain risk, despite the fact that the technology has still helped the U.S. identify military targets in its ongoing war with Iran.

The debate centers on how far the Pentagon should go in using AI to choose or attack targets, and how much control humans should retain. The Pentagon’s chief technology officer clashed with Anthropic after the company refused to allow its systems to be used for “all lawful use,” arguing the technology is not reliable enough for fully autonomous weapons. Anthropic also expressed concerns about mass surveillance if the government removed its safeguards.

Current policy requires military leaders to independently check AI-generated targeting suggestions. But experts have cautioned that these rules might not be easy to enforce in fast-moving combat scenarios, according to the Brennan Center for Justice.

Conversely, supporters argue that AI is a necessary tool for defending against modern threats — especially as rivals develop their own systems. A senior U.S. defense official told Reuters that overly strict limitations on AI contracts could “threaten military missions.” He suggested the Pentagon requires flexible access to AI to keep up with China, Russia, and the fast-changing nature of drone warfare.

Lawmakers have been split on the issue. According to a February 2026 newsletter from Semafor, some members of Congress are pushing for tighter rules or even outright bans on certain autonomous weapons systems, while others argue that slowing the development of U.S. military AI could leave American forces and allies at a dangerous disadvantage if adversaries race ahead.
