
Lawmaker Introduces Measure to Restrict Military Artificial Intelligence Tech
AP Photo/Leo Correa

Sen. Elissa Slotkin (D-MI) has introduced a bill that would regulate the Pentagon’s use of artificial intelligence technology.

The rise of AI has sparked national debate over its use in several different areas. But when it comes to military use, the national conversation has intensified amid concerns that the technology could be misused.

From NBC News:

The bill seeks to codify two existing Defense Department guidelines into law: that AI cannot autonomously decide to kill a target and that the technology cannot be used to help the military conduct mass surveillance on Americans. It would also ban the use of the technology for launching or detonating a nuclear weapon.

“We’re unhealthy as a political system, and so we focus more on things like Greenland than we do on the use of AI in matters of legal force. And it’s our responsibility to legislate this,” Slotkin told NBC News.

The first two tenets of the bill were at the center of the U.S. military’s acrimonious split with AI giant Anthropic in recent weeks. While the Pentagon has insisted that it already regards mass surveillance of Americans as illegal and that its policy mandates that a human be responsible for lethal decisions, Anthropic worried that loopholes could allow such surveillance anyway and that future administrations could revoke those guidelines.

The feud boiled over into President Donald Trump's decreeing that all federal agencies have six months to stop using Anthropic models and Defense Secretary Pete Hegseth's declaring the company a supply chain risk, despite the fact that the technology has still helped the U.S. identify military targets in its ongoing war with Iran.

The debate centers on how far the Pentagon should go in using AI to choose or attack targets, and how much control humans should retain. The Pentagon’s chief technology officer clashed with Anthropic after the company refused to permit its systems to be used for “all lawful use,” arguing the technology is not reliable enough for fully autonomous weapons. Anthropic also raised concerns about mass surveillance if the government were to remove its safeguards.

Current policy requires military leaders to independently check AI-generated targeting suggestions. But experts have cautioned that these rules might not be easy to enforce in fast-moving combat scenarios, according to the Brennan Center for Justice.

Conversely, supporters argue that AI is a necessary tool for defending against modern threats — especially as rivals develop their own systems. A senior U.S. defense official told Reuters that overly strict limitations on AI contracts could “threaten military missions.” He suggested the Pentagon requires flexible access to AI to keep up with China, Russia, and the fast-changing nature of drone warfare.

Lawmakers have been split on the issue. According to a February 2026 newsletter from Semafor, some members of Congress are pushing for tighter rules or even outright bans on certain autonomous weapons systems, while others argue that slowing the development of U.S. military AI could leave American forces and allies at a dangerous disadvantage if adversaries race ahead.
