OPINION

Bureaucratic Luddites Are Coming for AI

The opinions expressed by columnists are their own and do not necessarily represent the views of Townhall.com.

AI is the future. Over the next decade, the application of artificial intelligence will fundamentally change how we live, make our decisions, and interact with the world around us. In the future, one key question for policymakers will be how to apply oversight to this new and exciting technology without inhibiting innovation. 


Unfortunately, recent moves by Congress and the Department of Justice reveal how little the government currently understands about how AI functions.

Congressional leaders, led by Senate Majority Leader Chuck Schumer, are working on an AI regulatory bill that they plan to pass in the lame-duck session of Congress.

Their work is piggybacking off recent actions by the DOJ, which have included going after hotel companies for using AI algorithms as well as landlords for using AI-based property management pricing software. Time and time again, the DOJ has demonstrated that it believes algorithmic AI is generally used as a tool for price-gouging rather than a tool for fostering price efficiency. 

Antitrust experts and economists across the political spectrum have criticized Congress and the DOJ for blaming algorithmic AI for price increases in these industries instead of today’s inflationary economic environment. 

In the hotel case, a U.S. District Court recently went so far as to throw the suit out, finding the claim that the software enabled price-fixing implausible because the hotels began using the AI at different times and never exchanged pricing information with one another. In September, a federal court in New Jersey threw out a similar case for the same reasons. Yet, on October 24, the DOJ filed an amicus brief sharply criticizing the U.S. District Court’s opinion. 

In the amicus brief, the DOJ made it clear that it sees these early algorithmic AI cases as vital because they could set a legal precedent that determines just how the technology can and cannot be used in the U.S. economy moving forward. “This case is the first of its kind to reach a U.S. Court of Appeals, [and] it will establish precedent that will affect similar cases going forward,” it wrote.


Jay Ezrielev, a former advisor to FTC Chair Joseph Simons, fears the long-term implications these early challenges to algorithmic AI could have. “Applied more broadly, the [DOJ’s] theory would raise considerable obstacles for the commercial use of algorithms, proprietary data, and artificial intelligence, resulting in significant harm to innovation and efficient operation of markets.” 

I agree. That is why technology, AI, and legal experts must demystify algorithmic AI for government executives and decisionmakers and demonstrate its many benefits across the economy, in both the private and public sectors.

Algorithmic AI is used to create better-quality products, services, and decisions at reduced costs. It also makes technology more capable and scalable. 

For decades, the travel industry has used data models and analytics to price airfare, rental cars, and hotel rooms. Algorithmic AI lets these companies incorporate a broader range of data at scale. For example, it can use weather predictions to adjust the cost of convertibles or 4-wheel-drive vehicles depending on whether it will be sunny or snowing.
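The weather-driven pricing described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the base rates, vehicle classes, forecast labels, and demand multipliers are all invented for the example and do not reflect any company's actual model.

```python
# Hypothetical weather-aware rental pricing sketch.
# All rates and multipliers below are illustrative assumptions.

BASE_RATES = {"convertible": 80.0, "4wd": 95.0, "sedan": 55.0}

# Demand tends to rise for convertibles when it is sunny and for
# 4-wheel-drive vehicles when snow is forecast.
WEATHER_MULTIPLIERS = {
    ("convertible", "sunny"): 1.25,
    ("convertible", "snow"): 0.80,
    ("4wd", "snow"): 1.30,
    ("4wd", "sunny"): 0.95,
}

def daily_rate(vehicle: str, forecast: str) -> float:
    """Adjust the base rate by a weather-driven demand multiplier."""
    multiplier = WEATHER_MULTIPLIERS.get((vehicle, forecast), 1.0)
    return round(BASE_RATES[vehicle] * multiplier, 2)

print(daily_rate("convertible", "sunny"))  # 100.0
print(daily_rate("4wd", "snow"))           # 123.5
```

The point of the sketch is that the pricing rule is a deterministic function of public demand signals, not an exchange of competitors' prices.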

This technology isn’t just for the selling side. Priceline.com and Google Flights are perfect examples. They allow consumers to easily compare airfares and determine whether it is a good time to purchase a ticket.  

Even the government utilizes algorithmic AI to, among other things, prevent traffic jams and to maximize the revenue generated by its express highway lanes. As Stanford Law School Professor David Freeman Engstrom put it, “We are at the dawn of a revolution in how government uses AI to do its work,” with California Supreme Court Justice Mariano-Florentino Cuéllar adding that “whether they’re working to protect the environment or to limit illegal behavior in the marketplace, many federal officials understand the need for innovation to better serve the public –– and AI is a major part of that.”


Could algorithmic AI, in the wrong hands, be used for harm? Perhaps at some point in the future, but there are two ways that government executives and decisionmakers can address such cases. 

One approach would be to do what is often the most challenging thing for government executives and bureaucrats to do — nothing at all. They could let the free market determine which algorithmic AI innovations succeed and fail. Generally speaking, this approach would likely prove effective as consumers and businesses would not take long to reject the algorithmic software that does not provide them with maximum price efficiency.

Alternatively, government executives can consider mandating that the data used by certain AI algorithms be made available for analysis. They could require the release of enough information to allow ready audits for illegal bias or collusion without exposing proprietary algorithms. One state, Arkansas, has already done so by publishing the denial rate for doctor-requested coverage of medical procedures, which provides a top-line metric. 
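The kind of top-line audit metric described above is simple to compute from disclosed outcomes alone, without touching the proprietary algorithm. This is a minimal sketch under assumed inputs: the decision records are invented for illustration and do not represent any insurer's actual data.

```python
# Hypothetical audit-metric sketch: a regulator publishes the denial
# rate for requested procedures using only disclosed outcomes, never
# the insurer's proprietary decision algorithm.

from collections import Counter

def denial_rate(decisions: list[str]) -> float:
    """Share of doctor-requested procedures that were denied."""
    counts = Counter(decisions)
    total = sum(counts.values())
    return counts["denied"] / total if total else 0.0

# Illustrative coverage decisions; in practice these would come from
# mandated disclosures.
decisions = ["approved", "denied", "approved", "approved", "denied"]

print(f"{denial_rate(decisions):.0%}")  # 40%
```

A metric like this gives oversight bodies a signal worth investigating when it drifts, while leaving the underlying model private.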

As part of a decade-long effort, a group of civic technology activists worked to make the U.S. Congress and the legislative process more open and transparent by utilizing basic artificial intelligence. Legislative information is now published in standardized XML formats to accelerate innovation in this space, allowing both government and private sector groups equal access.  


At the end of the day, however, government executives and decisionmakers should not begin regulating algorithmic AI merely out of fear. This technology has been used for nearly two decades without a hitch, and it will continue to benefit the public and private sectors for years and decades to come, for as long as the government allows it to do so. 
