
A.I. Robot Will Be Answering Police Phone Lines in Portland

AP Photo/Paula Bronstein

This week, a new artificial intelligence system will begin taking over answering duties on the nonemergency phone lines in crime-infested Portland, where police officers have been leaving the force in droves, Willamette Week reported.


"An automated attendant will answer the phone on nonemergency and based on the answers using artificial intelligence—and that’s kind of a scary word for us at times—will determine if that caller needs to speak to an actual call taker," Bureau of Emergency Communications director Bob Cozzie, the area's 911 director, told city commissioners last week.

The system will go online for "a couple hours a day to test and refine," Cozzie added.

Testing and refining are to be expected, and for now A.I. is only handling Portland's nonemergency line. But if history with technology is any guide, it's only a matter of time before it takes over the system entirely.

It's also not surprising that a Democrat-led city, a hotspot for far-left violence carried out by defund-the-police militants, would be among the first to adopt A.I. The technology is an answer to a problem made inevitable by cuts to the police budget: an overworked, understaffed workforce.

The new system was pitched in 2021 as a possible fix for lagging call response times. Only 41 percent of Portland's 911 calls that year were answered within 20 seconds; the national standard is 95 percent, according to Willamette Week.

So the call takers are A.I. What's next? If it's a crime you're reporting, your trust in artificial intelligence will have to continue as it weaves itself into the police force. Aiplusinfo.com suggests that by learning human behaviors, the software will develop the ability to mimic and eventually forecast future actions, catching crimes before they even happen. Facial-recognition technology used to profile suspects, which, like predictive policing, raises privacy concerns, can take a photo of a perpetrator and search for a match in law enforcement databases or police lineups.


The technology, which has been used heavily since the Jan. 6 riot, has led to at least three wrongful arrests of minority Americans, as reported by Fox News. Facial-recognition algorithms have falsely identified Black and Asian faces at rates 10 to 100 times higher than white faces, according to a 2019 study by the National Institute of Standards and Technology.

If you're reporting a medical emergency to the new robotic operators, A.I.'s wide reach doesn't stop at the phone line. You'll, at the very least, run into it again at the hospital.

The Wall Street Journal spoke to an oncology nurse, Melissa Beebe, about the algorithms hospitals use to detect patterns of illness and help diagnose patients:

While Beebe can override the AI model if she gets doctor approval, she said she faces disciplinary action if she’s wrong. So she followed orders and drew blood from the patient, even though that could expose him to infection and run up his bill. “When an algorithm says, ‘Your patient looks septic,’ I can’t know why. I just have to do it,” said Beebe, who is a representative of the California Nurses Association union at the hospital.

As she suspected, the algorithm was wrong. “I’m not demonizing technology,” she said. “But I feel moral distress when I know the right thing to do and I can’t do it.”

Beebe was able to right the wrong, but she has been working with cancer patients for 15 years. What if your nurse or doctor just graduated from medical school?

A.I. is also impacting classrooms. Antony Aumann, a professor of philosophy at Northern Michigan University, told the New York Times that while grading essays, he noticed one that was by far the best in the class. After confronting the student, he learned the essay had been written with A.I.


Most reports of educators catching A.I.-aided cheating involve written assignments rather than tests, but who's to say a student with these tools couldn't ask them for answers to a math or science problem?

A study conducted by BestColleges found that 43 percent of college students have used ChatGPT or a similar A.I. application. Of those who have used A.I. tools, 50 percent say they have used them to help complete assignments or exams. That works out to roughly 22 percent of all college students in the survey.

To muddy the waters further, the technology policing A.I. use doesn't get it right every time, either: the New York Times reported on a student who was wrongly accused of academic dishonesty while being proctored online.
