How Musk Is Reportedly Targeting Woke AI

While Elon Musk welcomes the development of artificial intelligence, he has been sounding the alarm about its dangers for a while. In the wake of the November launch of OpenAI's ChatGPT, however, Musk's concerns have grown. Not only does the chatbot give human-like responses to a wide range of queries, it has also passed a medical board exam and can write poems, churn out college essays in minutes, and write code, leaving many wondering how it will change academia and the job market. 

But that's not the only problem with this AI, as Musk sees it. In December, the billionaire entrepreneur cautioned that "training AI to be woke—in other words, lie—is deadly." 

Now, he's reportedly trying to combat this threat and has approached several AI researchers to develop a rival to ChatGPT. 

Elon Musk has approached AI researchers in recent weeks about forming a new research lab to develop an alternative to OpenAI's ChatGPT, The Information reported on Monday, citing people with direct knowledge of the effort.

Tesla and Twitter chief Musk has been recruiting Igor Babuschkin, a researcher who recently left Alphabet's (GOOGL.O) DeepMind AI unit, the report said. [...]

Musk, who co-founded OpenAI with Silicon Valley investor Sam Altman in 2015 as a nonprofit startup, left its board in 2018 but chimed in with his take on the chatbot, calling it "scary good".

Musk and Babuschkin have discussed assembling a team to pursue AI research, but the project is still in the early stages, with no concrete plan to develop specific products, the report said, citing an interview with Babuschkin. (Reuters)

Earlier this month, Musk made the case for regulating AI, calling it "one of the biggest risks to the future of civilization." 

"I think we need to regulate AI safety, frankly," Musk said at a summit in Dubai. "Think of any technology which is potentially a risk to people, like if it's aircraft or cars or medicine, we have regulatory bodies that oversee the public safety of cars and planes and medicine. I think we should have a similar set of regulatory oversight for artificial intelligence, because I think it is actually a bigger risk to society."