Typically, the creators of a new product or technology are the first and loudest to tout its benefits to the user or society. But with AI and its rapid advancement and proliferation, we're seeing a strange phenomenon. Those at the forefront are sounding the alarm, sometimes in quite apocalyptic terms.
During testimony before a Senate Judiciary subcommittee, Sam Altman, CEO of OpenAI, the company behind AI chatbot ChatGPT, opened up about some of his greatest concerns, going so far as to call on Congress to regulate the technology.
"My worst fears are that we cause significant – we, the field, the technology industry – cause significant harm to the world," he said. "I think that could happen in a lot of different ways. It's why we started the company."
"I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that," Altman continued. "We want to work with the government to prevent that from happening."
Sen. Dick Durbin (D-Ill.) pointed out how rare it is that companies come before lawmakers asking to be regulated.
But with a technology powerful enough to severely disrupt the job market, such regulation may well be necessary.
"There will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government but mostly action by government to figure out how we want to mitigate that," Altman said. "But I'm very optimistic about how great the jobs of the future will be."
Tesla CEO Elon Musk has also been candid about AI's dangers.
"There's a strong probability that it will make life much better and that we'll have an age of abundance. And there's some chance that it goes wrong and destroys humanity," Musk told CNBC anchor David Faber. "Hopefully, that chance is small, but it's not zero. And so I think we want to take whatever actions we can think of to minimize the probability that AI goes wrong."
That's why tech titans, including Musk, recently called for safeguards on AI and an immediate pause on "the training of AI systems more powerful than GPT-4" so researchers can assess whether the technology's effects will be positive and its risks manageable.