“For every step we get closer to super powerful AI, everybody's character gets, like, 10 crazy points.”
That's what Sam Altman had to say about the pressures of working with AI when he revealed his own thoughts on the dramatic change to OpenAI's executive board last November.
The head of OpenAI has blamed the pressures of working with artificial intelligence for rising tensions within the San Francisco company he helped found in 2015, where he said the “huge risks” involved in developing artificial general intelligence (AGI) had driven people “crazy.”
Explaining that working with artificial intelligence is “very stressful” because of the pressures involved, the technology CEO said he now expects “more strange things” to start happening as the world approaches “very powerful artificial intelligence.”
“As the world moves closer to artificial general intelligence, the risks, the pressures, the level of stress — all of that will rise,” Altman said during a discussion at the World Economic Forum in Davos. “For us, the board change was a microcosm of that, but it probably wasn’t the most stressful experience we’ve ever had.”
Microsoft MSFT, an investor in OpenAI, at one point offered Altman a job before blessing his reinstatement.
Altman said the lesson he learned from the shake-up that led to his ouster as OpenAI's CEO on Nov. 17 and his reinstatement on Nov. 21 was the importance of preparedness, as he suggested that OpenAI had failed to address looming issues within the company.
“You don't want important but not urgent problems hanging out there,” he said. “We knew our board was getting too small, and we knew we didn't have the level of experience we needed, but last year was such a difficult year for us in so many ways that we neglected it somewhat.”
“Having a higher level of preparedness, more flexibility, and spending more time thinking about all the weird ways things could go wrong is really important,” Altman added.
Speaking at a panel discussion titled “Technology in a Turbulent World,” Altman also addressed OpenAI's legal dispute with the New York Times, which saw the publication file a lawsuit against the AI company in December over the use of its articles in ChatGPT training.
Altman said he was “surprised” by the New York Times' decision to sue OpenAI, as he claimed the California company had previously been in “productive negotiations” with the publisher. “We wanted to pay them a lot of money,” he said.
However, the technology chief sought to walk back claims that OpenAI relies on information gathered from The New York Times, claiming instead that future AI will be trained on smaller datasets obtained through deals with publishers.
“We're open to training on The New York Times, but that's not a priority for us. We don't actually need to train on their data. I think that's something people don't understand,” Altman said.
“The one thing I expect to start changing is that these models will be able to take smaller amounts of high-quality training data during their training process and think about it more seriously,” Altman added. “You don't need to read 2,000 biology textbooks to understand biology at the high school level.”
However, the head of OpenAI acknowledged there was a “huge need for new economic models” that would see those whose work is used to train AI models rewarded for their efforts. He explained that future models could also see AI linking to publishers' own websites.
“OpenAI admits that it has trained its models on The Times’s copyrighted works in the past and acknowledges that it will continue to copy those works when it looks to the Internet to train models in the future,” New York Times senior advisor Ian Crosby told MarketWatch.
“Free-riding on the Times' investment in quality journalism by copying it to build and run alternative products without permission is the opposite of fair use,” Crosby said.
Earlier in the week, Altman also addressed the possibility of Donald Trump winning another term as president in the upcoming US elections scheduled for November this year, as he noted that the AI industry would be “fine” either way.
“I think America will be fine no matter what happens in this election,” Altman said in an interview with Bloomberg. “I think AI is going to be fine no matter what happens in this election, and we're going to have to work hard to make that happen.”
However, Altman warned that those in power had failed to understand Trump's appeal.
“We never thought that what Trump was saying might resonate with so many people,” Altman said. “I think there has been a real failure to learn the lessons about what works for the citizens of America, and what doesn’t.”