Geoffrey Hinton, a pioneer of deep learning and a VP and engineering fellow at Google, is leaving the company after 10 years due to new fears he has about the technology he helped develop.
Hinton, who has been dubbed a “godfather of AI,” says he wants to speak openly about his concerns, and that part of him now regrets his life’s work.
Hinton told MIT Technology Review:
“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future. How do we survive that?”
He worries that extremely powerful AI will be misused by bad actors, especially in elections and war scenarios, to cause harm to humans.
He’s also concerned that once AI can string together different tasks and actions (as we’re seeing with AutoGPT), intelligent machines could take harmful actions on their own.
This isn’t necessarily an attack on Google specifically. Hinton said he has plenty of good things to say about the company. But he wants “to talk about AI safety issues without having to worry about how it interacts with Google’s business.”
- Hinton’s concerns should be taken seriously. While his view on the risks posed by increasingly advanced AI is an extreme one, Hinton is a key player in AI research, and his perspective on the field is a legitimate one that deserves attention. Even if you don’t agree with his overall premise, he highlights a major issue in AI. “A greater need, a greater focus, on ethics and safety is critical,” says Roetzer.
- But he’s not the first—or the only one—to raise these concerns. Researchers like Margaret Mitchell and Timnit Gebru also raised safety concerns at Google in the past, says Roetzer. Unfortunately, their concerns weren’t heard by the company at the time. They were both fired from Google.
- And not every AI researcher shares those concerns. Plenty of other AI leaders disagree with Hinton. Some share his concerns about safety but don’t go so far as to believe AI can become an existential threat. Others, like Yann LeCun, strongly disagree with Hinton that increasingly advanced AI will be a threat to humanity.
- Yet Hinton is not calling for a stop to AI development. He has said publicly that AI has “so much potential benefit” that it should continue to be developed, safely. “He just wants to put more time and energy into ensuring safety,” says Roetzer.