The chances of artificial intelligence causing human extinction within the next 30 years have increased, according to Geoffrey Hinton
Artificial intelligence could lead to human extinction within three decades with a likelihood of up to 20%, according to Geoffrey Hinton, a pioneering figure in AI and recipient of the 2024 Nobel Prize in Physics. This marks an increase from the 10% risk he estimated just a year ago.
During an interview on BBC Radio 4 on Thursday, Hinton was asked whether anything had changed since his previous estimate of a one-in-ten chance of an AI apocalypse. The Turing Award-winning scientist responded, “not really, 10% to 20%.”
This prompted the show’s guest editor, the former chancellor Sajid Javid, to quip, “you’re going up.” The computer scientist, who quit Google last year, responded: “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”
The British-Canadian scientist, who received this year’s Nobel Prize in Physics for his contributions to AI, highlighted the challenges of controlling advanced AI systems.
“How many examples do you know of a more intelligent thing being controlled by a less intelligent thing?…Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of,” Hinton, who is often called ‘the Godfather of AI’, said.
He offered an analogy: “Imagine yourself and a three-year-old. We’ll be the three-year-old,” he said, with humanity cast as the child next to a future AI that would be “smarter than people.”
Hinton noted that progress has been “much faster than I expected,” and called for regulation to ensure safety. He cautioned against relying solely on corporate profit motives, stating, “the only thing that can force those big companies to do more research on safety is government regulation.”
In May 2023, the Center for AI Safety released a statement signed by prominent scientists in the field, including Hinton, warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Among the signatories are Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and Yoshua Bengio, considered an AI pioneer for his work on neural networks.
Hinton believes that AI systems could eventually surpass human intelligence, escape human control and, potentially, cause catastrophic harm to humanity. He advocates dedicating significant resources to ensure AI safety and ethical use, also emphasizing an urgent need for proactive measures before it’s too late.
Yann LeCun, Chief AI Scientist at Meta, has expressed views contrary to Hinton’s, stating that the technology “could actually save humanity from extinction.”