John J Hopfield and Geoffrey E Hinton have been awarded the 2024 Nobel Prize in Physics for laying the foundations for modern artificial intelligence (AI) systems.
Geoffrey E Hinton, a winner of the 2024 Nobel Prize in Physics, has said that he is concerned about artificial intelligence (AI) taking control of our lives.
Hinton, a pioneer in the field of AI, shares the Physics Nobel with John J Hopfield for laying the foundations for modern AI systems.
The Royal Swedish Academy of Sciences awarded the Physics Nobel to Hopfield and Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks”. They used tools from physics to develop artificial neural networks that are the foundation of today’s powerful machine learning (ML) technology, a kind of AI that drives applications ranging from everyday tools like ChatGPT and Siri to complex data-analysis systems.
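The Hopfield network the Academy cites can be illustrated with a short, self-contained sketch (not from the article; the pattern values and sizes below are made up for demonstration). It stores binary patterns as minima of a physics-style energy function, so a corrupted input “rolls downhill” to the nearest stored memory:

```python
import numpy as np

def train(patterns):
    """Hebbian learning: each weight accumulates the correlation
    between two units across the stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # Hopfield networks have no self-connections
    return W / len(patterns)

def energy(W, s):
    """Hopfield energy; each update step can only lower (or keep) it."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=10):
    """Asynchronous updates: flip each unit in the direction of
    its local field, i.e. toward lower energy."""
    s = s.copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Two arbitrary 6-unit patterns to memorise (illustrative values).
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1, -1, -1, -1]])
W = train(patterns)

noisy = np.array([1, -1, 1, -1, 1, 1])  # patterns[0] with the last bit flipped
print(recall(W, noisy))                 # settles back onto patterns[0]
```

The energy-minimisation dynamics are the “tools from physics” connection: the network behaves like a spin system relaxing to a low-energy state, which is why a noisy pattern is restored rather than amplified.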
In a telephone interview with the press during the announcement of the Nobel Prize, Hinton said that the world needs to worry about AI going out of control.
“It’s going to be wonderful in many respects, such as in healthcare. It could give us more efficient healthcare. It can improve our productivity. We can do the same thing with AI assistance in less time. But we need to worry about a number of bad consequences, particularly the threat of these things getting out of control,” said Hinton, an Emeritus Professor at the Department of Computer Science at the University of Toronto.
Hinton said that the rise of AI will be as consequential as the Industrial Revolution.
While machines exceeded humans in terms of physical strength in the Industrial Revolution, they would surpass humans in terms of intellectual capabilities with the rise of AI, said Hinton.
That would put humans in uncharted territory.
“We have no experience of what it’s like to have things smarter than us,” said Hinton.
When asked whether he had regrets about his role in developing the foundations of modern AI, Hinton said that he does live with a certain kind of regret.
Hinton said that he regrets that the AI systems surpassing human intelligence “will eventually take control”.
“There are two kinds of regrets. One is where you feel guilty about doing something you shouldn’t have done and then there is regret where you’d do something under similar circumstances, but it may in the end not end well. It’s the second kind of regret I have. In the same circumstances, I’d do the same again, but I’m worried that the overall consequences will be that these systems more intelligent than us will eventually take control,” said Hinton.
This is not the first time that Hinton has voiced fears about AI, a technology he pioneered. He is often called the ‘Godfather of AI’.
Last year, Hinton told The New York Times that the pace of AI’s progress was “scary”.
“Maybe what is going on in these systems is actually a lot better than what is going on in the brain…Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary,” said Hinton, adding that AI should not be scaled up until it’s properly understood.
Hinton further said, “The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Prominent scientists have long expressed fears about AI’s darker sides. For one, even their creators do not fully know how the most advanced AI systems work. Last year, AI scientist Sam Bowman said on Vox’s ‘Unexplainable’ podcast that there is no clear explanation of how many AI systems, including ChatGPT, work. He said that creators do not understand how AI systems really work in the way they understand the workings of a ‘regular’ program like MS Word or Notepad.
“I think the important piece here is that we really didn’t build it in any deep sense. We built the computers, but then we just gave the faintest outline of a blueprint and kind of let these systems develop on their own. I think an analogy here might be that we’re trying to grow a decorative topiary, a decorative hedge that we’re trying to shape. We plant the seed and we know what shape we want and we can sort of take some clippers and clip it into that shape. But that doesn’t mean we understand anything about the biology of that tree. We just kind of started the process, let it go, and try to nudge it around a little bit at the end,” said Bowman.