As It Happens | The ‘godfather of AI’ says he’s worried about ‘the end of people’
There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.
Nowadays, he’s not so sure.
“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”
Hinton is known as the godfather of artificial intelligence (AI), a moniker he embraces. He and his colleagues helped develop artificial neural networks, the technology at the core of machine learning. His foundational work helped propel AI’s rapid advancement.
For the last decade, he divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.
Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”
But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes.
Change of heart
In recent months, AI tools like ChatGPT and Stable Diffusion have made headlines for their ability to rapidly generate text, images and audio.
The systems learn these skills by analyzing data, which is often scraped from across the internet. That’s thanks to their use of artificial neural networks — computing systems inspired by the biological neural networks of human and animal brains.
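As a rough illustration of what “learning by analyzing data” means, here is a minimal sketch of a tiny artificial neural network, written in Python with NumPy. The language, the toy XOR task, the layer sizes and the learning rate are all illustrative assumptions rather than anything described in the article, but the basic principle, adjusting connection weights until the outputs match the examples, is the one that underlies the systems discussed here at vastly larger scale.

```python
# Illustrative toy example only: a two-layer neural network that learns
# the XOR function from four labelled examples via gradient descent.
# The task, layer sizes and learning rate are arbitrary choices made
# for illustration, not code from the article or from Hinton's research.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs X and the target outputs y (the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised connection weights for a 2 -> 8 -> 1 network.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    """Squash values into the range (0, 1), like a simple 'neuron'."""
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    # Forward pass: compute the network's current predictions.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # How far the predictions are from the desired answers.
    error = output - y

    # Backpropagation: work out how each weight contributed to the error.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)

    # Nudge the weights in the direction that reduces the error.
    W2 -= 1.0 * hidden.T @ grad_out
    W1 -= 1.0 * X.T @ grad_hidden

# After training, the predictions should be close to [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))
```

The point of the sketch is that nothing about XOR is explicitly programmed: the behaviour emerges from repeated exposure to examples, which is the sense in which such systems “learn” from data.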
Hinton’s work on neural networks at the University of Toronto stretches back decades. In 2012, he and his students Ilya Sutskever and Alex Krizhevsky built a breakthrough image-recognition network that helped demonstrate the technology’s potential.
He and his colleagues Yann LeCun and Yoshua Bengio went on to win the 2018 Turing Award — considered the Nobel Prize of computer science — for their work teaching machines to think like humans.
“For 50 years, I’ve been working on trying to get computers to learn in the hope I could make them learn as well as people,” Hinton said.
“But very recently, I came to the conclusion that the kind of digital intelligence we’re developing for things like big chatbots is actually a very different form of intelligence from biological intelligence — and may actually be much better.”
Hinton waited until he parted ways with Google before airing his concerns publicly. But he says he has no ill will toward the company.
Google, he says, has been at the forefront of AI advancements. Rather than release AI products publicly, however, the company has historically used them in-house to improve services like search.
But as competitors push new AI products out into the world, he says, Google executives feel pressure to keep pace. And once AI is available to the public, it has access to a much greater wealth of data than ever before.
“I think Google has been extremely responsible so far, and they will continue to be as responsible as they possibly can be,” Hinton said. “But in a competition with Microsoft, it’s not possible to hold back as much as maybe they would like to.”
Asked for comment, Google’s chief scientist Jeff Dean said he wishes Hinton well.
“Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google. I’ve deeply enjoyed our many conversations over the years. I’ll miss him,” Dean said in an emailed statement sent by a Google spokesperson.
“We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
Rhetoric around AI future overblown, say experts
Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good.
“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.
“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”
Those risks, she said, include privacy breaches, misinformation, fraud and instances where AI adopts human biases and reinforces discrimination.
The good news, she says, is that many jurisdictions already have laws designed to protect us from some of these things. We just need to apply them to the people and companies who are making and using AI.
Bartoletti says now is the time to create new rules and legislation to bridge the gaps. She pointed to the European Union’s Artificial Intelligence Act as a good example.
“Regulating the AI in itself is complex. You know, what does it mean? It’s a bit like regulating mathematics,” she said. “So what I think we need to do is to regulate the behaviour of people around these systems.”
Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”
“That being said, I do think that some of these ideas that … AI supercomputers are going to wake up and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear,” he said.
He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.
In his own research on how people perceive AI-generated art, Epstein has found that when people attribute human agency to AI, they tend to devalue the human labour that went into its creation.
That, he says, can serve to benefit the corporations that deploy AI, as people will blame the technology rather than those wielding it irresponsibly.
‘It’s absolutely possible I’m wrong’: Hinton
Bartoletti and Epstein both say AI holds huge potential to improve our lives, in the field of medicine, for example. What’s more, they say it could fuel creativity, much like how the advent of photography freed artists from the constraints of hyper-realism and allowed them to explore other forms.
Hinton says he doesn’t necessarily disagree.
“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he said.
“The scenario we want is that these advanced digital intelligences form a kind of symbiotic relationship with us and make life just much easier — get rid of all the drudge work, make everybody more productive. That would be great, but I don’t think that’s guaranteed.”