The world according to Hinton: Slowing AI down is not the answer

Eight months ago, Geoffrey Hinton, the esteemed professor emeritus at the University of Toronto who resigned his post at Google over concerns about artificial intelligence (AI) advances, stated in a speech at Collision 2023 that the world is “entering a period of huge uncertainty.”

When he speaks, people listen, due in large part to the fact that Hinton, along with Yoshua Bengio and Yann LeCun, won the coveted Turing Award in 2018, an honour that resulted in the three computer scientists becoming known from that point on as the “Godfathers of AI.”

In recognizing the trio, the Association for Computing Machinery (ACM), which awards the annual prize, noted at the time: “Working independently and together, Hinton, LeCun and Bengio developed conceptual foundations for the field, identified surprising phenomena through experiments, and contributed engineering advances that demonstrated the practical advantages of deep neural networks.

“In recent years, deep learning methods have been responsible for astonishing breakthroughs in computer vision, speech recognition, natural language processing, and robotics – among other applications.”

At Collision, Hinton pointed out that “people whose opinion I respect have very different beliefs from me.

“Yann LeCun thinks everything is going to be fine. They (AI chatbots) are just going to help us; it is all going to be wonderful. But we have to take seriously the possibility that, if they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control. And if they do that, we are in trouble.

“AI trained by good people will have a bias towards good, AI trained by bad people such as Putin or somebody like that will have a bias towards bad. We know they are going to make battle robots. They are busy doing it in many different defence departments. They are not necessarily going to be good, since their primary purpose is going to be to kill people.”

Given those concerns, it seemed somewhat perplexing that, in March of last year, Hinton was not among the tech leaders who signed an open letter urging a six-month moratorium on AI development, warning that AI tools “present profound risks to society and humanity.”

The reason became clearer earlier this month, when he spoke at an event in Toronto organized by the Vector Institute, a not-for-profit organization that focuses on AI research and where Hinton serves as chief scientific advisor.

When asked during a Q&A session whether AI is “spinning too fast,” he replied that while it certainly is, “I don’t think we’re going to solve it by slowing down,” adding that this is the key reason he opted not to sign the letter.

“I do not think the right way to phrase the problem is in terms of whether you should go fast or slow, partly because I do not think you are going to be able to slow things up. There’s too much economic gain from going fast. We have seen what actually happens if people try and slow things up in a situation that was slanted entirely in favour of safety, and profits still won. That is my view of what happened at OpenAI.

“Slowing it down, A) is not feasible, and B) is not the main point. The main point is, it is possible we can figure out how to make these things benevolent so we can deal with the existential threat that these things will take over. That is a different problem from figuring out how to stop bad people using them for bad things, which is more urgent. In my view, we should put huge effort into trying to figure it out.”

Hinton said that even solving that problem will not solve all the problems and, in particular, will not solve the problem of bad people doing bad things with AI.

“If you want regulations, the most important regulation should be not to open source big models. That is like being able to buy nuclear weapons at Radio Shack. It is crazy to open source these big models, because bad actors can then fine-tune them for all sorts of bad things. In terms of regulations, I think that is probably the most important thing we can do right now.”

His presentation focused on whether digital intelligence will replace biological intelligence. Today, he said, there are deep learning systems that “are incredibly powerful and understand in much the same way people do.

“When people say, ‘these models are different from us,’ ask them, ‘well, OK, how do we work? And what is different about it?’ And they cannot answer that question, except for Gary Marcus. Gary Marcus can answer that question. And he says, ‘we work by having symbol strings and rules, but you should still worry about it. Because although it does not understand anything, it is extremely dangerous.’ I call that wanting to have your cake and have it eat you too.”

