
‘Godfather of AI’ shares 6 ways the tech might harm humans

By Jennifer Ferreira, CTVNews.ca Producer


TORONTO (CTV Network) — Advances in artificial intelligence are pushing the world into “a period of huge uncertainty,” according to AI pioneer Geoffrey Hinton. As the technology becomes smarter, the “godfather of AI” is highlighting six harms it may pose to humans.

Speaking at this year’s Collision tech conference in Toronto on Wednesday, Hinton explained that some of the danger of AI stems from the possibility that it may develop a desire to take control.

“We have to take seriously the possibility that if they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control,” Hinton said. “If they do that, we’re in trouble.”

The cognitive psychologist and computer scientist resigned from Google earlier this year to speak more openly about the potential dangers of AI. Hinton has been voicing his concerns for months as AI technology has become more accessible to the public through tools such as ChatGPT.

Use of the AI chatbot has exploded since it was released in November 2022. Developed by OpenAI, an artificial intelligence research company, the tool is capable of imitating human-like conversation in response to prompts submitted by users. As a large language model, ChatGPT digests substantial amounts of data in text form and provides responses based on the information it has ingested.

But along with raising ethical issues related to plagiarism and the disclosure of personal information, ChatGPT has also produced offensive and biased results.

Hinton took centre stage at the conference and spoke to hundreds of attendees, some of whom sat on the floor after seats quickly filled up. More than 40,000 people from around the world descended upon Toronto for this year’s Collision tech conference, and nearly every talk touched on the wide-ranging implications of AI.

In his chat with Nick Thompson, CEO of The Atlantic, Hinton said large language models “still can’t match” human reasoning, although they are getting close. When Thompson asked if there is anything humans can do that a large language model could not replicate in the future, Hinton responded with “No.”

“We’re just a machine … we’re just a big neural net,” the British-Canadian scientist said. “There’s no reason why an artificial neural net shouldn’t be able to do everything we can do.”

A fellow “godfather of AI,” computer scientist Yann LeCun, shared his outlook on artificial intelligence at the Viva Technology conference in Paris earlier this month, describing it as “intrinsically good.”

Hinton, LeCun and Yoshua Bengio won the A.M. Turing Award, known as the Nobel Prize of computing, in 2018.

“The effect of AI is to make people smarter,” LeCun said on June 14. “You can think of AI as an amplifier of human intelligence and when people are smarter, better things happen.”

Hinton, however, remains skeptical that AI designed with good intentions will prevail over technology developed by bad actors.

“I’m not convinced that good AI that is trying to stop bad AI getting control will win,” he said.

Below are six key dangers AI may pose to humans, according to Hinton:

1. BIAS AND DISCRIMINATION

Because they are trained on biased data sets, AI technologies and large language models such as ChatGPT can produce responses that are just as biased, Hinton said.

For example, a post from a Twitter user in December 2022 showed the chatbot writing code stating that only white or Asian men would make good scientists, a response that would have been derived from the data it was trained on. ChatGPT’s response to the prompt has since been updated, and OpenAI has said it is working to reduce biases in the tool’s system.

Despite these challenges, Hinton said it’s relatively easy to limit the potential for bias and discrimination by freezing the behaviour exhibited by this technology, analyzing it and adjusting parameters to correct it.
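The fix Hinton describes (freeze the model’s behaviour, analyze it, adjust its parameters) can be sketched in code. The toy PyTorch example below is only an illustration of the general idea, not Hinton’s or OpenAI’s actual method; the tiny stand-in network, the random probe inputs and the debiasing objective are all hypothetical.

```python
# Illustrative sketch only -- not Hinton's or OpenAI's actual method.
# A tiny network stands in for a trained language model; the probe data
# and the debiasing objective are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a trained model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Step 1: freeze the behaviour -- no weights change while we analyze it.
for param in model.parameters():
    param.requires_grad = False

# Step 2: analyze -- probe with paired inputs that differ only in a
# stand-in "sensitive" feature and measure how far the outputs diverge.
group_a = torch.randn(64, 16)
group_b = group_a.clone()
group_b[:, 0] += 1.0  # flip the sensitive feature
with torch.no_grad():
    gap = (model(group_a).softmax(-1) - model(group_b).softmax(-1)).abs().mean()
print(f"bias gap before adjustment: {gap.item():.4f}")

# Step 3: adjust -- unfreeze only the final layer and fine-tune it so
# matched inputs produce matched outputs, leaving the rest frozen.
for param in model[-1].parameters():
    param.requires_grad = True
optimizer = torch.optim.Adam(model[-1].parameters(), lr=1e-2)
for _ in range(200):
    optimizer.zero_grad()
    loss = (model(group_a).softmax(-1) - model(group_b).softmax(-1)).pow(2).mean()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    gap = (model(group_a).softmax(-1) - model(group_b).softmax(-1)).abs().mean()
print(f"bias gap after adjustment: {gap.item():.4f}")
```

Run as-is, the script prints the measured output gap before and after the adjustment; in a real system the probe set would be curated test examples rather than random vectors.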

2. BATTLE ROBOTS

The idea of armed forces around the world producing lethal autonomous weapons such as battle robots is a realistic one, Hinton said.

“Defence departments are going to build them and I don’t see how you can stop them doing it,” he said.

It may be helpful to develop a treaty similar to the Geneva Conventions to establish international legal standards prohibiting the use of this kind of technology, Hinton said. But such an agreement should be developed sooner rather than later, he said.

Last month, countries party to the UN Convention on Certain Conventional Weapons met to discuss lethal autonomous weapon systems. However, after 10 years of deliberation, no international laws or regulations governing the use of these weapon systems yet exist.

Despite this, such technology is likely to continue to develop. Looking at the ongoing war in Ukraine, the country’s digital transformation minister, Mykhailo Fedorov, said fully autonomous killer drones were “a logical and inevitable next step” in weapons development, according to The Associated Press.

3. JOBLESSNESS

The development of large language models will help increase productivity among employees and, in some cases, may replace the jobs of people who produce text, Hinton said.

Other experts have also shared their concerns over AI’s potential to replace human labour in the job market. But employers will be more likely to use AI to replace individual tasks rather than entire jobs, said Anil Verma, professor emeritus of industrial relations and human resources management at the University of Toronto’s Rotman School of Management.

Additionally, the adoption of this technology will happen “gradually,” said Verma, who specializes in the impact of AI and digital technologies on skills and jobs.

“Over time, some jobs will be lost, as they have been through every other wave of technology,” Verma told CTVNews.ca in a telephone interview on May 24. “But it happened at a rate that we were able to adjust and adapt.”

While some may be hopeful that AI will help generate employment in new fields, Hinton said he is unsure whether the technology will create more jobs than it eliminates.

His recommendation to young people is to pursue careers in areas such as plumbing.

“The jobs that are going to survive AI for a long time are jobs where you have to be very adaptable and physically skilled,” he said. “[Manual dexterity] is still hard [to replicate].”

4. ECHO CHAMBERS

One problem that predates large language models and is likely to persist is the formation of online echo chambers, Hinton said. These are environments where users encounter mainly beliefs or ideas similar to their own, reinforcing those perspectives while other opinions go unconsidered.

AI-driven algorithms trained on human emotional responses may be used to expose users to certain types of content, Hinton said. He pointed to the example of large companies feeding users content that makes them “indignant” to encourage them to click.

It’s an open question as to whether AI could be used to resolve this issue or make it worse, Hinton said.

5. EXISTENTIAL RISK

Hinton also raised concerns over the threat AI may pose to the existence of humanity. If this technology becomes much smarter than humans and is capable of manipulating them, it may take over, he said. Humans have a strong, built-in urge to gain control, and that is a trait AI will be able to develop too, Hinton said.

“The more control you get, the easier it is to achieve things,” he said. “I think AI will be able to derive that, too. It’s good to get control so you can achieve other goals.”

Humans may not be able to overpower this desire for control, or to regulate AI with bad intentions, Hinton said, and that could contribute to the extinction of humanity. While some may see this as a joke or as fearmongering, Hinton disagrees.

6. FAKE NEWS

AI also has the ability to disseminate fake news, Hinton said. As a result, it’s important to mark fake information as such to prevent the spread of misinformation, he said.

Hinton pointed to governments that have made it a criminal offence to knowingly use or keep counterfeit money, and said something similar should be done with deliberately misleading AI-generated content. However, he said he is unsure whether such an approach is possible.

CAN ANYTHING BE DONE TO HELP?

Hinton said he has no idea how to make AI more likely to be a force for good than for bad. But before this technology becomes incredibly intelligent, he urged developers to work on understanding how AI might go wrong or try to overpower humans.

Companies developing AI technology should also put more resources into stopping AI from taking over rather than just making the technology better, he said.

“We seriously ought to worry about mitigating all the bad side-effects of [AI],” he said.

With files from The Canadian Press
