TORONTO – Geoffrey Hinton was a pioneer of artificial intelligence.
In 2012, Hinton and two of his graduate students at the University of Toronto created a technology that has become the intellectual foundation for the AI systems that big tech companies see as the key to their future.
On Monday, however, he officially joined a growing chorus of critics who say these companies are racing toward danger with their aggressive campaign to create products based on generative AI, the technology behind popular chatbots like ChatGPT.
Hinton said he quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so that he could speak freely about the risks of AI.
Part of him, he said, now regrets his life’s work.
“I console myself with the usual excuse: if I hadn’t done it, someone else would have,” Hinton said during a lengthy interview last week in the dining room of his Toronto home, a short distance from where he and his students made their breakthrough.
Hinton’s journey from AI pioneer to doomsayer marks a remarkable moment for the technology industry, perhaps its most significant inflection point in decades.
Industry leaders believe the new AI systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in fields ranging from drug research to education.
However, many in the industry fear that they are releasing something dangerous into the wild.
Generative AI can already be a tool for disinformation.
It could soon be a risk to jobs.
At some point, those most interested in the technology say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton says.
After the San Francisco startup OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because AI technologies pose “profound risks to society and humanity.”
Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, published their own letter warning of the risks of AI.
That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology in a wide range of products, including its Bing search engine.
Hinton, often called “the Godfather of artificial intelligence,” didn’t sign any of those letters, saying he didn’t want to publicly criticize Google or any other company until he quit his job.
He told the company last month that he was stepping down, and on Thursday he spoke by phone with Sundar Pichai, the CEO of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Pichai.
Google Chief Scientist Jeff Dean said in a statement:
“We remain committed to a responsible approach to AI. We continually learn to understand emerging risks as we boldly innovate.”
Hinton, a 75-year-old British expatriate, is a lifelong academic whose career has been driven by his personal convictions about the development and use of artificial intelligence.
In 1972, as a graduate student at the University of Edinburgh, Hinton embraced an idea called a neural network.
A neural network is a mathematical system that learns skills by analyzing data.
At the time, few researchers believed in the idea. But it became his life’s work.
In the 1980s, Hinton was a professor of computer science at Carnegie Mellon University, but he left for Canada because, he said, he was reluctant to accept Pentagon funding.
At the time, most AI research in the United States was funded by the Department of Defense. Hinton is deeply opposed to the use of AI on the battlefield, what he calls “robot soldiers.”
In 2012, Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
Google spent $44 million to acquire the company created by Hinton and his two students.
And their system led to the creation of ever more powerful technologies, including new chatbots such as ChatGPT and Google Bard.
Sutskever became the chief scientist of OpenAI.
In 2018, Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
Around the same time, Google, OpenAI, and other companies began building neural networks that learned from large amounts of digital text.
Hinton thought this was a powerful way for machines to understand and generate language, though still inferior to the way humans handled it.
Last year, when Google and OpenAI built systems that used much larger amounts of data, he changed his mind.
He still believed the systems were inferior to the human brain in some ways, but he came to think they were eclipsing human intelligence in others.
Until last year, he said, Google acted as a “proper steward” of the technology, careful not to release anything that could cause harm.
But now that Microsoft has augmented its Bing search engine with a chatbot, challenging Google’s core business, Google is racing to deploy the same kind of technology.
According to Hinton, the tech giants are engaged in a competition that may be impossible to stop.
His immediate concern is that the internet will be flooded with fake photos, videos and texts and the average citizen will “no longer be able to know what is true”.
He is also concerned that AI technologies could end up disrupting the job market.
Currently, chatbots like ChatGPT tend to complement human workers, but could replace paralegals, personal assistants, translators, and others handling routine tasks.
“It takes away the drudge work,” he says. “It might take away more than that.”
Down the road, he fears that future versions of the technology could pose a threat to humanity, because they often learn unexpected behaviors from the massive amounts of data they analyze.
This becomes a problem, he says, as individuals and businesses allow AI systems to not only generate their own computer code, but also run it themselves. And he fears the day will come when truly autonomous weapons—those killer robots—will become a reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he says.
“But most people thought it was a long way off. So did I. I thought it was 30 to 50 years away, or even longer.
Obviously, I no longer think that.”
Many other experts, including many of his students and colleagues, say this threat is hypothetical.
But Hinton believes the race between Google, Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
That, however, may be impossible, he said.
Unlike nuclear weapons, he said, there’s no way to know whether companies or countries are secretly working on the technology.
The best hope is that the world’s leading scientists work together to find ways to control the technology.
“I don’t think they should expand it until they know if they can control it,” he said.
Hinton said that when people asked him how he could work on potentially dangerous technology, he would paraphrase Robert Oppenheimer, who led the US effort to build the atomic bomb:
“When you see something that’s technically sweet, go ahead and do it.”
He doesn’t say the same anymore.
c.2023 The New York Times Company
Source: Clarin