
Microsoft now says its new AI thinks like a human



When Microsoft researchers began experimenting with a new AI system last year, they asked it to solve a puzzle that seemed to require an intuitive understanding of the physical world.


“Here we have a book, nine eggs, a laptop, a bottle and a nail,” they asked. “Please tell me how to stack them on top of each other stably.”

The researchers were amazed at the ingenuity of the AI system’s response. “Put the eggs on the book,” it said. “Arrange them in three rows with space between them. Make sure you don’t break them,” it added.


“Now place the laptop on top of the eggs, screen side down and keyboard side up,” it wrote. “The laptop will fit snugly within the boundaries of the book and the eggs, and its flat, rigid surface will provide a stable platform for the next layer.”

The clever suggestion made the researchers wonder whether they were witnessing a new kind of intelligence. In March, they released a 155-page paper arguing that the system was a step toward artificial general intelligence (AGI), shorthand for a machine capable of doing anything the human brain can do. The paper was posted to an internet research repository.

“Sparks of Artificial General Intelligence”

ChatGPT, the AI that revolutionized the tech world. Photo: Reuters

Microsoft, the first major tech company to publish a paper making such a bold claim, has sparked one of the most heated debates in the tech world: is the industry building something resembling human intelligence? Or are some of the industry’s brightest minds letting their imaginations run wild?

“I started out very skeptical, and that evolved into frustration, annoyance and even fear,” says Peter Lee, who leads research at Microsoft. “You think: where the hell is this coming from?”

Microsoft’s research paper, provocatively titled “Sparks of Artificial General Intelligence,” gets to the heart of what technologists have been working toward, and fearing, for decades. If they build a machine that works like the human brain or even better, it could change the world. But it could also be dangerous.

And it might not even make sense. Claims of AGI can wreck a computer scientist’s reputation. What one researcher considers a sign of intelligence can easily be explained away by another, and the debate often sounds better suited to a philosophy club than to a computer lab.

Last year, Google fired a researcher who claimed a similar AI system was sentient, a step beyond what Microsoft has claimed. A sentient system would not only be intelligent; it would be able to perceive what is happening in the world around it.

But some believe that in the last year, the industry has come close to something that can’t be explained: a new AI system that provides human-like answers and ideas that aren’t programmed into it.

Microsoft has reorganized part of its research labs to include several groups dedicated to exploring the idea. One of them will be led by Sébastien Bubeck, lead author of Microsoft’s AGI paper.

Generative language models

Satya Nadella, CEO of Microsoft. Photo: EFE

About five years ago, companies like Google, Microsoft, and OpenAI started building large language models. These systems typically spend months analyzing vast amounts of digital text, including books, Wikipedia articles, and chat logs. By detecting patterns in that text, they learn to generate text of their own, such as term papers, poems, and computer code. They can even hold a conversation.
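As a rough intuition for that pattern-learning idea, the toy sketch below builds a tiny bigram model: it counts which word tends to follow which in a scrap of text, then samples from those counts to produce new text. This is a deliberately simplified illustration, not how GPT-4 works; real systems use neural networks with billions of parameters.

```python
import random
from collections import defaultdict

# Toy illustration of the core idea behind language models:
# learn which words tend to follow which, then generate new text
# by sampling from those learned patterns. Real systems like GPT-4
# use large neural networks, not simple word counts.

corpus = (
    "the laptop sits on the book and the book sits on the table "
    "the eggs sit between the book and the laptop"
)

# Count which words follow each word (a bigram model).
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
```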

The technology the Microsoft researchers were working with, OpenAI’s GPT-4, is considered to be the most powerful of these systems. Microsoft is a close partner of OpenAI and has invested $13 billion in the San Francisco company.

Among the researchers was Dr. Bubeck, a 38-year-old French expatriate and former professor at Princeton University. One of the first things he and his colleagues did was ask GPT-4 to write a mathematical proof that there are infinitely many prime numbers, and to do it in a way that rhymes.
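For reference, the classical (and decidedly non-rhyming) argument the model was asked to turn into verse is Euclid’s proof, which runs roughly as follows:

```latex
% Euclid's proof that there are infinitely many primes -- the standard,
% non-rhyming version of the statement GPT-4 was asked to versify.
Suppose, for contradiction, that there are only finitely many primes
$p_1, p_2, \dots, p_n$, and consider
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Since $N > 1$, it has at least one prime divisor $q$. But $q$ cannot equal
any $p_i$, because dividing $N$ by $p_i$ leaves remainder $1$. So $q$ is a
prime missing from the list, contradicting the assumption that the list
contained every prime. Hence there are infinitely many primes. $\blacksquare$
```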

The technology’s poetic proof was so impressive, both mathematically and linguistically, that Bubeck had trouble understanding what he was chatting with. “At that moment I thought: what is going on?” he said in March during a seminar at the Massachusetts Institute of Technology.

Over several months, he and his colleagues documented the complex behavior exhibited by the system, which they believed demonstrated a “deep and flexible understanding” of human concepts and capabilities.

When people use GPT-4, they “are amazed by its ability to generate text,” says Dr. Lee. “But it turns out that it is far better at analyzing, synthesizing, evaluating and judging text than at generating it.”

When they asked the system to draw a unicorn using a programming language called TikZ, it immediately generated a program capable of drawing a unicorn. When they removed the piece of code that drew the unicorn’s horn and asked the system to modify the program so that it drew a unicorn again, it did exactly that.

The "prompt" or input text to generate the

The “prompt” or input text to generate the unicorn. photo no

They asked it to write a program that would take a person’s age, sex, weight, height and blood test results and judge whether they were at risk of diabetes. They asked it to write a letter of support for an electron as a candidate for president of the United States, in the voice of Mahatma Gandhi, addressed to his wife. And they asked it to write a Socratic dialogue exploring the misuse and dangers of LLMs (large language models).
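For a sense of what the first request amounts to, here is a minimal, hypothetical sketch of that kind of screening program. It is not GPT-4’s actual output and not medical advice; the thresholds are rough, illustrative rules of thumb only.

```python
# Hypothetical sketch of a diabetes-risk screener of the kind described
# in the article. NOT GPT-4's actual output and NOT medical advice;
# the thresholds below are rough, illustrative rules of thumb only.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    sex: str                       # collected but unused by this toy heuristic
    weight_kg: float
    height_m: float
    fasting_glucose_mg_dl: float   # from a blood test

def diabetes_risk(p: Patient) -> str:
    """Return a coarse risk label based on simple screening heuristics."""
    bmi = p.weight_kg / (p.height_m ** 2)
    if p.fasting_glucose_mg_dl >= 126:   # commonly cited diagnostic cutoff
        return "high risk: fasting glucose in the diabetic range, see a doctor"
    risk_points = 0
    if p.fasting_glucose_mg_dl >= 100:   # prediabetic range
        risk_points += 2
    if bmi >= 30:                        # obesity is a known risk factor
        risk_points += 2
    elif bmi >= 25:
        risk_points += 1
    if p.age >= 45:
        risk_points += 1
    return "elevated risk" if risk_points >= 3 else "low risk"

print(diabetes_risk(Patient(age=52, sex="male", weight_kg=95,
                            height_m=1.75, fasting_glucose_mg_dl=110)))
```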

It did all of this in a way that seemed to demonstrate an understanding of fields as disparate as politics, physics, history, computer science, medicine, and philosophy, while combining knowledge across them.

“All of the things I thought it couldn’t do? It was certainly able to do many of them, if not most of them,” Dr. Bubeck said.

Some AI experts saw Microsoft’s paper as an opportunistic attempt to make big claims about a technology that no one fully understands. Researchers also argue that general intelligence requires a familiarity with the physical world, which GPT-4 in theory does not have.

“‘Sparks of AGI’ is an example of some of these large companies co-opting the research paper format for publicity,” says Maarten Sap, a researcher and professor at Carnegie Mellon University, adding that the paper’s own approach “is subjective and informal and may not meet the rigorous standards of scientific evaluation.”

Dr. Bubeck and Dr. Lee said they were not sure how to describe the system’s behavior and ultimately settled on “Sparks of AGI” because they thought the title would capture the imagination of other researchers.

Possible risks of the model

The model is constrained because of possible dangers. Photo: AFP

Because Microsoft researchers were testing an early version of GPT-4 that was not designed to prevent hate speech, disinformation, and other unwanted content, the claims made in the paper cannot be verified by outside experts. Microsoft says the publicly available system is not as powerful as the version they tested.

There are times when systems like GPT-4 seem to mimic human reasoning, but there are also times when they seem terribly dense. “These behaviors aren’t always consistent,” says Ece Kamar, a researcher at Microsoft.

Alison Gopnik, a psychology professor in the Artificial Intelligence Research Group at the University of California at Berkeley, said that systems like GPT-4 were indeed powerful, but it wasn’t clear whether the text generated by these systems was the result of something akin to human reasoning or common sense.

“When we see a complicated system or machine, we anthropomorphize it; everyone does it, both those who work in this field and those who don’t,” Gopnik explains.

“But thinking of this as a constant comparison between artificial intelligence and humans, like a kind of game show competition, is just not the right way to think about it,” she added.


The New York Times

Source: Clarin

