
Hollywood’s killer robots become tools of the military


WASHINGTON – When President Joe Biden announced severe restrictions in October on the sale of the most advanced computer chips to China, he did so, in part, to give American industry a chance to regain its competitiveness.


However, there was a second goal at the Pentagon and the National Security Council: arms control.

“There are a number of informal conversations in the industry — all of them informal — about what AI safety standards would look like,” said Eric Schmidt, former chairman of Google and of the Defense Innovation Board. Photo: Mike Blake/Reuters

In theory, if the Chinese military cannot obtain such chips, its efforts to develop AI-powered weapons will likely slow.

That would give the White House, and the world, time to set some rules for the use of artificial intelligence in everything from sensors to missiles to cyberweapons, and ultimately to prevent some of the nightmares portrayed in Hollywood movies: computers and autonomous killer robots that break free of their human creators.

Now, the cloud of fear surrounding the popular bot ChatGPT and other AI-powered software has made restricting the sale of chips to China seem like only a temporary fix.

When Biden appeared at the White House on Thursday for a gathering of tech executives grappling with how to limit the dangers of the technology, his first comment was:

“What you are doing has great potential and also great risk.”

According to his national security advisers, the remark reflected recent classified briefings on the potential of the new technology to upend warfare, cyber conflict and, in the most extreme case, decision-making on the use of nuclear weapons.

But even as Biden sounded his warning, Pentagon officials speaking at technology forums were saying they thought the idea of a six-month pause in developing the next generations of ChatGPT and similar software was a bad one: neither the Chinese nor the Russians will wait.

“If we stop, guess who won’t: potential adversaries overseas,” John Sherman, the Pentagon’s chief information officer, said Wednesday.

“We have to move forward.”

His categorical statement highlighted the tension currently felt throughout the defense community.

Nobody really knows what these new technologies can do when it comes to developing and controlling weapons, and no one has any idea what kind of arms control regime, if any, might work.

The foreboding is vague but deeply worrisome.

Could ChatGPT empower actors who may not have previously had easy access to destructive technology?

Could it accelerate confrontations between superpowers, leaving little time for diplomacy and negotiation?

“The industry is not stupid and we are already seeing attempts at self-regulation,” said Eric Schmidt, the former Google chairman who served as the inaugural chair of the Defense Innovation Board from 2016 to 2020.

“So there are a number of informal talks going on in the industry right now — all of them informal — about what AI safety rules would look like,” said Schmidt, who, along with former Secretary of State Henry Kissinger, has written a series of articles and books on the potential of artificial intelligence to upend geopolitics.

The preliminary effort to build guardrails into these systems is obvious to anyone who has tried early versions of ChatGPT.

The bots will not answer questions about how to harm someone with a mix of drugs, for example, or how to blow up a dam or cripple nuclear centrifuges, operations that the United States and other countries have carried out without the benefit of artificial intelligence tools.

But such blacklists of actions will only slow misuse of these systems; few believe they can stop such attempts completely.

There are always tricks to circumvent safety limits, as anyone who has tried to silence the insistent beeping of a car’s seat belt warning system can attest.

While new software has popularized the problem, it’s not new to the Pentagon.

The first rules on the development of autonomous weapons were published ten years ago.

Five years ago, the Pentagon’s Joint Artificial Intelligence Center began studying the use of artificial intelligence in combat.

Some weapons already run on autopilot.

Patriot missiles, the ones that shoot down missiles or planes entering protected airspace, have long had an “automatic” mode that allows them to fire without human intervention when overwhelmed by incoming targets faster than a human could react. But they are supposed to be supervised by humans who can abort attacks if necessary.

Within the military, AI-powered systems can speed up the pace of battlefield decisions to such a degree that they create entirely new risks of accidental strikes, or of decisions based on misleading or deliberately false alerts of incoming attacks.

“A fundamental problem with AI in the military and in national security is how to defend against attacks that are faster than the speed at which humans make decisions, and I think that problem isn’t solved,” Schmidt said.

“In other words, the missile arrives so fast that there must be an automatic response. What if it’s a false signal?”

Safety

Tom Burt, who heads trust and security operations at Microsoft, a company forging ahead with using the new technology to revamp its search engines, told a recent forum at George Washington University that he thought AI systems would help defenders detect anomalous behavior faster than they would help attackers.

There are specialists who disagree.

But Burt said he feared the technology might “boost” the spread of targeted disinformation.

All of this heralds a new era of arms control.

Some experts say that since it would be impossible to stop the spread of ChatGPT and similar software, the best hope is to limit the specialized chips and other computing power needed to advance the technology.

This will no doubt be one of many arms control plans put forward in the coming years, at a time when the major nuclear powers, at least, seem uninterested in negotiating over old weapons, let alone new ones.

c.2023 The New York Times Company

Source: Clarín
