
AI ‘Habsora’ appears in the Israel-Hamas war



Advanced artificial intelligence (AI) has entered the Israel-Hamas war, which has been going on for nearly three months. The Israel Defense Forces (IDF) is carrying out airstrikes by locating Hamas operatives through an AI targeting system and bombing them. The system is meant to generate more bombing targets while estimating the number of civilian deaths in advance, so as to minimize collateral damage.

More than 12,000 targets attacked in 27 days
A village devastated by an Israeli airstrike in the Gaza Strip. [Newsis]

On November 2, the IDF said on its website, “We are using the ‘Habsora’ (Hebrew for ‘gospel’) AI system to quickly locate enemies and generate targets by analyzing vast amounts of data,” adding, “We introduced this system to strike Hamas and related infrastructure accurately while minimizing unrelated civilian harm.” The IDF claimed that by generating targets at high speed through AI, it attacked more than 12,000 targets over 27 days of fighting, more than 400 targets per day.


Habsora is a program developed by Israeli military intelligence, a type of AI-based decision support system that helps determine targets. The system recommends airstrike targets, such as homes or areas where Hamas or Palestinian Islamic Jihad militants are likely to reside. It also helps predict in advance how many civilians could be killed in an attack on a home, so that strike decisions can be based on an assessment of potential collateral damage.

Habsora generates targets through probabilistic inference, a core function of machine learning algorithms: it analyzes large amounts of data to identify patterns and derive conclusions. The efficiency and accuracy of such algorithms depend largely on the quality and quantity of the data being processed. In recent years, the IDF has built a database of 30,000 to 40,000 suspected militants, including residence information. This information, combined with a large body of intelligence drawn from drone footage, satellite surveillance data, communications intercepts, and detailed monitoring of movement and behavior patterns, is analyzed and used as the basis for targeting.
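The general pattern of probabilistic target scoring can be sketched in a few lines of code. The Python example below is purely illustrative: the feature names, weights, and threshold are all invented, and the article does not disclose how Habsora actually scores candidates. It only shows the idea of combining many noisy signals into a probability and ranking candidates by it.

```python
# A minimal, hypothetical sketch of probabilistic target scoring.
# This is NOT the actual Habsora system; it illustrates only the general
# idea described in the article: combining noisy signals into a
# probability and ranking candidates by it.

import math

# Invented feature weights, standing in for parameters a real model
# would learn from historical data.
WEIGHTS = {
    "intercepted_comms": 2.1,
    "drone_sightings": 1.4,
    "known_associate_links": 1.8,
    "movement_anomaly": 0.9,
}
BIAS = -4.0  # keeps the base probability low absent any signal

def score(features: dict[str, float]) -> float:
    """Logistic model: map weighted evidence to a probability in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

candidates = {
    "site_A": {"intercepted_comms": 1.0, "drone_sightings": 1.0,
               "known_associate_links": 1.0, "movement_anomaly": 0.0},
    "site_B": {"intercepted_comms": 0.0, "drone_sightings": 1.0,
               "known_associate_links": 0.0, "movement_anomaly": 1.0},
}

# Rank candidates by score; in a real pipeline a threshold would turn
# probabilities into recommendations, which is also where
# misclassification risk enters.
for name, feats in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: p={score(feats):.2f}")
```

The point of such a model is throughput, not certainty: its output is a statistical score, only as reliable as the data and weights behind it.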


Tal Mimran, a lecturer at the Hebrew University of Jerusalem who previously carried out targeting work for the IDF, told U.S. National Public Radio, “AI systems are taking over the work done by traditional intelligence officers at a speed that surpasses human cognitive ability.” He explained, “A group of about 20 officers can produce 50 to 100 targets in 300 days, while Habsora produces about 200 targets in 10 to 12 days,” roughly a fifty- to hundredfold increase in throughput. In fact, the IDF said that before the targeting program was launched it generated about 50 targets per year, but after the system was activated it was able to generate more than 100 targets per day.

US, China, Russia develop AI military tools
The Israeli military targets and attacks Hamas militants in the Gaza Strip using artificial intelligence (AI). [Courtesy of the Israeli Air Force]

Above all, the IDF claims it introduced the AI system to strike targets more accurately and reduce civilian casualties. Habsora even includes a function that estimates the rate at which civilians have evacuated a building and displays how many remain at a glance in red, yellow, and green, like a traffic light. The IDF claims this AI technology has improved targeting accuracy, but some are concerned about AI error. At its core, such a system produces results from statistical and probabilistic correlations in past data, not from factual evidence or causal reasoning. Because of this, targets in this war may be misclassified.
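As a rough illustration of the traffic-light display described above, the sketch below buckets an estimated count of remaining civilians into three alert levels. The inputs, thresholds, and function names are all invented; the article does not describe the real system’s logic.

```python
# Hypothetical sketch of a traffic-light collateral-damage indicator.
# Thresholds and inputs are invented for illustration only; the article
# does not disclose how the real system computes or displays this.

from enum import Enum

class Alert(Enum):
    GREEN = "building largely evacuated"
    YELLOW = "some civilians likely remain"
    RED = "many civilians likely remain"

def collateral_alert(est_occupants: int, evacuation_rate: float) -> Alert:
    """Estimate remaining civilians and bucket the result like a traffic light.

    est_occupants:   assumed number of residents before evacuation
    evacuation_rate: fraction (0.0 to 1.0) believed to have left
    """
    remaining = est_occupants * (1.0 - evacuation_rate)
    if remaining < 1:   # illustrative threshold
        return Alert.GREEN
    if remaining < 5:   # illustrative threshold
        return Alert.YELLOW
    return Alert.RED

print(collateral_alert(est_occupants=40, evacuation_rate=0.95).name)  # YELLOW
```

Even in this toy form, the output is only as good as the occupancy estimate and evacuation rate fed into it, which is exactly where the misclassification risk described above enters.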

According to data analyzed by a joint research team from the City University of New York (CUNY) Graduate Center and Oregon State University’s earth and environmental sciences group, one in three buildings in the Gaza Strip has been damaged or destroyed in this war. Meanwhile, Palestinian civilian casualties continue to mount: according to the Gaza health ministry, more than 18,000 Palestinians have died so far, most of them women and children.

The Israel-Hamas war is not the first time AI has been brought into warfare. Russia has used AI technology with drones and satellites in its invasion of Ukraine, and China is also developing related weapons. The United States is likewise developing AI to identify targets in the field: the U.S. Department of Defense’s Project Maven is building AI computer vision programs. Some of these tools collect far more satellite imagery than human analysts could sift through, and some use off-the-shelf computer vision algorithms to directly spot tanks or anti-aircraft guns.
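Spotting vehicles with an off-the-shelf detector follows a standard pattern, sketched below with a COCO-pretrained torchvision model. This is not Project Maven’s actual pipeline: COCO classes do not include tanks or anti-aircraft guns, so a real tool would have to fine-tune such a model on labeled military imagery, and the input file name here is hypothetical.

```python
# Hedged sketch of running an off-the-shelf object detector over imagery.
# A COCO-pretrained model knows everyday classes (car, truck, airplane...),
# not tanks or anti-aircraft guns; a Maven-style tool would fine-tune on
# labeled military imagery. This shows only the generic pattern.

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a pretrained detector and switch to inference mode.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect(path: str, threshold: float = 0.7):
    """Return (class_id, confidence) pairs above a confidence threshold."""
    img = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]  # dict with "boxes", "labels", "scores"
    keep = out["scores"] > threshold
    return list(zip(out["labels"][keep].tolist(),
                    out["scores"][keep].tolist()))

# print(detect("satellite_tile.png"))  # hypothetical image tile
```

The design point the article raises is scale: once such a detector exists, it can be run over far more imagery than human analysts could ever review by hand.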

Experts see AI-based targeting algorithms like this as an intermediate step toward the autonomous systems that will eventually be deployed on the battlefield. Today, automated weapons fall into two broad categories: fully autonomous lethal weapons systems that operate without human intervention, and autonomous weapons that can, in principle, be controlled by humans. AI technology can be applied at various stages, from intelligence, surveillance, and reconnaissance support like Habsora to autonomous weapons systems that select and attack targets on their own. As AI’s dominance in warfare grows, controversy over its ethics is bound to intensify. If AI drones or killer robots that identify and kill targets without human intervention are deployed in war, it may become even harder to assign responsibility for how AI is used in conflict.

Military use of AI should strengthen international security
The next-generation ‘Airborne Police and Electronic Warfare System (ARES)’ aircraft autonomy research system developed by the U.S. Air Force Research Laboratory as part of the AI project Maven. [Courtesy of the U.S. Air Force Research Laboratory]

As the use of AI in warfare becomes a reality, more than 40 countries issued a political declaration in November on the responsible military use of AI. Participants included South Korea, the United States, France, Germany, Canada, Australia, Singapore, and Japan, while North Korea, Russia, China, and Israel did not take part. In its February guidelines, the U.S. State Department emphasized that military use of AI must be conducted ethically and responsibly and must strengthen international security. The guidelines also set out measures countries should implement when developing autonomous systems, including AI-based decision support and intelligence-gathering systems such as Habsora. Marta Bo, a researcher at the Stockholm International Peace Research Institute, told the Guardian, “AI technology introduced in war causes ‘automation bias’ (over-reliance on auxiliary tools) even when humans are involved,” adding, “The more we rely on AI systems, the more mechanized the process becomes, and there is a risk of losing the ability to take the risk of civilian casualties into account.”

Lee Jong-rim, science reporter

This article was published in Weekly Donga No. 1420.


Source: Donga

