AI is generating ‘100 bombing targets a day’ for the Israeli army in Gaza

Attention has been focused on the havoc AI can wreak on elections. But we should be at least as concerned about its impact on war

In the so-called year of elections, which will see people from the US to Belarus going to the polls, attention has rightly been focused on artificial intelligence (AI) and its potential for massive disinformation. But even more troubling developments are happening with the use of AI in war.

Tech corporations such as Google and Palantir have been swarming all over Ukraine almost since the commencement of hostilities. Time magazine sums it up in a chilling headline, “How tech giants turned Ukraine into an AI war lab”. And that’s what is happening: Ukraine has become a laboratory for testing everything from AI targeting systems to advanced facial recognition. Most of this work is being undertaken by corporations accountable to no one, but which will reap massive financial rewards. We all know how well self-regulation works.

According to Time, Ukraine wishes to emulate Israel, famously friendly to tech start-ups and innovation. It wants not just to win the war but, as minister of digital transformation Mykhailo Fedorov says, to use tech as the main economic engine for the future.

Israel has always been at the forefront of military technology. It was the tenth highest exporter of major conventional weapons between 2018 and 2022, and had the seventh highest level of military expenditure. European countries both buy from and supply Israel with weapons. Israel’s customers include Ireland, which has bought at least €14.7 million worth of defensive equipment such as drones.


AI targeting software may be responsible for the extraordinarily high civilian death toll in Gaza during Israel’s retaliation for the unprecedented, brutal and callous Hamas attack of October 7th, when Hamas murdered and raped civilians and kidnapped hostages. The Gaza Ministry of Health claims that more than 29,000 Palestinians have been killed and nearly 70,000 injured.

According to the Guardian and other sources, including a blog post from the Israel Defense Forces (IDF), Israel is employing an AI-based system known as Habsora to provide information at lightning speed about what it believes to be Hamas militants within Gaza. Translated into English, Habsora means Gospel. Could there be a more dystopian name for software designed to target and kill people more rapidly?

Habsora sits atop other intelligence-gathering systems and presents targets to human operatives who then allegedly have the final word on whether to attack. In a Ynet interview in June last year, former IDF chief of staff Aviv Kochavi spoke about the use of AI in Israeli warfare in 2021. He said that “in the past, we would produce 50 targets in Gaza in a year. Now, this machine created 100 targets in a single day, with 50 per cent of them being attacked”.

Even if Israel released the data on which Habsora was trained (which it obviously will not), the way the system selects targets would remain opaque, even to its Israeli operators: the so-called black box problem. Unlike previous technologies, we do not really know how AI comes to its decisions. (We do know humans select what AI is trained on, so human bias can be baked in.) We have seen this in everything from AI used to screen CVs for job interviews to AI used to identify people eligible for child benefit. One CV-screening tool prioritised men called Jared who had played high school lacrosse, because its analysis of high-performing employees found a disproportionate number of people matching those two criteria.
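To make that mechanism concrete, here is a minimal, purely hypothetical sketch (not any real screening product, and certainly not Habsora): a simple classifier is trained on synthetic “historical hires” in which an irrelevant trait happens to correlate with past hiring decisions, and it duly learns to reward that trait. All names, numbers and features are invented for illustration.

```python
# Hypothetical illustration of bias being "baked in" via training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented features: years of experience, and whether the candidate played lacrosse.
experience = rng.normal(5, 2, n)
lacrosse = rng.integers(0, 2, n)

# Biased "historical" labels: past hiring decisions rewarded lacrosse players
# regardless of experience, so the data the model learns from is already skewed.
hired = (0.3 * experience + 2.0 * lacrosse + rng.normal(0, 1, n)) > 2.5

X = np.column_stack([experience, lacrosse])
model = LogisticRegression().fit(X, hired)

# Two otherwise identical candidates: the model scores the lacrosse player
# markedly higher, reproducing the bias in its training data rather than any
# real signal of ability.
candidates = np.array([[5.0, 1.0], [5.0, 0.0]])
print(model.predict_proba(candidates)[:, 1])
```

The model is not malicious; it simply reproduces whatever pattern, relevant or not, its training data contained, and nothing in its output alone reveals that the pattern was spurious.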

In 2021, the Dutch government had to resign after a self-learning algorithm used to flag fraudulent child benefit claims wrongly labelled some 20,000 parents as fraudsters, a disproportionate number of them immigrants. Lives were destroyed as a result.

Given the black box problem, using such a system to identify potential Hamas targets in one of the most densely built-up areas on the planet is unethical to a frightening degree. +972, an online news website run by Palestinian and Israeli journalists, and the Hebrew-language website Local Call have claimed that AI has created what amounts to a “mass assassination factory”, and that civilian deaths, sometimes in the hundreds, are foreseen and accepted in pursuit of Hamas.

Israel struck a deal in January with tech corporation Palantir, which is also deeply embedded in Ukraine. (Palantir is named for the seeing stones in The Lord of the Rings. Fans will know how well using a Palantir worked out.)

Palantir co-founder Peter Thiel is a libertarian sceptical of democracy. He has said that “politics is about interfering with other people’s lives without their consent”. He funded Trump in 2016 but no longer does.

In an Atlantic article, Thiel said that he backed Trump because “somebody needed to tear things down – slash regulations, crush the administrative state – before the country could rebuild”.

According to Time, “Palantir has embedded itself in the day-to-day work of a wartime foreign government in an unprecedented way.” The company provides its technology for free, and it is now in use in Ukraine’s ministries of defence, economy and education.

The EU, the UK and the US have taken some steps towards AI regulation, but it is nowhere near enough. Unaccountable and, in some cases, anti-democratic corporations are racing ahead of regulators. We need a human-centred regulatory framework with enforceable sanctions to curb the dystopian reality that already exists.
