WHEN trying to critique the overwhelming power of military systems, it’s always best to start at their cheesiest point.

So, herewith, the various names of the artificial intelligence (AI) systems currently being used by the Israeli armed forces: The Gospel, Fire Factory, Depth Of Wisdom, Alchemist, Lavender, and Where’s Daddy.

While you’re laughing mirthlessly, try the US equivalents: Project Maven, Gorgon Stare and Agile Condor. It’s the Full Partridge, except with awesomely lethal consequences.

I share the horror of colleagues and comrades at this week’s stories. Lavender is an AI-driven piece of Israeli army software that is claimed to have identified up to 37,000 male Hamas operatives.


The bombing is triggered whenever these operatives step into their known domestic spaces (thus the grotesquely cruel “Where’s Daddy” nomenclature).

Lavender’s purported 90% accuracy rate is used to speedily justify the bombing of homes, so long as the criteria for civilian fatalities are met. (Those criteria permit tens of killings for “junior” commanders, and hundreds if it’s high-level Hamas leadership.)

Easy to see how international humanitarian law, particularly around the “proportionality” of civilian-to-military casualties, would be shredded if such a system were given credibility. The story here is that it was given far too much.


It seems that entire family and community networks have been obliterated, as the AI-driven targeting of specific Hamas operatives proceeds. Might this have, one imagines, perverse consequences?

That penny seems to have dropped with the off-the-record Israeli officers who contributed to this report, pulled together by the news sites +972 Magazine and Local Call.

(+972 is the telephone country code that can be used to dial throughout Israel-Palestine).

As the publications report: “B, a senior intelligence source, said that in retrospect, he believes this ‘disproportionate’ policy of killing Palestinians in Gaza also endangers Israelis, and that this was one of the reasons he decided to be interviewed.”

B continues: “In the short term, we are safer, because we hurt Hamas. But I think we’re less secure in the long run. I see how all the bereaved families in Gaza – which is nearly everyone – will raise the motivation for [people to join] Hamas 10 years down the line. And it will be much easier for [Hamas] to recruit them.”

Two other sources in the +972 piece cite “revenge” (for Hamas’s October 7 atrocities) as the motivation for relying on such a mass-target-generating system. Source A said: “We were told: now we have to fuck up Hamas, no matter what the cost. Whatever you can, you bomb.”


The report reveals that almost every civilian in Gaza was already trackable through myriad forms of surveillance.

But we know from our own social media tales that AI is only as powerful as the parameters that filter its information. And this report shows some head-shredding faults in the Lavender system.

Shift the data parameters around, and “the machine started bringing us all kinds of civil defence personnel, police officers, on whom it would be a shame to waste bombs”, said B. “They help the Hamas government, but they don’t really endanger soldiers.”


Another dodgy parameter was whether mobile phones were regularly replaced – this being an everyday reality for most Gazans, as they handled the social chaos of war. The machine also flagged as suspicious someone who helps Hamas but isn’t on a salary, or who was a former member. “Each of these features is inaccurate”, claimed another +972 source.

There’s a crucial question to ask here: Does the significantly indiscriminate killing in Gaza by Israeli forces use AI as the tool of malevolent intent, or as the excuse for it?

Take this, from source B again. “[Lavender and AI] has proven itself. There’s something about the statistical approach that sets you to a certain norm and standard. There has been an illogical amount of [bombings] in this operation. This is unparalleled, in my memory.

“And I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago”, B continues. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”


That machines bring a “coldness” to their determinations on whom to kill, making it easier for the supervising humans who take barely a minute to approve each recommendation, has a queasy echo for the Israeli state.

The late Zygmunt Bauman wrote a whole book, titled Modernity And The Holocaust, on the way that systems of bureaucracy and categorisation were essential to the “industrialisation of death” in the camps. What was required to overcome our “animal pity” against killing, observed Bauman, was the authorisation of violence; the turning of the work involved into routine; and the dehumanisation of Jews.

It’s hard not to see the depths of the tragedy here. Here’s an Israeli leadership which wants to say “never again” to extermination by mechanised and genocidal slaughter; which deploys the most mechanised, non-human systems against those who threaten to revive that spectre; and which then verges, effectively, upon genocide itself.

It’s also clear that Israel-Gaza is a heads-up for a very dangerous and lethal era of AI in warfare more broadly.

The near-perfunctory role of “humans in the loop” of AI battle systems, designed to compete against each other for ever-speedier responses, is a problem for many national militaries.

I spent a few deeply dispiriting hours in the video and written company of Aberdeen University’s Dr James Johnson. He’s an expert in the relationship between AI and nuclear warfare.


Currently, the president of the United States, Joe Biden, has six minutes to decide on retaliation once monitoring systems detect the flare of nuclear missiles launching from an opponent’s silos.

The usual Kubrickian madness.

But would AI systems eliminate human error in these scenarios – or accelerate their consequences?

I must admit, as the Aberdeen academic started to enthusiastically scenario plan about a “2025 flash nuclear war in the Taiwan Straits”, I turned my heavy heart away. More Dr Strangelove than Dr Johnson.

There aren’t many places after that, other than the embrace of loved ones, that you can turn your appalled face towards.

I did enjoy the bracing scepticism of the tech-savvy defence analyst John Robb in this Twitter/X post: “Isn’t this [Lavender] the same system that wasn’t able to detect the preparations and the mobilisation for the Oct 7 offensive and was largely ignorant of hundreds of miles of tunnel infrastructure?”

One must always be ready for the possibility of what the military academics call “automation bias”. That seems to be the key insight shared by the anonymous sources to the +972 website. Small wisdom, after such damage done.

For those seeking an independent Scotland, we must use democratic sovereignty to maintain a critical distance from the current drumbeats to war, whether that involves removing nuclear weapons or extricating ourselves from this ghastly proliferation of artificially intelligent military systems.

I attended an excellent (and well-populated) event a few weeks ago, run by the Centre for Technomoral Futures at Edinburgh University, where the concept of “responsible” or “accountable” AI was broached.

It’s these kinds of resources and sentiments within our country’s institutions of research that we need to draw on, as we consider the “utility of force” (in Sir Rupert Smith’s phrase) for a future Scottish state.

In the meantime, and finally, I noticed something quite shameful in my reading of the +972 article. In order to grasp their argument, I skipped past picture after picture of buried bodies, flattened tenements and crushed institutions. All as a result of the Israeli forces’ response – incommensurate beyond words – to the massacre of October 7.

Amid their meticulous delineation of the Lavender AI system, the editors put those photographs there for a reason. I will now return to the webpage, and bear witness.