Killer algorithms
Jul 22, 2018
Warnings about the risks posed by artificial intelligence (AI) seem to be everywhere nowadays. From Elon Musk to Henry Kissinger, people are sounding the alarm that super-smart computers could wipe us out, like in the film “The Terminator”. To hear them talk, you would think we were on the brink of dystopia, that Skynet is nearly upon us.
These warnings matter, but they gloss over a more urgent problem: weaponised AI is already here. As you read this, powerful interests, from corporations to state agencies like the military and police, are using AI to monitor people, assess them and make consequential decisions about their lives. Should we have a treaty ban on autonomous weapons? Absolutely. But we do not need to take humans “out of the loop” to do damage. Faulty algorithmic processing has been hurting poor and vulnerable communities for years.
I first noticed how data-driven targeting could go wrong five years ago, in Yemen. I was in the capital, Sanaa, interviewing survivors of an American drone attack that had killed innocent people. Two of the civilians who died could have been US allies. One was the village policeman, and the other was an imam who had preached against Al Qaeda days before the strike. One of the men’s surviving relatives, an engineer called Faisal Bin Ali Jaber, came to me with a simple question: Why were his loved ones targeted?
Faisal and I travelled 11,265 kilometres from the Arabian Peninsula to Washington looking for answers. White House officials met Faisal, but no one would explain why his family got caught in the crosshairs.
In time, the truth became clear. Faisal’s relatives died because they got mistakenly caught up in a semi-automated targeting matrix.
We know this because the US has admitted that its drones attack targets whose identities are unknown. That is where AI comes in. The US does not have deep human intelligence sources in Yemen, so it relies heavily on massive sweeps of signals data. AI processes this data and throws up red flags in a targeting algorithm. A human fired the missiles, but almost certainly did so on the software’s recommendation.
These kinds of attacks, called “signature strikes”, make up the majority of drone strikes. Meanwhile, civilian deaths from air strikes have risen under President Donald Trump, to more than 6,000 last year in Iraq and Syria alone.
This is AI at its most controversial. And the controversy spilled over to Google this spring, with thousands of the company’s employees protesting, and some resigning, over a bid to help the Defence Department analyse drone feeds. But this is not the only potential abuse of AI we need to consider.
Journalists have started exploring many problematic uses of AI: Predictive policing heatmaps have amplified racial bias in our criminal justice system. Facial recognition, which the police are currently testing in cities like London, has been wrong as much as 98 per cent of the time. Shop online? You may be paying more than your neighbour because of discriminatory pricing. And we have all heard how state actors have exploited Facebook’s News Feed to put propaganda on the screens of millions.
Academics sometimes say that the field of AI and machine learning is in its adolescence. If that is the case, it is an adolescent to which we have given the power to influence our news, to hire and fire people, and even to kill them.
For human rights advocates and concerned citizens, investigating and controlling these uses of AI is one of the most urgent issues we face. Every time we hear of a data-driven policy decision, we should ask ourselves: Who is using the software? Who are they targeting? Who stands to gain, and who stands to lose? And how do we hold the people who use these tools, as well as the people who built them, to account?
Cori Crider, a US lawyer, investigates the national security state and the ethics of technology in intelligence. She is a former director of international human rights organisation Reprieve. Copyright: Project Syndicate, 2018.
www.project-syndicate.org