We hold people with power to account. Why not algorithms?
Hannah Fry, British mathematician, lecturer on the Mathematics of Cities, and public speaker / The Guardian, UK
As we delegate more responsibility to technology – to diagnose illness or identify suspects – we must regulate it
Robert Jones was driving home through the pretty town of Todmorden, in West Yorkshire, when he noticed the fuel light flashing on the dashboard of his car. He had just a few miles to find a petrol station, which was cutting things rather fine, but thankfully his GPS seemed to have found a short cut – sending him on a narrow, winding path up the side of the valley.
Robert followed the machine’s instructions, but as he drove, the road got steeper and narrower. After a couple of miles, it turned into a dirt track, but Robert wasn’t fazed. After all, he thought, he had “no reason not to trust the satnav”.
Just a short while later, anyone who happened to be looking up from the valley below would have seen the nose of Robert’s BMW appearing over the brink of the cliff above, saved from the 100ft drop only by the flimsy wooden fence at the edge he had just crashed into. “It kept insisting the path was a road,” he told the Halifax Courier after the incident. “So I just trusted it. You don’t expect to be taken nearly over a cliff.”
I can imagine Robert was left red-faced by his blunder, but in a way, I think he’s in good company. When it comes to placing blind faith in a piece of software, his mistake was one we’re almost all capable of making. In our urge to automate, in our eagerness to adopt the latest innovations, we appear to have developed a habit of unthinkingly handing over power to machines.
All around us, algorithms provide a kind of convenient source of authority: an easy way to delegate responsibility, a short cut we take without thinking. Who is really going to click through to the second page of Google results every time and think critically about the information that has been served up? Or go to every airline to check if a comparison site is listing the cheapest deals? Or get out a ruler and a road map to confirm that their GPS is offering the shortest route?
But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.
If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.
Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.
The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people faced deep cuts to their allowances, others in a similar position actually had their benefits increased by the machine.