The need for AI ethics is nothing new
Fears about how AI might change our future are running high at present. There are concerns that large language models such as ChatGPT could be misused by bad actors for disinformation, terrorism, abuse and fraud, as well as potential harms arising from legitimate use cases. Bias and inaccuracy could spell great risk for people on the receiving end of automated or semi-automated decision-making, and the potential impact of these technologies on employment is as yet unknown.
But these are not necessarily new problems, relevant only to the likes of Microsoft, Google and a handful of AI startups. Ethical problems in technology have been with us for decades, and they should concern all enterprises that operate systems at scale.
Take the Horizon scandal, considered the biggest miscarriage of justice in recent British history, which has been in the headlines again this week. This involved a traditional accounting system – not a machine learning algorithm in sight – by which the Post Office ruined the lives of hundreds of sub-postmasters. It was also a corporate disaster: the Post Office incurred an estimated £1bn in legal costs, lost years of management time, and has come under deep Parliamentary scrutiny.
We might identify three reasons why things went so horribly wrong.
First, the Post Office maintained a naïve faith in Horizon’s integrity when it should have been obvious that guaranteeing the absence of flaws was logically impossible, particularly for such a complex application. Second, it failed to hold itself accountable to its sub-postmasters: a dependent group of users with few resources and no independent means of defending themselves against allegations which turned out to be false. And third, there was no requirement for transparency – no external quality standards or regulations that might have exposed the potential risks before they materialised in such a damaging way.
It took the sub-postmasters over ten years of tenacious and well-supported campaigning before they won the right to have the Horizon black box cracked open for inspection. A court order was secured, enabling an independent audit which identified system flaws that the Post Office could and should have noted years earlier.
Years after the Horizon scandal first broke, too many of us are still too prepared to believe that new technology does exactly what it says on the tin and no more. We should all be asking: what flawed systems are ruining lives today, in ways that are less visible than Horizon? How would we even know?
Organisations should not assume that a legitimate and well-motivated use case is risk-free. Data may be of poor quality, incomplete, or used outside the context in which it was originally collected, creating blind spots and biases and producing flawed analysis that leads to flawed decision-making. We should also routinely consider which groups will be impacted by a new use case (such as workers, service users or customers), alongside our obligations to those people, the rights they have and how they might be affected. Broadening out beyond a narrow compliance focus will create a more comprehensive picture of risk, where the mitigations may be relatively simple – a formal complaints process, for example.
Perhaps the newer and scarier language of AI is helping us to take these problems seriously, which is a positive development. But we need to tackle the right problem, which is abuse of power related to the technology in use today. This is a genuine risk affecting many organisations doing relatively ordinary things with data. For this reason, we all need to broaden our conception of risk and governance when it comes to tech and data projects.