
Bias found in AI system used to detect UK benefits fraud

In an ‘Exclusive’ report on 6 December, the Guardian reported that:

“An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality.”

In other words, age, disability, marital status and nationality influence decisions about which claims to investigate.

The report goes on:

“An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.”

Only this summer, the DWP (Department for Work and Pensions) claimed the AI system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers” because the final decision on whether a person gets a welfare payment is still made by a human.

No fairness analysis has yet been undertaken regarding potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status.

The Just Algorithms Action Group (JAAG) was established by people concerned about the unjust treatment of some people by automated decision-making systems that use algorithms.

JAAG is currently examining the Government’s Data Use and Access Bill closely, as it contains clauses that JAAG fears might weaken individuals’ protection against such unfair decisions even further.

The full Guardian report is here.


Surveillance, AI, and Ethics

Like it or not, our lives are increasingly affected by new technologies, and the Criminal Justice System is one area where Artificial Intelligence will play a significant part.

On 25 November, JAAG, together with Quakers in Criminal Justice, organised a webinar entitled “Surveillance, AI, and Faith-based Ethics”.

Professor Emeritus David Lyon of Queen’s University, Canada, spoke about “Dataveillance, AI and Human Flourishing”.
Looking at surveillance from a historical perspective, David noted some of the ways in which AI is now being used to covertly surveil all of us and harvest our data for profit (Surveillance Capitalism). He argued for a return to an earlier concept of surveillance based upon Relationality, Care, Justice, and Reliability.

Professor Emeritus Mike Nellis of the University of Strathclyde spoke about “Artificial Intelligence, Smart Prisons and Automated Probation”.
He outlined some of the ethical questions raised by the trend towards using AI in (underfunded and understaffed) prisons and probation services. The potential roles played by AI systems call into question what society wishes prisons and probation services to exist for. Are we heading towards the creation of an underclass (the recipients of probation services) whom we feel deserve to be interacted with only by machines? And should we become as discerning as the Amish in deciding whether or not to adopt AI systems in criminal justice or elsewhere?

A recording of the webinar is available here.