
Just algorithms: Siani Morris calls for AI regulation

This letter from Siani Morris, one of JAAG’s Directors, was recently published in The Friend magazine.

“Ruth Jones’ suggestion (The Friend, 23 June) that Quaker principles could help inform an ethical artificial intelligence (AI) gives food for thought. Quaker business ethics do have a great deal to offer when considered against current AI systems that make life harder for the weakest in society.

For instance, right now AI algorithms replicate human biases. There are also issues with falsehoods being replicated (which makes them difficult to correct), creative rights being exploited, and AI systems not understanding the adverse effects of their decisions on humans. A central problem is that humans mistake AI output for meaningful text, and act accordingly. This is happening right now. Information given by AI language models may not be true, reputations may be ruined, and decisions may be ill-founded – and the basis for those decisions unknown.

Another issue is social justice and the effects of AI on vulnerable groups. We need systems that do not contribute to excessive inequalities of wealth distribution. As Ruth mentions, sustainability is another important aspect. In addition, certain activities should just be forbidden, including those that threaten peace and community.

Visions of anthropomorphic machines causing the destruction of civilisation are common, but these take attention away from the very real problems we already have from the use of AI systems. Exploitative practices, often motivated by desire for power or financial gain, are already increasing social inequality and centralising power.

We need to do more than just hope that AI can gain an ethical consciousness. We need humans inside the AI loop, ethical guidelines that are strongly enforced, and enough transparency that people know what is going on and are able to make choices accordingly.

So, we need to focus on providing this transparency, letting people know when AI is being used, and on the accountability of developers and deployers. We need protection against exploitative working practices. New regulation should protect the rights and interests of people and thereby shape the actions and choices of corporations. We should be building machines that work for the common good, not adapting society to the wishes of those few elites currently driving the AI agenda. Those most impacted by AI systems, which includes the most vulnerable in society, should have their opinions taken into account. (It’s worth noting that many of these problems do not just apply to AI, but also to other digital systems, many already being deployed, including the UK universal credit software.)

But let’s not forget that technology can be used for good or for ill. Phone tapping exists, but that doesn’t mean we want to give up our phones. The same could apply to AI. There is huge potential for new AI tools that could be truly good for humanity, especially in the medical field. How might Joseph Rowntree, for example, have used AI?”

JAAG is a Quaker-inspired non-profit group working for greater social justice in AI.


The UK’s AI Summit – a missed opportunity

JAAG is today pleased to join with over 100 concerned civil society organisations and individuals in addressing an open letter to the UK Prime Minister about the much-vaunted UK Global Summit on AI Safety.

We point out that many millions of people are already feeling the harmful effects of AI: whether they have been fired from their job by an algorithm, or been subject to authoritarian biometric surveillance, or seen their small business squeezed out by big tech companies.

Yet the communities and workers most affected by AI have been marginalised, and civil society has been sidelined, by the Summit.

JAAG believes that these issues can only be tackled if those who are most exposed to AI harms are fully involved in discussions and debate. Only if the whole of society is given a voice can we ensure that the future of AI is as safe and beneficial as possible for everyone.

An open letter to the Prime Minister on the ‘Global Summit on AI Safety’ 

Dear Prime Minister,

Your ‘Global Summit on AI Safety’ seeks to tackle the transformational risks and benefits of AI, acknowledging that AI “will fundamentally alter the way we live, work, and relate to one another”. 

Yet the communities and workers most affected by AI have been marginalised by the Summit.

The involvement of civil society organisations that bring a diversity of expertise and perspectives has been selective and limited. 

This is a missed opportunity. 

As it stands, the Summit is a closed door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier’ AI systems – systems built by the very same corporations who now seek to shape the rules.

For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now.

This is about being fired from your job by algorithm, or unfairly profiled for a loan based on your identity or postcode.

People are being subject to authoritarian biometric surveillance, or to discredited predictive policing.

Small businesses and artists are being squeezed out, and innovation smothered as a handful of big tech companies capture even more power and influence. 

To make AI truly safe we must tackle these and many other issues of huge individual and societal significance. Successfully doing so will lay the foundations for managing future risks.

For the Summit itself and the work that has to follow, a wide range of expertise and the voices of communities most exposed to AI harms must have a powerful say and equal seat at the table. The inclusion of these voices will ensure that the public and policy makers get the full picture.

In this way we can work towards ensuring the future of AI is as safe and beneficial as possible for communities in the UK and across the world.

 

Signed:

  • 5 Rights Foundation

  • Access Now

  • AI Now Institute

  • Amnesty International

  • ARTICLE 19

  • Avaaz Foundation

  • BARAC UK

  • Big Brother Watch

  • BWI Global Union, representing 12 million workers globally

  • Center for Countering Digital Hate (CCDH)

  • Centre for Technomoral Futures, University of Edinburgh

  • Child Rights International Network (CRIN)

  • Community Union

  • Connected By Data

  • Consumers International

  • Data & Society

  • Data, Tech & Black Communities CIC

  • Defend Democracy

  • Derechos Digitales

  • Education International

  • Elanta Services

  • Eticas Tech

  • European Trade Union Confederation (ETUC), representing 45 million members from 93 trade union organisations in 41 European countries

  • Fair Trials

  • Fair Vote UK

  • Finance Innovation Lab

  • ForHumanity

  • Glitch

  • Global Action Plan

  • Global Witness

  • Homo Digitalis

  • Inclusioneering

  • IndustriALL Global Union representing over 50 million workers in 140 countries

  • Institute for the Future of Work

  • International Trade Union Confederation, representing 191 million trade union members in 167 countries and territories

  • International Transport Workers Federation, representing 18.5 million workers globally

  • IPANDETEC

  • Just Algorithms Action Group 

  • Just Treatment

  • Kristophina Shilongo, Senior Mozilla Fellow in Tech Policy

  • Liberty

  • Migration Mobilities, University of Bristol

  • Mozilla

  • NASUWT, teachers union

  • National Education Union

  • National Union of Journalists

  • Open Futures

  • Open Rights Group

  • Privacy International

  • Prospect union

  • Public Law Project

  • Research ICT Africa

  • Reset Tech

  • Safe Online Women Kenya

  • Statewatch

  • StopWatch

  • Superbloom

  • The American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), representing 60 unions and 12.5 million American workers

  • The Citizens

  • The End Violence Against Women Coalition 

  • The Open Data Institute

  • The Racial Justice Network

  • The Trade Union Advisory Committee to the OECD

  • The Trades Union Congress, representing 6 million UK workers

  • TSSA – the union for people in transport and travel

  • Understanding Patient Data

  • UNI Europa Union, representing 7 million European service workers

  • UNI Global Union, representing 20 million service workers in 150 countries

  • UNISON – the public service union

  • UNITE the Union

  • United Tech and Allied Workers

  • United We Rise Uk

  • USDAW – Union of Shop, Distributive and Allied Workers

  • WHAT TO FIX

  • Worker Info Exchange

  • Adam Leon Smith, Chair of British Computer Society Fellows Technical Advisory Group

  • Andelka Phillips, Senior Lecturer in Law, Science and Technology, University of Queensland

  • Baroness Dawn Primarolo, former MP

  • Baroness Frances O’Grady, former General Secretary of the TUC

  • Burkhard Schafer, Professor of Computational Legal Theory, University of Edinburgh

  • Dr Alex Wood, University of Bristol

  • Dr Gina Helfrich, Centre for Technomoral Futures, University of Edinburgh

  • Dr Julian Huppert, University of Cambridge and former MP

  • Dr Mike Katell, The Alan Turing Institute

  • Dr Miranda Mowbray honorary lecturer in Computer Science at the University of Bristol

  • Dr Nora Ni Loideain, Information Law & Policy Centre, Institute of Advanced Legal Studies, University of London

  • Dr P M Krafft, Creative Computing Institute

  • Dr Richard Clayton, Director, Cambridge University Cybercrime Centre 

  • Dr. Cristina Richie Lecturer of Ethics of Technology at University of Edinburgh

  • European Center for Not-for-Profit Law

  • Ismael Kherroubi Garcia, CEO of Kairoi 

  • Judith Townend, Reader in Digital Society and Justice, University of Sussex

  • Kate Baucherel, Galia Digital

  • Lord John Monks, former General Secretary of the TUC

  • Maria Farrell, writer and Senior Fellow At Large, University of Western Australia Tech and Policy Lab

  • Mick Whitley MP

  • Neil Lawrence, University of Cambridge DeepMind Professor of Machine Learning and Senior AI Fellow at The Alan Turing Institute

  • Peter Flach, Professor of Artificial Intelligence, University of Bristol

  • Phoebe Li, Reader in Law and Technology, University of Sussex

  • Professor Alan Bundy, the School of Informatics at the University of Edinburgh

  • Professor Alex Lascarides, University of Edinburgh

  • Professor Douwe Korff, Emeritus professor of international law, European human rights and digital rights expert

  • Professor Lilian Edwards, Newcastle Law School

  • Professor Martin Parker, University of Bristol Business School

  • Professor Nathalie Smuha, KU Leuven Faculty of Law & NYU School of Law

  • Professor Peter Sommer, Birmingham City University

  • Professor Sara (Meg) Davis, University of Warwick

  • Professor Sonia Livingstone, London School of Economics 

  • Professor Sue Black OBE, Durham University

  • Professor Vijay Varadharajan, Advanced Cyber Security Engineering Research Centre (ACSRC), The University of Newcastle, Australia

  • Rachel Coldicutt, Executive Director, Careful Trouble

  • Shân M. Millie, Bright Blue Hare

  • Tabitha Goldstaub, former chair of the AI Council

  • Tania Duarte, founder of We and AI

  • Thompsons Solicitors

  • University and College Union (UCU)

 


MPs demand stop to live facial recognition surveillance

65 UK MPs and peers, from all political parties, have called for the use of live facial recognition surveillance to be put on pause.

The group of MPs – which includes Conservative MP David Davis, Labour politicians Diane Abbott and John McDonnell, and Liberal Democrat leader Ed Davey – called on UK police and private companies to immediately stop using live facial recognition for public surveillance.

British police have used live facial recognition technology (FRT) at public events, including the coronation of King Charles III and the British Grand Prix. However, this technology has long been criticised by civil liberties groups as an invasion of privacy.

The use of “static” FRT seems well established. The policing minister, Chris Philp, has acknowledged that all 45 police forces are currently using FRT. He plans to set up a ‘national shoplifting database’ that would include the passport photos of all 45 million adults. September saw the launch of Project Pegasus, in which the country’s biggest retailers will hand over their CCTV footage to the police, who will run it through their databases using facial recognition technology to identify shoplifters. Critics say that using passport photos – which people provide only for the purposes of travelling – to track them when they go to the shops is an extreme invasion of privacy.

The parliamentarians’ concern is specifically with “live” facial recognition, such as using a camera on top of a police van to scan everybody who walks past in real time, and then running the images through a database of our “faceprints” – including images taken from social media accounts. Police officers could even use their phones to scan someone’s face and run it through a database of sensitive biometric data.

History suggests that surveillance technology is likely to be targeted at minority groups, especially people of colour. When the Metropolitan Police first trialled this technology, they often deployed it in socially deprived areas and at events attended primarily by people of colour – such as the Notting Hill carnival. MIT research in 2018 found that facial recognition software made mistakes in 21% to 35% of cases for darker-skinned women, but the error rate for light-skinned men was less than 1%.

In June, the European Court of Human Rights described facial recognition technology as highly intrusive, and ruled that using it to identify and arrest participants of peaceful protests could have “a chilling effect in regard of the rights to freedom of expression and assembly”. Use of FRT could soon be banned in the EU under forthcoming legislation.

The MPs’ statement listed many areas of concern with FRT:

  • its incompatibility with human rights,

  • the potential for discriminatory impact,

  • the lack of safeguards,

  • the lack of an evidence base,

  • an unproven case of necessity or proportionality,

  • the lack of a sufficient legal basis,

  • the lack of parliamentary consideration, and

  • the lack of a democratic mandate.

JAAG believes that the use of this technology, which has serious implications for individuals’ privacy, must be paused. There needs to be a full public debate about it, and Parliament must review its implications and establish stringent safeguards; the use of any such technology should always be subject to Parliamentary scrutiny.

Sources

Reuters: https://www.reuters.com/world/uk/british-lawmakers-call-pause-live-facial-recognition-surveillance-2023-10-05/

https://www.theguardian.com/technology/2023/oct/06/mps-and-peers-call-for-immediate-stop-to-live-facial-recognition-surveillance

https://www.theguardian.com/technology/2023/jul/08/police-live-facial-recognition-british-grand-prix?CMP=Share_AndroidApp_Other

https://www.theguardian.com/commentisfree/2023/oct/12/shoplifting-facial-recognition-shops-police-surveillance-powers?CMP=Share_AndroidApp_Other