
“Students’ use of AI spells death knell for critical thinking”

From The Guardian:
Prof Andrew Moran and Dr Ben Wilkinson on the ramifications of the explosion in university essays being written with artificial intelligence

https://www.theguardian.com/technology/2025/mar/02/students-use-of-ai-spells-death-knell-for-critical-thinking


Bias found in AI system used to detect UK benefits fraud

In an ‘Exclusive’ report on 6 December, the Guardian reports that

An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality.

In other words,

Age, disability, marital status and nationality influence decisions to investigate claims.

The report goes on to say:

“An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.”

Only this summer, the DWP (Department for Work and Pensions) claimed the AI system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers” because the final decision on whether a person gets a welfare payment is still made by a human.

No fairness analysis has yet been undertaken regarding potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status.
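To illustrate the kind of check such an analysis involves, here is a minimal sketch in Python, using entirely hypothetical claim records and group labels (nothing here reflects the DWP’s actual system or data), of a selection-rate comparison across groups:

# Illustrative only: hypothetical claim records of the form (group, flagged_for_investigation)
from collections import defaultdict

claims = [
    ("group_A", True), ("group_A", True), ("group_A", False), ("group_A", False),
    ("group_B", True), ("group_B", False), ("group_B", False), ("group_B", False),
]

flagged = defaultdict(int)
total = defaultdict(int)
for group, was_flagged in claims:
    total[group] += 1
    flagged[group] += was_flagged  # True counts as 1

selection_rates = {group: flagged[group] / total[group] for group in total}
highest = max(selection_rates, key=selection_rates.get)

for group, rate in selection_rates.items():
    print(f"{group}: selection rate {rate:.0%}, "
          f"ratio to most-selected group {rate / selection_rates[highest]:.2f}")

# A markedly higher selection rate for one group than for others is the kind of
# disparity the internal assessment reportedly found; a full fairness analysis
# would repeat this for each protected characteristic and check whether the
# flagged cases actually turned out to involve fraud.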

The Just Algorithms Action Group (JAAG) was established by people concerned at the unjust treatment of some people by automated decision-making systems that use algorithms.

JAAG is closely examining the Government’s Data Use and Access Bill, which contains clauses that it fears might further weaken individuals’ protection against such unfair decisions.

The full Guardian report is here.


Surveillance, AI, and Ethics

Like it or not, our lives are increasingly affected by new technologies, and the Criminal Justice System is one area where Artificial Intelligence will play a significant part.

On 25 November, JAAG, together with Quakers in Criminal Justice, organised a webinar entitled “Surveillance, AI, and Faith-based Ethics”.

Professor Emeritus David Lyon of Queen’s University, Canada, spoke about “Dataveillance, AI and Human Flourishing”.
Looking at surveillance from a historical perspective, David noted some of the ways in which AI is now being used to covertly surveil all of us and harvest our data for profit (Surveillance Capitalism). He argued for a return to an earlier concept of surveillance based upon Relationality, Care, Justice, and Reliability.

Professor Emeritus Mike Nellis of the University of Strathclyde spoke about “Artificial Intelligence, Smart Prisons and Automated Probation”.
He outlined some of the ethical questions raised by the trend towards making use of AI in (underfunded and understaffed) prisons and probation services. The potential roles played by AI systems call into question what society wishes prisons and probation services to exist for. And are we heading for the creation of an underclass (the recipients of probation services) whom we feel deserve to be interacted with only by machines? Should we become as discerning as the Amish are in deciding whether or not to adopt AI systems in criminal justice or elsewhere?

A recording of the webinar is available here.


BMA wants tighter controls on use of AI in healthcare

The British Medical Association (BMA) wants the use of AI in healthcare to prioritise “safety, efficacy, ethics, and equity.”

It wants to see a much clearer framework to ensure that each AI implementation is “rigorously assessed in real-world settings and continuously monitored to ensure it improves care quality and job satisfaction without exacerbating inequalities”. Doctors say stronger “governance and up-to-date regulation” are needed in order to protect patient safety.

The BMA represents all UK doctors and medical students. It has just published “Principles for Artificial Intelligence (AI) and its application in healthcare”.

The report notes the potential benefits of AI in healthcare, including better efficiency, diagnosis, and treatment; it is being used in healthcare administration, clinical decision-making, diagnostics, personalised treatment, digital therapies, analysis of population health data, and biomedical research.

However, the BMA cautions that successful use of AI in healthcare depends on its proper testing, and dealing with issues of liability, regulation, and data governance.

This is because, as the doctors’ association points out, the use of AI can involve serious risks, which include potential harms to patient health, exacerbation of health inequalities, and impacts on doctor-patient relationships and productivity. So effective AI use requires careful management to maximise benefits and mitigate risks.

The BMA wants to see AI tools being rigorously tested for safety and efficacy, better governance and regulation that evolves as AI evolves, and clear legal liability frameworks.

The Association wants AI in healthcare to prioritise safety, efficacy, ethics, and equity. Each AI implementation must be rigorously assessed to ensure it doesn’t exacerbate inequalities.

And the BMA wants staff and patients to be given the choice to opt out or dispute AI decisions. Legal liability must ensure that developers are accountable and that doctors can challenge AI decisions.


The use of Artificial Intelligence in warfare

This issue was recently discussed at a meeting of members of the Just Algorithms Action Group.

In recent days we have read reports of AI being used in the ongoing conflict in Gaza. It is reported that Israel used an AI-powered database called ‘Lavender’ to identify human targets and another called ‘The Gospel’ to identify buildings and structures. An experimental facial recognition technology known as ‘Red Wolf’ has also been used to track Palestinians and determine restrictions on their freedom of movement.

The Guardian on 3 April 2024 noted: “Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions (…)” and described the very broad criteria that the AI had been given to select targets, allowing for large numbers of civilians to be killed in the process. The machine was able to generate a vast number of targets in a short period of time with very little human intervention, almost automating the attacks.

Allocating the task of killing people to an automated system offers a route to remove the burden of human complicity and relieve the consciences of those setting the rules. Victims of this system, many of them innocent bystanders, are reduced to something like avatars rather than human beings.

But what about the workers who design these products? What agency do they have? Can they stop this?

A former Google whistleblower has written about his experience. William Fitzgerald played a role in the cancellation of Project Maven, a Google/US military contract to develop AI for military drones. He says that Google is very different from what it was a few years ago: it has tightened its rules on employees’ involvement in politics, which recently allowed it to fire more than 50 employees for ‘disruptive activity’. They were fired for asking for transparency on Project Nimbus, a joint Google/Amazon contract to develop cloud technology for Israel’s government and military. The campaign was led by No Tech For Apartheid, a US-based tech worker movement.

All this gives hope for the future, but at what price? Fitzgerald ends his article with this analysis: “A document that clearly demonstrates Silicon Valley’s direct complicity in the assault on Gaza could be the spark. Until then, rest assured that tech companies will continue to make as much money as possible developing the deadliest weapons imaginable.”

The use of AI in the conflict in Gaza is just one example of a wider problem: there is a global trend towards the increased application of military AI and other advanced technologies in conflict, including reports of AI being used in Ukraine. AI technologies can be used for target manipulation by weapons systems.

In addition, Facebook’s algorithms have been used to create hate speech during Ethiopia’s Tigray civil war, and voice cloning has been used in Sudan’s civil war.

All of these uses of AI directly contribute to violence.

The widely reported use of military AI in Gaza escalates this trend, giving rise to urgent questions for the UK Government.

JAAG recognises that these developments are extremely disturbing. As a Quaker-led organisation, our primary desire is for peace. In particular, we urge the new UK government to do what it can, both on the international scene and in controlling the UK defence industry, to stop civilian populations and infrastructure being used as testing grounds for unregulated technology development with lethal effect.

If you would like to find out more, you may be interested in the UK Campaign to Stop Killer Robots and its petition, which is run jointly with Amnesty International.


The worrying ideologies behind Big Tech – pt 2

Why we should be wary of AI that is shaped by these ideologies

These ideologies all have two things in common. One: they come from a place of enormous (financial) wealth; Two: they assume that tech alone can solve any real-life problem. 

However, it is by no means certain that all the results of AI will benefit society. Instead, AI can result in an even greater imbalance of power and exploitation, and it is typically not geared towards making our societies fairer or more just.

The tech leaders who raise fears about anthropomorphic machines taking control of our civilisation in the distant future do this in order to take attention away from the very real problems we are facing now.

  • Corporations already deploy automated systems which centralise power and increase social inequality.

  • AI is feeding into the move towards political authoritarianism. Machine learning algorithms replicate existing biases and falsehoods. A problem with large language models (such as ChatGPT) is that we can be tempted to view the system as an oracle, trust its output and act accordingly. False information can ruin reputations and ill-founded decisions can be made.

  • People’s rights and privacy are undermined via intensive data gathering and surveillance, as well as disregarding creative and other rights and the wellbeing of vulnerable groups.

It’s striking that those who espouse these ideologies show little concern for problems such as growing social and economic inequality, the rise of authoritarianism, or the centralisation of power.

Alternative views

Fortunately, not all tech entrepreneurs think the same way.

For example, Mustafa Suleyman and Michael Bhaskar wrote The Coming Wave: Technology, Power and the 21st Century’s Greatest Dilemma, which highlights the inevitability of dangers from unconstrained AI and synthetic biology technologies, the near impossibility of containing them, and a possible set of solutions for doing so. Suleyman was a founder of DeepMind, whose mission was to develop artificial general intelligence (AI with human-like adaptability). In The Coming Wave he sets out 10 strategies for the “containment” necessary to keep humanity on the narrow path between societal collapse and AI-enabled totalitarianism. These include increasing the number of researchers working on safety from a few hundred to hundreds of thousands; fitting DNA synthesisers with a screening system that will report any pathogenic sequences; and thrashing out international treaties to restrict and regulate dangerous tech.

Another tech leader, Timnit Gebru, founded the Distributed AI Research Institute (DAIR) which provides encouraging alternatives to big tech. “We believe that artificial intelligence can be a productive, inclusive technology that benefits our communities, rather than work against them. However, we also believe that AI is not always the solution and should not be treated as an inevitability. Our goal is to be proactive about this technology and identify ways to use it to people’s benefit where possible, caution against potential harms and block it when it creates more harm than good.”

JAAG’s view

There is huge potential for new AI tools that are truly good for humanity, especially in the medical field. But, as with most technology, AI can be used to good or bad effect.

JAAG believes that we all – users, tech entrepreneurs and legislators – need to focus on providing transparency; this means being open about when AI is being used and what it is used for. We need AI developers and deployers to be fully accountable.

We need protection against exploitative working practices. A central requirement is regulation that protects the rights and interests of people and thereby shapes the actions and choices of big corporations.

Strongly enforced ethical guidelines, reflecting human well-being and society’s priorities, will be crucial.

We should be building machines that work for us, not adapting society to the wishes of the few elites currently driving the AI agenda. Those most impacted by AI systems, who include the most vulnerable in society, should have their opinions taken into account. 

Instead of worrying so much about imaginary digital minds, we should focus on the current exploitative practices of companies developing AI, which increase social inequality and centralise power.

 <<< Part 1


The worrying ideologies behind Big Tech – pt 1

What are the ideologies behind Big Tech?

New ideologies – alongside neo-liberal and capitalist worldviews – are very influential among ‘Big Tech’ leaders: those who basically decide how AI develops.

JAAG has very strong concerns about these viewpoints, which are explained in Part 2, but first we consider the ideologies themselves.

1 Effective Altruism (EA)

EA advocates using reason and evidence to figure out how to help others as effectively as possible. Key aspects include ‘parity in location’ (choosing to help strangers in other countries with greater needs rather than local people) and ‘earning to give’ (making as much money as possible, e.g. as a tech entrepreneur or banker, in order to donate it to charities believed to do the most good).

EA proponents talk about AI safety, making AI beneficial to humanity, and shaping the future of AI governance. They give grants and career advice to people entering these fields.

EA fuels the AI research agenda, notably Large Language Models (LLMs).

2 Long-termism

This view, often associated with EA, attaches great importance to the interests of future people. ‘Strong long-termism’ posits that future people are just as important as those living today.

Therefore, its adherents judge actions on the basis of their (assumed) impact many years ahead; they advocate boosting economic growth and minimising the risk to humanity posed by rogue artificial intelligence. More and more, EA funds long-termist projects, e.g. preparation for space colonisation and existential threats.

Peter Thiel and Elon Musk are among the founders of OpenAI, whose ostensible mission was to build beneficial artificial general intelligence (AGI). Musk says long-termism is a close match to his philosophy. Five years after its founding, OpenAI released a large language model (LLM) called GPT-3. The ‘race to the bottom’ continues, with LLMs, text-to-image models and the like, without anyone addressing any of the identified harms.

Elon Musk also helped found a long-termist organisation called the Future of Life Institute.

He also donated £1M to the Future of Humanity Institute, affiliated to Oxford University and run by Nick Bostrom. However, Oxford University decided to shut it down in April 2024 after 19 years of operation.

The long-termist view of anthropomorphic machines taking control and destroying civilisation can be seen as AI hype. It takes attention away from the very real problems we face here and now from corporations and people deploying automated systems.

3 Effective Accelerationism (e/acc)

Marc Andreessen, a venture capitalist and a proponent of e/acc, has stated: “We believe any deceleration of AI will cost lives … Deaths that were preventable by the AI that was prevented from existing is a form of murder.” In this context, Andreessen opposes “sustainability”, “social responsibility”, “stakeholder capitalism”, the “Precautionary Principle”, “risk management”, “de-growth”, “socialism” and “depopulation”, among many others.

Andreessen Horowitz’s venture capital investments are backing a ‘healthcare’ start-up called Ciitizen, which wants to make it easy to collect patient data by persuading patients to ask for their data, store it on its platform and share it with third parties. It is incredibly naive to think that this will lead to safe outcomes.

Andreessen has released the ‘Techno-Optimist Manifesto’, in which he writes that technology makes the world not only a better place but also a good place. Andreessen has stated that society cannot progress without (economic) growth, and growth must come from technology: “there is no material problem – whether created by nature or by technology – that cannot be solved with more technology … Give us a real world problem, and we can invent technology that will solve it”.

4 Neo-liberalism

Technology is often used by politicians to fit their political narrative. Governments of all persuasions may espouse the use of AI and other technologies in order to get more power and control, for political manipulation or mass surveillance, etc; in China, AI is already being used for authoritarian purposes.

The Neo-liberal ideology – which aims to dismantle the state and allow maximum freedom to entrepreneurs – gave birth to the UK’s Universal Credit system and Australia’s Robodebt scheme, which have been catastrophic for so many of the least well-off. Governments are also commissioning algorithms to control migration or to limit legitimate protests.

When technology is created with a focus on increasing profit, companies are reluctant to take an ethical approach in its development. Safeguarding measures are costly, as is content moderation. The first people who got fired when Musk took over Twitter were the ethics team working to stop abusive tweets. 

Neo-liberal ideology has created a tiny, unrepresentative billionaire class who decide what technology we should have, who decide which societal problems are worth focusing on, who control the technological narrative, and who can spend millions lobbying governments to see things their way.  

>>> Part 2


Farewell, Data Protection Bill??

The Data Protection and Digital Information Bill – about which JAAG had expressed significant concerns – will not become law in this Parliamentary session because business has been curtailed due to the Prime Minister’s decision to call an early General Election.

The Bill aimed to reform the UK data protection regime following Brexit.

JAAG, like many other civil society organisations, had expressed concern that the Bill would have:

  • removed individuals’ rights not to be subjected to automated decision-making (without any human involvement);

  • limited people’s ability to access information about how their personal data is collected and used;

  • reduced the independence of the Information Commissioner’s Office (ICO);

  • given the Secretary of State undemocratic controls over data protection;

  • given businesses new freedoms to use personal data for commercial activities; and

  • watered down organisations’ obligations to be accountable for the way they use personal data. 

It is, of course, possible that the next UK government will introduce this, or a similar, Bill.

JAAG will continue to scrutinise government plans that may curtail citizens’ rights over the way their personal data is used.


New JAAG Article on Greening Digital

Interest in sustainable IT is growing. On 20 May, the British Computer Society (BCS) published an article by JAAG’s Andrew Nind and Siani Pearson in ITNOW, its magazine for computer professionals (https://doi.org/10.1093/itnow/bwae059), in which they explain how the assessment of digital technologies from a climate perspective can be misleading, and why the use of smaller systems should be considered.

Carbon footprints tend to be calculated on an operational basis, without quantification of the embedded carbon. Furthermore – and rarely assessed numerically – the way in which energy-related emissions are accounted for tends to significantly understate the operational emissions that are physically caused by the sector. A more holistic assessment of the climate impact of ICT, combined with greater accountability for its emissions, could potentially lead to different courses of action: for instance, a greater focus on low energy models and data streamlining and less of a focus on location for new developments.
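As a purely illustrative sketch in Python, with assumed figures that do not come from the article, the difference between an operational-only assessment and one that also counts embedded carbon might look like this for a single hypothetical device:

# Purely illustrative, assumed figures for a hypothetical laptop - not data
# from the ITNOW article.
embodied_kgco2e = 250.0                  # manufacture, transport and disposal
power_draw_kw = 0.03                     # average draw while in use
hours_per_year = 1500
grid_intensity_kgco2e_per_kwh = 0.2
lifetime_years = 4

operational_total = power_draw_kw * hours_per_year * grid_intensity_kgco2e_per_kwh * lifetime_years
holistic_total = operational_total + embodied_kgco2e

print(f"Operational emissions only: {operational_total:.0f} kgCO2e over {lifetime_years} years")
print(f"Including embedded carbon:  {holistic_total:.0f} kgCO2e over {lifetime_years} years")

# With these assumptions the embedded carbon dominates, which is why an
# operational-only footprint can understate the climate impact, and why keeping
# devices for longer or choosing smaller systems can matter more than it first appears.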
