Palantir win lucrative NHS data contract

The Just Algorithms Action Group (JAAG) is a non-profit group working for greater social justice in Artificial Intelligence and algorithmic systems.

That’s why we are extremely concerned at the government’s plans for a Federated Data Platform (FDP) for NHS health data.

We are worried about the lack of transparency in the government’s proposals. In particular,

  • we’re not convinced that there will be proper consent and safeguards to govern how patients’ data will be used and protected,

  • we think it unwise to lock this valuable data set into a system controlled by a monopoly business, and

  • we believe that the £480m could be more effectively spent on direct patient care.

As if this wasn’t enough, yesterday the government announced that it has awarded the lucrative contract to run the FDP – the biggest IT contract in the health service’s history – to Palantir, a US company.

It’s already worrying that the intimate health data of millions of UK citizens will be in the control of a company outside the UK.

What’s even more alarming is that Palantir have a track record that doesn’t inspire confidence in their ability to handle sensitive health information ethically. Palantir is known for its work with intelligence and military agencies, such as the CIA, in the US, UK and elsewhere.

The firm gained a foothold in the NHS in March 2020 when, at the government’s invitation, it began analysing huge amounts of health service data to help with the official response to the unfolding Covid pandemic. Palantir charged the government only a nominal £1 fee.

The government say that: “… there will be clear rules and auditability covering who can access this data, what they can see, and what they can do. … The provider of the software will not … be permitted to access, use or share it for their own purposes”. A separate provider has been given a contract for ‘Privacy Enhancing Technology’, “to enhance the security of data used in the FDP”. An advisory group of health and care stakeholders and patients will help to shape how the FDP is implemented.

However, JAAG shares the concerns expressed by many others about privacy, ethics and human rights:

  • The Doctors’ Association UK is concerned that “Basic issues of informed consent are being ignored, and this deal could lead to a loss of privacy and seriously erode patient trust.”

  • The British Medical Association recently told the then health secretary, Steve Barclay, that it had serious concerns, involving privacy and ethics, about both the FDP in general and Palantir in particular.

  • In 2020, Amnesty International expressed concern at Palantir’s ‘highly questionable’ human rights record, saying “The UK public have a right to know what sort of company is being granted unprecedented access to their health data records, and precisely what Palantir intends to do with it.”

  • David Davis, the former Brexit secretary, told the House of Commons that Palantir’s close relationship with the CIA meant that “it is the wrong company to put in charge of our precious data resource. Even if it behaved perfectly, nobody would trust it.”

 

Further information

Government: https://questions-statements.parliament.uk/written-statements/detail/2023-11-21/hcws57

Amnesty: https://www.amnesty.org.uk/press-releases/usa-concerns-over-tech-giant-palantir-involvement-immigration-enforcement

Guardian: https://www.theguardian.com/society/2023/nov/20/nhs-england-gives-key-role-in-handling-patient-data-to-us-spy-tech-firm-palantir

Telegraph: https://www.telegraph.co.uk/news/2023/11/20/tech-palantir-us-billionaire-peter-thiel-nhs-data-contract/

Foxglove: https://www.foxglove.org.uk/campaigns/palantir-last-chance-petition/


12 challenges for AI governance – is the UK lagging behind?

A parliamentary report urges the UK government to make a “serious, rapid and effective effort to establish the right governance frameworks” on artificial intelligence (AI).

In August 2023, the UK Parliament’s Science, Innovation and Technology Committee published the interim results of its inquiry into AI. They identified twelve AI challenges that policymakers must address (see below).

MPs want an AI bill to be a priority for ministers, but the government has previously indicated that it does not intend to introduce legislation to regulate AI in the short term. The MPs’ report warns that legislation is already needed in the current (2023/24) parliamentary session, because any delay would “risk the UK… falling behind … the European Union and the United States who are pressing ahead with legislation”. A General Election is expected in 2024.

The government should be moving with “greater urgency” said committee chair, Greg Clark, because the UK “is already risking falling behind the pace of development of AI”. MPs fear that “if the UK does not bring in new statutory regulation for three years it risks … other legislation — like the EU AI Act — becoming the de facto standard.”

Only last month, the independent Ada Lovelace Institute pointed out that, on the one hand, the government wants to position the UK as a global hub for AI safety research but, on the other, is proposing no new laws for AI governance and is actively pushing to deregulate existing data protection rules in a way that risks undermining its own AI safety agenda. MPs are concerned at the government’s plans to ask existing regulatory bodies to deal with the twelve AI governance challenges, without giving them any new powers or formal duties.

Twelve challenges for government according to MPs

1. Bias:
AI can introduce or perpetuate biases that society finds unacceptable.

2. Privacy:
AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.

3. Misrepresentation:
AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.

4. Access to Data:
The most powerful AI needs very large datasets, which are held by few organisations.

5. Access to Compute:
The development of powerful AI requires significant compute power, access to which is limited to a few organisations.

6. Black Box:
Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.

7. Open-Source:
Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.

8. Intellectual Property and Copyright:
Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced.

9. Liability:
If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.

10. Employment:
AI will disrupt the jobs that people do and that are available to be done. Policy makers must anticipate and manage the disruption.

11. International Coordination:
AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.

12. Existential:
Some people think that AI is a major threat to human life. If that is a possibility, governance needs to provide protections for national security.

The JAAG view

JAAG is concerned that, without adequate parliamentary scrutiny and appropriate regulation, people in the UK will continue to be at risk from biased decisions, invasions of privacy, and inadequate redress.

What is more, after Brexit, UK legislation in this area is likely to diverge more and more from that of the EU. As a result, UK citizens are unlikely to continue to enjoy the protection currently afforded by the General Data Protection Regulation (GDPR). Nor will they be covered by the forthcoming EU AI Act (not yet enacted in law), which is planned to impose strict safeguards on high-risk AI systems and to ban practices such as mass facial recognition surveillance.

This new EU regulation has been developed with the advice of experts, including on the ethics of AI, as well as with the input of civil society and elected representatives. JAAG wants any future UK legislation to be debated and designed in a similar way. The current UK government does not have a good track record of involving civil society in AI discussions (see our earlier blog about this), but it is vital that UK citizens are not exposed to greater risks from AI than their EU counterparts.

More information

https://committees.parliament.uk/committee/135/science-innovation-and-technology-committee/news/197236/ai-offers-significant-opportunities-but-twelve-governance-challenges-must-be-addressed-says-science-innovation-and-technology-committee/

https://iapp.org/news/a/uk-committee-report-calls-for-accelerated-ai-governance-regime/

https://techcrunch.com/2023/08/31/uk-ai-governance-committee-report/

https://www.theguardian.com/technology/2023/oct/24/eu-touching-distance-world-first-law-regulating-artificial-intelligence-dragos-tudorache