Categories
Uncategorized

The worrying ideologies behind Big Tech – pt 2

Why we should be wary of AI that is shaped by these ideologies

These ideologies all have two things in common: first, they come from a place of enormous (financial) wealth; second, they assume that tech alone can solve any real-life problem.

However, it is by no means certain that all the results of AI will benefit society. Instead, AI can produce even greater imbalances of power and exploitation, and it is typically not geared towards making our societies fairer or more just.

The tech leaders who raise fears about anthropomorphic machines taking control of our civilisation in the distant future do this in order to take attention away from the very real problems we are facing now.

  • Corporations already deploy automated systems which centralise power and increase social inequality.

  • AI is feeding into the move towards political authoritarianism. Machine learning algorithms replicate existing biases and falsehoods. A problem with large language models (such as ChatGPT) is that we can be tempted to view the system as an oracle, trust its output and act accordingly. False information can ruin reputations, and ill-founded decisions can be made.

  • People’s rights and privacy are undermined by intensive data gathering and surveillance, and by disregard for creative and other rights and for the wellbeing of vulnerable groups.

It’s striking that those who espouse these ideologies show little concern for problems such as growing social and economic inequality, the rise of authoritarianism, or the centralisation of power.

Alternative views

Fortunately, not all tech entrepreneurs think the same way.

For example, Mustafa Suleyman and Michael Bhaskar wrote The Coming Wave: Technology, Power and the Twenty-first Century’s Greatest Dilemma, which highlights the dangers of unconstrained AI and synthetic biology technologies, the near impossibility of containing them, and a possible set of solutions for doing so. Suleyman was a co-founder of DeepMind, whose mission was to develop artificial general intelligence (AI with human-like adaptability). In The Coming Wave he sets out ten strategies for the “containment” necessary to keep humanity on the narrow path between societal collapse and AI-enabled totalitarianism. These include increasing the number of researchers working on safety from a few hundred to hundreds of thousands; fitting DNA synthesisers with a screening system that will report any pathogenic sequences; and thrashing out international treaties to restrict and regulate dangerous tech.

Another tech leader, Timnit Gebru, founded the Distributed AI Research Institute (DAIR), which provides encouraging alternatives to Big Tech: “We believe that artificial intelligence can be a productive, inclusive technology that benefits our communities, rather than work against them. However, we also believe that AI is not always the solution and should not be treated as an inevitability. Our goal is to be proactive about this technology and identify ways to use it to people’s benefit where possible, caution against potential harms and block it when it creates more harm than good.”

JAAG’s view

There is huge potential for new AI tools that are truly good for humanity, especially in the medical field. But, as with most technology, AI can be used to good and to bad effect.

JAAG believes that we all – users, tech entrepreneurs and legislators – need to focus on providing transparency; this means being open about when AI is being used and what it is used for. We need AI developers and deployers to be fully accountable.

We need protection against exploitative working practices. A central requirement is regulation that protects the rights and interests of people and thereby shapes the actions and choices of big corporations.

Strongly enforced ethical guidelines, reflecting human well-being and society’s priorities, will be crucial.

We should be building machines that work for us, not adapting society to the wishes of the small elite currently driving the AI agenda. Those most affected by AI systems, who include the most vulnerable in society, should have their opinions taken into account.

Instead of worrying so much about imaginary digital minds, we should focus on the current exploitative practices of companies developing AI, which increase social inequality and centralise power.



The worrying ideologies behind Big Tech – pt 1

What are the ideologies behind Big Tech?

New ideologies – alongside neo-liberal and capitalist worldviews – are very influential among ‘Big Tech’ leaders: those who basically decide how AI develops.

JAAG has very strong concerns about these viewpoints, which are explained in Part 2, but first we consider the ideologies themselves.

1 Effective Altruism (EA)

EA advocates using reason and evidence to work out how to help others as effectively as possible. Key aspects include ‘parity in location’: choosing to help strangers in other countries with greater needs rather than local people; and ‘earning to give’: making as much money as possible (e.g. as a tech entrepreneur or banker) in order to donate it to charities believed to do the most good.

EA adherents talk about AI safety, making AI beneficial to humanity, and shaping the future of AI governance. They give grants and career advice to people entering these fields.

EA fuels the AI research agenda, notably Large Language Models (LLMs).

2 Long-termism

This view, often associated with EA, attaches great importance to the interests of future people. ‘Strong long-termism’ posits that future people are just as important as those living today.

Therefore, its adherents judge actions on the basis of their (assumed) impact many years ahead; they advocate boosting economic growth and minimising the risk to humanity posed by rogue artificial intelligence. Increasingly, EA funds long-termist projects, e.g. preparation for space colonisation and for existential threats.

Peter Thiel and Elon Musk were among the founders of OpenAI, whose ostensible mission was to build beneficial artificial general intelligence (AGI). Musk says long-termism is a close match to his philosophy. Five years after its founding, OpenAI released a large language model (LLM) called GPT-3. The ‘race to the bottom’ continues, with LLMs, text-to-image models and the like, without anyone addressing any of the identified harms.

Elon Musk also helped found a long-termist organisation called the Future of Life Institute.

He also donated £1M to the Future of Humanity Institute, affiliated to Oxford University and run by Nick Bostrom. However, Oxford University decided to shut it down in April 2024, after 19 years of operation.

The long-termist view of anthropomorphic machines taking control and civilisational destruction can be seen as AI hype. It is taking attention away from the very real problems we have here and now from corporations and people deploying automated systems.

3 Effective Accelerationism (e/acc)

Marc Andreessen, a venture capitalist and a proponent of e/acc, has stated: “We believe any deceleration of AI will cost lives … Deaths that were preventable by the AI that was prevented from existing is a form of murder.” In this context, Andreessen opposes “sustainability”, “social responsibility”, “stakeholder capitalism”, the “Precautionary Principle”, “risk management”, “de-growth”, “socialism”, and “depopulation”, among many others.

Andreessen Horowitz venture capital investments are backing a healthcare start-up called Ciitizen, which wants to make it easy to collect patient data by persuading patients to request their data, store it on the company’s platform and share it with third parties. It is incredibly naive to think that this will lead to safe outcomes.

Andreessen has released ‘The Techno-Optimist Manifesto’, in which he writes that technology makes the world not only a better place but also a good place. Andreessen has stated that society cannot progress without (economic) growth, and that growth must come from technology: “there is no material problem – whether created by nature or by technology – that cannot be solved with more technology … Give us a real-world problem, and we can invent technology that will solve it”.

4 Neo-liberalism

Technology is often used by politicians to fit their political narrative. Governments of all persuasions may espouse the use of AI and other technologies in order to get more power and control, for political manipulation or mass surveillance, etc; in China, AI is already being used for authoritarian purposes.

Neo-liberal ideology – which aims to dismantle the state and allow maximum freedom to entrepreneurs – gave birth to the UK’s Universal Credit system and Australia’s Robodebt scheme, which have been catastrophic for many of the least well-off. Governments are also commissioning algorithms to control migration or to limit legitimate protest.

When technology is created with a focus on increasing profit, companies are reluctant to take an ethical approach to its development. Safeguarding measures are costly, as is content moderation. Among the first people to be fired when Musk took over Twitter were the ethics team working to stop abusive tweets.

Neo-liberal ideology has created a tiny, unrepresentative billionaire class who decide what technology we should have, who decide which societal problems are worth focusing on, who control the technological narrative, and who can spend millions lobbying governments to see things their way.  



Farewell, Data Protection Bill??

The Data Protection and Digital Information Bill – about which JAAG had expressed significant concerns – will not become law in this Parliamentary session because business has been curtailed due to the Prime Minister’s decision to call an early General Election.

The Bill aimed to reform the UK data protection regime following Brexit.

JAAG, like many other civil society organisations, had expressed concern that the Bill would have:

  • removed individuals’ right not to be subjected to automated decision-making (without any human involvement);

  • limited people’s ability to access information about how their personal data is collected and used;

  • reduced the independence of the Information Commissioner’s Office (ICO);

  • given the Secretary of State undemocratic controls over data protection;

  • given businesses new freedoms to use personal data for commercial activities; and

  • watered down organisations’ obligations to be accountable for the way they use personal data. 

It is, of course, possible that the next UK government will introduce this, or a similar, Bill.

JAAG will continue to scrutinise government plans that may curtail citizens’ rights over the way their personal data is used.