Categories
Uncategorized

Ethical dilemmas posed by Artificial Intelligence

By Siani Morris

Long term worries about the impact of AI are being raised in the media. Yuval Noah Harari warns, for example, about the creation of a completely new culture (because AI can create new cultural artefacts), fake intimacy and the destruction of democracy.[ii] Ex-Google chief Dr Geoffrey Hinton left his job[iii] due to concerns about bad actors and existential risks due to things more intelligent than us taking control. The head of OpenAI, who developed GPT, is raising alarm more widely about the potential impacts of AI and calling for regulation.[iv]

Hype

It could be argued that most Big Tech leaders and AI developers say they’re aware of the risks associated with AI. For example, in March 2023, the Future of Life Institute published a letter[v] –signed by 30,000 people including Elon Musk[vi] [vii], Steve Wozniak (Apple co-founder), Dr Geoffrey Hinton (ex-Google AI expert) and Yuval Noah Harari (academic historian and bestselling author). They call on major AI developers to agree on a six-month pause of any systems more powerful than GPT-4 [viii] and to use that time to develop a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter referred to the risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control.

So we might assume that Big Tech leaders and AI developers will make sure nothing goes wrong.

In fact, the situation is complex and nuanced. Many Big Tech leaders do indeed stress the danger of anthropomorphic machines taking control and destroying civilisational; but this can be seen as AI hype designed to mask – and take attention away from – the very real problems that people are experiencing here and now from automated systems.

Ethical shortcomings of AI systems

The (non-)ethical behaviour of AI-assisted machines has been in the news not only from AI developers raising the alarm but also due to problematic cases that have come to light, such as bias in judicial decision-making[ix], bias in recruitment[x], chatbots that started to use racist language[xi], etc. Companies like Amazon are taking control of their customers’ (and other people’s) data and using it. More fundamental questions have been asked about whether any AI could ever be trusted with decisions that impact human life.[xii] [xiii] [xiv]

So, the rapid development of AI systems brings with it hitherto unthought-of ethical dilemmas.

It might technically be possible to overcome some of these if we code some of our ethical values into some of the AI systems and hope it alters their behaviour as we would wish. We could perhaps think of an evolutionary process allowing some ethical-like behaviour to emerge under favourable conditions, or to favour machines that might want to continue to exist, or it might happen that a machine could develop (from some kind of a reward system) a new sub-goal related to itself continuing to exist. But this might sometimes contradict what might be in the best interests for humans. The ethical problem remains.

An AI ethical vacuum

AI may be deployed in ways that involve privacy and security breaches, discriminatory outcomes and an adverse impact on human autonomy. For example, training data for AI systems may be obtained via unauthorised means: not only is there a privacy issue, but creative rights can be exploited. There are currently legal challenges to OpenAI about the way that it obtains and uses other people’s intellectual property. There are also issues with the truth of the outputs of the system: not only do algorithms replicate the biases of the training data, but there are issues concerning falsehoods being replicated and difficult to correct.

A central problem with large language models like ChatGPT is that humans can treat these systems as some kind of Oracle and mistake the system’s output for meaningful text and act accordingly. Information given may not be true and reputations may be ruined. Relevant actors within the environment in which the AI is embedded should anticipate and take account of the social impacts of technology post deployment but unfortunately this is often not done in a timely way: consideration should be given to job disruption, social justice, sustainability, and the effects on vulnerable groups, among other aspects. Many local councils in UK have had to abandon the use of facial recognition systems because of ethical and legal issues.

The way in which issues with algorithms within automated systems are dealt with, whether AI systems or other types of automated system, can be paramount in the avoidance (or otherwise) of harm.  For example, there is an ongoing miscarriage of justice that is currently in the news in which hundreds of sub-postmasters in the UK were wrongfully prosecuted for theft, false accounting and/or fraud. In 1996, International Computers Limited (ICL) began working on a computer accounting system, called Horizon, for the Post Office and the UK government. By 1999, ICL was part of Fujitsu, and Horizon was introduced, however it wrongly detected the existence of financial discrepancies at multiple post office branches. Investigations and legal cases have been held, with some compensation given, but the matter is still not completely resolved. Meanwhile, many hundreds of the sub-postmasters’ lives have been very adversely affected.

In June 2020, four working single mothers successfully defeated a court appeal by the Department of Work and Pensions following considerable hardship to them due to loss of Universal Credit income arising from a design failure (related to pay date clashes) within the automated system used in Universal Credit and the refusal to fix this.

Also in 2020, the exam regulator Ofqual downgraded almost 40% of the A-level grades assessed by teachers, which culminated in a government U-turn and the system being scrapped.

So, what should concern us is not so much the morality of AI but rather about the morality of the companies or other entities that develop and control AI; it is they who need to be obliged to act ethically.

Instead of worrying about imaginary digital minds we should focus on the current exploitative practices of companies that develop AI systems which increase social inequality and centralise power.[xv]

We should be building machines that work for us, not adapting society to the wishes of those few elites currently driving the AI agenda.

Those most impacted by AI systems, who include the most vulnerable in society, should have their opinions taken into account. 

A better approach – ethical AI

It Is vital that governments regulate to protect individuals’ rights and interests, and thereby shape the actions and choices of corporations.

Humans need to reach a shared understanding as far as is possible in a given context, based around social good and ethical standards.

A positive approach to ensuring that AI always operates in line with ethical standards would incorporate several elements. As a start, there would need to be a legal framework ensuring:

  • protection against exploitative working practices

  • strongly enforced ethical guidelines reflecting society’s priorities, and

  • transparency (including being transparent about the fact that AI is being used), so that people know what is going on and are able to make choices accordingly

  • accountability (of developers and deployers) , and  

  • humans in the loop so that the meaning and impact of automated decisions can be assessed and reviewed and it is not just a case of the computer saying ‘no’, particularly where the decision would significantly or adversely affect someone.

But how could this be achieved?

Ethical guidelines

There are different approaches to ethics – rules-based versus outcomes-based – and there are cultural sensitivities to take into account.

AI ethical standards are still quite immature, but there is a plethora of AI ethical guidelines. To quote just one example: ACM[xvi] recently proposed nine principles for responsible algorithmic systems: legitimacy and competency, minimising harm, security and privacy, transparency, interpretability, maintainability, contestability, accountability and limiting environmental impacts.

Ethical ecosystem

For any standards to have effect, they would need to be overseen and enforced. The extent to which the ethical impact of an AI system would be assessed, and the degree of properties such as transparency would need to be consistent with the AI system’s impact and associated public policy.

Good AI governance should be deployed across organisations, with strong oversight.[xvii]

Other types of AI systems, such as those causing exploitation, should just not be developed.

European legislators are finalising a new AI Act with embedded ethical protections; this will not apply to the UK, which is taking a much lighter touch approach to regulation since Brexit. The proposed EU AI Act is designed to “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU”. It will “provide risk-based, legally binding rules for AI systems that are placed on the market or put into service in the Union”.

Ethical working practices

Ethical design should be employed at all stages of AI development and deployment[xviii]; e.g. in the deployment of AI, any potentially harmful effects should be mitigated, so[xix] if jobs are lost through AI[xx], society would need to set up rewarding activities for those put out of a job, and protect their income.

Conclusion

There is a huge potential for new AI tools that are truly good for humanity, especially in the medical field.  But technology can be used for good or for ill. For example, though phone tapping exists we probably wouldn’t like to do without the telephone; we rely on legislation based on ethical standards to protect us. The same could apply for AI.

See also the companion blog post “An ethical framework for AI and humanity?”


Notes

[i]             AI or Algorithms?

It is not just AI that is the problem here. When I recently asked for a definition of AI in chat GPT, I received the result ‘A field of computer science exploring the creation of intelligent machines capable of performing tasks that typically require human intelligence.’
But some issues are in common with a broader category relating to usage of algorithms to make decisions affecting people. In computing, an algorithm is a procedure for producing a defined result (e.g. performing a calculation), as distinct from a specific set of instructions (program) implementing that algorithm.

In JAAG we extend the term to encompass the whole system of procedures and rules followed by people as well as computers for a given purpose. We focus, in particular, on systems by which an individual is judged to merit receiving some kind of benefit, service, or privilege, or of being subject to some kind of charge or penalty. As harm can result from their deployment, even now, we should take care not to narrow down unduly the focus of consideration.

[ii]            Yuval Noah Harari argues that AI has hacked the operating system of human civilisation (economist.com)

[iii]           AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google – BBC News

[iv]            Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence – BBC News

[v]             Pause Giant AI Experiments: An Open Letter – Future of Life Institute

[vi]            Elon Musk among experts urging a halt to AI training – BBC News

[vii]           Musk co-founded OpenAI in 2015 but resigned from the board in 2018 and subsequently failed to take over the company when his bid was rejected. Now it is in partnership with Microsoft. Google announced a similar AI application (Bard), after ChatGPT was launched, fearing that ChatGPT could become an oracle threatening the need for their search engines.

[viii]          GPT-4 stands for Generative Pre-trained Transformer 4 and is a multimodal large language model (LLM) created by the startup OpenAIreleased on March 14, 2023, that is the next version of the previous (GPT-3.5 based) ChatGPT and can take images as well as text as input.

[ix]           https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

[x]            https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

[xi]           https://www.businessinsider.com/microsofts-tay-has-been-shut-down-after-it-became-racist-2016

[xii]           Krishnan, A., 2009. Killer robots: legality and ethicality of autonomous weapons. Ashgate Publishing Ltd.

[xiii]          Bjorgen, E. et al, 2018. Cake, death, and trolleys: dilemmas as benchmarks of ethical decision-making. In AAAI/ACM conference on artificial intelligence, ethics and society, pp. 23–29.

[xiv]          Misselhorn, C., 2018. Artificial morality. Concepts, issues and challenges. In Society, Vol. 55, No. 2, pp. 161–169.

[xv]           People like Timnit Gebru and Emily Bender are drawing attention to these issues. See for example their Stochastic Parrot paper On the Dangers of Stochastic Parrots | Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency and subsequent article Statement from the listed authors of Stochastic Parrots on the “AI pause” letter (dair-institute.org) and Twitter thread: @emilymbender@dair-community.social on Mastodon on Twitter: “Okay, so that AI letter signed by lots of AI researchers calling for a “Pause [on] Giant AI Experiments”? It’s just dripping with #Aihype. Here’s a quick rundown. >>” / Twitter Also: A misleading open letter about sci-fi AI dangers ignores the real risks (substack.com)

[xvi]          This was based upon the Association for Computer Machinery’s Code of Ethics and Professional Conduct – This is the code of ethics that has been put together in 1992 by the Association for Computer Machinery and updated in 2018. The Code is designed to inspire and guide the ethical conduct of all computing professionals, including current and aspiring practitioners, instructors, students, influencers, and anyone who uses computing technology in an impactful way. Additionally, the Code serves as a basis for remediation when violations occur. The Code includes principles formulated as statements of responsibility, based on the understanding that the public good is always the primary consideration.

[xvii]         cf. IAPP Privacy and AI Governance Report

[xviii]        See for example UK Government Data Ethics Framework (publishing.service.gov.uk)

[xix]          See for example: Understanding artificial intelligence ethics and safety – A guide for the responsible design and implementation of AI systems in the public sector by David Leslie from the Alan Turing Institute.

[xx]           Or rather when…for example, BT estimate up to a fifth of its jobs will be lost due to AI by the end of the decade: BT to cut 55,000 jobs with up to a fifth replaced by AI – BBC News

Categories
Uncategorized

An ethical framework for AI and humanity?

By Siani Morris

Ethics

Where does ethics come from? It’s made up by people in a cultural context; there is no standard upon which everyone would agree.

The ethics of decision-making systems is a live topic, and well-funded institutions are already conducting projects in this area.

There is plenty of background information about ethical AI already produced[i], with sets of broadly common principles that include privacy, transparency, accountability, sustainability, etc.

Broadly speaking, these ethical frameworks, designed to influence company behaviour, can be seen as a broadening out of the already existing privacy accountability governance frameworks, although there are additional aspects to be considered such as technical means to address bias.

But there is also currently quite a lot of ‘ethics washing’ going on, especially from large companies trying to water down ethical AI principles and regulation.[ii]

So some ethical frameworks will be more trustworthy than others, depending on an individual’s viewpoint and the background of the people producing them: for example, EU citizens might rate more highly the outputs of the EU ethics expert group influencing the upcoming EU AI Act[iii]

Key elements of an ethical process

In order to devise an ethics for ‘doing the right thing’, with respect to the behaviour of an accountor (such as the developer or deployer of an AI system), criteria need to be specified for judging ‘good’, ‘bad’, ‘right’, etc.  and then the ethical quality of the accountor’s actions needs to be judged according to these criteria, and if there are shortcomings, reasons need to be provided.

A core part of the accountability involved is to determine and clarify the rights and obligations of actors, including clearly allocating privacy and security responsibilities across supply chain actors and other parties involved in providing the service.

It is useful to distinguish between

  • ethics inside AI-based systems,

  • the ethics of businesses or people developing AI-based systems, and

  • the ethics of AI deployments.

Although it is important for all three aspects to be ethical, the ethical problems involved in these three aspects may be solved using different techniques. For instance, the question of ethics inside AI seems more abstract, and current efforts fall short of expectations[iv], especially as the newer types of AI system no longer directly encode heuristics or semantics. On the other hand, the ethics of people developing AI and the ethics of deploying AI in a particular context may be more amenable to being solved using currently available techniques. There are already some techniques that have been recommended to develop AI in an ethical manner. [v] [vi]

The deployment of AI-based systems requires the analysis of development techniques and social contexts, as well as the deploying the organisation’s own practices.

There are different approaches to ethics that may be taken within each of these:

  • rules-based versus outcome- based,

  • virtue ethics and

  • cultural sensitivities to be taken into account.[vii]

For example, Aristotelian human well-being / flourishing could be made central to an ethical approach when developing AI systems (as with IEEE[viii], where associated metrics are also provided).

Ethical decision making

In every aspect of human activity, different ethical approaches can be taken. Broadly speaking, these divide into

  • teleological approaches (an ethics of what is good – for example, utilitarianism) and

  • deontological approaches (an ethics of what is right – for example, using Kant’s categorical imperative).

Depending on which ethical approach is taken, the answer about the right course of action might be different.

  • A teleological decision looks at the rightness or wrongness based on the results or the outcomes of the decision.

  • A deontological decision instead considers the moral obligations and/or duties of the decision maker based on principles and rules of behaviour[ix].

Ethics in business

The ethical dimensions of productive organisations and commercial activities have been studied since the 1970s within the field of business ethics, and a number of different approaches can be taken corresponding to this[x], ranging from Milton Friedman’s view of corporate executives’ responsibility generally being to maximise profits while conforming to basic rules of the society to the opposing idea of corporate social responsibility (actions by businesses that are not legally required and intended to benefit other parties).

In order to use these ethical approaches in a practical perspective by embedding ethics within business operations, one approach is to try alternative approaches and see the extent to which there is agreement. This may sound simple, but actually it is not necessarily an easy process.

Let us consider comparing just one form of deontological judgment with one form of teleological judgment. If the result were that you would be doing the wrong thing and getting the wrong results (poor outcome), it might seem fairly obvious that a project fitting in that category should not go ahead, just as it needs little thought that a project doing the right thing and getting a good outcome is perfectly fine to go ahead. However, if you regularly deliver highly on the deontological spectrum but poorly on the teleological spectrum, you may well go out of business as it just might not be sustainable financially to continue. Conversely, if delivering highly on the teleological spectrum but low on the deontological spectrum, the drive for profit is taking precedence over consideration about what is (or is not) the right thing to do. Also, there is a zone of ethical nuances (especially along the boundaries between these) where the conclusion is not clear. Furthermore, when there is an economic slump, things can be perceived to be ethically questionable that would not have been before, so this ethical nuances zone can change.

Moreover, there tend to be different kinds of ethical perspectives for different types of organisations. For instance, guardian roles (such as regulators) seem to favour a deontological culture, whereas commercial institutions seem to favour a teleological culture and other actors (such as activists and technologists) may favour virtue ethics roles. Broadly speaking, governmental policy makers have outcome-based ethics, like commercial organisations, but are interested in economic and developmental outcomes at the national or regional level rather than the organisational level. Citizens have their own ethical framework.

These different ethical frameworks and potentially conflicting objectives can make designing ethical codes of practice for the configuration and commercial use of new technologies difficult. The code of practice could be a failure if it is unacceptable to any of these types of stakeholder. It must provide adequate protection of individuals’ rights and interests. It must also give guidelines and assist with compliance to laws and regulations, as well as being practical for information technologists to comply with, and allowing new innovative mechanisms to achieve their potential for driving socially and economically beneficial applications. In addition, it is beneficial to take into account a more nuanced understanding of “harm” including risk, potential harm, and forms of harm other than just physical and financial. In the data protection sphere, this is somewhat accounted for within the notion of Data Protection Impact Assessments (DPIAs), which extend the standard practices of security risk analysis to examine also harms to the data subject with regard to a proposed activity. In carrying out this assessment the summation of the harm across society needs to be properly evaluated and justified, as otherwise there is a risk that the potential harm to a single individual, as measured by an organisation that has a particular activity in mind, will typically tend to be overridden by other concerns.

There are different examples of ethical frameworks for decision making in contexts particularly influenced by recent technological development from different countries.

But many initiatives that purport to define principles and frameworks for the ethical development and deployment of AI are not truly independent, being sponsored by corporations.

Whose ethics?

But whose ethics would be used in an AI system?  There is no standard ethical framework universally accepted. Even if different nations or parties were to agree to one, this would not be enforceable and people could have a false sense of security, thinking things were fine. The issue of companies who ultimately control usage of AI and their selfish motives would remain.

How can we account for the subjectivity of ethical judgment? Solutions might include allowing customisation or tailoring, having inputs from the different parties involved or a process protecting the rights and interests of the less powerful (as suggested for algorithmic impact assessment procedures[xi]). Another approach tried was the BCS framework[xii], which was a meta-framework that allowed different ethics approaches to be slotted in, although the realisation of this system was not completed. Discussion amongst affected and involved parties should be part of the process: there are some current promising initiatives such as the Open Data Institute’s [xiii]Ethics Canvas, and others of a similar nature, which provide a framework for stakeholder discussion and information capture.

Some of these aspects and principles are easier to embed into AI systems than others, for example dependent upon the nature of the AI system (rules-based or machine learning (ML), etc.). Virtue ethics may be used as a regulating force as goal-directed behaviour becomes internally set. Techniques influenced by ethical considerations include the provision of flowcharts to encourage developers to think about ethical aspects at appropriate points, the provision of a common vocabulary for different involved parties, the education of developers, giving people impacted a choice and a say, banning certain types of development absolutely, etc.

We should also put in place mechanisms to address in a more ethical way the consequences of AI, such as introducing Universal Basic Income at a decent level, as jobs come more under attack, and making sure that the money to pay for this will come from the people who are benefitting most from the introduction of the new technology.

Substantive Basis for Ethics

How can we provide a substantive basis for ethical judgment? Most ethical frameworks do not provide a substantive basis for ethical judgment but rather a list of values for discussions, and this can prove inadequate in terms of independent assessment of a proposal. Domain by domain, it might be possible to take an analogous approach to that described by Gary Marx for the surveillance domain (as an example of new types of activity around enhanced surveillance possible using new technology)[xiv]; he shows how societal norms can inform an ethics for surveillance. To do this, he asserts that the ethics of a surveillance activity must be judged according to the means, the context and conditions of data collection, and the uses or goals; he suggests 29 questions related to this. The more one can answer these questions in a way that affirms the underlying principle (or a condition supportive of it), the more ethical the proposed activity would be deemed to be. In this case, four conditions are identified that, when breached, are likely to violate an individual’s reasonable expectation of privacy, respect for the dignity of the person is a central factor and emphasis is put on the avoidance of harm, validity, trust, notice, and permission when crossing personal borders.

We need ethical frameworks for AI

Nevertheless, despite these problems, and any framework being potentially some kind of straightjacket that is kicked against, if we want AI to exist in our world then we do need to have principles and frameworks that will try to control its usage, as the alternative is even worse. 

Overall, the recent document produced by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems on Ethically Aligned Design provides a very good consideration from a ‘neoclassical’ engineering perspective of how both classical Western and non-Western ethical approaches may be taken into account to benefit AI development and deployment.[xv]

One example solution: Quaker Ethics

As an organisation influenced by (although not centrally tied to) Quaker principles, JAAG considers a good starting point for an ethical framework to be Quaker business ethics[xvi], whose central principles are truth and integrity (including speaking truth to power); justice, equality and community; simplicity; peace; the light within/individual moral compass.

With regard to AI ethics, there are several aspects that should be considered in a given context; these include the truth of the outputs of the system (including replication of bias and ability to detect and correct inaccurate information); social justice and effects on vulnerable groups; sustainability; and not contributing to excessive inequalities of wealth distribution. In addition, certain activities should just not be allowed, including those that threaten peace and community.

Oversight

And who will enforce and judge this properly? Much can happen ‘under the radar’, regulators are typically very under-resourced and there will be some bad (or otherwise-focused) actors not willing to engage at all. Licensing and trust marks issued by trusted third parties which check what is actually happening in the AI black boxes and beyond could be useful. But ultimately who funds and who guards the guardians?

Brent Mittelstadt’s research[xvii] in “ethical AI” has shown that principled self-regulation, via agreed ethical principles, cannot be relied upon to be effective. These findings match our own experience.

Based on long experience with assessing safety-related systems, we are proposing not only standardisation and independent certification but also the use of protections at each relevant stage of a system’s lifecycle.

A future possibility: Automating techniques for ethical checking

It is interesting to speculate how the cause of a problem may also be used to help with its solution. AI-based systems are currently deployed through an exhaustive investigation of physical and digital artefacts by human beings. However, it is well within the realm of possibility that techniques to analyse digital artefacts and draw conclusions about the AI creation and deployment process would soon be available.[xviii]  In our opinion, there will be increasing efforts to automatically identify digital artefacts that could be used to infer ethically problematic practices. Initially, these might be limited to generating reports which are then evaluated by a human. However, as with as automated testing and natural language generation, or theorem proving, computer science has a history of automating small tasks that build on each other to perform a complex operation. This may result in more sophisticated tools that may eventually be able to perform deeper analysis on a company’s artefacts to infer ethical values, just as the ISO 9000 series of certifications allows us to infer product quality from process quality. We would speculate that simpler tasks such as checking whether datasets have been de-biased would be the first milestone in this endeavour, with other practices following suit. Automation thus functions as a marker of human ingenuity, where we are able to dissociate the task from the intelligence required to do it, by breaking it down into simpler objectively evaluated parts.

See also the companion blog post “The Ethical dilemmas posed by AI”

Notes

[i]              See for example the list available from the Institute for Ethical AI and Machine Learning: EthicalML/awesome-artificial-intelligence-guidelines. This repository aims to map the ecosystem of artificial intelligence guidelines, principles, codes of ethics, standards and regulation.

[ii]              An Evaluation of Guidelines – The Ethics of Ethics – Thilo Hagendorff’s research paper that analyses multiple ethics principles being proposed by different parties.

[iii]             European Commission’s Guidelines for Trustworthy AI – The Ethics Guidelines for Trustworthy Artificial Intelligence (AI) is a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This independent expert group was set up by the European Commission in June 2018, as part of the AI strategy announced earlier that year.

[iv]                        Nallur, V., 2020. Landscape of Machine Implemented Ethics. In Science and Engineering Ethics, Vol. 26, pp.2381–2399. https://arxiv.org/abs/2009.00335

[v]                         Heidari, H. et al, 2018. A moral framework for understanding of fair ML through economic models of equality of opportunity.  arXiv:1809.03400.

[vi]                        Anderson, M. et al, 2019. A value-driven eldercare robot: Virtual and physical instantiations of a case-supported principle-based behavior paradigm.  In Proceedings of the IEEE, Vol. 107, No. 3, pp. 526–540. ; Gebru, T. et al, 2020. Datasheets for Datasets, arXiv:1803.09010 [cs]. Available at: http://arxiv.org/abs/1803.09010 

[vii]                       Harris, I: Commercial Ethics: Process or Outcome? Gresham Lecture, London, 6 Nov (2008).

[viii]                      IEEE’s Ethically Aligned Design – A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems that encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. See for example ead1e.pdf (ieee.org).

[ix]                        More information about the various different sub-approaches and philosophers in such a taxonomy of commercial ethics is given in Harris’s lecture.

[x]                         Moriaty, J.: Business Ethics. Stanford Encyclopedia of Philosophy November (2016)

[xi]                        https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/

[xii]                       Harris, I., Jennings, R.C., Pullinger, D., Rogerson, S., Duquenoy, P.: Ethical assessment of new technologies: a meta-methodology. J. Inf. Commun. Ethics Soc. 9(1), 49–64 (2010). Emerald Group Publishing / Google Scholar

[xiii]                      https://theodi.org/article/data-ethics-canvas/

[xiv]                     Marx, G.T.: An ethics for the new surveillance. Inf. Soc. 14(3), 171–186 (1998) / Google Scholar

[xv]                      ead1e.pdf (ieee.org)

[xvi]                     Quaker Business Ethics Principles: The Quakers and Business Group (hubble-live-assets.s3.amazonaws.com)

[xvii]                    Principles alone cannot guarantee ethical AI by Brent Mittelstadt, Oxford Internet Institute and Turing Institute, 2019, and Counterfactual explanations without opening the black box: automated decisions and the GDPR by Sandra Wachter, Brent Mittelstadt, & Chris Russell.

[xviii]                   Automation: An Essential Component Of Ethical AI? Nallur, V.; Lloyd, M.; and Pearson, S. In ICT, Society and Human Beings, volume 15, pages 229–232, July 2021