Just ten years ago, the prospect of finding work as an artificial intelligence ethicist would have sounded more like science fiction than a career path. And yet, in early 2019, KPMG published a list of the top five AI hires companies would need to succeed that year, with AI ethicist coming in at number five.
Indeed, 2019 was quite the year for AI ethics as a phenomenon: on the one hand, it reached unexpected prominence in policy debates as the silver bullet to govern AI without stifling innovation; on the other, a backlash against ethics guidelines as a ruse to avoid regulation gave rise to the term 'AI ethics washing.' To get some clarity on the role of ethics in the governance of AI, let's begin by unpacking some of the key concepts.
First of all, what do we actually mean by 'AI ethics'? Here, the focus is on the ethics of how humans design, develop and deploy AI systems. This means that we will not discuss what is known as 'machine ethics': the project of building moral decision-making into machines themselves, familiar from trolley-problem debates about self-driving cars (see the machine ethics resources in the bibliography below).
The AI ethics boom
To understand the role of ethics guidelines in AI governance, it's instructive to begin with some numbers. A 2019 study by Anna Jobin et al., entitled The global landscape of AI ethics guidelines, “identified 84 documents containing ethical principles or guidelines for AI[...] with 88% having been released after 2016.” The inventory of AI ethics guidelines compiled by Algorithm Watch lists over 160, and is constantly being updated.
Another study of the AI ethics landscape is the Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI from Harvard’s Berkman Klein Center. In this study, the authors analyse 36 sets of AI principles, and note a ‘convergence’ around 8 key themes shared by the various sets of principles:
- Privacy
- Accountability
- Safety and Security
- Transparency and Explainability
- Fairness and Non-discrimination
- Human Control of Technology
- Professional Responsibility
- Promotion of Human Values
Although they note that these principles could be seen to represent a normative core, they caution against drawing any overly optimistic conclusions from this apparent convergence. As they put it:
On its own, a set of principles is unlikely to be more than gently persuasive. Its impact is likely to depend on how it is embedded in a larger governance ecosystem, including for instance relevant policies (e.g. AI national plans), laws, regulations, but also professional practices and everyday routines.
Indeed, while we could see this apparent convergence as a sign that we’re moving towards some common ethical foundation, we need to acknowledge that it is far easier for companies and governments to sign up to relatively vague ethical principles than it is for them to change business practices or enforce restrictive laws.
If principles committing companies to fairness and non-discrimination had serious legal consequences, we’d surely have far more heated disagreement about precisely how to define these principles (there are, after all, 21 different definitions of fairness in machine learning). Regarding companies committing to 'privacy' as a principle, one can think of the joke that while Facebook say that they “take our privacy seriously,” in reality they “take our privacy, seriously.”
On a serious note, concepts such as fairness in machine learning are not only contentious; researchers have shown that several widely used definitions of fairness are mathematically incompatible, so that, except in special cases, no single system can satisfy them all at once. This means that two companies can both be committed to 'fairness' in their AI systems in mutually conflicting ways, as the sketch below illustrates.
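To make the tension concrete, here is a minimal Python sketch on invented toy data (the groups, 'qualification' labels and selection rule are all hypothetical). The selection rule is built to satisfy demographic parity, i.e. equal selection rates across the two groups, and because the groups' underlying qualification rates differ, it cannot also satisfy equal opportunity, i.e. equal true positive rates:

```python
# A minimal sketch on hypothetical toy data: equal selection rates across two
# groups (demographic parity) force unequal true positive rates (equal
# opportunity) whenever the groups' base rates differ.

def rates(labels, preds):
    """Return (selection rate, true positive rate) for one group."""
    selection_rate = sum(preds) / len(preds)
    preds_for_qualified = [p for p, y in zip(preds, labels) if y == 1]
    tpr = sum(preds_for_qualified) / len(preds_for_qualified) if preds_for_qualified else 0.0
    return selection_rate, tpr

# Group A: 8 of 10 people are truly qualified; Group B: only 2 of 10 are.
labels_a = [1] * 8 + [0] * 2
labels_b = [1] * 2 + [0] * 8

# A selection rule enforcing demographic parity: pick exactly 5 people from
# each group, always taking the truly qualified candidates first (the most
# accurate selection possible under that constraint).
preds_a = [1] * 5 + [0] * 5   # 5 of the 8 qualified people in A are selected
preds_b = [1] * 5 + [0] * 5   # both qualified people in B, plus 3 others

sel_a, tpr_a = rates(labels_a, preds_a)
sel_b, tpr_b = rates(labels_b, preds_b)

print(f"Group A: selection rate {sel_a:.1%}, true positive rate {tpr_a:.1%}")
print(f"Group B: selection rate {sel_b:.1%}, true positive rate {tpr_b:.1%}")
# Output: both groups have a 50% selection rate, but qualified people in
# Group A are selected only 62.5% of the time versus 100% in Group B.
```

Flip the constraint around, selecting every qualified person in both groups, and the selection rates diverge instead; neither choice is 'the' fair one, which is exactly why a bare commitment to fairness settles so little.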
In the idea that AI should “promote human values” we also see a problem: all values are, arguably, human values. The values driving the worst atrocities of human history were ‘human values,’ so it’s hard to take solace in such a commitment, especially if it comes from companies who are notorious for putting profit ahead of human and environmental welfare.
It certainly seems clear that there was an uptick in the production of ethics guidelines for AI beginning in 2016, but what caused this? It's hard to say with certainty, but one possibility is that the enthusiasm (and funding) for drafting ethical principles for AI coincided with the fear of governments introducing regulation for AI.
At this time, AI was emerging as the new buzzword (following the reign of 'Big Data'), but the public were beginning to hear more and more about scandals involving AI technology. In 2015, for example, controversy had erupted after Google's Photos app tagged photos of two Black people as 'gorillas.' 2016 also saw the release of Cathy O'Neil's seminal investigation of the negative societal impact of algorithmic systems, Weapons of Math Destruction.
Indeed, this role of ethics in dodging regulation is precisely what Rodrigo Ochigame claimed in an inflammatory article about the role of Big Tech in promoting the 'new discipline' of AI ethics:
To characterize the corporate agenda, it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving “ethical principles” and “responsible practices” as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discourse of “ethical AI” enables precisely this position.
This idea of ethics guidelines serving as a means to dodge regulation is what gave rise to the accusation of 'ethics washing.'
The accusation was most famously made by Thomas Metzinger, a philosophy professor and member of the European Union's High-Level Expert Group on AI, who charged the group with ethics washing because the 'Ethics Guidelines for Trustworthy AI' it produced had been watered down by industry dominance within the group.
Ethics as a practice vs. ethics as a governance mechanism
At this point it may be helpful to distinguish between two levels at which we can look at 'AI ethics.' On the one hand, we can look at things at the team level, where people developing AI systems engage in processes of ethical reflection to improve their products.
This could involve developers working with ethicists (whether as part of the team or as consultants), or using one of the variety of technical tools that have been developed to assess and improve fairness in AI systems (notable examples are IBM's AI Fairness 360 toolkit and Google's What-If Tool). Insofar as such practices encourage ethical reflection and thereby improve products, they can hardly be a bad thing in themselves.
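To give a concrete sense of what these tools do, here is a rough sketch of a dataset-level check using IBM's AI Fairness 360 library. This is a sketch under assumptions, not a definitive recipe: the column names and toy data are hypothetical, the aif360 and pandas packages are assumed to be installed, and the library's documentation should be consulted for the current API.

```python
# A rough sketch of a dataset-level fairness check with AI Fairness 360.
# Toy data and column names are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical hiring data: 'sex' is the protected attribute (1 = privileged
# group by assumption), 'hired' is the favourable outcome (1 = hired).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [7, 5, 8, 6, 7, 5, 8, 6],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# Ratio of favourable-outcome rates (unprivileged / privileged); values well
# below 1.0 are commonly read as a warning sign of disparate impact.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Useful as such checks are, they operate entirely within the frame of 'measure and mitigate'; they cannot tell you whether the system should be built at all.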
On the other hand, we can look at the phenomenon of ethics at an institutional level, where it's supposed to function as a governance mechanism. Many of the biggest tech companies are signatories to numerous sets of ethical guidelines, and yet routinely roll out AI products that cause harm. Is there any real consequence when a company violates a principle from a set of ethical guidelines it signed onto? Or are we just going to hear the tired response that they will "work to do better"?
Moreover, there are plenty of governments that have signed up to nice-sounding guidelines, such as the OECD Principles on AI, and yet are actively investing in and rolling out obviously unethical AI systems such as live facial recognition in public spaces.
In the case of companies, it's unlikely that having an ethicist on board would stop serious abuses, because we need to take account of the power dynamics in these situations. How much impact can an individual ethicist have, especially if, as many claim, the negative consequences of some of these technologies are not bugs but features?
This is especially the case when the conclusion of an ‘ethical assessment’ would be that the product simply shouldn’t be developed. There is no way, for example, to make something like Clearview AI’s facial recognition product ‘ethical’; it simply shouldn't exist, and this is probably not going to be considered constructive feedback at a team meeting.
Similarly, if an independent ethics board came to the conclusion that the business model of Facebook or Twitter requires fundamental changes to stop the platform from causing harm, is it likely that these platforms would (or could) follow the advice?
There is also a flawed assumption at work here: that we can reach agreement both on the definition and selection of ethical principles and on their application. A good example is the first of Google's AI principles:
As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.
The obvious questions to ask here are: benefit for whom, and harm for whom? If this really means 'benefits for Google and its shareholders' versus 'harm for people and communities who are subject to its technologies,' then it seems likely that the benefits will be given more weight than the harms. We should also ask whether those likely to be harmed are getting meaningful input into these moral calculations.
This problem of weighing benefits against harms raises the question of whether utilitarian approaches to ethics (where the idea is to maximise benefit and/or minimise harm) are really the right way to approach the ethical challenges raised by AI systems.
In contrast to such approaches, which seek to achieve 'net benefit' by weighing up one person's harm against another's gain, an approach grounded in human rights starts from the position that certain harms are simply unacceptable. So let's take a look at how the human rights framework could address some of these problems.
Human rights as an alternative framework
Many of the issues raised so far against AI ethics as a form of governance can arguably be better addressed by applying a human rights lens to AI. One of the earliest applications of the human rights framework to the topic of AI was the 2018 Toronto Declaration - Protecting the right to equality and non-discrimination in machine learning systems. Since then there has been an ever-increasing amount of work in this area, with academics, civil society organisations and international bodies all publishing analyses and recommendations (see the resources listed below).
Whereas voluntary ethics guidelines leave companies large scope to interpret what different principles mean, the international human rights framework has established mechanisms for resolving such ambiguities and enforcing compliance, even if these have not always worked without problems. As the authors of the Principled Artificial Intelligence study noted:
Existing mechanisms for the interpretation and protection of human rights may well provide useful input as principles documents are brought to bear on individual cases and decisions, which will require precise adjudication of standards like “privacy” and “fairness,” as well as solutions for complex situations in which separate principles within a single document are in tension with one another.
Given that many harmful uses of AI spring from good intentions, it's also important to ask the kinds of questions the human rights framework pushes us to ask: which harms are unacceptable, rather than whether the benefits outweigh the costs. This reflects the more deontological foundations of human rights, in contrast to the utilitarianism that tends to dominate industry ethics principles. A human rights framework offers a greater chance of finding agreement about red lines; articulating aspirations is arguably where ethics has the advantage.
To circle back to our earlier distinction, we could say that ethics has an undeniable role in improving how development teams think about certain issues, but lacks the 'teeth' to be effective as a form of governance, a role the human rights framework is far better placed to play.
As a recent example of how a human rights framework can provide the red lines we need, the UN Special Rapporteur on Contemporary Forms of Racism, E. Tendayi Achiume, has called for an "immediate moratorium on the sale, transfer and use of surveillance technology," and has further stated that in certain cases "it will be necessary to impose outright bans on technology that cannot meet the standards enshrined in international human rights legal frameworks prohibiting racial discrimination."
The human rights framework is not beyond criticism, of course. As Achiume has noted elsewhere, there has been a "general marginality of racial equality within the global human rights agenda" (see her piece, Putting racial equality onto the global human rights agenda). In the domain of AI governance, Pak-Hang Wong has noted the serious challenge posed to the framework by respect for cultural pluralism (see Cultural Differences as Excuses?). Nevertheless, both authors are optimistic that the human rights framework can rise to these challenges and better integrate both racial equality and respect for cultural pluralism.
Let's take a closer look at how a human rights or civil rights lens (and other non-utilitarian frameworks) can help us see when tweaks to an algorithm are not enough and when we simply need to ban certain technologies.
The case for a ban: when standards and guidelines aren't enough
It is important to note that a number of critics have pointed to the dangers of focusing on making harmful technologies 'more ethical.' Engaging with technologies such as facial recognition in order to improve them can have the consequence of legitimising them as a solution, whereas we should arguably be rejecting them wholesale.
As Ali Breland notes in his essay Woke AI won’t save us, “[t]he problem with the “woke AI” pushed by companies like IBM is that it asks us to see criminal justice in the same way that companies like Aetna want us to see healthcare: something that basically works fine, but which could use a few technological tweaks.”
In opposition to this perspective that sees AI development as merely needing some ethical guidance and improvement, we've seen an increase in calls for certain systems to be prohibited. Numerous activists and organisations are now calling for facial recognition technologies to be halted or even banned outright.
Coming from a human rights framework, EDRi, a network of civil and human rights organisations from across Europe, has called for a ban on facial recognition technologies that enable mass surveillance. As they point out, such systems violate human rights in so egregious a manner that they simply have to be banned.
In the United States, calls to ban facial recognition also increased in 2020, driven largely by the Black Lives Matter protests. As Malkia Devich-Cyril explains in the article Defund Facial Recognition:
[in] an era when policing and private interests have become extraordinarily powerful — with those interests also intertwined with infrastructure — short-term moratoriums, piecemeal reforms, and technical improvements on the software won’t defend Black lives or protect human rights [...] facial recognition and other forms of biometric policing don’t need more oversight, or to be reformed or improved. Facial recognition, like American policing as we know it, must go.
When it comes to AI technologies that undermine our rights, we need more than vague ethical principles and technical fairness tools.
We cannot rely on the goodwill of companies to protect us from abuses caused by technologies which they have a vested interest in developing and deploying. Indeed, rather than companies proactively taking steps to prohibit certain technologies, much of the momentum so far has come from grassroots initiatives such as the growing tech worker movement.
Ultimately, ethics guidelines, no matter how well intentioned, will not suffice. We need governments and international institutions to step up and draw red lines so that certain applications of AI - from biometric surveillance to predictive policing - are stopped in their tracks, and appropriate safeguards and accountability mechanisms are instituted for others.
Bibliography & Resources
Overviews
The global landscape of AI ethics guidelines
AI ethics guidelines inventory from Algorithm Watch
Alan Winfield - An Updated Round Up of Ethical Principles of Robotics and AI
AI Ethics Lab - Dynamics of AI Principles
On human rights as an alternative framework
Governing artificial intelligence: ethical, legal and technical opportunities and challenges
Unboxing Artificial Intelligence: 10 steps to protect Human Rights
E. Tendayi Achiume, UN Special Rapporteur on Contemporary Forms of Racism - Racial discrimination and emerging digital technologies: a human rights analysis
UN Special Rapporteur Warns of Racial Discrimination Exacerbated by Technology
Emerging digital technologies entrench racial inequality, UN expert warns
E. Tendayi Achiume - Putting Racial Equality onto the Global Human Rights Agenda
Access Now - Human Rights in the Age of Artificial Intelligence
Amnesty International - Ethical AI principles won't solve a human rights crisis
Article 19 - Governance with teeth: How human rights can strengthen FAT and ethics initiatives on artificial intelligence
Access Now - Laying down the law on AI: ethics done, now the EU must focus on human rights
Governing artificial intelligence: upholding human rights and dignity
AI and the Global South: Designing for Other Worlds
Artificial Intelligence: What’s Human Rights Got To Do With It?
AI & Global Governance: Human Rights and AI Ethics – Why Ethics Cannot be Replaced by the UDHR
Artificial intelligence & human rights: opportunities & risks
On Ethics Washing:
Ethics as an escape from regulation: From ethics-washing to ethics-shopping?
AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing
The Invention of “Ethical AI” - How Big Tech Manipulates Academia to Avoid Regulation
MIT Technology Review - In 2020, let’s stop AI ethics-washing and actually do something
Key quote:
“AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We’re falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises. In the most acute example, Google formed a nominal AI ethics board with no actual veto power over questionable projects, and with a couple of members whose inclusion provoked controversy. A backlash immediately led to its dissolution.”
Integrating (not “adding”) ethics and critical thinking into data science
Thinking About ‘Ethics’ in the Ethics of AI
From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy
Fuzzy Logic, Fuzzy Ethics - The industry-wide turn toward ethics obscures tech’s allergy to politics
Ethics of Technology Needs More Political Philosophy
'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley
On Google’s failed ethics board:
- AI Weekly: Google’s ethics council barely lasted a week, but there’s a thin silver lining
- Google cancels AI ethics board
- Google’s brand-new AI ethics board is already falling apart
- Hey Google, sorry you lost your ethics council, so we made one for you
US Military adopts ethical AI guidelines:
- DOD Adopts Ethical Principles for Artificial Intelligence
- U.S. military adopts new ethics principles for using AI in war
How to avoid ethics washing:
“That could mean that, as with the establishment of ethics boards, these firms get to shape the way they’re regulated. As Poulson puts it, “Why do tech companies get to choose their own critics?” It’s a dynamic that’s long worked in tech’s favor, and we’re seeing it action right now, with Zuckerberg calling for regulation of Facebook but on the company’s own terms.”
Real or artificial? Tech titans declare AI ethics concerns
On machine ethics:
- Critiquing the Reasons for Making Artificial Moral Agents - Aimee van Wynsberghe & Scott Robbins
- Self driving car accident algorithms are not a trolley problem - Sven Nyholm
- Machine ethics: The robot’s dilemma
Miscellaneous
Lessons in practical AI ethics - Helping the innovation community turn theory into practice
European Union Panel for the Future of Science and Technology (STOA) report: From Ethics to Policy
Artificial intelligence in a crisis needs ethics with urgency
The Growing Marketplace For AI Ethics
The ethics of artificial intelligence: Issues and initiatives
Developers - it's time to brush up on your philosophy: Ethical AI is the big new thing in tech
Principles alone cannot guarantee ethical AI
Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.
Video: Tech Won't Build It: The new tech resistance discussion panel