AI Myths

AI can be objective/unbiased

In this section, we'll tackle the idea that AI systems can be purely objective or that they can be free of bias. We'll outline what people mean when they talk about AI bias, and look at how focusing on fixing bias in AI systems can distract from more important questions of addressing power relations and tackling broader social problems.

One claim we frequently hear about AI systems is that they can increase the accuracy and objectivity of decisions. Given how many studies have shown that human decision making is prone to unconscious biases, this might sound promising. In this vein, we've heard claims that AI can reduce discrimination in the hiring process, and even reports that Disney has used machine learning to analyse scripts for gender and racial bias.

However, in contrast to this optimism about AI helping to tackle bias, we've been inundated with stories about AI systems causing harm to marginalised groups and further supporting oppressive systems. Some of the most high profile cases of such harms are: Amazon’s recruitment system that automatically rejected CVs from women applicants; Google’s photo app which labelled a picture of a Black couple as ‘gorillas'; and the racial disparities in the accuracy of facial detection systems exposed by Joy Buolamwini and Timnit Gebru.

To make sense of this disparity between promises of AI overcoming human bias and the ever-growing evidence of AI systems doing precisely the opposite, let's start by looking more closely at the problem of 'AI bias.'

What do we mean by 'AI bias'?

When people talk about 'AI bias,' they are usually talking about bias in machine learning, an approach to programming where systems 'learn' patterns and rules by analysing data, as opposed to traditional approaches where rules have to be manually programmed.

Of course, even the most straightforward, hand-coded system can be biased. Take, for example, a hypothetical line of code for sorting loan applications that automatically disqualifies anyone who chooses ‘female’ as their sex. Such overt bias is easy to spot and most likely easy to fix, but things get more tricky in complex systems, and particularly in those based on machine learning.
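
To make this concrete, here is a minimal sketch of what such an overtly biased, hand-coded rule might look like (the function and field names here are hypothetical):

```python
def screen_loan_application(application: dict) -> bool:
    """Hypothetical hand-coded screening rule for loan applications.

    Overtly discriminatory: it rejects every applicant who selected
    'female' as their sex, regardless of any other information.
    """
    if application.get("sex") == "female":
        return False  # automatically disqualified
    # Otherwise apply some arbitrary income threshold.
    return application.get("income", 0) >= 30_000


# This applicant is rejected purely because of the 'sex' field.
print(screen_loan_application({"sex": "female", "income": 90_000}))  # False
```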

This is especially important given that practically all of what we hear referred to as AI these days is based on some form of machine learning. It's worth noting, however, that because even the simplest system can lead to harmful outcomes, many researchers prefer to use terms such as automated decision (making) systems to capture both complex machine learning algorithms and more traditional approaches to programming.

In essence, machine learning is an approach to AI where instead of hand coding hundreds or even hundreds of thousands of rules to dictate how the system should behave, we get the system to ‘learn’ from large datasets.

To come back to the example of the loan application, instead of explicitly writing a rule to tell our system to reject applications from people with certain characteristics or attributes, we train our system on past data about loan applicants. The idea is that we let the system figure out what characteristics indicate that a person will pay back their loan.

Now, this might sound all rosy: if it’s based on hard data, then surely its conclusions will be objective, won’t they? Unfortunately, the answer is no. There are many reasons why machine learning systems end up discriminating against certain groups. The most frequently cited cause of bias in ML is biased data, summed up in the famous phrase garbage in, garbage out, meaning that if we feed biased data into a system, we will get biased results out. To return to the loan example, if the past data we use to train our system comes from a bank which has systemically excluded minorities from finance, then our fancy new system will likely learn to discriminate in a similar manner.
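
As a rough illustration of 'garbage in, garbage out', the sketch below trains a simple classifier on invented 'historical' lending decisions in which minority applicants were systematically rejected; the model dutifully reproduces that pattern. The data, the features and the choice of library (scikit-learn) are ours, purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical historical data: annual income plus a 'minority' flag (1 = minority applicant).
income = rng.normal(50_000, 15_000, n)
minority = rng.integers(0, 2, n)

# The past decisions themselves encode discrimination: minority applicants were
# rejected regardless of income, so the training labels are biased.
approved = ((income > 40_000) & (minority == 0)).astype(int)

X = np.column_stack([income, minority])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, approved)

# Two applicants with identical incomes: the model has 'learned' to reject the minority applicant.
print(model.predict([[60_000, 0], [60_000, 1]]))
```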

However, the focus on biased data as the primary source of bias in ML has received much criticism, with the problem most recently highlighted during a Twitter discussion of an ML system that reconstructs pictures of faces from blurred images. The system, which seems to work fine on White people, turned a blurred image of Barack Obama into a picture of a White man.

In response to claims that the system was biased against Black people, prominent researcher Yann LeCun tweeted that "ML systems are biased when data is biased." Timnit Gebru, the technical co-lead of the Ethical Artificial Intelligence Team at Google, responded to LeCun by critiquing this framing of the problem and recommending that he watch her Tutorial on Fairness Accountability Transparency and Ethics in Computer Vision, in which she demonstrates that "fairness is not just about data sets, and it’s not just about math. Fairness is about society as well, and as engineers, as scientists, we can’t really shy away from that fact."

For more on the exchange between Gebru and LeCun, which ended in LeCun leaving Twitter, see this account from Khari Johnson in VentureBeat or this one from The Gradient. One of the many things that the exchange highlighted was that we cannot focus only on dataset bias when discussing bias in ML systems. In the following sections, therefore, we'll begin by looking at common causes of bias in ML, such as biased data, but gradually move toward a broader perspective which takes into account the other factors that lead to ML systems causing harm.

Bias in machine learning

One word of warning before we continue: the issue of bias in machine learning is made all the more complex by the mixing of two meanings of the word. On the one hand, bias is a technical term in statistics that essentially means any ‘deviation from a standard,’ or the "difference between the average prediction of our model and the correct value which we are trying to predict"; on the other hand, we have the more commonplace understanding of bias as an undesirable or harmful prejudice. It is this second meaning which we will mainly discuss in what follows, but let’s briefly look at the first meaning to clear things up.
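
In symbols, this technical sense of bias is simply the gap between the average prediction of a model (or estimator) and the true value it is trying to estimate:

```latex
\operatorname{Bias}\big(\hat{f}(x)\big) \;=\; \mathbb{E}\big[\hat{f}(x)\big] \;-\; f(x)
```

Here f(x) is the correct value we are trying to predict, and the expectation averages the model's predictions over the data it could have been trained on; a nonzero value is a 'deviation from a standard' in exactly the neutral sense described above.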

Regarding this technical meaning of bias, in their paper Algorithmic Bias in Autonomous Systems, David Danks and Alex John London explain that claims about bias depend on making judgments against some standard:

we can have statistical bias in which an estimate deviates from a statistical standard (e.g., the true population value); moral bias in which a judgment deviates from a moral norm; and similarly for regulatory or legal bias, social bias, psychological bias, and others.

Indeed, they point out that introducing some sort of bias into statistical models is common, deliberate, and often necessary as a way to offset biases that we don’t want in our models. If we know, for example, that the data we are using is skewed in one particular way, we might try to compensate for this unwanted bias by introducing deliberate bias:

While this choice might be absolutely correct in terms of future performance, it is nonetheless a source of algorithmic bias, as our learning algorithm is not neutral (in a statistical sense). As we noted earlier, not all biases, algorithmic or otherwise, are bad.
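
As a rough sketch of what such deliberate, corrective bias can look like in practice: if one group is under-represented in the training data, we can up-weight its examples so the learner does not simply optimise for the majority. The dataset below is invented, and re-weighting is only one of many possible adjustments:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical skewed dataset: group 1 makes up only ~10% of the training examples.
rng = np.random.default_rng(1)
n = 2_000
group = (rng.random(n) < 0.1).astype(int)
score = rng.normal(size=n)
X = np.column_stack([score, group])
y = (score + 0.3 * rng.normal(size=n) > 0).astype(int)

# Deliberately 'bias' the learner (in the statistical sense) by up-weighting the
# under-represented group, so its examples count as much in total as the majority's.
weights = np.where(group == 1, (group == 0).sum() / max((group == 1).sum(), 1), 1.0)

model = LogisticRegression().fit(X, y, sample_weight=weights)
```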

What we want to discuss here, however, is not this extremely broad meaning of bias, but rather the type of bias that leads to discrimination and harms for certain people because they are members of a particular group or possess certain characteristics.

Discussions of this latter type of bias often focus on achieving fairness. However, as we will see below, determining what counts as fairness is itself a hugely complex issue, and there are many reasons to be critical of the fairness framing when discussing these issues.

One issue is that focusing on achieving fairness can turn issues of discrimination into narrow technical questions, as opposed to looking at them in their broader context through a lens of social justice and power relations. We'll come back to such criticisms at the end of this piece, but let's first take a look at how the issue of bias is typically framed in these discussions.

Types of bias

There are many ways to categorise the different types of bias that can affect machine learning systems, and we provide a number of resources later in the bibliography. What follows here is by no means comprehensive or definitive, but hopefully can offer a basic introduction. In The Foundations of Algorithmic Bias, Zachary Lipton lists three primary ways in which bias can enter a machine learning system.

First, there is straight-up biased data. As mentioned above, this is the source of bias that tends to get too much attention, but it's a good place to start. As an example, let’s think about training a natural language generation (NLG) tool. The basic idea of such a tool is that it takes some text from a user as a prompt and generates natural-sounding text to complete it.

To train our system, we would need to get our hands on a huge dataset of natural language. As you can imagine, it would be quite important that this training data was not say, full of racist and sexist language, as we would not want our NLG tool to do the kind of things you see in the table below:

[Table: examples of biased text generated by GPT-2, reproduced from the paper The Woman Worked as a Babysitter: On Biases in Language Generation]

Unfortunately, the examples in this table (taken from the paper The Woman Worked as a Babysitter: On Biases in Language Generation) are not from some amateur effort to make an NLG tool, but rather examples generated by perhaps the most high-profile NLG system, OpenAI’s much-hyped GPT-2.

So what was in GPT-2’s training data that led it to produce disastrously biased text like the examples in the table? Well, the model was trained on a dataset called WebText, a corpus of 8 million documents aggregated by scraping external links from Reddit pages with at least 3 net ‘upvotes.’
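
The selection rule itself is simple enough to sketch in a few lines; roughly, a filter like the following decided which outbound links made it into the corpus (a simplified illustration, not OpenAI's actual pipeline, and the data structure here is invented):

```python
# Simplified sketch of the WebText selection rule described above:
# keep the URLs linked from Reddit posts with at least 3 net upvotes.
reddit_posts = [
    {"url": "https://example.com/article-1", "score": 5},
    {"url": "https://example.com/article-2", "score": 1},
]

selected_urls = {post["url"] for post in reddit_posts if post["score"] >= 3}
print(selected_urls)  # only 'article-1' passes the >= 3 karma filter
```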

As Timnit Gebru and Eun Seo Jo point out in their article, Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning, the WebText dataset’s focus on Reddit, whose users tend to be “male, in their late-teens to twenties, and urbanites,” means that “the dataset consists of materials of topical relevance to online discussions among this demographic.”

Moreover, anyone familiar with Reddit’s history of hosting toxic content could probably guess that setting your criterion as “at least 3 people on Reddit thought this was good” might lead to you including a lot of racist, sexist and generally bigoted text in your training data, and this seems to be precisely what happened.

The second source of bias mentioned by Lipton is bias by omission. In the context of facial recognition, studies such as Joy Buolamwini and Timnit Gebru’s Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification have shown how facial recognition systems end up with gender and racial biases. Buolamwini and Gebru’s analysis of a number of systems showed that “all classifiers performed best for lighter individuals and males overall [and] performed worst for darker females.”

This poor performance is caused by a number of factors which they outline in the paper, but one reason why such systems end up with these biases is the omission of representative data from the datasets these systems were trained on. As Buolamwini and Gebru point out, one of the datasets they discuss “was estimated to be 77.5% male and 83.5% White.”
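
The kind of disaggregated evaluation carried out in Gender Shades can be sketched in a few lines: rather than reporting a single overall accuracy, we break the test set down by subgroup and compare. The arrays below are invented stand-ins for real evaluation data:

```python
import numpy as np

# Hypothetical test-set results for a classifier:
# y_true / y_pred are labels, 'group' marks the subgroup each image belongs to.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1])
group = np.array(["lighter_male", "lighter_male", "lighter_female", "darker_female",
                  "darker_female", "lighter_male", "darker_male", "darker_female"])

# Accuracy reported per subgroup instead of as one aggregate number.
for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"{g}: accuracy = {acc:.2f}")
```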

Another example of bias by omission is highlighted by Virginia Eubanks in her book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. One chapter of her book looks at Allegheny County’s predictive algorithm for child neglect and abuse, which was designed as “a statistical model to sift information on parents interacting with the public benefits, child protective, and criminal justice systems to predict which children were most likely to be abused or neglected.” However, Eubanks notes that the “data set it utilizes contains only information about families who access public services, so it may be missing key factors that influence abuse and neglect.”

For the lucky and wealthy families who have had no interaction with welfare services, the system simply has no data, and so it effectively assumes that children from those families are not at risk of abuse or neglect. At the same time, it is essential to note that the solution here is not simply to collect more data on everyone: we need to question whether using an ML or automated decision making system is appropriate in such a sensitive context at all.

The third source of bias listed by Lipton comes from surrogate objectives, also known as proxies. The problem here arises when the outcome that we want our ML system to predict cannot be captured in a straightforward manner by the data we have. Say, for example, we have a recommendation system (like those used by Youtube or Netflix) that uses ML to predict which content a person would like to see next. As designers of the system, we probably want the system to recommend high-quality, interesting content.

However, we have no way to directly measure whether content is interesting or of high quality. Instead, we settle for a surrogate measure, or proxy, such as clicks. The idea here is that the articles or videos that get clicked on most often are probably the best and most interesting ones.
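
In code, the proxy shows up in what we ask the model to predict: a recommender trained this way never sees 'quality' at all, only logged click behaviour. The sketch below uses invented data and is not any platform's actual system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical logged data: one row of features per (user, video) pair.
rng = np.random.default_rng(2)
features = rng.normal(size=(1_000, 5))

# The label is the proxy: did the user click? 'Quality' is never measured.
clicked = (features[:, 0] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

click_model = LogisticRegression().fit(features, clicked)

# Candidate videos are ranked purely by predicted click probability, so whatever
# drives clicks (including sensationalism) is what gets surfaced.
candidates = rng.normal(size=(10, 5))
ranking = np.argsort(-click_model.predict_proba(candidates)[:, 1])
print(ranking)
```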

This choice of proxy is highly contentious, however, as it seems that sensationalist content and conspiracy theories tend to get more clicks than objective and factual pieces. Some researchers have even claimed that Youtube's recommendation system is responsible for radicalising users and promoting disinformation and conspiracy theories.

An example of how the choice of proxies can lead to racial bias can be seen in the case of a 'high-risk care management' system used by US hospitals which ended up being biased against Black patients. The aim of the system was to identify which patients required additional resources and attention due to serious health conditions. The algorithm assigns patients a risk score, and those in the 97th percentile receive additional care.

However, researchers found that at "a given risk score, Black patients are considerably sicker than White patients." Due to the racial bias in the system, only 17.7% of the patients identified for extra care were Black, and the researchers calculated that without racial bias the number should have been 46.5%.

The cause of the racial bias in this system was the choice of proxy. Because there is no direct measure for 'health,' the designers of the system chose to measure health-care costs. The issue is that Black patients typically generate lower costs than White patients with similar health issues, due to problems such as unequal access to healthcare for Black patients.

This meant that a Black patient who spent $6,000 on healthcare was typically much sicker than a White patient who spent the same amount, and using cost as a proxy for health meant that the system was unable to capture this disparity. Although there is a correlation between healthcare expenditure and sickness (i.e. sicker patients tend to spend more on healthcare), this choice of proxy failed to account for how the health system tends to fail Black patients.
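
A toy calculation makes the mechanism clear. Suppose, purely for illustration, that two equally sick patients generate different costs because one faces barriers to accessing care; scoring risk by cost then under-scores the patient who spent less (all numbers below are invented):

```python
# Toy illustration of the cost-as-proxy problem.
# Two patients with the same underlying level of illness:
patients = [
    {"name": "patient_A", "group": "White", "true_illness": 7, "annual_cost": 9_000},
    {"name": "patient_B", "group": "Black", "true_illness": 7, "annual_cost": 6_000},
]

# The algorithm never sees 'true_illness'; it scores risk from cost alone.
for p in patients:
    p["risk_score"] = p["annual_cost"] / 1_000

for p in patients:
    print(p["name"], p["group"], "risk score:", p["risk_score"])

# patient_B is equally sick but receives a lower score, so is less likely to be
# flagged for the extra-care programme.
```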

Our discussion of sources of bias here is by no means complete, but it should give a basic understanding of the most obvious ways that bias can enter a system. For a more detailed account of the sources of bias in machine learning systems, check out A Survey on Bias and Fairness in Machine Learning, where the authors list no fewer than 23 sources of bias. A broader perspective on the problem can also be found in A Framework for Understanding Unintended Consequences of Machine Learning or in the paper A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics.

Can we 'fix' bias?

It should now be established beyond doubt that 'AI systems' are anything but objective and that they are highly susceptible to bias: they reflect the context in which they are created and there are numerous ways in which potentially serious biases can distort the decisions and predictions made by these systems.

Nevertheless, we still come across people who question whether this is any worse than plain-old human decision making. Indeed, some have argued that although serious biases can distort the functioning of AI systems, we at least have the possibility to check, understand and fix these biases, whereas we can do little to nothing about biased humans.

In an op-ed in the New York Times entitled Biased Algorithms Are Easier to Fix Than Biased People, Sendhil Mullainathan says that:

Humans are inscrutable in a way that algorithms are not. Our explanations for our behavior are shifting and constructed after the fact. To measure racial discrimination by people, we must create controlled circumstances in the real world where only race differs. For an algorithm, we can create equally controlled circumstances just by feeding it the right data and observing its behavior.
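
A sketch of what such a controlled audit can look like: feed the model pairs of inputs that are identical except for the protected attribute and compare its outputs. The 'deployed model' below is only a stand-in trained on invented data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for a deployed model; in a real audit this would be the system under test.
rng = np.random.default_rng(3)
X_train = np.column_stack([rng.normal(size=1_000), rng.integers(0, 2, 1_000)])
y_train = ((X_train[:, 0] > 0) & (X_train[:, 1] == 0)).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Audit: identical applicants, differing only in the protected attribute (last column).
audit_pairs = np.array([
    [0.5, 0],
    [0.5, 1],
])
probs = model.predict_proba(audit_pairs)[:, 1]
print("approval probability, attribute=0:", round(probs[0], 3))
print("approval probability, attribute=1:", round(probs[1], 3))
# A large gap between the two is direct evidence that the attribute drives the decision.
```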

While there is certainly some truth to this idea that we can more easily audit - and thereby reduce the biases of - AI systems when compared to those of individual human decision makers, there are a number of factors that this argument fails to account for. As Cathy O’Neil, the author of Weapons of Math Destruction, pointed out in a response to the aforementioned piece from the New York Times, biased algorithms might be easier to fix in theory, but the reality is that almost no researchers, let alone the people affected by these systems, have access to them.

Indeed, we rely on the goodwill of the entities deploying these systems to ensure that they don’t lead to discriminatory outcomes. Moreover, even in cases where researchers or auditors do have access, improving these systems is a hard statistical and sociological problem.

In their article, Discrimination in the Age of Algorithms, Mullainathan et al. acknowledge this disparity of information and recommend that regulation be introduced to enforce the kind of transparency measures that would make AI systems open to audit. There are many proposals for these kinds of transparency measures, from public registers of AI systems, to documentation methods such as Datasheets for Datasets and Model Cards for Model Reporting.

Whatever shape such measures take, it seems clear that AI systems won’t be able to avoid perpetuating harmful bias, let alone help to reduce bias in human decision making, if they remain hidden from scrutiny and audit due to concerns about trade secrets.

At the same time, transparency will not be a panacea: it may well be that systems are simply unfixable, and should not be used in certain contexts. Using an AI system to solve a problem is not a neutral choice, and brings with it a host of risks and externalities which we will discuss below.

It is also worth mentioning that a number of companies and researchers have developed technical tools to detect and mitigate bias. Notable examples are IBM’s AI Fairness 360 and Google’s What-If Tool. However, such tools remain plagued by the difficulty of deciding what to aim for instead of bias: should a system aim to 'maximize fairness,' or are there other measures, such as justice, which need to be prioritized?

In one tutorial, for example, Arvind Narayanan looks at 21 different definitions of fairness in machine learning and examines the politics behind each of them, demonstrating that there is no one definition that is unambiguously the right one and that our choice depends on contentious political questions. Other scholars have shown how different formalisations of fairness criteria are incompatible and demonstrated the necessity of making trade-offs among different criteria.
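
To get a feel for why these definitions pull against each other, the sketch below computes two common criteria, demographic parity (equal positive prediction rates across groups) and equal true positive rates, for the same set of predictions; when the groups' base rates differ, improving one typically worsens the other. The numbers are invented:

```python
import numpy as np

# Toy predictions for two groups with different base rates of the positive outcome.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0,   1, 1, 1, 1, 1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 0,   1, 1, 1, 1, 0, 1, 0, 0])
group = np.array(["A"] * 8 + ["B"] * 8)

for g in ["A", "B"]:
    m = group == g
    positive_rate = y_pred[m].mean()            # compared for demographic parity
    tpr = y_pred[m][y_true[m] == 1].mean()      # compared for equal true positive rates
    print(f"group {g}: positive rate = {positive_rate:.2f}, true positive rate = {tpr:.2f}")

# With different base rates (3/8 vs 5/8 here), equalising the positive rates and
# equalising the true positive rates generally cannot both be achieved at once.
```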

Two interactive tools demonstrate this trade-off situation perfectly. The first, Survival of the Best Fit, developed by students from NYU Abu Dhabi, is an educational game about hiring bias in AI. The game aims “to explain how the misuse of AI can make machines inherit human biases and further inequality.”

The second, developed by MIT Technology Review, simulates a courtroom algorithm used to determine whether a defendant should be granted bail. Both of these games are fantastic illustrations of the intractable trade-offs that we have to make when trying to achieve fairness in these systems, and they raise the question of whether fairness is an appropriate framework at all.

The response to such demonstrations of the incompatibility of fairness definitions should not be to simply throw our hands in the air and say that it's impossible to make everyone happy and satisfy all criteria. Rather, we should look outside the narrow lens of tweaking algorithms to see if the real harms and injustices can be tackled by other means.

Beyond the bias framing

Beyond such debates about mitigating bias, a number of people have pointed to the limitations of framing the harms caused by AI systems as problems of bias and fairness. Kinjal Dave, for example, asks why we speak of algorithmic bias rather than more directly naming abuses such as algorithmic racism or sexism.

She outlines how the term ‘bias’ (and the related idea of ‘stereotypes’) is rooted in a theory that focuses on individual perception rather than on structural oppression. When speaking of ‘algorithmic bias,’ she warns that by “using the language of bias, we may end up overly focusing on the individual intents of technologists involved, rather than the structural power of the institutions they belong to.”

Similarly, in Questioning the assumptions behind fairness solutions, Seda Gurses et al. point to the limitations of fairness solutions for what they call optimization systems: i.e. “systems that interact with the environment in which they are deployed, and are optimizing over variables that are constantly changing.” As examples of these systems, they mention routing apps, such as Waze, and the content governance systems that control what we see on our social media timelines.

They point out that beyond issues of bias within such systems, they also create significant externalities: i.e. situations in which the actions of some groups of agents have serious impacts on agents outside of that group. As examples, they point to how routing apps that optimize travel time for users of the app end up directing heavy traffic into normally quiet residential streets, or how some systems optimize for majorities and end up impacting minority users.

Importantly, they stress that “an ‘unbiased’ algorithm can still have unfair consequences or externalities.” In her piece, The Seductive Diversion of 'Solving' Bias in Artificial Intelligence, Julia Powles gives the example of a facial recognition system that performs poorly on women of colour. She notes that “[a]lleviating this problem by seeking to ‘equalize’ representation merely co-opts designers in perfecting vast instruments of surveillance and classification.” After all, a 'perfectly fair' facial recognition system can still be used by a racist police department or by a government looking to persecute minorities.

Moreover, Gurses et al. point out that aiming to achieve fairness can in certain cases exacerbate a problem. They give the example of a credit scoring algorithm designed to grant subprime loans. Ensuring that the algorithm is ‘fair’ could simply result in more subprime loans being granted and lead to unjust consequences.

To properly address the harms caused by these systems, we need to approach them from a perspective that takes into account the full range of historical and sociological considerations that are relevant for each case. As Ruha Benjamin puts it:

"computational depth without historic or sociological depth is superficial learning... An ahistoric and asocial approach to deep learning can capture and contain, can harm people. A historically and sociologically grounded approach can open up possibilities. It can create new settings. It can encode new values and build on critical intellectual traditions that have continually developed insights and strategies grounded in justice. My hope is we all find ways to build on that tradition."

We cannot understand how AI systems cause harm to individuals and groups in society if we only see the problem from the perspective of a technologist, and we cannot 'solve' such problems without incorporating the depth and breadth of historical and sociological knowledge that is relevant to these problems.

While it is obviously important that we do what we can to prevent existing AI systems from causing harm, we must also continue to question the need for AI systems in all domains, and to push back against the idea of AI inevitability. As Powles says:

In accepting the existing narratives about A.I., vast zones of contest and imagination are relinquished. What is achieved is resignation — the normalization of massive data capture, a one-way transfer to technology companies, and the application of automated, predictive solutions to each and every societal problem.

Bias and discrimination are, at the end of the day, societal and human problems. Replacing ‘flawed human decision making’ with AI systems will not address the root causes of these problems, and will arguably turn these social and political questions into technical debates. Framing these problems as technical issues of bias mitigation excludes non-technical perspectives from the table, which often means excluding the very people most affected by these systems.

We have seen, then, that AI systems are not objective, and that there are innumerable ways in which biases can distort the predictions and calculations of these systems. While it is of course necessary and good to improve the systems we use and ensure that they don’t replicate or create biased and discriminatory outcomes, we have also seen that it is important to question the existence of and need for such systems. At the end of the day, even achieving perfect algorithmic fairness in AI systems won't solve the complex social and political problems we face.

Please get in touch to share comments, criticisms, or other resources that we might have missed!

Bibliography & Resources

Videos:

Tutorial on Fairness Accountability Transparency and Ethics in Computer Vision - Timnit Gebru & Emily Denton

Politics of AI - Kate Crawford

For more video resources and a list of books, check out this list from the UCLA Center for Critical Internet Inquiry

Interactive games/tools:

Survival of the Best Fit

MIT Tech Review game on AI bias

Introductory explainers on bias in AI

MIT Tech Review: This is how AI bias really happens—and why it’s so hard to fix

Why do ML systems exhibit bias?

Bias in the Vision and Language of Artificial Intelligence

The Hill - why are AI systems biased?

The foundations of AI bias - Zachary Lipton

IBM: ML and bias

World Economic Forum: How to Prevent Discriminatory Outcomes in Machine Learning

What is the Fairness Accountability and Transparency (FAccT) machine learning model?

The quest to make AI less prejudiced

Understanding and Reducing Bias in Machine Learning

UK Government Interim report on Automated Decision Making bias

UK Government: Landscape summary - Bias in Automated Decision Making

AI and racism/sexism

Data Racism - European Network Against Racism

Social Inequality Will Not Be Solved By an App - Safiya Noble

Google Ad Portal Equated “Black Girls” with Porn

The Algorithmic Colonization of Africa

Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence

How surveillance has always reinforced racism - Simone Browne

Algorithms that run our lives are racist and sexist. Meet the women trying to fix them

The enduring anti-black racism of Google search

  • "Framing the problems as “pipeline” issues instead of as an issue of racism and sexism, which extends from employment practices to product design. “Black girls need to learn how to code” is an excuse for not addressing the persistent marginalization of Black women in Silicon Valley"

Defund Facial Recognition

Policing’s problems won’t be fixed by tech that aids—or replaces—humans

AI Now report on diversity in AI industry

E. Tendayi Achiume: Racial discrimination and emerging digital technologies: a human rights analysis - Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance

What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias

The Great White Robot God - Artificial General Intelligence and White Supremacy

MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs

Indigenous AI

How afrofuturism can help the world mend

The racist history behind facial recognition

Our Face Recognition Nightmare Began Decades Ago. Now It’s Expanding - Os Keyes

The activist dismantling racist police algorithms

Why racial bias is still inherent in biometric tech

Beyond the Algorithm: Pretrial Reform, Risk Assessment, and Racial Fairness

Can racist algorithms be fixed?

Gender bias in GPT2

Famous cases of bias in AI systems

Gender Shades

How racial bias infected a major health-care algorithm

AI can fix bias

Data Innovation: Could AI help reduce gender bias in Europe?

Biased Algorithms Are Easier to Fix Than Biased People

An AI hiring firm says it can predict job hopping based on your interviews

Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices

Against the bias framing

The seductive diversion of trying to solve AI bias - Julia Powles

Systemic algorithmic harms - Kinjal Dave

A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics

Technology can’t fix algorithmic injustice - Annette Zimmermann, Elena Di Rosa, Hochan Kim

The Long History of Algorithmic Fairness

Timnit Gebru & Emily Denton - Tutorial on Fairness Accountability Transparency and Ethics in Computer Vision

Ruha Benjamin on deep learning: Computational depth without sociological depth is ‘superficial learning’

The False Promise of Risk Assessments: Epistemic Reform and the Limits of Fairness

Fairness, Equality, and Power in Algorithmic Decision-Making

Questioning the assumptions behind fairness solutions - Rebekah Overdorf, Bogdan Kulynych, Ero Balsa, Carmela Troncoso, Seda Gürses

The full force of the state - Fieke Jansen

  • "...this article argues that in the decision-making around whether or not to use predictive policing, it is crucial to look beyond the issues of the tool itself and critically reflect on its perceived added value and the issues around the fairness and legitimacy of the entire intervention, not just the tool. Debating the issue, we must ask whether the desire to innovate or tackle a specific security problem can come at the expense of individual and collective fundamental human rights. Furthermore, when challenging these technologies it is equally important to understand the incentives that drive police to turn to these technologies."

Why Hundreds of Mathematicians Are Boycotting Predictive Policing

Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI - Sandra Wachter

Decentering technology in discourse on discrimination

Technical explainers on bias

Bias in Artificial Intelligence (from a practitioner perspective)

Do You Believe in FAIR-y-tales? An Overview of Microsoft’s New Toolkit for Assessing and Improving Fairness of Algorithms

The bias-variance tradeoff

Fair ML book

Algorithmic bias in autonomous systems

  • Explains the ‘neutral’ technical use of bias as deviation from a standard
  • “The word ‘bias’ often has a negative connotation in the English language; bias is something to be avoided, or that is necessarily problematic. In contrast, we understand the term in an older, more neutral way: ‘bias’ simply refers to deviation from a standard. Thus, we can have statistical bias in which an estimate deviates from a statistical standard (e.g., the true population value); moral bias in which a judgment deviates from a moral norm; and similarly for regulatory or legal bias, social bias, psychological bias, and others. More generally, there are many types of bias depending on the type of standard being used.”

Academic articles on bias

Timnit Gebru & Eun Seo Jo - Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning

Bias in Computer Systems - Helen Nissenbaum

Big Data’s Disparate Impact - Solon Barocas & Andrew Selbst

Bringing the People Back In: Contesting Benchmark Machine Learning Datasets

A Survey on Bias and Fairness in Machine Learning

Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview

Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning - Mireille Hildebrant

Fairness and Abstraction in Sociotechnical Systems - Andrew Selbst et al.

The Woman Worked as a Babysitter: On Biases in Language Generation

Feminist AI: Can We Expect Our AI Systems to Become Feminist?

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries

Inherent Trade-Offs in the Fair Determination of Risk Scores

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments - Alexandra Chouldechova

Fairer and more accurate, but for whom? - Alexandra Chouldechova, Max G'Sell

Large image datasets: A pyrrhic win for computer vision?

Gender bias in natural language processing

On the illusion of objectivity in natural language processing

Fairness Definitions Explained

Feminist claims to technical language