We can't regulate AI

In this section, we'll explore two common myths about the regulation of AI: first, that AI is simply too complex to regulate; second, that any regulation of AI will kill innovation.

As the hype about AI intensifies, we hear more and more talk about AI regulation: on the one hand, questions arise about whether we can regulate a technology that is supposedly developing so rapidly and that operates so opaquely that we barely understand it; on the other, questions about whether we should regulate AI when any regulation risks stifling innovation or robbing the country that regulates of its competitive edge. Both of these claims rest on a number of misconceptions that we'll explore below.

‘Politicians don’t know anything about AI, so governments can’t regulate it’

This section was written by Rachel Jang as part of a project for the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. Further edits were made by Daniel Leufer.

Among the arguments against regulating AI, we often hear that politicians are too inexperienced with complex AI technologies to regulate them. The argument usually leads to the conclusion that clueless governments will only make things worse by attempting to regulate AI, so they should leave things to the companies with expertise in the area. Some ‘prolific futurists’ even go so far as to argue that AI regulation by humans may be impossible, and that we ultimately need AI to regulate itself, as it will be free of human imperfections.

However, as innovative and novel as current AI technology may be, there is nothing new about governments regulating new technologies. Governments have regulated novel technologies throughout history, often successfully: think of the automobile, the railroad, and the telegraph and telephone.

AI systems, like these other technologies, are tools used by humans. The impacts of AI systems in society largely depend not on the complex code that underlies them, but on who uses them, for what purposes they're used, and who they're used on. And these are all things that can be regulated.

Past experience with the successful regulation of emerging technology demonstrates that we should focus on its effects and applications. For example, it took no understanding of the technology underlying steam engines for the U.S. Congress to require in 1887 that railroad fares be “reasonable and just.” Similarly, although the vastness and speed of AI development may seem to make regulating AI challenging, today’s policymakers can simplify their task by focusing on how to best regulate its impacts.

To take an example, recent advances in machine learning have made facial recognition systems much cheaper and more accurate, to the point that many countries are now pushing to implement live facial recognition in public spaces. While we could get caught up in the technical intricacies of which algorithms are used in such systems, we can also shift our focus to the impact: ubiquitous mass surveillance.

Given that this impact violates a number of fundamental rights, including the right to privacy and the right to free assembly, we can see quite clearly that any such application of AI should be subject to the strictest regulation, if not banned outright.

Indeed, we have even seen a change of tack from Big Tech recently, with a number of companies publicly warming to the idea of government regulation for AI. Sundar Pichai, chief executive of Alphabet and Google, has argued that existing rules like the European Union’s General Data Protection Regulation can serve as a “strong foundation” for governments’ AI regulation efforts. Microsoft’s Brad Smith has also admitted that certain uses of facial recognition technology increase the risk of discriminatory outcomes or can lead to intrusions into people’s privacy, and that these issues must be addressed by the government.

At the same time, these pleas from Big Tech must be taken with a grain of salt, given their long histories of ‘moving fast and breaking things,’ anti-regulation lobbying, and investment in controversial technologies that seem to conflict with high-minded company visions.

Companies such as Microsoft and Google do set forth their own AI principles and guidelines, but relying on private companies to regulate themselves will not be enough to prevent and mitigate all the harms that can be caused by AI systems, especially in cases where the necessary measures are in conflict with these companies’ business models.

These companies have their own private interests, and different companies will likely have different rules and standards if there is little to no government regulation. At the same time, we should be highly suspicious of companies attempting to craft government regulation so that it serves their aims over the aims of citizens and communities.

Ultimately, government regulation sets a floor for all companies, allowing them to invest in responsible AI without worrying about being undercut on price by competitors who don't. Without appropriate government regulation, this can easily become a race to the bottom, in which companies are forced to focus solely on cutting prices in order to compete in the market.

‘Regulation of AI will kill innovation’

This section was written by Kathryn Mueller as part of a project for the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. Further edits were made by Daniel Leufer.

It's a common refrain that we cannot or should not regulate AI. We hear that regulation will kill innovation and that companies should be allowed to develop new technologies unburdened. Here, we'll explain how regulation has actually supported innovation in the past, but it's worth starting by acknowledging that much regulation already applies to AI, such as data protection laws - think of the EU's General Data Protection Regulation (GDPR) or the US's Health Insurance Portability and Accountability Act (HIPAA). Moreover, governments and policymakers can build on or reform existing regulation, meaning that we are not starting entirely from scratch in regulating AI.

Even though AI is a rapidly developing technology, it's being employed in heavily regulated fields, from healthcare to finance. Compliance is not a new concept for companies using AI in these contexts. Moreover, companies are already subject to general obligations, such as those in the UN Guiding Principles on Business and Human Rights, which have bearing on their use of AI. Indeed, many companies are already voluntarily conducting, and in some cases publicly releasing the results of, human rights due diligence.

In thinking about this issue, it's worth considering who stands to benefit from a lack of regulation on AI. Although we often hear that regulation will stifle innovation and thus deprive the world of life-changing AI applications, what we have seen up to now is that the negative consequences of AI systems and automation tend to fall on underprivileged groups, and that many developments in AI research have been used to bolster repressive practices of surveillance and discrimination.

Will we kill innovation?

Coupled with fears about regulation killing innovation is the idea that if we regulate new technologies, we might lose out to a competitor country without regulation. Countries fear that rivals who 'do not share their values' could beat them to the innovation and thus shape the trajectory of a new technology. If we want to maintain the influence of our value system, we cannot stifle innovation through unnecessary and burdensome regulation, or so the myth goes.

Such a framing rests on a sort of technological determinism: the idea that any technology that can be developed will be developed, and that we will ultimately be forced to adopt it. In reality, societies are free to shape the development of technology and to refuse to develop and deploy technologies that undermine certain fundamental values.

If applications of AI such as facial recognition are deemed to be incompatible with human rights or social justice, then they can be banned outright and never adopted. After all, most people would probably be happy to live in a country that's losing the 'race' to develop dystopian surveillance tools.

In the AI space, some companies have made genuine commitments to transparency, and have made sincere calls for regulation. Importantly, they argue that in the absence of regulation, they can be undercut by AI companies that do not value transparency and avoid devoting resources to it. In such cases, regulation mandating transparency measures would actually foster innovation in safer and more open approaches to AI development.

Is innovation always good?

One of the key assumptions underlying this myth is that innovation is inherently a positive thing. But is it? For a classic example of this thinking, the Information Technology and Innovation Foundation (ITIF) - a prominent pro-innovation and anti-regulation think tank - has a piece on AI regulation which claims that overly broad regulation will:

  1. Make AI development slower,
  2. Reduce innovation,
  3. Reduce AI quality,
  4. Reduce AI adoption,
  5. Reduce economic growth,
  6. Reduce consumer options,
  7. Raise prices,
  8. Reduce customer experience,
  9. Reduce positive social impact,
  10. Reduce a country’s competitiveness and security.

However, all of these arguments inherently rest on the idea that AI is mostly positive, and may only need to be regulated in a few small, problematic areas. By contrast, we have seen abundant evidence of applications of AI technology that are deeply troubling, such as Clearview AI’s facial recognition system and Cambridge Analytica’s microtargeting of voters, to name just two of the most infamous.

Indeed, both Clearview AI and Cambridge Analytica are small, innovative startups or SMEs, exactly the kind of stakeholder whose innovative potential we are told we need to foster, and yet both have developed technology that undermines our human rights. Add to this the fact that some governments are using AI to facilitate surveillance, and the picture gets even dimmer.

While we should rightfully celebrate innovation done well, we must be careful not to assume that any and all technological innovation is something we want. At the end of the day, innovation is neither inherently good nor bad, and we need proper regulation to ensure that the innovation that causes harms is nipped in the bud to allow truly useful innovations to flourish.

How to regulate

In the United States, the White House released AI policy guidelines in 2019 that strongly advocated against heavy-handed regulation. The White House Office of Science and Technology Policy released this statement at the time:

Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach. The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.

This statement raises common refrains in the argument that regulation will kill innovation:

  1. Heavy-handed regulation inherently kills innovation;
  2. Regulation could kill a country’s dominance as an innovation hub; and
  3. In order to have technology that matches a country’s values, that country must take a hands-off approach.

Similarly, calls from various tech sector sources warn of overregulation. For example, in a Bloomberg Law piece on regulation of AI, we hear the following argument:

AI technology has enormous potential benefits, from improving health outcomes to enhancing cybersecurity to making our lives more efficient, but concern about potential harmful effects will continue to drive scrutiny by regulators. This will be an important year for determining whether the regulatory approach veers toward overregulation, or instead focuses on practical approaches while allowing innovative uses of the technology to flourish.

Again, this piece puts forward the idea that overregulation will kill innovation. The suggestion is that there is some mild regulatory sweet spot, but if regulators cross that line, innovation will cease.

Looking to Europe, the EU released a white paper on its regulation of AI, focusing on high-risk sectors such as healthcare and transportation. These proposals have been criticized by some as weaker than expected, but notably the EU does not shy away from attempting to carve out areas to regulate. While the US guidelines shied away from regulation, the EU guidelines seem at least to recognize the potential harm of AI in high-risk sectors.

Since the EU received responses to its consultation on the White Paper, there has been talk of some applications of AI being banned, with Margrethe Vestager, the European Commission’s Vice-President for digital policy, warning that applications such as predictive policing are ‘not acceptable’ in the EU. In a similar spirit, the European Data Protection Supervisor has announced that he aims to convince the European Commission to institute a moratorium on the use of facial recognition and other biometric surveillance technology in public spaces.

Interestingly, big tech companies have recently started calling for regulation, although the motivations behind these calls are up for debate. While they may only be calling for regulation to entrench their status as dominant players and make it too expensive and burdensome for new entrants to compete, it's worth noting their position here.

Sundar Pichai of Alphabet/Google, Elon Musk of Tesla, Amazon and Microsoft (with respect to facial recognition specifically), and Facebook have all made public calls for the regulation of AI. At the same time, critics have pointed out that these 'calls' for regulation typically involve these companies lobbying governments so that the regulations suit their agendas.

Moreover, following the Black Lives Matter protests in the US in the summer of 2020, and the spotlight they shone on abusive police practices, a number of companies have pulled back from supplying facial recognition to police forces. However, critics have pointed out that this doesn't capture the whole story.

In the case of Microsoft, the American Civil Liberties Union has pointed out that although the company announced a moratorium on its sale of face recognition technology to law enforcement, it is still involved in "efforts to advance legislation that would legitimize and expand the police use of facial recognition in multiple states nationwide," thereby working against the civil rights community.

Beyond these often weak commitments from companies, activists have called for governments to step up and ban these applications outright. As Malkia Devich-Cyril notes in the piece 'Defund Facial Recognition':

In an era when policing and private interests have become extraordinarily powerful—with those interests also intertwined with infrastructure—short-term moratoriums, piecemeal reforms, and technical improvements on the software won’t defend Black lives or protect human rights...In my vision for a nation that invests in Black life and dignity, facial recognition and other forms of biometric policing don’t need more oversight, or to be reformed or improved. Facial recognition, like American policing as we know it, must go.

Powerful calls such as this hammer home the fact that there might just be some forms of innovation, such as mass surveillance, that would be better off dead, and that we need regulation to ensure that.

Regulation can foster innovation

When we look across different industries, the idea that regulation kills innovation outright simply has not played out, and there is no reason it should in the AI context either: regulation has been implemented successfully across a variety of industries without killing innovation.

In many of those cases, industry leaders bemoaned regulation, only to find out later that it allowed for new forms of innovation:

  1. The automobile industry: Car makers resisted regulation that would have mandated seatbelts and other safety features. In the end, these are staple features of cars today and have successfully reduced fatality rates -- redounding, one can only assume, to the popularity of the car and the benefit of the industry as a whole -- without killing the car industry.
  2. The environmental sector: Regulation here can even spark innovation by creating space for new technologies, generating a “supply curve” that makes room for innovative methodologies and better outcomes.
  3. Targeted advertising: Following GDPR, the Dutch advertising agency Ster.nl changed how they performed targeted advertising and switched to a contextual advertising model. This change actually increased profits and has led to the use of innovative machine learning, such as natural language processing. GDPR, instead of killing innovation, actually sparked the use of more innovative technologies and increased profits.
  4. The telecom industry: Regulation played a role in the telecom industry both in addressing monopolies and in telecom equipment.
  5. FinTech: Researchers have also called for regulation in the FinTech sector to help provide financial stability. Payment intermediaries have proven to be another interesting, innovative sector where regulation can help - not just hurt.

As these examples show, regulation has not killed innovation in other industries, and in some cases may have sparked it. And, as we discussed earlier, if we move away from the idea of AI and innovation as an inherently positive force and view it with more nuance, then the fear that regulation will kill innovation rests on false principles to begin with.

While we always prefer good regulation to bad regulation, there is no reason to take a fully hands-off approach in favor of innovation, especially when some of the innovation happening in AI is violating our human and civil rights.

Please get in touch to share comments, criticisms, or other resources that we might have missed!

Bibliography & Resources

In addition to the links provided in the text above, here are some further pieces about AI and regulation.

The Regulation of Artificial Intelligence — A Case Study of the Partnership on AI

  • How self-regulation could or could not work with AI. Self-regulation has proven successful in some other innovative industries, such as peer-to-peer marketplaces.

Regulation of Artificial Intelligence in Selected Jurisdictions

  • Overview of current AI regulation to get a comprehensive state of play.

The case for a federal robotics commission

  • A call for a federal robotics commission to regulate various AI/technological innovations. Piecemeal approaches have their downsides and comprehensive regulation leaves fewer regulatory gaps.

U.S. White House’s AI regulatory guidance – 10 principles

  • Pushes for light-touch regulation.

EU Regulation Toolkit - Tool #21 Research & Innovation

Don't regulate AI - starve it

Artificial intelligence won't rule the world so long as humans rule AI

Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence

Library of Congress – Regulation of Artificial Intelligence

How governments are beginning to regulate AI

OECD Principles on AI

European Commission’s White Paper on Artificial Intelligence - A European approach to excellence and trust