Whether it's claims about AI revolutionizing the insurance industry or enabling Orwellian mass surveillance, everyone seems to be talking about 'artificial intelligence' these days. Unfortunately, much of this talk is riddled with myths, misconceptions and inaccuracies.
The aim of this website is to help disentangle and debunk some of these misleading ideas. We'll explore how these ideas appear in the media, and point you towards high quality resources for further reading.
The myths to tackle were chosen in two ways. First, a group of stakeholders from civil society, academia, government and industry discussed the problem of AI bulls**t at RightsCon 2019 in Tunis. Together, we brainstormed a list of the most insidious misconceptions and myths about AI.
Based on this preliminary list, a survey was sent out to allow people to rank these myths and to contribute additional ones. A combination of the survey results and further consultations with the target audience led to the choice of the final eight topics.
To be clear, none of these misconceptions and myths admits a simple, straightforward refutation. Instead, each idea is explored: obvious misconceptions are refuted and more nuanced perspectives are introduced. These resources aim to guide you towards further materials, presented in the bibliographies and guides at the end of each section.
If you have any comments, criticisms, or requests, please don’t hesitate to get in touch. This is a live resource, and we're happy to update it based on your feedback.
This project was funded and supported as part of Daniel Leufer's Mozilla Open Web Fellowship - a collaboration between the Ford Foundation and Mozilla. It would not have been possible without the incredible support, encouragement and direction provided by the Mozilla Fellowship team, but a special thanks must go to Amy Schapiro Raikar for her unfailing support and direction.
For the duration of Daniel's fellowship, he was hosted by the digital rights organisation Access Now. Enormous thanks must go to all the staff at Access Now for their fantastic work which contributed to and inspired this project. Fanny Hidvégi in particular deserves the utmost gratitude for inspiring and guiding this project from its initial conception through to its completion. Her commitment to combatting AI bulls**t and protecting people's rights has been a constant source of inspiration.
This project also benefited enormously from the collaboration of the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. Jessica Fjeld provided support for this project from the get-go and organised the collaboration with two students from the Cyberlaw Clinic, Rachel Jang and Kathryn Mueller, whose work was instrumental to the project.
The material on this site has been immeasurably improved thanks to the thoughtful, insightful, and critical comments provided by a number of reviewers, including Agata Foryciarz, Sarah Chander, Alexa Steinbrück, and especially Vera Tylzanowski, who read every bit of the site more than once. Any faults or typos that remain are entirely Daniel's responsibility.
Lastly, the utmost gratitude must be expressed to all the authors whose work inspired and is featured on this site. We have drawn endless inspiration from the community of people working to make AI systems safer, to protect people's rights and freedoms and to combat AI bulls**t. We hope that this site does justice to all of that work, and helps guide readers in further study.
This website was put together as part of Daniel Leufer’s Mozilla Fellowship project. From October 2019 to July 2020, Daniel was hosted by the digital rights organisation, Access Now. Daniel’s background is in philosophy, and he has a PhD from KU Leuven in Belgium. He is also a member of the Working Group on Philosophy of Technology at KU Leuven. You can read more about his work here.
Alexa Steinbrück is a software developer, artist and design researcher. She has a degree in Artificial Intelligence from the University of Amsterdam. Her research interest is the representation and perception of AI in the public discourse and consumer products like voice assistants. She runs a lab for Artificial Intelligence & Robotics at the University of Art and Design 'Burg Giebichenstein' where she researches creative applications of AI technologies. You can read more about her work here.
Zuzana is a freelance graphic designer who is always looking for new challenges and learning new things. She transforms clients' needs into bold visuals. See her portfolio to let the work speak for itself.
From January 2020 to May 2020, Kathryn was a clinical student in the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. She is currently a Juris Doctor candidate at Harvard Law School and majored in political science at Tufts University. You can read more on Kathryn's LinkedIn page.
From January 2020 to May 2020, Rachel was a clinical student at the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. Her background is in law and international studies, and she is a Juris Doctor candidate at Harvard Law School. You can read more on Rachel's LinkedIn page.