Is it okay for machines of silicon and steel or flesh and blood to erase our contributions? Is it okay for a machine to erase you and me? Is it okay for machines to portray women as subservient? Is it okay for Google and others to capture data without our knowledge? These questions and new research led by Allison Koenecke inspired the creation of “Voicing Erasure”: a poetic piece recited by champions of women’s empowerment and leading scholars on race, gender, and technology.
A recent research study led by Allison Koenecke reveals large racial disparities in the performance of five popular speech recognition systems, with the worst performance on African American Vernacular English speakers. See Original Research
Voice recognition devices are known for "listening in" on our conversations and storing that information, often without our knowledge.
These systems are frequently given women's voices and subservient "personalities," which further reinforces the negative stereotype that women are submissive.
A New York Times article highlighting the research on biases in speech recognition systems failed to mention the lead researcher, Allison Koenecke, as well as the other women who were part of the research team.
Help us shed light on the impact of AI harms on civil rights and people’s lives around the world. You can share your story using the hashtag #CodedBias or send us a private message.
We cannot let the promises of AI overshadow real and present harms. Like facial recognition, voice recognition systems reflect the biases of their creators and our society. Now, more than ever, we must fight back. If you're aware of any algorithmic biases that impact you or others in your communities, please share them with the world.
Automated speech recognition (ASR) systems, which use sophisticated machine-learning algorithms to convert spoken language to text, have become increasingly widespread, powering popular virtual assistants, facilitating automated closed captioning, and enabling digital dictation platforms for health care. Over the last several years, the quality of these systems has dramatically improved, due both to advances in deep learning and to the collection of large-scale datasets used to train the systems. There is concern, however, that these tools do not work equally well for all subgroups of the population. Here, we examine the ability of five state-of-the-art ASR systems—developed by Amazon, Apple, Google, IBM, and Microsoft—to transcribe structured interviews conducted with 42 white speakers and 73 black speakers.
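At its core, the disparity analysis comes down to comparing transcription error rates across speaker groups. Below is a minimal, illustrative sketch of that idea in Python, assuming reference transcripts and ASR outputs are already in hand; the open-source `jiwer` package and the record layout shown here are our own assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch: comparing word error rate (WER) across speaker groups,
# given human reference transcripts and ASR hypotheses.
# Assumes the open-source `jiwer` package (pip install jiwer); the records
# below are hypothetical examples, not data from the study.

from collections import defaultdict
import jiwer

# One record per audio snippet: speaker group, reference transcript, ASR output.
records = [
    {"group": "white", "reference": "he went to the store", "hypothesis": "he went to the store"},
    {"group": "black", "reference": "he been working all day", "hypothesis": "he being working on day"},
    # ... the study drew on many snippets from 42 white and 73 black speakers
]

# Pool references and hypotheses per group, then compute an overall WER for each group.
by_group = defaultdict(lambda: {"refs": [], "hyps": []})
for r in records:
    by_group[r["group"]]["refs"].append(r["reference"])
    by_group[r["group"]]["hyps"].append(r["hypothesis"])

for group, data in by_group.items():
    # WER: fraction of reference words that were substituted, deleted, or inserted.
    wer = jiwer.wer(data["refs"], data["hyps"])
    print(f"{group}: WER = {wer:.2%}")
```

A gap in the per-group WER values is the kind of disparity the study reports; in practice, the comparison would also control for factors such as snippet length and audio quality.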
Coded Bias illuminates our mass misconceptions about AI, emphasizes the urgent need for legislative protection, and follows the Algorithmic Justice League's journey to push for the first-ever legislation in the U.S. to place limits on facial recognition technology. Coded Bias weaves together the personal stories of people whose lives have been directly impacted by unjust algorithms. You can make an impact by helping us spread the word about the film, hosting a screening, and/or sharing your #CodedBias story with your network.
Stay up to date with the movement towards equitable and accountable AI.
Lead Author of “Racial Disparities in Automated Speech Recognition” Study
@allisonkoe
Professor of Law at UCLA and Columbia Law School
@sandylocks
CEO of Shift7
@smithmegan
Author of Algorithms of Oppression
@safiyanoble
“Automated speech recognition (ASR) systems are now used in a variety of applications to convert spoken language to text, from virtual assistants, to closed captioning, to hands-free computing. By analyzing a large corpus of sociolinguistic interviews with white and African American speakers, we demonstrate large racial disparities in the performance of five popular commercial ASR systems.”
Please contact comms@ajlunited.org or download our media kit. We appreciate every opportunity that helps us unmask the imminent harms and biases of AI.