We now live in a world where AI governs access to information, opportunity, and freedom. However, AI systems can perpetuate racism, sexism, ableism, and other harmful forms of discrimination, presenting significant threats to our society - from healthcare, to economic opportunity, to our criminal justice system.
We believe in the power of storytelling for social change. We tell stories that galvanize action, grounded in both research and art. We follow a scientific approach in our research, experiments, and policy recommendations. We rely on art, freedom, and creativity to spread the word, raise awareness of the harms of AI, and amplify the voices of marginalized communities in today’s AI ecosystem. Most importantly, we know making change is a team effort. Fighting for algorithmic justice takes all of us.
A poet of code and AI researcher motivated by personal experiences of algorithmic discrimination, Joy shared her story in a TED Featured Talk that has over 1.2 million views, and launched the Algorithmic Justice League.
A fundraiser and art lover, Nicole leads our Creative Communication Projects, forming collaborations with partners around the world to amplify our impact.
A scholar, activist, designer, and media-maker, Sasha oversees our research and provides strategic guidance while leading participatory community explorations like Drag Vs AI. Read their new book, Design Justice.
Everyone should have a real choice in how and whether they interact with AI systems.
It is of vital public interest that people are able to understand, in a meaningful way, the processes of creating and deploying AI, and that we fully understand what AI can and cannot do.
Politicians and policymakers need to create robust mechanisms that protect people from the harms of AI and related systems, both by continuously monitoring and limiting the worst abuses and by holding companies and other institutions accountable when harms occur. Everyone, especially those who are most impacted, must have access to redress for AI harms. Moreover, institutions and decision makers that utilize AI technologies must be subject to accountability that goes beyond self-regulation.
We aim to end harmful practices in AI rather than name and shame. We do this by conducting research and translating what we’ve learned into principles, best practices, and recommendations that serve as the basis for our advocacy, education, and awareness-building efforts. We are focused on shifting industry practices among those creating and commercializing today’s systems.
Joy Buolamwini, Founder of the Algorithmic Justice League, came face to face with discrimination. From a machine. It may sound like a scene from a sci-fi movie, but it carries real-world consequences.
While working on an engineering project, Joy found that facial analysis software struggled to detect her face. She knew this was more than a technical blunder, and rather than surrender, she responded with curiosity. Her MIT peers with lighter skin didn’t have the same issues, so Joy tried drawing a face on the palm of her hand. The machine recognized it immediately. Still, no luck with her real face, so she had to finish her project coding with a white mask over her face in order to be detected. Many questions surfaced, giving Joy the motivation and insights to start the Algorithmic Justice League.
In the early days, Joy and the AJL team committed their research to “unmasking bias” in facial recognition technology. They discovered large gender, race, and skin-color biases in commercially sold products from reputable companies including Amazon, IBM, and Microsoft. They then gained the support of hundreds of other researchers to advocate for more equitable and accountable technology. Exclusion and discrimination extend well beyond facial recognition technologies, affecting everything from healthcare and financial services to employment and criminal justice.
We need to stop taking these systems for granted. While technology can give us connectivity, convenience, and access, we need to retain the power to make our own decisions. Are we trading convenience for shackles? People must have a voice and a choice in how AI is used.
In the U.S., the teams designing these systems are not inclusive. Less than 20% of people in technology are women, and less than 2% are people of color. Meanwhile, one in two adults (that’s more than 130 million people) has their face in a facial recognition network. Those databases can be searched and analyzed by unaudited algorithms without any oversight, and the implications are massive.
In addition to inclusive and ethical practices in designing and building algorithms, we demand more transparency when these systems are being used. We need to know what the inputs are and how they were sourced, how performance is measured, the guidelines for testing, and the potential implications, risks, and flaws of applying them to real-life situations. This isn’t a matter of privacy preference. When corporations make money off of people’s faces and put people’s lives at risk without their consent, it is a violation of our civil liberties.
Sometimes respecting people means making sure your systems are inclusive, such as in the case of using AI for precision medicine. At times it means respecting people’s privacy by not collecting any data. And it always means respecting the dignity of an individual.
As an organization highlighting critical issues in commercial systems, we constantly face the risk of retaliation and attempts at silencing. While some companies react positively to our findings, others see us as a threat. Thankfully, we’ve grown as a movement and have the support of the most respected AI researchers, organizations, and thousands of “Agents of Change” who believe in our mission.
Recently, we saw the power of collaboration in a dispute with Amazon after they tried to discredit our research. Following our Founder’s rebuttals (published here and here), more than 70 researchers defended our work, and the National Institute of Standards and Technology released a comprehensive study showing extensive racial, gender, and age bias in facial recognition algorithms, validating the concerns raised by our research.
If you believe we all deserve equitable and accountable AI, then you can become an agent of change too. Whether you’re an enthusiast, engineer, journalist, or policymaker, we need you. Contact us or act now.