OUR MISSION

We’re leading a cultural movement towards EQUITABLE and ACCOUNTABLE AI

We now live in a world where AI governs access to information, opportunity, and freedom. However, AI systems can perpetuate racism, sexism, ableism, and other harmful forms of discrimination, presenting significant threats to our society, from healthcare, to economic opportunity, to our criminal justice system.

The Algorithmic Justice League is an organization that combines art and research to illuminate the social implications and harms of artificial intelligence.

AJL’s mission is to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms.

AJL is a fiscally sponsored project of Code for Science & Society.

We want the world to remember that who codes matters, how we code matters, and that we can code a better future.
THE ALGORITHMIC JUSTICE LEAGUE’S TEAM

Igniting the power of research, art and storytelling

We believe in the power of storytelling for social change. We tell stories that galvanize action with both research and art. We follow a scientific approach to our research, experiments and policy recommendations. We rely on art, freedom and creativity to spread the word, generate awareness about the harms in AI, and amplify the voice of marginalized communities in today’s AI ecosystem. Most importantly, we know making change is a team effort. Fighting for algorithmic justice takes all of us.

Founder and Artist-in-Chief
Dr. Joy Buolamwini (she/her)

A poet of code and AI researcher motivated by personal experiences of algorithmic discrimination, Dr. Joy shared her story in a featured TED Talk that has over 1.4 million views and launched the Algorithmic Justice League in 2016.

Senior Research Advisor
Sasha Costanza-Chock (they/she), PhD

A researcher, designer, and troublemaker, Sasha led AJL's inaugural research and product design teams. Read Sasha's latest book, Design Justice, freely available here.

Director of Policy and Advocacy
Tawana Petty (she/they)

A mother, social justice organizer, poet, and author, Tawana represents AJL in national and international processes shaping AI governance. She has served as a program committee member for ACM FAccT and as an Ethics Reviewer for NeurIPS.

Communications
Meagan Adele Lopez (she/her)

Founder of Lady Who Productions, Adele is an author, social media expert, and filmmaker. She and her team manage social media and special events for AJL.

Senior Advisor
Dr. José Ramón Lizárraga (he/him)

A learning scientist and multiple Webby Award winner, Dr. Lizárraga provides AJL with expertise in creative and innovative approaches to mass communications and public pedagogy.

Lead Excoded Experiences Analyst
Berhan Taye (she/her)

Co-host of the Terms and Conditions podcast, Berhan is an independent researcher, analyst, and facilitator. She leads the analysis of AI harms reports, develops response strategies, and identifies resources for individuals acutely impacted by AI systems.

Senior Advisor
Dr. Ruha Benjamin (she/her)

Professor of African American Studies at Princeton University, founding director of the Ida B. Wells JUST Data Lab, and author of three books, Viral Justice (2022), Race After Technology (2019), and People’s Science (2013), as well as editor of Captivating Technology (2019), Dr. Benjamin advises AJL on the relationships between innovation and inequity, knowledge and power, race and citizenship, and health and justice.


Core Members of the League
Joy Buolamwini

A poet of code and AI researcher motivated by personal experiences of algorithmic discrimination, Joy shared her story in a TED featured Talk that has over 1.2 million views and launched the Algorithmic Justice League.

Nicole Hughes

A fundraiser and art lover, Nicole leads our Creative Communication Projects forming collaborations with partners around the world to amplify our impact.

Sasha Costanza-Chock (they/she), PhD, Director of Research & Design

A researcher, designer, and troublemaker, Sasha leads AJL's research and product design teams, and can be found facilitating participatory projects like #DragVsAI and crash.ajl.org. Read Sasha's latest book, Design Justice, freely available here.

Dana Tzegaegbe, Project Manager and Operations Lead

Dana is responsible for creating a healthy and equitable team culture, ensuring operational excellence across the team and supporting development and budget oversight.

Grace Foster, MBA, Director of Development

Grace’s experiences of inequity, marginalization, and lack of access to wealth and opportunity have fueled her ambition to be part of the change. She champions equality and inclusivity for marginalized communities and works to empower those who have not had access to opportunity.

Select Research Collaborators

Dr. Timnit Gebru, Dr. Margaret Mitchell, and Inioluwa Deborah Raji.

CORE FUNDERS

Ford Foundation, MacArthur Foundation, and individual donors and supporters like YOU.

Extended Crew

Casa Blue, Yancey Consulting, BU/MIT Technology Law Center, and Bocoup.

ADVISORY COMMITTEE

- Dr. Joy Buolamwini
- Megan Smith
- Brenda Darden Wilkerson

OUR PRINCIPLES

We work to mitigate the harms and biases of AI by promoting four core principles.

Affirmative Consent

Everyone should have a real choice in how and whether they interact with AI systems.

Meaningful Transparency

It is of vital public interest that people can meaningfully understand how AI systems are created and deployed, and that we have a full understanding of what AI can and cannot do.

Continuous oversight and accountability

Politicians and policymakers need to create robust mechanisms that protect people from the harms of AI and related systems, both by continuously monitoring and limiting the worst abuses and by holding companies and other institutions accountable when harms occur. Everyone, especially those who are most impacted, must have access to redress from AI harms. Moreover, institutions and decision makers that use AI technologies must be subject to accountability that goes beyond self-regulation.

Actionable Critique

We aim to end harmful practices in AI, rather than name and shame. We do this by conducting research and translating what we’ve learned into principles, best practices and recommendations that we use as the basis for our advocacy, education and awareness-building efforts. We are focused on shifting industry practices among those creating and commercializing today’s systems.

OUR ORIGINS

The inspiration for the Algorithmic Justice League

Dr. Joy Buolamwini, Founder of the Algorithmic Justice League, came face to face with discrimination. From a machine. It may sound like a scene from a sci-fi movie, but it carries meaningful real-world consequences.

While she was working on a graduate school project, facial analysis software struggled to detect her face. She suspected this was more than a technical blunder, and rather than surrender, she responded with curiosity. Her MIT peers with lighter skin didn’t have the same issues, so Joy tried drawing a face on the palm of her hand. The machine recognized it immediately. Still no luck with her real face, though, so she had to finish coding her project wearing a white mask in order to be detected. Many questions surfaced, giving Joy the motivation and insight to start the Algorithmic Justice League.


In the early days, Joy committed her research to “unmasking bias” in facial recognition technologies. As a graduate student at MIT, she discovered large gender and skin-type biases in commercially sold products from reputable companies, including IBM and Microsoft. She then co-authored the highly influential Gender Shades paper with Dr. Timnit Gebru, and the follow-up Actionable Auditing paper with Agent Deb Raji that kept Amazon on its toes. As an artist, she began creating pieces to humanize AI harms, with her award-winning visual spoken word poem "AI, Ain't I A Woman?" shown in exhibitions around the world. This combination of art and research gained the support of hundreds of other researchers advocating for more equitable and accountable technology. Exclusion and discrimination extend well beyond facial recognition technologies, affecting everything from healthcare and financial services to employment and criminal justice.

The deeper we dig, the more remnants of prejudice we will find in our technology. We cannot afford to look away this time; the stakes are simply too high. We risk losing the gains made by the civil rights movement and other movements for equality under the false assumption of machine neutrality.

MORE EXAMPLES

Automated systems discriminate on a daily basis

In the US, a widely-used healthcare algorithm falsely concludes that black patients are healthier than equally sick white patients. AI that is used to determine hiring decisions has been shown to amplify existing gender discrimination. Law enforcement agencies are rapidly adopting predictive policing and risk assessment technologies that reinforce patterns of unjust racial discrimination in the criminal justice system. AI systems shape the information we see on social media feeds and can perpetuate disinformation when they are optimized to prioritize attention-grabbing content. The examples are endless.


AI HARMS

Individual Harms, Illegal Discrimination, Unfair Practices, Collective Social Harms
Credit: Courtesy of Megan Smith (former Chief Technology Officer of the USA)
CONSENT AND TRANSPARENCY

We’re still entitled to basic human rights.

We cannot take systems for granted. While technology can give us connectivity, convenience and access, we need to retain the power to make our own decisions. Are we trading convenience for shackles? People must have a voice and a choice in how AI is used.

In the U.S., the teams designing these systems are not inclusive. Less than 20% of people in technology are women, and less than 2% are people of color. Also, one in two adults (that’s more than 130 million people) has their face in a facial recognition network. Those databases can be searched and analyzed by unaudited algorithms without any oversight, and the implications are massive.

Beyond inclusive and ethical practices in designing and building algorithms, we demand more transparency when these systems are being used. We need to know what the inputs are and how they were sourced, how performance is measured, the guidelines for testing, and the potential implications, risks, and flaws when applying them to real-life situations. This isn’t a matter of privacy preferences; it’s a violation of our civil liberties when corporations make money off of people’s faces and people’s lives are put at risk without their consent.


"Sometimes respecting people means making sure your systems are inclusive, such as in the case of using AI for precision medicine. At times it means respecting people’s privacy by not collecting any data. And it always means respecting the dignity of an individual."

- Dr. Joy Buolamwini, Poet of Code.
AGENTS OF CHANGE

Calling collaborators, contributors, and volunteers.

As an organization highlighting critical issues in commercial systems, we constantly face the risk of retaliation and attempts at silencing. While some companies react positively to our findings, others do not. Thankfully, we’ve grown as a movement and have the support of the most respected AI researchers, organizations and thousands of “Agents of Change” that believe in our mission.

We saw the power of collaboration come together in a face-off with Amazon after they tried to discredit peer-reviewed research. Following Dr. Joy Buolamwini's rebuttals (published here and here), more than 70 researchers defended this work, and the National Institute of Standards and Technology released a comprehensive study showing extensive racial, gender, and age bias in facial recognition algorithms, validating the concerns raised by the research.

If you believe we all deserve equitable and accountable AI, then you can become an agent of change too. Whether you’re an enthusiast, engineer, journalist, or policy maker, we need you. Contact us or act now.

HASHTAGS

#CodedBias #EquitableAI #AccountableAI #InclusiveAI #ResponsibleAI #EthicalAI #AIbias #AIharms #MachineBias #ArtificialIntelligence #InclusiveTech #AJL #AlgorithmicJusticeLeague

©Algorithmic Justice League 2022