
Harms Resources

AI Deepfakes
Unmasking AI

Unmasking AI: My Mission to Protect What Is Human in a World of Machines (2023) by Dr. Joy Buolamwini details AI harms in emerging technologies. The book provides examples of deepfake harms (See Chapter 9, Pages 106-112).

AI and Transportation
AJL’s Freedom Flyers Campaign

The #FreedomFlyers campaign raises awareness about the TSA's expanding use of facial recognition at airports. Make your voice heard by filling out a TSA Scorecard.

AI and Transportation
Design Justice by Sasha Costanza-Chock

In Design Justice, Dr. Sasha Costanza-Chock, Senior Research Advisor to AJL, discusses how to design technology that works for everyone. She describes how airport security scanners are biased against transgender travelers because they are built on outdated, binary assumptions about gender.

AI Surveillance
AJL’s Gender Shades Justice Award

The Gender Shades Justice Award recognizes individuals who experienced an AI harm, spoke out about their experiences, and worked to prevent future harm. The inaugural award was presented to Robert Williams for his efforts in addressing Detroit PD’s use of facial recognition technologies.

AI and Education
Coded Bias Documentary

The Coded Bias film explores the fallout of AJL founder Dr. Joy Buolamwini’s discovery that facial recognition technologies don’t always work well for darker skin tones or female-appearing faces. Through the story of a Houston teacher who was almost fired, it warns about the risks of over-relying on automated tools to make important decisions about education.

AI and Housing
Coded Bias Documentary

The Coded Bias documentary, released in 2020, tells the story of Dr. Joy Buolamwini’s discovery that facial recognition does not see dark-skinned faces accurately. The film highlights the story of a building management company in Brooklyn that planned to implement facial recognition technology to allow tenants to enter their homes.

AI and Housing
Amicus Letter In Support Of The Brooklyn Tenants

In 2019, algorithmic bias researchers, including Dr. Joy Buolamwini, Dr. Timnit Gebru, and Inioluwa Deborah Raji, submitted an amicus letter in support of the Brooklyn tenants who were pushing back against the use of facial recognition technology in their building.

AI and Employment
#MyWorkMyRights Campaign

In 2024, AJL launched the #MyWorkMyRights Campaign to advocate for Consent, Compensation, Control, and Credit for writers in the age of generative AI. Share your story with AJL, post social media content online, or add your name to the Authors Guild Open Letter.

AI and Employment
Coded Bias Documentary

Coded Bias explores the fallout of AJL founder Dr. Joy Buolamwini’s discovery that facial recognition struggles to see dark-skinned faces accurately. The film underscores the dangers of relying on AI to make employment decisions through the story of an award-winning teacher who nearly loses his job because of a poor assessment from an automated tool.

AI and Employment
U.S. Equal Employment Opportunity Commission

In 2021, the EEOC launched an initiative to examine the effects of employment-related AI tools and offer guidance on how to ensure algorithmic fairness. Their joint statement outlines how federal agencies will ensure employers use these tools fairly and responsibly.

AI and Employment
Upturn

Upturn is a nonprofit that drives policy change to advance equity in the design, governance, and use of technology and to protect people’s opportunities. Their 2018 report on fairness in hiring algorithms is a key resource for understanding the landscape of different tools.

AI and Employment
Georgetown Law Draft Legislation

Georgetown Law’s Center on Privacy and Technology drafted the Worker Privacy Act bill which outlines protections against the invasive collection of employees’ data.

AI and Employment
Have I Been Trained

Have I Been Trained allows users to discover if their work has been used to train AI. Users can then opt out of future training by adding their work to the Do Not Train registry.

AI and Employment
The Worker Info Exchange

The Worker Info Exchange helps gig workers access data related to their employment at companies such as Uber, Amazon Flex, Bolt, and others. Their published research on tech and the gig economy provides insights and recommendations for advocacy.

AI and Employment
Coworker.org

Coworker.org also published a framework called, Little Tech is Coming for Workers, for reclaiming and building worker power. Their Bossware and Employment Tech database compiles more than 500 tech products impacting employees.

AI Deepfakes
The DEFIANCE Act

The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act allows victims of deepfake abuse to sue the creators of nonconsensual deepfakes. Passed by the Senate in 2024, it would become the first federal law protecting against harmful deepfakes if enacted.

AI Deepfakes
No Fakes Act

The No Fakes Act, introduced in the U.S. Senate in fall 2023, sets new standards for image rights, protecting people from having deepfake digital twins of them made without consent.

AI Deepfakes
NSA Cybersecurity Information Sheet

The National Security Agency, Federal Bureau of Investigation, and Cybersecurity and Infrastructure Security Agency released a Cybersecurity Information Sheet providing information about synthetic media threats and how deepfake technology can be used for malicious purposes.

AI Deepfakes
Learn to Detect Deepfakes Projects

Detect Fakes, a collaboration between the MIT Media Lab and Northwestern University researchers, and Media Literacy in the Age of Deepfakes, by the MIT Center for Advanced Virtuality, are two projects that help people better understand and identify AI-created content.

AI Deepfakes
Ban Deepfakes Petition

ControlAI’s Ban Deepfakes Campaign calls for making deepfakes illegal and holding creators accountable. Individuals can sign an open letter to urge lawmakers to take steps to protect people from deepfake harms.

AI Deepfakes
Another Body

The Another Body film tells the story of Taylor Klein (pseudonym), a college student who discovers that a classmate created an explicit deepfake using her image. Taylor’s story led to the creation of the #MyImageMyChoice movement, which raises awareness about explicit deepfakes and supports victims of online image abuse.

AI and Transportation
Racial Bias in Object Detection

In Predictive Inequity in Object Detection, Georgia Tech researchers showed that self-driving cars' systems work differently depending on pedestrians’ skin type.

AI and Transportation
Bias in Autonomous Driving

In Bias Behind the Wheel, researchers from China, the UK, and Singapore analyzed how self-driving cars sometimes fail to detect pedestrians depending on attributes like age and gender.

AI and Transportation
Alabama Automated Vehicle Law

In 2024, Alabama passed a law to regulate the use of self-driving cars, requiring companies and drivers to register vehicles with automated driving systems.

AI and Transportation
EFF Car Data Resources

The EFF has compiled a list of resources that allows individuals to figure out what data their cars are tracking and how to opt out of sharing where possible.

AI and Transportation
Mozilla’s Privacy Not Included

This project helps people track the privacy practices of car manufacturers, providing ratings of car companies and their privacy habits to help consumers make informed choices.

AI and Transportation
Examining ALPR Data

The Electronic Frontier Foundation (EFF) studied eight days of Automated License Plate Reader (ALPR) data to show how ALPRs work and push for more police accountability.

AI Surveillance
White Collar Crime Risk Zones

White Collar Crime Risk Zones is a machine learning-enabled map that predicts where financial crimes are likely to happen in the U.S. It was created in response to predictive policing systems that unfairly target communities of color.

AI Surveillance
StopSpying.org

StopSpying.org is a project with Amnesty International’s Ban the Scan Campaign that informs people about global surveillance tech. You can sign their petition against mass surveillance worldwide.

AI Surveillance
Project for Privacy and Surveillance Accountability

The Project for Privacy and Surveillance Accountability works to protect privacy and civil rights. Their Scorecard rates members of Congress on their privacy and surveillance policy.

AI Surveillance
Electronic Frontier Foundation Privacy Resources

The Electronic Frontier Foundation is a nonprofit helping individuals defend their online privacy with tools like the Privacy Badger browser add-on to block online trackers, the Spot the Surveillance AR tool that teaches you to identify surveillance technologies, and the Atlas of Surveillance.

AI Surveillance
Fight for the Future Campaigns

Fight for the Future is a group of activists and technologists who work on AI and data privacy projects like Stop Data Broker Abuse, Cancel Ring Nation, Stop Endangering Abortion Seekers, and Cancel Amazon + Police Partnerships.

AI Surveillance
American Dragnet

The American Dragnet report from Georgetown Law’s Center on Privacy and Technology investigates how U.S. Immigration and Customs Enforcement (ICE) uses surveillance data.

AI Surveillance
NYU Policing Project

The NYU School of Law’s Policing Project promotes fairness in law enforcement by working on legal and policy solutions to help regulate the use of AI by the police.

AI Surveillance
Predictive Policing Explained

In 2020, the Brennan Center released a predictive policing report explaining what the technology is and major concerns with its increased use.

AI and Education
Federal Trade Commission

The FTC published a statement on EdTech and the Children’s Online Privacy Protection Act, making it clear that it’s illegal for companies to compromise children’s privacy rights when using educational technology.

AI and Education
Center for Democracy and Technology

The Center for Democracy and Technology has published numerous resources on protecting student privacy.

AI and Education
Defend Digital Me

Defend Digital Me is a non-profit organization that provides research about student privacy and the use of AI. In 2022, they released “The State of Biometrics 2022: A Review of Policy and Practice in UK Education” report.

AI and Education
Digital Promise

Digital Promise, a global nonprofit, created the AI Digital Equity Framework to help schools make informed decisions about using AI technology responsibly.

AI and Education
The Student Data Privacy Project

This project, organized by parent advocates, provides templates parents can use to ask their children’s schools for information on how data is being used by educational technology.

AI and Education
EdTech Equity Project

The EdTech Equity Project offers toolkits to help schools, tech developers, and community members make sure AI technology is fair and works for everyone.

AI and Education
The Red Flag Machine

The Electronic Frontier Foundation’s Red Flag Machine quiz and accompanying research report show how student monitoring software, like GoGuardian, can make significant errors in the online content it flags.

AI and Education
Office of Educational Technology

After the release of Biden’s Executive Order on Artificial Intelligence, the U.S. Department of Education began publishing guidance to help schools use AI technology to benefit all students while also protecting their privacy.

AI and Education
National Disabled Law Students Association Report

The NDLSA’s Report on Concerns Regarding Online Administration of Bar Exams highlights the challenges disabled students face with e-proctoring tools when remotely taking bar exams during the COVID-19 pandemic. It focuses on concerns like AI bias and privacy.

AI and Housing
National Low Income Housing Coalition

In 2023, the National Low Income Housing Coalition (NLIHC) submitted a report on unjust automated screening processes to the Consumer Financial Protection Bureau and Federal Trade Commission. The report outlines the ways these screening systems discriminate against renters, particularly low-income renters.

AI and Housing
Countering Tenant Screening Initiative

The Countering Tenant Screening Initiative collects tenant screening reports to hold tenant screening algorithms accountable and to teach individuals about how tenant screening works.

AI and Housing
National Fair Housing Alliance

The National Fair Housing Alliance published the Method for Improving Mortgage Fairness report, which shows how underwriting data models can be made fairer through methods like Distribution Matching.

AI and Healthcare
Unmasking AI

Unmasking AI: My Mission to Protect What Is Human in a World of Machines (2023) by Dr. Joy Buolamwini details AI harms and oppression. The book provides examples of healthcare biases (See Chapter 5, Pages 46-55).

AI and Healthcare
Coalition for Health AI

CHAI is a nonprofit focused on the appropriate creation, evaluation, and use of AI in healthcare, particularly for health equity. They publish reports about how to drive high-quality healthcare by developing credible, fair, and transparent health AI systems.

AI and Healthcare
Department of Health and Human Services

The Department of Health and Human Services recently shared its plan for Promoting the Responsible Use of Artificial Intelligence in the Administration of Public Benefits and Guiding Principles to Address the Impact of Algorithmic Bias on Racial and Ethnic Disparities in Health and Health Care.

AI and Healthcare
Department of Veterans Affairs

The Department of Veterans Affairs has established the National Artificial Intelligence Institute to leverage AI research and development to improve the health of veterans.

AI and Healthcare
The World Privacy Forum

The World Privacy Forum has published several reports related to health privacy, healthcare and biometrics including Risky Analysis: Assessing and Improving AI Governance Tools and Covid-19 and HIPAA: HHS’s Troubled Approach to Waiving Privacy and Security Rules for the Pandemic.

AI and Finance
#NoFaceNoCase Campaign

In 2023, AJL launched the No Face, No Case campaign to challenge the IRS’s use of ID.me for identity verification. Using ID.me requires you to waive legal rights and give up personal data. However, refusing the service could keep you from accessing critical services and benefits.

AI and Finance
Federal Trade Commission

In a 2023 statement, the FTC addressed privacy, data security, and bias concerns with new machine learning systems. They warned that unproven claims, lack of accountability, privacy violations, and other bad business practices can violate the FTC Act and can be reported at ReportFraud.ftc.gov.

AI and Finance
Consumer Federation of America Research

In a series of studies on disparities in access to insurance, the CFA found that state-mandated auto coverage was more expensive for some drivers depending on their income, race, and geographic location.

AI and Finance
Consumer Financial Protection Bureau

The CFPB provides protections for borrowers, requiring creditors to have specific and accurate reasons for adverse actions, like denying credit. This transparency requirement provides some protections against the use of complex and opaque crediting algorithms.

©Algorithmic Justice League 2025