Research
Race and Surveillance Brief

The CCSRE brief discusses activist strategies for challenging surveillance technology, reviews the literature on racialized surveillance, and provides recommendations for researchers and community organizers.

UNESCO Paper: Towards Credible Third-Party Audits Of AI Systems

To address the responsible, ongoing development of AI, UNESCO released a timely paper, "Missing Links in Governance."

Who Audits the Auditors

The "Who Audits the Auditors?" paper was presented at the ACM FAccT Conference on June 21, 2022. Algorithmic audits (or AI audits) have grown in popularity; however, they remain poorly defined.

Bug Bounties for Algorithmic Harms Report

Lessons from cybersecurity vulnerability disclosure for algorithmic harms discovery, disclosure, and redress, led by AJL researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Dr. Joy Buolamwini.

Facial Recognition Technologies: A Primer

Facial Recognition Technologies: A Primer provides a basic introduction to the terminology, applications, and difficulties of evaluating this complex set of technologies.

Facial Recognition Technologies in the Wild: A Call for a Federal Office

In Facial Recognition Technologies in the Wild: A Call for a Federal Office, researchers Erik Learned-Miller, Joy Buolamwini, Vicente Ordóñez, and Jamie Morgenstern propose an FDA-inspired model that categorizes facial recognition technologies.

Actionable Auditing: Investigating the Impact of Biased Performance Results of Commercial AI Products

Although algorithmic auditing has emerged as a key strategy to expose systematic biases in software platforms, we struggle to understand the real impact of these audits.

Gender Shades: Uncovering gender and skin-type bias in commercial AI products

Gender Shades is an excavation of inadvertent negligence that will cripple the age of automation and exacerbate inequality if left to fester.

©Algorithmic Justice League 2025