New report from AJL


Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem

Authors: Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini

Algorithmic audits (or "AI audits") are an increasingly popular mechanism for algorithmic accountability; however, they remain poorly defined. Without a clear understanding of audit practices, let alone widely used standards or regulatory guidance, claims that an AI product or system has been audited, whether by first-, second-, or third-party auditors, are difficult to verify and may exacerbate, rather than mitigate, bias and harm. To address this knowledge gap, we provide the first comprehensive field scan of the AI audit ecosystem. We share a catalog of individuals (N=438) and organizations (N=189) who engage in algorithmic audits or whose work is directly relevant to algorithmic audits; conduct an anonymous survey of the group (N=152); and interview industry leaders (N=10).

We identify emerging best practices as well as methods and tools that are becoming commonplace, and enumerate common barriers to leveraging algorithmic audits as effective accountability mechanisms. 

We outline policy recommendations to improve the quality and impact of these audits, and highlight proposals with wide support from algorithmic auditors as well as areas of debate. Our recommendations have implications for lawmakers, regulators, internal company policymakers, and standards-setting bodies, as well as for auditors.

Read the research paper.
Policy recommendations

1. Require the owners and operators of AI systems to engage in independent algorithmic audits against clearly defined standards (one such standard is illustrated in the sketch after this list).
2. Notify individuals when they are subject to algorithmic decision-making systems.
3. Mandate disclosure of key components of audit findings for peer review.
4. Consider real-world harm in the audit process, including through standardized harm incident reporting and response mechanisms (see the sketch after this list).
5. Directly involve the stakeholders most likely to be harmed by AI systems in the algorithmic audit process.
6. Formalize evaluation and, potentially, accreditation of algorithmic auditors.
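To make recommendations #1 and #4 concrete, here is a minimal, illustrative Python sketch of an audit check against one clearly defined standard, together with a hypothetical harm incident record. The four-fifths (80%) disparate-impact rule used here is one well-known fairness threshold, chosen only as an example; the `HarmIncidentReport` fields and all names in the usage example are assumptions about what a standardized format might contain, not a schema proposed in the AJL report.

```python
from dataclasses import dataclass
from datetime import date

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive decisions (1s) among a group's outcomes."""
    return sum(outcomes) / len(outcomes)

def passes_four_fifths_rule(group_a: list[int], group_b: list[int],
                            threshold: float = 0.8) -> bool:
    """True if the lower group's selection rate is at least `threshold`
    times the higher group's rate (no prima facie disparate impact)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    lower, higher = min(rate_a, rate_b), max(rate_a, rate_b)
    if higher == 0:
        return True  # nobody selected in either group: no disparity to flag
    return lower / higher >= threshold

# Hypothetical harm incident record (recommendation #4). Field names are
# assumptions about what a standardized report format might include.
@dataclass
class HarmIncidentReport:
    system_name: str            # the AI system implicated
    date_reported: date
    affected_group: str         # stakeholders harmed (cf. recommendation #5)
    description: str            # what happened, in plain language
    severity: str               # e.g. "low" / "medium" / "high"
    reported_to_operator: bool  # was the system's operator notified?

if __name__ == "__main__":
    # Made-up decision data: 1 = approved, 0 = denied.
    group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
    group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.25
    print(passes_four_fifths_rule(group_a, group_b))  # False: 0.25/0.625 = 0.4

    report = HarmIncidentReport(
        system_name="LoanApprovalModel-v2",  # hypothetical system name
        date_reported=date(2022, 6, 1),
        affected_group="group_b applicants",
        description="Applicants in group_b approved at a disparate rate.",
        severity="high",
        reported_to_operator=True,
    )
    print(report.system_name, report.severity)
```

In practice, a mandated audit standard would fix the metric, the threshold, and the reporting schema in advance, which is exactly what recommendation #1 asks lawmakers, regulators, and standards-setting bodies to do.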
