
AI and Healthcare

Diagnosing Algorithmic Bias For Equity In Access and Outcomes


Overview

Healthcare providers and insurance companies use automated algorithms to assist in clinical diagnoses, determine the order in which patients are seen, and evaluate the need for financial assistance. The use of biometrics for patient identification, including fingerprints, facial recognition, and DNA matching, also raises concerns about privacy violations and data misuse.

The lack of diversity in clinical trials can produce biased outcomes when these tools are applied in the real world. There is also little transparency about where and how clinical algorithms are used, which makes harmful practices difficult to identify.

On one hand, Aadhaar is seen as an intervention against everyday practices of corruption. On the other, it has been criticized as a violation of privacy and a means of strengthening surveillance.

TIME Magazine

India Has Been Collecting Eye Scans and Fingerprint Records From Every Citizen. Here’s What to Know


In the World

Financial Assistance

Government programs that provide financial assistance to patients may allocate resources using automated decision-making tools.

  • In 2024, plaintiffs won a lawsuit against the TennCare Connect system, an algorithmic tool used to determine eligibility for Tennessee’s Medicaid programs that had illegally denied coverage to thousands of Tennesseans.
  • Humana, a health insurance provider, faces a class-action lawsuit for using NaviHealth’s nH Predict algorithm to wrongfully deny care to elderly people on the company’s Medicare Advantage plans.

Clinical Diagnoses

Healthcare institutions can use clinical algorithms to diagnose patients, though there are concerns about these systems being trained on datasets that lack diversity and representation.

  • In 2021, Google released a dermatology app that was found to work poorly on people with dark skin tones.
  • 🏆Melalogic was started as a platform to provide people of color with a single source of skin health information. Users can submit images of skin conditions and receive suggestions for treatments and solutions from Black skin care professionals.

Critical Care

Automated decision-making algorithms have been used to determine the order in which patients are seen based on their medical needs.

  • 🏆Dr. Sandra Looby fought for her son, ill with COVID-19, to get the care he needed despite normal pulse oximeter readings. Her advocacy helped expose how pulse oximeters can be inaccurate for patients with darker skin tones.

Patient Identification and Matching

Fingerprints, voice recognition, facial recognition, retinal scanning, and DNA matching may be used for patient matching and patient identification.

  • India’s Aadhaar is the world’s largest biometric national identification system. Despite its promised benefits, it has raised privacy concerns and led to instances of exclusion, technical glitches, and debates over the right to life.


Amplify Your Voice. Support the Movement.


You believe you have been harmed by AI

If you believe you’ve been harmed by Artificial Intelligence, please fill out our harms report form. We will get back to you within 48 hours on weekdays and 72 hours over the weekend.

You are seeking advice

If you are seeking legal advice or representation, consider reaching out to an ACLU office in your respective state.


Resources

Unmasking AI

Unmasking AI: My Mission to Protect What Is Human in a World of Machines (2023). This book by Dr. Joy Buolamwini details AI harms and oppression, with examples of healthcare biases (see Chapter 5, pages 46-55).

Coalition for Health AI

CHAI is a non-profit focused on the appropriate creation, evaluation, and use of AI in healthcare, particularly for health equity. It publishes reports on how to drive high-quality healthcare by developing credible, fair, and transparent health AI systems.

Department of Health and Human Services

The Department of Health and Human Services recently shared its plan for Promoting the Responsible Use of Artificial Intelligence in the Administration of Public Benefits and Guiding Principles to Address the Impact of Algorithmic Bias on Racial and Ethnic Disparities in Health and Health Care.

Department of Veterans Affairs

The Department of Veterans Affairs has established the National Artificial Intelligence Institute to leverage AI research and development to improve the health of veterans.

The World Privacy Forum

The World Privacy Forum has published several reports related to health privacy, healthcare, and biometrics, including Risky Analysis: Assessing and Improving AI Governance Tools and Covid-19 and HIPAA: HHS’s Troubled Approach to Waiving Privacy and Security Rules for the Pandemic.

Similar Harms

  • AI and Finance
  • AI Surveillance


©Algorithmic Justice League 2025