
AI and Finance

Fighting Fraud, Protecting Privacy, And Dismantling The Digital Poorhouse

Overview

Companies use proprietary financial algorithms to assess risk, determine creditworthiness, and verify identity. However, these uses carry the risk of exposing consumers to identity theft, furthering discriminatory lending practices, and enabling fraud.

Discriminatory lending disproportionately impacts people who are already vulnerable, including low-income borrowers, women, and people of color. Additionally, increasingly prevalent generative AI tools give bad actors new ways to manipulate and defraud consumers through scams.

"The use of education data in credit decisions is particularly troublesome given the continuing pattern of disparate access to education."

Consumer Reports, "Where you attend college could be costing you more to borrow and refinance education loans, report says"

In the World

Credit Score Bias

Credit scoring models that use AI may inherit biases from historical datasets, penalizing groups that were denied credit more often in the past, and may overlook alternative factors that demonstrate creditworthiness.

  • A case study report by the Student Borrower Protection Center found that a community college borrower would pay $1,314 more on a $10,000 loan than a student with the same credit profile pursuing a bachelor's degree.

Predatory Lending

Predatory lending targets vulnerable borrowers and imposes unfair or abusive loan terms on them, often through deceptive or coercive tactics. AI can be used to identify targets and create content for these schemes.

  • 🏆 In 2021, a report from The Markup showed how advertisers were targeting people for home equity loans, credit cards, and other financial services based on their age. Exposing this practice led Facebook to promise to take action against these advertisers.
  • 🏆Google also began cracking down on predatory loan advertisements with fraudulent terms, preventing corporations from using personalized ad targeting for finance ads.

Discriminatory Pricing

Insurance companies and banks can use models that take location, race, and other demographic information into account. These tools risk amplifying discriminatory outcomes that already harm vulnerable communities.

  • A report by ProPublica and Consumer Reports found that auto insurance rates for a Chicago resident of East Garfield Park were nearly four times those of a Lakeview resident across town, despite higher vehicle crime rates in Lakeview. The analysis tied the disparity to higher rates in zip codes with larger Black and Latinx populations.

Scams and Fraud

Bad actors can use artificial intelligence to create realistic fake identities to open accounts or conduct fraudulent transactions.

  • In 2023, an investor in Florida was the target of a deepfake voice scam that mimicked his voice to trick his banker into handing over his money.


Amplify Your Voice. Support the Movement.

You believe you have been harmed by AI

If you believe you've been harmed by artificial intelligence, please fill out our harms report form. We will get back to you within 48 hours on weekdays and 72 hours over the weekend.

You are seeking advice

If you are seeking legal advice or representation, consider reaching out to an ACLU office in your state.

Report Harm

Resources

#NoFaceNoCase Campaign

In 2023, AJL launched the No Face, No Case campaign to challenge the IRS’s use of ID.me for identity verification. Using ID.me requires you to waive legal rights and give up personal data. However, refusing the service could keep you from accessing critical services and benefits.

Federal Trade Commission

In a 2023 statement, the FTC addressed privacy, data security, and bias concerns raised by new machine learning systems. It warned that unproven claims, lack of accountability, privacy violations, and other bad business practices can violate the FTC Act, and such practices can be reported at ReportFraud.ftc.gov.

Consumer Federation of America Research

In a series of studies on disparities in access to insurance, the CFA found that state-mandated auto coverage was more expensive for some drivers depending on their income, race, and geographic location.

Consumer Financial Protection Bureau

The CFPB provides protections for borrowers, requiring creditors to give specific and accurate reasons for adverse actions such as denying credit. This transparency requirement offers some protection against the use of complex and opaque credit algorithms.

Similar Harms

AI and Housing
AI and Transportation
See all harms
