Equitable AI requires that people have agency and control over how they interact with an AI system. To have agency, people must first be aware of how these systems are used all around them — for example, at airports, stadiums, schools, hospitals and in hiring and housing — who is involved in creating the system — from business, government and academia — and what the risks and potential harms are.
Equitable AI requires securing affirmative consent from people over how, or whether, they interact with an AI system. The idea is that people understand exactly how their data will be used and that, if consent is given, their data is limited to that permitted use. The default for affirmative consent is “opt-in,” and people who elect not to opt in must not suffer any penalty or denial of access to platforms or services as a result. Unlike the terms of service that tech companies require people to click through to use their platforms, affirmative consent for AI cannot be coerced.
In addition to providing agency, equitable AI respects human life, dignity and rights.
For an AI system to demonstrate meaningful transparency it must provide an explanation of how the system works, how it was designed, and for what specific purpose. Critically, meaningful transparency allows people to clearly understand the intended capabilities and known limitations of the AI.
To demonstrate this standard, companies and governments must share information about how AI is used in their own decision-making processes and how AI systems are sold to others. The goal is for people to understand the societal risks every time they encounter an AI system, and how their data is being used by people in power to make decisions that affect them. Sharing this information may be supported by reporting requirements that are mandated through law or agreed to through codes of conduct.
AI systems are constantly evolving. As a result, Accountable AI requires continuous oversight by independent third parties. To support continuous oversight, there must be laws that require companies and government agencies deploying AI to meet minimum requirements, for example: maintaining ongoing documentation, submitting to audit requirements, and allowing civil society organizations access for assessment and review.
There are many different terms that have been used to describe a policy approach to AI. We want to be clear about what we mean by equitable and accountable as separate from these other approaches.
The notion of “Ethical AI” has been leveraged by big tech companies — investors and executives strategically aligned with academia and government — to push for voluntary principles over government regulation. The idea of using ethics is not problematic in itself, but it has led to a proliferation of “AI Principles” with limited means for translating those principles into practice. A system of AI ethics allows companies to be accountable only to rules that they have set for themselves; the ball is in their court from beginning to end. Appeals to ethical AI can also be leveraged by governments to justify questionable policies that have not been settled in law. From our perspective this is a limited approach because it creates no mandatory requirements and bans no uses of AI. Our focus is instead on action that bridges the gap from principles to practice.
While calls for Inclusive AI may be well-intended, inclusion alone does not mitigate harm. At times, respecting life, dignity and rights may require that a system gather more data with affirmative consent, for example to support accuracy in precision medicine across diverse groups. At other times, including more data can make a system that unfairly subjects vulnerable populations to additional targeted scrutiny even more effective at doing so. In the name of “inclusion,” data may also be collected in violation of privacy and without consent.
Technical standards are not enough to ensure that AI systems will not be deployed to threaten civil liberties and amplify existing patterns of discrimination. The use of facial recognition in surveillance threatens the civil liberties of all citizens, including freedom of expression, freedom of association and due process.
All of our recommendations supporting equitable and accountable AI prioritize process, embedded at every level of operations, over any individual product. An effective process provides a reliable standard for outside evaluation and can be applied no matter the specific technology in question. It also allows the voice and choice of the people who will be impacted to be incorporated throughout the lifecycle of an AI system and in ultimate decisions regarding its use. Companies and governments must evaluate their practices with respect to these priorities.