
AI is not meant to be fair


Predictive AI as an inequity-entrenching technology


We aren't there yet

Objective AI that is fairer than mankind is the hype; it's what we envision for the future. But we're no closer to building that version of AI today than you are to flying your jetpack to work next week. AI, in its current form, has only one goal: to maintain the status quo.

'Predictive' AI systems don't predict. We feed the model data that it uses to understand the state of the world, and it then uses the insights from that data to maintain that state. Whoever wasn't doing well during the timeframe that the training data captured is never going to do well in the AI system. Which hurts nearly everyone. Because, while we all like to think we're doing well, we are not all doing well and we haven't been for quite some time.

Where we are

AI as Pattern Recognition


AI in general, and deep learning specifically, are pattern recognition machines. They comb through massive amounts of data and boil it down to core relationships between variables. If we feed a model a ton of cat pictures, it determines the core features of cats and uses that knowledge to estimate how likely it is that the next picture it sees is a cat. If we feed it a series of X's and O's, it counts how often an X follows an O and uses that knowledge to estimate how likely it is that the next letter after an O will be an X. If we show it a variety of pegs succeeding and failing to be placed in a hole, it determines the sizes and shapes of objects that have and haven't been successfully placed in the hole and uses that knowledge to estimate how likely it is that the next object we show it will fit in the hole.
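To make the middle example concrete, here's a rough sketch (hypothetical data, plain Python) of what that kind of 'prediction' amounts to: count how often one symbol followed another in the training data and report the resulting frequency as a probability.

```python
# Toy sketch: "predicting" the next symbol is just counting transitions
# in the training data. Hypothetical example, not any real system.
from collections import Counter

training_sequence = "XOXOXXOXOOXOXXOX"

# Count how often each symbol follows an 'O'.
followers = Counter(
    nxt for cur, nxt in zip(training_sequence, training_sequence[1:]) if cur == "O"
)

total = sum(followers.values())
for symbol, count in followers.items():
    print(f"P(next = {symbol} | current = O) = {count / total:.2f}")
# The model can only restate the frequencies it saw; it has no idea
# whether the pattern should continue.
```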

[Image: Kid placing Legos]

While the results aren't rocket science, the process is extremely complicated to program. Which is why technologists tend to oversell the outcomes. The benefit of AI isn't that the tech is making complex decisions; it's that it processes large streams of data and weighs an enormous number of possibilities at almost incomprehensible speed. That is a significant technological advance. But clearly not as exciting as what we've been sold. An AI that doesn't actually think cannot produce results that are fair. It can only give us the probability that the next item follows the established pattern. Misrepresenting this capability often results in very public disconnects:

AI Alters School Grades
We fed the AI data indicating that students from elite schools produce high performers and students from disadvantaged schools produce low performers. The AI understood that to be the core relationship between the entities. When we asked it to 'predict' student performance at the end of the school year, it produced results that maintained the status quo:

- It raised grades for low-performing students from elite schools
- It lowered grades for high-performing students from disadvantaged schools
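A minimal sketch of the mechanism, using invented numbers rather than the actual system: when the only relationship the model has learned ties grades to the type of school, its 'prediction' for any individual student simply reverts to the school's historical average.

```python
# Hypothetical sketch with invented numbers: a "predictor" that has only
# learned the historical average grade for each school type.
historical_average = {"elite": 85, "disadvantaged": 55}

def predict_grade(school_type, teacher_assessed_grade):
    # The individual's own performance never enters the calculation;
    # the model just echoes the status quo for their group.
    return historical_average[school_type]

print(predict_grade("elite", teacher_assessed_grade=50))          # 85: raised
print(predict_grade("disadvantaged", teacher_assessed_grade=92))  # 55: lowered
```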

Healthcare App Favors Healthier White People
We fed the AI data showing that white patients incur more medical costs than Black patients for the same illnesses. The AI understood that to be the core relationship between the entities. When we asked it to 'predict' which patients would benefit from increased medical spending, it produced results that maintained the status quo:

- It allocated more money to healthier white patients
- It allocated less money to sicker Black patients
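The same failure, sketched with made-up numbers: because the target the model is asked to predict is historical spending rather than illness, a sicker patient whose care has historically been underfunded scores as lower 'need'.

```python
# Hypothetical sketch: the model ranks patients by predicted *cost*, a proxy
# that understates need for patients whose care was historically underfunded.
patients = [
    {"name": "patient_a", "chronic_conditions": 2, "historical_cost": 12000},
    {"name": "patient_b", "chronic_conditions": 5, "historical_cost": 7000},
]

# "Need" score = predicted future cost, approximated here by past cost.
ranked = sorted(patients, key=lambda p: p["historical_cost"], reverse=True)
for p in ranked:
    print(p["name"], "score:", p["historical_cost"], "conditions:", p["chronic_conditions"])
# patient_a (healthier, but historically more spent on) outranks patient_b (sicker).
```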

Criminal Justice Apps Target Minorities
We fed the AI data showing that Black people and people who live in minority-heavy communities are stopped and arrested more than white people or people who live in white communities. The AI understood that to be the core relationship between the entities. When we asked it to 'predict' which neighborhoods and people we should send our police forces to, it produced results that maintained the status quo:

- It allocated more police resources to minority communities
- It allocated fewer resources to white communities
- It labeled Black offenders as higher risk for recidivism
- It labeled white offenders as lower risk for recidivism
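One way to see how this sustains itself is a toy feedback-loop simulation (invented numbers, not any vendor's product): if patrols follow last year's recorded arrests, and arrests are only recorded where patrols are, the initial disparity never corrects, even when the underlying behavior in both neighborhoods is identical.

```python
# Hypothetical feedback-loop sketch with invented numbers: both neighborhoods
# have the same underlying offense rate, but patrols are allocated from
# *recorded* arrests, and recording depends on where patrols already are.
recorded_arrests = {"neighborhood_a": 60, "neighborhood_b": 40}
TRUE_OFFENSE_RATE = 0.5  # identical in both neighborhoods

for year in range(1, 6):
    total = sum(recorded_arrests.values())
    # Allocate 100 patrol units in proportion to last year's recorded arrests.
    patrols = {n: 100 * c / total for n, c in recorded_arrests.items()}
    # Arrests are only recorded where officers are present to record them.
    recorded_arrests = {n: patrols[n] * TRUE_OFFENSE_RATE for n in patrols}
    print(f"year {year}:", {n: round(p) for n, p in patrols.items()})
# The initial 60/40 split never corrects itself, even though the two
# neighborhoods are identical in this toy world.
```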

Why we're here

AI Can't Think


AI, like all technology, takes time to develop. We had the house phone for decades before we had cellphones, and we had cellphones for years before we had smartphones. There was a lot of technological development between each of those milestones. AI is no different.

Is a Walrus a cat or a dog?
The problem we have at the moment is that, while AI processes massive amounts of data, it only 'understands' it at a rudimentary level: as symbols. AI cannot understand the world around it. It has no knowledge of the existence of any data that wasn't present in its training set. Which is why an AI trained to tell cats from dogs is unable to say that a walrus is neither. All it can tell you is the statistical probability that the walrus is a cat and the statistical probability that the walrus is a dog.
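A small sketch of why the walrus has nowhere to go (hypothetical scores): a two-class model's softmax output has to split 100% of its confidence between 'cat' and 'dog', so 'neither' is simply not an answer it can express.

```python
# Hypothetical sketch: a softmax over the only two classes the model knows.
# Whatever it is shown, the probabilities must sum to 1 across cat and dog.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    return [e / sum(exps) for e in exps]

# Invented raw scores a network might emit for a walrus photo.
walrus_scores = {"cat": 0.3, "dog": 1.1}
probs = softmax(list(walrus_scores.values()))
for label, p in zip(walrus_scores, probs):
    print(f"P({label}) = {p:.2f}")
# There is no 'neither' output; the model must commit to cat or dog.
```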

Brittle AI

[Image: Beating a soccer AI by making the goalie fall]

AI only learns in the most rudimentary sense of the word. It doesn't create a mental model of the data it's trained on; it just pulls what it thinks to be relevant features out of it. But many of those features aren't 'relevant' in any true sense of the word. And any situation that wasn't represented in its training data can cause the AI to break. This concept is termed "brittle" AI. It's an acknowledgement that the AI isn't actually learning, which is why it can be fooled in surprising and ridiculous ways.
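A hypothetical illustration of that brittleness: if every training photo of a cow happened to be taken on green pasture, the 'relevant feature' the model latches onto may be the background color, and a cow on a beach breaks it.

```python
# Hypothetical sketch: a "classifier" whose learned rule is really a
# spurious correlate (background color) picked up from the training data.
training_data = [
    {"background": "green", "label": "cow"},
    {"background": "green", "label": "cow"},
    {"background": "grey",  "label": "not_cow"},
    {"background": "grey",  "label": "not_cow"},
]

# The only "feature" that separates the classes in this data is the background,
# so that is the rule the model induces.
def predict(photo):
    return "cow" if photo["background"] == "green" else "not_cow"

print(predict({"background": "green", "animal": "cow"}))  # cow (right, for the wrong reason)
print(predict({"background": "sand",  "animal": "cow"}))  # not_cow (breaks off-pattern)
```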

We can't know all the things

AI is designed to complete the task we ask of it, while maximizing the rewards we've created for it, using only the data we've provided it. It is dependent on us to provide every piece of information that is relevant to its world: task, training data, reward function. In determining what behaviors we want to reward the AI for as it navigates the process of completing its task (how we prefer it to behave), we need to determine all of the possible ways that the task can be completed and tell the AI how it should act in each situation. Which is a monumental task that we honestly cannot conquer.

If I want someone to clean my kitchen, I have to explain all of the little tasks that go into completing the overall task, and I need to explain my preferences for how each of those little tasks is completed. For example, cleaning the kitchen includes the task of wiping the stove. Do I have a preference for how thoroughly the stove is wiped, or which parts need to be wiped? Or whether the stove is on or off during task completion? I would need to specify these and a myriad of other preferences in the code for an AI kitchen cleaner.
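To get a feel for why that is monumental, here is a hypothetical fragment of a reward function for a kitchen-cleaning agent. Every preference has to be written down explicitly; anything left out is, from the AI's point of view, something we don't care about.

```python
# Hypothetical fragment of a reward function for a kitchen-cleaning agent.
# Every preference must be spelled out; omissions are not "common sense",
# they are simply absent from the agent's world.
def kitchen_reward(state):
    reward = 0.0
    reward += 10.0 if state["stove_wiped"] else 0.0
    reward += 5.0 if state["counters_wiped"] else 0.0
    reward -= 20.0 if state["dishes_broken"] > 0 else 0.0
    # Nothing here says the stove should be OFF while it is wiped, or that
    # the cat's bowl shouldn't be "tidied" into the trash. Unstated
    # preferences earn no reward and carry no penalty.
    return reward

print(kitchen_reward({"stove_wiped": True, "counters_wiped": True,
                      "dishes_broken": 0, "stove_on_while_wiping": True}))
```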

But what happens when I want the AI to perform more complex tasks? Can you imagine the dizzying series of preferences for programming a self-driving car? Or an automated doctor? Or judge? We can't. Not fully. Which is the problem.

How we get there

Teaching AI to 'think'


Much of the information we learn about the world is taught to us in the form of similes and metaphors. We're taught to see situations and objects in relation to others. Essentially, that's what we've programmed our models to do. But to 'think', AI will need to be able to understand increasingly complex relationships and concepts and be capable of using that knowledge to formulate mental models that it can apply across a variety of different endeavors.

AI and the Metaphor

[Image: Cow jumping over the moon]

When speaking figuratively, we substitute the literal meaning of a phrase with a context-specific meaning. For example, if I say "she was over the moon with joy", it doesn't mean that a woman strapped herself to a rocket because she was happy. We've replaced the literal meaning of 'over the moon' as an indication of physical location with the contextual meaning of a feeling of unbridled happiness. In order to 'think' about the world in the way that humans understand it, AI needs to be able to recognize a symbol within a symbol. In addition to the literal definitions of the words in the phrase "over the moon," it needs to understand that the phrase itself is a representation of a figurative state of being. People have been working on this capability, but it's difficult.
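As a rough sketch of the gap (hypothetical code, and a crude stand-in for real research): a word-by-word reading has to be overridden by a second layer of meaning attached to the whole phrase, and simply bolting on a lookup table of idioms doesn't get us there.

```python
# Hypothetical sketch: a literal reading vs. a phrase-level (figurative) reading.
# A lookup table is a crude stand-in for the figurative layer; the hard part is
# knowing when and how to apply it, which this sketch does not solve.
figurative_meanings = {"over the moon": "extremely happy"}

def interpret(sentence):
    for phrase, meaning in figurative_meanings.items():
        if phrase in sentence:
            return f"figurative: {meaning}"
    return "literal: " + sentence

print(interpret("she was over the moon with joy"))
print(interpret("the probe flew over the moon"))  # wrongly flagged as figurative
```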

Understanding concepts
The desired outcome of thinking figuratively is the understanding of concepts. Concepts are mental models. They are how we map the world, the relationships in it, and our relationship to it. The field of AI dedicated to studying concepts is called knowledge representation and reasoning (KRR). While it is one of the oldest fields of AI study, it has been largely neglected, although new research has been published.
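To give a flavor of the KRR style (a minimal, hypothetical sketch, not a survey of the field): concepts are represented as explicit facts plus rules the system can chain together to derive knowledge it was never directly given.

```python
# Minimal hypothetical flavor of knowledge representation and reasoning:
# explicit facts plus a rule the system applies to derive new knowledge.
facts = {("walrus", "is_a", "mammal"), ("mammal", "is_a", "animal")}

def derive_is_a(facts):
    # Rule: is_a is transitive (if X is_a Y and Y is_a Z, then X is_a Z).
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = {(x, "is_a", z)
               for (x, r1, y1) in derived if r1 == "is_a"
               for (y2, r2, z) in derived if r2 == "is_a" and y1 == y2}
        if not new <= derived:
            derived |= new
            changed = True
    return derived

print(("walrus", "is_a", "animal") in derive_is_a(facts))  # True, never stated directly
```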

Transfer learning

Demis Hassabis: Transfer learning is key to AGI

The tasks above are stepping stones on the path to creating AI that can engage in transfer learning - the ability to apply lessons learned in one situation to another. If AI can understand the concept of fairness in a variety of different contexts, then maybe it can accept, or formulate, some generalized notions of fairness to apply in its predictive processes.

While we're currently making significant progress in foundational transfer learning (using a model that was trained on a specific task to perform the same task on a different dataset), 'thinking' will require much more progress. AI will need the ability to transfer concepts out of a model and the ability to use those concepts to perform tasks different from the ones the original model was trained for.
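For contrast, here is roughly what that foundational kind looks like in practice. This is a sketch assuming PyTorch and torchvision; the new dataset and its five classes are placeholders. A model pretrained on one dataset is reused as a frozen feature extractor, and only a small new head is trained on the new data.

```python
# Sketch of "foundational" transfer learning, assuming PyTorch/torchvision.
# The new dataset and its 5 classes are placeholders.
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer with a fresh head for the new dataset.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's parameters are updated during training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

What gets transferred here are low-level visual features reused for the same kind of task, which is a long way from carrying a concept like fairness into an entirely different domain.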