Responsible AI: Fairness
August 27, 2020
Quisitive
A responsible AI system should affect similarly situated people in the same way. Is this actually being implemented in today's world?

The concept of fairness is crucial to a well-functioning society. We all hope that we will be treated fairly by other people, by institutions, and by systems such as the criminal justice system. When this breaks down, bad publicity or even civil unrest can quickly follow. AI is a powerful technology, but it too needs to treat people fairly. But what do we mean by fairness in this context?

A good principle is:

A fair AI system should affect similarly situated people in the same way

Some examples:

  • A model to help with disease diagnosis should treat people with similar symptoms and medical history in a similar way
  • A model to help with loan adjudication should treat people with similar financial circumstances in a similar way
  • A model to help with criminal justice sentencing should treat people with similar criminal histories the same way

Put another way, we don’t want a machine learning model to give discriminatory outcomes based on protected characteristics such as:

  • Race
  • Gender
  • Religious affiliation
  • Sexual orientation

There are, of course, exceptions – for example gender is an important (and non-controversial) factor in breast cancer screening.

Also, fairness is not a static concept – ideas of fairness change over time. A key example here is race: modern ideas of racial equity are very different from those that were prevalent 200 years ago. And as we see in our daily lives, they are still evolving today. This means that technological solutions need to be combined with social solutions.

Why Can AI Systems Be Unfair?

At its core, the idea behind a machine learning model is quite simple – find patterns between inputs (the data) and outputs (an outcome).
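To make that concrete, here is a minimal sketch using scikit-learn with synthetic data standing in for a real dataset – the point is only to show that "learning" amounts to finding a mapping from inputs to outcomes:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic inputs (X) and outcomes (y) stand in for real historical data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# "Training" is just asking the algorithm to find patterns linking X to y...
model = LogisticRegression().fit(X, y)

# ...which it then applies to new cases to produce predictions.
predictions = model.predict(X[:10])
```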

Despite all the hype, the data a machine learning model learns from is chosen by humans, and that data exists in a society that may be inherently unfair. This leads to some major causes of bias in data:

  • Appropriate data that covers the entire range of use cases isn’t chosen
  • The data contains societal biases, from which faulty inference may be drawn

Let’s consider an example. Say we want to build a model to predict how much someone is likely to get paid in their next job. How might this model end up biased? Some ways include (see the sketch after this list):

  • Not choosing a representative range of jobs – e.g. choosing only male investment bankers and female retail workers
  • There is a known gender pay gap, and even a well-selected dataset is likely to exhibit it unless the person building the model corrects for it
  • People from different cities, with different racial populations, get paid different amounts and the model may use this to make discriminatory inferences based on race
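A few lines of pandas can surface the first two problems before any model is trained. This is only an illustrative sketch – the dataset, job titles, and salary figures below are entirely made up:

```python
import pandas as pd

# Entirely made-up salary data; job titles, genders, and pay are illustrative.
df = pd.DataFrame({
    "job":    ["investment banker", "retail worker", "nurse", "engineer"] * 25,
    "gender": ["male", "female"] * 50,
    "salary": [120_000, 28_000, 45_000, 90_000] * 25,
})

# Check 1: is each group represented across the full range of jobs?
print(pd.crosstab(df["job"], df["gender"]))

# Check 2: does the raw data already encode a pay gap the model could learn?
print(df.groupby("gender")["salary"].mean())
```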

How Can We Make Our Responsible AI System Fairer?

The good news is that a large ongoing R&D effort exists to develop technologies to mitigate bias as much as possible. One example, which we will look at in a future post, is the Fairlearn Python package. Here are just some ways in which data scientists can look to design fairer AI systems:

De-biasing datasets

Before building our model, we can perform an analysis on a range of different groupings such as gender and race, to determine if there is some undesirable difference in outcome driven by that characteristic. For example, is the average pay of female workers in the dataset systematically lower than that of male workers? The dataset can then be rebalanced to reduce this impact.
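As a minimal sketch of this idea – assuming a hypothetical pay dataset with illustrative "gender" and "salary" columns – we can first quantify the group-level gap and then attach simple balancing weights. The inverse-frequency weighting shown here is just one of several possible rebalancing strategies:

```python
import pandas as pd

# Hypothetical training data; the column names and values are illustrative.
df = pd.DataFrame({
    "gender": ["female", "male", "female", "male", "male", "male"],
    "salary": [52_000, 61_000, 48_000, 64_000, 59_000, 66_000],
})

# Step 1: quantify the difference in outcome between groups.
print(df.groupby("gender")["salary"].mean())

# Step 2: one simple rebalancing option is to weight each row inversely to
# the size of its group, so under-represented groups are not swamped.
group_counts = df["gender"].value_counts()
df["sample_weight"] = df["gender"].map(len(df) / (len(group_counts) * group_counts))
print(df)
```

Note that weighting like this only corrects for how often each group appears; a systematic gap in the outcomes themselves (such as a pay gap baked into the labels) usually calls for more targeted mitigation.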

Analysis of Models

Data scientists will often report metrics like accuracy and precision for the entire dataset, but these can also be computed for subgroups. We should analyze our model after it is built to make sure that, all other variables being equal, the average predicted salary is the same (within error) for male and female employees. If it fails this validation, we need to take additional mitigation measures.
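The Fairlearn package mentioned above provides a convenient way to do this kind of subgroup analysis. Here is a small sketch with made-up outcomes, predictions, and gender values, purely for illustration:

```python
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame

# Made-up true outcomes, model predictions, and sensitive attribute values.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
gender = ["female", "female", "female", "female",
          "male", "male", "male", "male"]

# MetricFrame computes the same metric overall and broken out per subgroup.
mf = MetricFrame(metrics=accuracy_score,
                 y_true=y_true,
                 y_pred=y_pred,
                 sensitive_features=gender)

print(mf.overall)       # accuracy on the whole dataset
print(mf.by_group)      # accuracy for each gender separately
print(mf.difference())  # largest gap between subgroups
```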

Debiasing During Training

These techniques are newer and closer to the cutting edge: algorithms that build active de-biasing into the model training process itself. At this point they are not a replacement for good-quality dataset construction and model analysis, but they can be a powerful complement.
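As one concrete example of this family of techniques, Fairlearn's reductions approach wraps an ordinary estimator in a fairness constraint and searches for a model that satisfies it during training. The data below is synthetic and the choice of constraint is illustrative rather than a recommendation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic stand-ins; in practice X, y, and the sensitive feature come
# from the real training data.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
gender = np.random.default_rng(0).choice(["female", "male"], size=200)

# Wrap an ordinary estimator in a fairness constraint; the reduction then
# searches for a model that satisfies the constraint during training.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=gender)

predictions = mitigator.predict(X)
```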

In conclusion, creating fair and equitable AI is a crucial component of a responsible AI strategy. If you’d like to discuss this more, feel free to reach out via social media, or connect with Quisitive here.