Artificial Intelligence: how to avoid bias in AI

Every day we see new smart technology applications that influence everything from the content we access on social media to the development of autonomous cars. However, the same algorithms that facilitate many aspects of our lives and increase business productivity may also carry negative consequences because of bias.

Bias in artificial intelligence makes a system that is, in theory, free of prejudice reproduce forms of discrimination against certain groups.

In the documentary “Coded Bias”, available on Netflix, MIT researcher Joy Buolamwini points out how facial recognition systems are less accurate when identifying people of certain ethnicities. This flaw increases the chances of failure and risk when such technology is used by the police, for example.

Here we will explain what bias in artificial intelligence is and the strategies used to avoid it.

Bias in artificial intelligence

When we talk about bias in the application of technology, we are referring to practices of systematic discrimination against certain individuals or groups, based on the improper use of data or of their characteristics.

Much of the discussion around bias is linked to attributes such as race, gender, and sexual orientation, but it can involve any other data that is sensitive to discrimination.

Artificial intelligence was expected to avoid the harmful biases associated with human decision-making. Although this is possible, the technology itself is not neutral: it is developed in specific social contexts that shape its characteristics.

Bias in AI systems can arise in several ways, whether because the systems employ biased algorithms or because they are trained on biased data.

A simple example of AI bias can be found in online advertising. A person registered as a woman in a Google account is less likely to receive ads for higher-paying jobs. Additionally, an experiment on Facebook showed that housing and employment ads are also skewed along racial and gender lines.

How to avoid bias in artificial intelligence

Fighting bias in artificial intelligence is essential to guarantee people’s trust in these systems. That is the only way AI can reach its full potential to generate benefits for companies, countries, and society.

Team diversity

One way to avoid bias in AI is to hire inclusively, building teams that are diverse in race, gender, sexual orientation, age, and economic background, among other factors.

According to Forbes, diverse teams create better artificial intelligence, since they are better able to detect discrimination and anticipate potential problems. In addition, team diversity supports creativity and the capacity to scale solutions across the entire company.

New perspectives for inclusive design

Involving social scientists and other specialists relevant to the product can help broaden the understanding of different perspectives in AI development.

It is important to consider how the technology will affect different use cases: whose views are represented? What outcomes are expected, and how do they compare across different communities? What prejudices, negative experiences, or discriminatory results may occur?

Representative datasets for training

Assess the fairness of the datasets used for AI training, identifying limitations and potentially harmful correlations. Visualization, clustering, and data annotation techniques can help in this evaluation. Public datasets usually need some modification to better reflect reality.
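As a rough illustration of this kind of audit, the sketch below inspects how many examples each group contributes to a training table and whether the positive label rate differs sharply between groups. The column names (`gender`, `hired`) and the tiny in-memory table are hypothetical stand-ins for your own data, not part of any specific dataset.

```python
import pandas as pd

# Hypothetical training table; in practice this would be loaded from your own source.
df = pd.DataFrame({
    "gender": ["female", "male", "male", "female", "male", "male"],
    "years_experience": [5, 4, 6, 7, 3, 8],
    "hired": [0, 1, 1, 1, 0, 1],  # label the model will learn to predict
})

# How many examples does each group contribute?
group_counts = df["gender"].value_counts()
print("Examples per group:\n", group_counts, "\n")

# Does the positive label rate differ sharply between groups?
positive_rate = df.groupby("gender")["hired"].mean()
print("Positive label rate per group:\n", positive_rate)

# Large gaps in either the counts or the rates are a signal to collect more data,
# re-weight examples, or investigate harmful correlations before training.
```

Counts and rates are only a first pass; they will not catch subtler problems such as proxy variables that correlate with a sensitive attribute, which is where clustering and visualization help.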

Test the system for bias

“Counterfactual fairness” is a technique for assessing an AI system’s impartiality by checking whether a decision would be the same in the real world and in an alternative reality where sensitive attributes such as race, gender, or sexual orientation have been altered.
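A minimal sketch of this idea is shown below, assuming a hypothetical model with a scikit-learn-style `predict` method whose pipeline accepts the raw DataFrame (for example, a Pipeline that includes an encoder for categorical columns). It scores each record, flips the sensitive attribute, scores again, and reports any records whose prediction changed. The function name, column names, and `swap` mapping are illustrative assumptions.

```python
import pandas as pd

def counterfactual_flip_test(model, X: pd.DataFrame, sensitive_col: str, swap: dict) -> pd.DataFrame:
    """Return the rows whose prediction changes when the sensitive attribute is swapped.

    `model` is any object exposing a scikit-learn-style predict(X) method.
    `swap` maps every sensitive value to its counterfactual,
    e.g. {"female": "male", "male": "female"}.
    """
    original_pred = model.predict(X)

    # Build the counterfactual dataset: identical rows, only the sensitive attribute changed.
    X_cf = X.copy()
    X_cf[sensitive_col] = X_cf[sensitive_col].map(swap)
    counterfactual_pred = model.predict(X_cf)

    # Keep only the records where the decision flipped on the sensitive attribute alone.
    changed = original_pred != counterfactual_pred
    report = X[changed].copy()
    report["original_prediction"] = original_pred[changed]
    report["counterfactual_prediction"] = counterfactual_pred[changed]
    return report  # an empty DataFrame means no decision depended on the flipped attribute
```

Note that this simple attribute-flip check ignores downstream variables that are causally influenced by the sensitive attribute, so it is a rough proxy for the full counterfactual-fairness definition rather than a complete implementation of it.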

Additionally, it is important to test the system with a diverse group of people, using a wide variety of inputs.

Ethics for the future of AI

Biases harm not only those who are discriminated against but also society as a whole, as they encourage mistrust and produce distorted results.

Keep your eye on the end goal and let us help you develop bias-free artificial intelligence that is prepared to face adversity and make fair, prejudice-free decisions. Talk to one of our consultants today!
