AI translates with prejudice

Artificial Intelligence (AI) is already part of our everyday life. Smartphones, smart homes and smart cities accompany us. AI systems are also used in recruitment, medical diagnosis and court rulings. Is that a good thing?

Opinions differ on this. One thing is certain: there are risks. Here you can find out what AI brings us and why we may need to keep a particularly close eye on automatic translations.

Future of AI: Killer Robots or Nirvana?

Many researchers paint a bleak picture: soon, robots will become autonomous and wipe out humanity. Well, that may be a little pessimistic, but we certainly need to consider the impact on our society. More optimistic forecasts suggest that AI will contribute USD 15 trillion to the global economy by 2030. What does this mean? Social nirvana for all of us!

AI systems increase social biases

We have always struggled with social inequality. AI systems are adding to it. Need examples?

  • Automated translations deliver sexist results.
  • AI-based image recognition has classified dark-skinned people as gorillas.
  • If arrest records are fed into an AI system and police officers have so far discriminated against foreigners, the system goes on discriminating against the same group of people.

Such cases are devastating. They arise because the systems rely on mathematical models trained on large amounts of human data. If that data is socially skewed, the prejudices in it are inevitably reproduced, and social imbalance is exacerbated.

Concrete example: preventing biased translations – is that possible?

All modern machine translation systems (e.g. Google Translate) are trained on sentence pairs, and these contain plenty of gender bias. Overall, about 70 percent of the gender-specific pronouns in the training data of translation engines are male and only 30 percent are female.
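As a rough illustration of how such a ratio can be measured, here is a minimal Python sketch that counts masculine and feminine pronouns in a handful of machine-translated English sentences. The sentence list and pronoun sets are invented for the example; a real analysis would run over millions of translations.

    import re

    # Hypothetical output of a translation engine (invented examples).
    translations = [
        "He is a doctor and she is a nurse.",
        "The engineer said he would fix it.",
        "The cleaner said she would come tomorrow.",
        "He founded the company in 2010.",
    ]

    MASCULINE = {"he", "him", "his", "himself"}
    FEMININE = {"she", "her", "hers", "herself"}

    masc = fem = 0
    for sentence in translations:
        for token in re.findall(r"[a-z']+", sentence.lower()):
            if token in MASCULINE:
                masc += 1
            elif token in FEMININE:
                fem += 1

    total = masc + fem
    print(f"masculine: {masc / total:.0%}, feminine: {fem / total:.0%}")

Counts of this kind, run over a real corpus, are one way figures like the 70/30 split can be estimated.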

Projections suggest that AI produces more than a billion sexist translations. All of these sentences would have to be removed from the data set if we wanted to stop the system from spitting out discriminatory output. Sounds simple?

If a person spent just five seconds reading each of the mistranslated sentences in the training data, it would take 159 years – day and night, without a break.
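The figure is easy to verify with a quick back-of-the-envelope calculation, assuming one billion sentences at five seconds each:

    # Back-of-the-envelope check of the "159 years" figure.
    sentences = 1_000_000_000          # roughly one billion biased sentences
    seconds_per_sentence = 5           # reading time per sentence
    seconds_per_year = 60 * 60 * 24 * 365

    years = sentences * seconds_per_sentence / seconds_per_year
    print(f"about {years:.0f} years of non-stop reading")  # ~159 years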

Is there a solution?

Unfortunately, we are not there yet. Yet if AI technologies were less biased than we are, they could help us recognize and combat our own prejudices.
We should certainly work towards AI systems that expose our prejudices instead of amplifying them. AI developers therefore need to engage intensively with the social consequences of the systems they create. They probably underestimate what those systems can turn into.

Martin

Martin is the managing director of Kies-Media GmbH and runs ManOnAMission.de. He is interested in everything to do with IT, logic and men's sports.