Gender Bias in AI: ‘Where are all the women?’

Yennie Jun knew the results were worrisome. A machine learning (ML) engineer by day and a hobbyist blogger about ML and artificial intelligence (AI) by night, Jun recently ran an experiment asking two large language models (LLMs) what each considered the most important people in history. She repeated the process 10 times in each of 10 different languages. Some names, like Gandhi and Jesus, appeared frequently; others, like Marie Curie or Cleopatra, far less often. Overall, the models generated few female names compared to the number of male names.

“The biggest question I had was: Where were all the women?” Jun says in a recounting of the experiment in her blog.

(This feature is part of a larger content package honoring SC Media’s 2023 Women in IT Security.)

Even when prompted in several different languages, such as Russian, Korean, and Chinese, the historical figures the models named were overwhelmingly male, Jun tells SC Media. The phenomenon occurred with both LLMs she probed – one from Anthropic and one from OpenAI.


Source: SC Media