AI can blind us with its apparent objectivity when, in reality, it is deeply subjective and prone to bias.
Numerous studies highlight AI-related bias, with the majority of effects felt by darker-skinned individuals and women.
There are many historical lessons in biased AI, including Amazon's scrapped recruitment model, which failed to recommend women fairly for tech roles, and facial recognition systems that misidentified several Black men, resulting in protracted legal issues.
As AI becomes deeply embedded in our day-to-day lives, the issue of bias has become critical, as some of these systems are making life-changing decisions on our behalf.
Women are concerned that AI systems don't serve or benefit them as well as they do men, resulting in poor and inequitable decision-making.
Catherine Flick, a Computing & Social Responsibility researcher from De Montfort University, UK, commented, “It’s a reinforcement of a technocentric patriarchal society that once again encodes gender bias into seemingly ‘neutral’ technology.”
Flick points out that even technologies we regard as neutral, like cars or modern medicine, may not necessarily cater to women’s needs as effectively as they do to men’s.
McKinsey's recent projections indicate that women are 1.5 times more vulnerable to job displacement by AI than men.
Speaking of Amazon's failed recruitment project, which penalized applications containing the words "women" or "women's," Dr. Sasha Luccioni, from the AI development company HuggingFace, says, "Women were so underrepresented in the data that the model would essentially reject any time ‘women’ appeared."
Some classic datasets used for machine learning (ML) benchmarking have been shown to over-represent white men.
Luccioni continued, “There are very few women working in the field, very few women having a seat at the table.” This lack of representation matters because algorithms, she explains, carry values. Those values may not reflect a diverse perspective if the creators are predominantly male.
This gender imbalance is evident in AI training data as well. Many large language models, like ChatGPT, are trained on data from platforms such as Reddit, where around two-thirds of users are male. As a result, these AI systems may produce results reflective of male-biased conversations.
Addressing AI’s biases requires a holistic approach, argues Flick.
From data collection to the engineering stage, the process requires greater diversity. “If the data going in isn’t good enough, it shouldn’t even get to the engineering stage,” she insists.
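Flick's point about gating data quality can be made concrete with a simple representation audit run before any model work begins. The sketch below is illustrative only: the self-reported "gender" field, the 40% floor, and the sample records are assumptions for demonstration, not a method described by Flick or a standard practice.

```python
# A minimal sketch of a pre-engineering data check, assuming each record in a
# corpus carries a self-reported "gender" field. The field name, the 40% floor,
# and the sample data are hypothetical choices for illustration.
from collections import Counter

REPRESENTATION_FLOOR = 0.40  # hypothetical minimum share per tracked group

def representation_report(records, field="gender"):
    """Return each group's share of the labelled portion of the dataset."""
    labelled = [r[field] for r in records if r.get(field)]
    counts = Counter(labelled)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def passes_gate(records):
    """Reject the dataset before the engineering stage if any tracked group
    falls below the representation floor."""
    shares = representation_report(records)
    return all(share >= REPRESENTATION_FLOOR for share in shares.values())

if __name__ == "__main__":
    sample = [{"gender": "female"}, {"gender": "male"}, {"gender": "male"}]
    print(representation_report(sample))  # {'female': 0.33..., 'male': 0.66...}
    print(passes_gate(sample))            # False: women fall under the 40% floor
```

A check like this is only a first filter; it flags obvious imbalance but says nothing about how a group is portrayed within the data, which is where the deeper, harder auditing work lies.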