New Bank of England research argues that financial systems might be threatened by biased AI decision-making.
The research suggests that algorithms can mirror biases inherent in the datasets they are trained on, producing unfair treatment of consumers and professionals.
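This dynamic is straightforward to reproduce in miniature. The sketch below is entirely synthetic and illustrative, not drawn from the Bank’s research: a toy lending model is trained on historical approvals that encode a human bias against part-time applicants, and even with the protected attribute excluded from its features, the model learns the disparity through a correlated proxy.

```python
# Illustrative, fully synthetic sketch of dataset bias leaking into a model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Income (in thousands) is distributed identically across groups, but group 1
# is historically likelier to hold part-time work -- a proxy feature that
# correlates with the protected attribute.
group = rng.integers(0, 2, n)                       # protected attribute (0 or 1)
income = rng.normal(50.0, 12.0, n)                  # comparable across groups
part_time = rng.random(n) < np.where(group == 1, 0.6, 0.2)

# Historical approvals encode past human bias: part-time applicants were
# approved roughly half as often, regardless of income.
base = 1.0 / (1.0 + np.exp(-(income - 45.0) / 8.0))
approved = rng.random(n) < base * np.where(part_time, 0.5, 1.0)

# Train WITHOUT the protected attribute; the bias still comes through,
# because the proxy feature carries the same signal.
X = np.column_stack([income, part_time.astype(float)])
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.1%}")
```

On this synthetic data the model approves group 0 noticeably more often than group 1, despite never seeing the group label.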
Kathleen Blake, an analyst at the Bank of England, has voiced concerns about this trend, warning that AI-driven discrimination and bias could jeopardize financial systems by undermining trust in them.
She also pointed out that companies deploying such “biased or unfair AI” expose themselves to significant legal and reputational risk, inviting scrutiny from regulators.
The Department for Science, Innovation and Technology, meanwhile, has remarked, “We are at a crossroads in human history, and to turn the other way would be a monumental missed opportunity for mankind.”
Elaborating on her point, Ms. Blake cited several notable incidents of AI bias. For instance, an algorithm developed by Apple and Goldman Sachs to evaluate credit card applications came under fire for reportedly offering women lower credit limits than their male counterparts.
The discrepancy drew the attention of the New York State Department of Financial Services in 2021, which concluded that, while the disparity was not intentional, the program “showed deficiencies in customer service and transparency.”
Blake also pointed to Amazon’s recruitment algorithm as an example of inherent bias. She explained: “Female applicants were negatively scored because the algorithm was trained on resumes submitted to the company over a 10-year period and reflected the male dominance of the industry.”
Amazon discontinued the tool in 2018 after discovering that it unfairly penalized candidates whose resumes included terms such as “women’s chess club captain.”
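To see how a screening model can end up penalizing a phrase like that, consider the deliberately simplified sketch below. The resumes and labels are invented for illustration, since the internals of Amazon’s actual system were never published: a bag-of-words classifier fitted to historically skewed hiring decisions assigns a negative weight to the token “women” purely because it co-occurs with past rejections.

```python
# Invented toy data illustrating the failure mode Blake describes; this is
# not Amazon's system, whose internals were never made public.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer chess club captain",          # historically hired
    "backend developer chess club captain",          # historically hired
    "software engineer women's chess club captain",  # historically rejected
    "data engineer women's coding society",          # historically rejected
    "backend developer open source contributor",     # historically hired
    "software engineer women's tech network",        # historically rejected
]
hired = [1, 1, 0, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on "women" is negative: the model has memorized the
# bias in its training labels, not anything about candidate ability.
idx = vec.vocabulary_["women"]
print(f"weight for token 'women': {model.coef_[0][idx]:+.3f}")
```

Nothing in the code references gender explicitly; the penalty emerges entirely from the historical labels, which is exactly why such bias is hard to spot without deliberate auditing.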
Amazon’s biased recruitment tool is one of many prejudiced AI models that tend to fail women and minority groups.
Integrating AI into financial infrastructure without rigorous scrutiny risks perpetuating exactly these patterns of bias and discrimination.