AI risks replicating tech’s ethnic minority bias across business

Written by: Aliya Ram
Published on: 15 Aug 2018

Source: Financial Times, published on 31 May 2018

“We used to talk about garbage in, garbage out,” says Wendy Hall, author of a review into artificial intelligence commissioned by the UK government. “Now, with AI, we talk about bias in, bias out.”

Ms Hall, a professor of computer science at Southampton university, is referring to a popular cliché in computing that bad inputs lead to bad outputs. With the spread of artificial intelligence to employment functions such as recruitment, she says, bad inputs can mean biased outputs, with repercussions for women, disabled people and ethnic minorities.

“There’s a huge problem of bias in the [technology] workforce,” she says. “But if you correct for it, you are manipulating things. Dealing with this is a big issue for how artificial intelligence is designed.”

The technology sector has come under the spotlight for its lack of diversity after a series of high-profile cases of sexual harassment at Silicon Valley companies, including Uber. A series of scandals in the US over predictive policing and facial recognition software that cannot recognise black people has also raised concerns about ethnic discrimination.

Because machine learning technology is deployed by governments and companies in fields ranging from criminal justice to recruitment, workforce biases in the tech sector are amplified across other industries and the public sector.

“Artificial intelligence is often constructed from grossly biased and decontextualised information and ideas that can be harmful to the public when turned into automated decision making systems,” says Safiya Noble, assistant professor at the University of Southern California Annenberg School of Communication and author of Algorithms of Oppression.

“It is unrecognisable to many engineers who are working with it — they often do not understand with nuance the social ramifications of their projects, from predictive policing to high-quality news and information, or access to education, financial aid, mortgages, and bank loans,” she says. “Artificial intelligence is generating, sustaining, and potentially deepening racial, ethnic and gender discrimination and it is increasingly tied to the distribution of goods and services in society.”

There is limited data about minority ethnic workers in UK and European technology companies. Only three technology companies are in the FTSE 100. But as more businesses such as Google offer machine learning algorithms for analysing job postings, lawyers and activists have raised concerns that hiring biases will be preserved and reproduced rather than challenged. In the US, a report from the Equal Employment Opportunity Commission showed high-tech companies employed a larger share of white people, Asian Americans and men than other private companies, but fewer African Americans, Hispanic people and women.

Anecdotal information suggests black and non-Asian minority ethnic representation is even lower at the most prominent tech companies in the US, with Facebook reporting only 3 per cent black employees, compared with between 7 and 14 per cent for the sector overall. At Google the figure was just 2 per cent.

Michael Sippitt, a director of Forbury People, a UK-based HR consultancy, says “tech races ahead of people working out how to use it”. He predicts future lawsuits alleging discrimination caused by bias in automated hiring. Because AI algorithms learn from historic data sets, he adds, they are more likely to hire in the image of previous staff than to help tackle unfair under-representation. For example, a survey of the existing educational background, age or experience of staff in a particular industry could encourage machine learning technology to exclude candidates who do not fit that profile.
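As a toy sketch of the mechanism Mr Sippitt describes (all names and data below are hypothetical, not drawn from any real hiring system): a model that scores candidates by how often their background appears among past hires gives zero weight to any profile absent from the historic record.

    from collections import Counter

    def fit_profile(historic_hires):
        """Learn how often each background appears among past hires.
        `historic_hires` is a list of background labels (made-up data)."""
        counts = Counter(historic_hires)
        total = len(historic_hires)
        return {background: n / total for background, n in counts.items()}

    def score(candidate_background, profile):
        # A background that never appears in the historic data scores zero:
        # the model simply reproduces past hiring patterns.
        return profile.get(candidate_background, 0.0)

    # Hypothetical history: past hires came overwhelmingly from one profile.
    profile = fit_profile(["cs_degree"] * 9 + ["bootcamp"])
    print(score("cs_degree", profile))    # 0.9
    print(score("self_taught", profile))  # 0.0 -- excluded, regardless of ability

The point is not the crudeness of this particular model: a more sophisticated learner trained on the same skewed history would rank candidates in much the same way.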

“A lot of the CVs and historic profiles will be of one kind of candidate,” says Kriti Sharma, vice-president of artificial intelligence at Sage, the UK’s largest listed technology company. “If you were hiring a chief technology officer for a company and the algorithm was learning from historic data sets then what would you expect?”

Computer scientists argue that the technology can be modified to correct for such biases, for example by introducing constraints so that an algorithm selects the same number of people from each ethnicity, or the same fraction of applicants in each subgroup. However, this remedy is controversial and, taken to the extreme, unlawful in some jurisdictions.
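A minimal sketch of the second kind of constraint, assuming a hypothetical scored applicant pool: instead of applying one global cut-off, select the same fraction of applicants within each subgroup, so every group ends up with the same selection rate.

    from collections import defaultdict

    def select_equal_fraction(candidates, fraction):
        """Select the top `fraction` of candidates within each group, giving
        every subgroup the same selection rate (a demographic-parity style
        constraint). `candidates` is a list of (name, group, score) tuples
        from a hypothetical applicant pool."""
        by_group = defaultdict(list)
        for name, group, score in candidates:
            by_group[group].append((score, name))

        selected = []
        for members in by_group.values():
            members.sort(reverse=True)                # highest score first
            k = max(1, round(fraction * len(members)))
            selected.extend(name for _, name in members[:k])
        return selected

    # Illustrative usage with made-up applicants:
    pool = [
        ("A", "group_1", 0.91), ("B", "group_1", 0.85), ("C", "group_1", 0.60),
        ("D", "group_2", 0.72), ("E", "group_2", 0.55), ("F", "group_2", 0.40),
    ]
    print(select_equal_fraction(pool, fraction=0.5))  # two from each group

Applied this bluntly, the rule becomes exactly the quota-like intervention the article notes may be unlawful; in practice, fairness constraints are often implemented as softer penalties inside the training objective instead.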

Adrian Weller, programme director for AI at the Alan Turing Institute, says the homogeneity of computer scientists makes it impossible to control for all biases, and that few technical solutions exist to mitigate the problem. “Because artificial intelligence is going to affect all of our lives,” he says, “it is very important that we have a diverse set of stakeholders designing and building them.”

Copyright The Financial Times Limited 2018
