Robo-racism: the truth about AI bias

Artificial? Definitely. Intelligence? Well, that depends...

Jon Deery
11th November 2021
Poet and programmer Joy Buolamwini finds that a program she herself designed does not recognise her as a face unless she wears a white mask. Image: YouTube
Can a robot be racist? We tend not to think so - in fact, police departments, schools, employers and many other sectors of society have implemented artificial intelligence software precisely to counteract the biases of their human employees. But the belief that computers are an ‘objective’ alternative to our flawed human worldviews rests on a fundamental misunderstanding of what AI actually is.

Mathematician and data scientist Cathy O’Neil has written about what she calls ‘Weapons of Math Destruction’ (‘WMDs’): algorithms that direct important parts of people’s lives without their knowledge or explicit consent, and that inherently discriminate against certain people.

The vast majority of WMD designers are white men. [...] They will probably miss stuff that women and minorities would have picked up on straight away.

In her 2016 book, O’Neil outlines the many areas of our lives currently being tainted by WMDs: how education is increasingly geared towards appeasing league-table ranking algorithms rather than serving students; how policing algorithms exacerbate racial prejudice by sending even more officers to black neighbourhoods; how CV-profiling algorithms make it infuriatingly difficult for certain people even to have their applications read by a human being; how algorithms keep thousands trapped in unpayable debt; and so on, across pretty much all of society.

O'Neil features, among others, in a brilliant 2020 documentary called Coded Bias, which centres on the story of Joy Buolamwini, a programmer who found that an application she herself designed would only recognise her face if she wore a white mask over it.

But how is this happening? Surely maths can’t be racist? Surely maths can't discriminate?

Can it?

Computer systems are tools, not entities in their own right. They are designed by specific people for specific purposes, and the failures of their designers become failures in the products.

Fairness is squishy and hard to quantify... Programmers don't know how to code for it

Cathy O'Neil, Weapons of Math Destruction

The vast majority of WMD designers, for example, are white men. Even if these designers aren’t racist, the very fact that they’re white and male means they will probably miss, during the design process, things that women and minorities would have picked up on straight away.

“WMDs,” O’Neil writes, “tend to favour efficiency… they feed on data that can be measured and quantified. But fairness is squishy and hard to quantify… Programmers don’t know how to code for it, and few of their bosses ask them to.”

Given how ‘the data’ has recently become synonymous with objectivity and scientific policymaking, it is more important than ever to remind ourselves that ‘the data’ does not exist. Data exists - in fact, it’s probably the most common resource on the planet right now - but it is all being collected by specific groups of people, with specific agendas and prejudices. The phrase ‘listen to the data’, while understandable in its frustration with politicians who ignore science, presents data as objective, and all data collectors as somehow in agreement with each other.
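To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (using numpy and scikit-learn, with entirely synthetic data - it is not code from O’Neil’s or Buolamwini’s work). It shows how a model trained on data dominated by one group can end up far less accurate for an under-represented group, even though nothing in the code ever mentions race or gender.

```python
# Toy illustration only: synthetic data, hypothetical groups.
# Group A dominates the training set; group B is barely represented.
# The model quietly learns a rule that works for A and fails for B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, group):
    """Generate n samples; the 'true' pattern differs between groups."""
    X = rng.normal(size=(n, 5))
    if group == "A":
        y = (X[:, 0] > 0).astype(int)   # group A's outcomes track feature 0
    else:
        y = (X[:, 1] > 0).astype(int)   # group B's outcomes track feature 1
    return X, y

# 95% of the training data comes from group A, 5% from group B.
Xa, ya = make_group(9500, "A")
Xb, yb = make_group(500, "B")
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized test sets for each group.
for group in ("A", "B"):
    X_test, y_test = make_group(2000, group)
    print(f"group {group} accuracy: {model.score(X_test, y_test):.2f}")
```

The point is not the specific numbers but the shape of the failure: the code contains no explicit prejudice, yet an imbalance in who is represented in the data is enough to produce systematically worse results for one group - exactly the kind of flaw Buolamwini found in facial recognition software.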

Poet and programmer Joy Buolamwini performs a poem about the mis-gendering of Black icons by facial recognition software

One of the most damaging aspects of WMDs is that they seem unquestionable; and, for their victims, they are. In the introduction to Weapons of Math Destruction, O’Neil discusses a teacher assessment tool called IMPACT, which scored teachers based on their students’ achievement and was used to make employment decisions in Washington, D.C.’s public schools. The tool was directly cited in the firing of several teachers, some of whom were seen by their colleagues as excellent members of staff and praised by students’ families.

When the teachers asked why they had been fired, they were merely told that the algorithm had decided they weren’t good enough. Naturally, the teachers weren’t allowed to see what information the algorithm had used to make that assessment.

Eventually, when the teachers found out they could earn bonuses and stay employed within this system if they maximised their students’ scores, some did the obvious: they started cheating, correcting kids’ test papers to inflate their scores. Which meant the honest teachers got fired, regardless of their hard work or talent, and those who gamed the system got promoted.

O’Neil is a data scientist with a focus on social justice - she does not think that algorithms are inherently harmful. They are merely tools, always used to further some agenda, and if that agenda is positive, an algorithm can be an incredibly useful means of furthering it.

As long as the humans making it acknowledge that it is as flawed as they are.
