Culture

The racial bias in AI

What are the implications of a creative tool that is built with bias?

Ali Munro, a Creative at Amplify, has recently been playing with the latest AI tools that have emerged and dominated the headlines. He's been inspired by how these tools democratise creativity, allowing him to bring his ideas to life in new ways and to express himself quickly and easily, freeing him to concentrate on big-idea thinking and creative direction. However, whilst experimenting with MidJourney over Christmas to create a "New Year, New Me" concept, he discovered for himself that these new AI tools are built on the same gender, racial and sexual biases that exist in today's society, and risk perpetuating them. We sat down with Ali to better understand his experience and to spotlight the issues.

Ali is a mixed white and South Asian man. After uploading a picture of his face into MidJourney (a generative AI that converts text prompts into imagery) and asking it to reimagine his occupation, he found that almost all of the images veered towards white-centric faces.

To understand why this happened, we need to look at MidJourney's dataset (the big library of imagery that the AI references to create its new content). In an interview with Forbes, MidJourney founder David Holz described the dataset MidJourney is built from as hundreds of millions of images sourced in "... just a big scrape of the internet".

Though we can't see the dataset for ourselves, we know that the internet has inherently been built with gender, racial and sexual-preference biases: from the majority of those historically building it in Silicon Valley being pale, male (and stale), through to its content and imagery perpetuating stereotypes.

MidJourney is no different, since it's built on that foundation. People need to be aware of this issue, so Ali decided to run a few experiments to test some of the problems it presents.

Experiment: Good vs Bad

Typing "good person" or "bad person" into MidJourney as text prompts, he wanted to see how the machine would render this imagery. The issue doesn't seem to be that MidJourney is racist and casts people of colour in a negative way, but that it is unrepresentative: regardless of the prompt, it seems to imagine every avatar as a white person, usually male.

Good person (left), Bad person (right)
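For readers who want to move beyond a handful of anecdotal generations, the same test can be run systematically. Below is a minimal Python sketch of that idea. The `generate_image` and `annotate_demographics` helpers are hypothetical stand-ins (MidJourney has no public API), so you would plug in whatever text-to-image model and annotation process you have access to; the sketch simply tallies who appears across many runs of a neutral prompt.

```python
import collections

# Hypothetical stand-ins -- MidJourney has no public API, so substitute
# whatever text-to-image model and annotation process you have access to.
def generate_image(prompt: str):
    """Return one generated image for the given text prompt."""
    raise NotImplementedError("plug in a text-to-image model here")

def annotate_demographics(image) -> str:
    """Return a perceived-demographic label for the main face in the image.
    Human annotation is safest: automated classifiers carry biases of their own."""
    raise NotImplementedError("plug in an annotation step here")

def audit_prompt(prompt: str, n_samples: int = 100) -> collections.Counter:
    """Generate n_samples images for one prompt and tally the labels,
    turning individual impressions into a measurable skew."""
    counts = collections.Counter()
    for _ in range(n_samples):
        counts[annotate_demographics(generate_image(prompt))] += 1
    return counts

# If neutral prompts like these both return overwhelmingly white, male
# faces, the problem is representation rather than how "good" or "bad"
# people are cast.
for prompt in ["good person", "bad person"]:
    print(prompt, audit_prompt(prompt))
```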

Experiment: Jobs

He then experimented with job roles, typing occupations typically associated with high incomes and 'success', such as doctor and lawyer, and those associated with lower incomes and viewed pejoratively, such as street cleaner or bin man. Once again, these produced white faces.

Doctor (left), Lawyer (centre), Garbage collector (right)

Experiment: Happy vs Sad

Finally, looking into more emotional responses, he asked for imagery to represent a "happy person" or a "sad person", which yielded some interesting results: there is a clear bias here in the gender roles being depicted.

Happy person (left), Sad person (right)

Conclusion:
Unless prompted otherwise, the AI defaults to white-centric faces and features. This supports the theory that the issue is a lack of representation in the data: Western culture is unrepresentative and unequal, and white cis-gender men make up the majority of high-paying, high-status jobs.

We need to be mindful of these biases when using such tools to jumpstart creativity, and of how they might perpetuate the same societal issues we have today. We should also consider the implications for any non-white person of having to add conditions or specificity to a prompt before the output becomes representative of them: essentially another moment of 'othering' for marginalised cultures and people.