The Impact of Colorism on AI and the Need for Bias-Free Technology
(Image generated by Midjourney AI)
(Command to the AI: /imagine a beautiful Black mum holding her children in an urban area. Render 4k, etc.)
The new 1,000 Kenyan Shilling note has given rise to a nickname for ladies of lighter skin tones: "Color ya Thao." The phrase stands in for the idealized light skin tone prized by many in our culture. That got me thinking: what if the color codes employed in AI technology contribute to the perpetuation of colorism biases and affect our society as a whole?
As a graduate student in communication strategy, I've learned about the nuanced effects of colorism in the media. One of our professors stressed the need to write and produce content for a worldwide audience, using the internet to reach different demographics, while still creating local productions that appeal to the local population. Because of this, I became interested in how AI affects our ideas of what is attractive and desirable.
Colorism, for the layperson, refers to the discriminatory practice of treating someone differently because of their skin tone. It frequently coincides with racism and shows up everywhere from the workplace to private interactions. As a culture, we tend to favor those with lighter skin tones, which contributes to the persistence of colorism.
In light of recent research, it's clear that AI isn't immune to inherent prejudices like racism. As an example, Google's computer vision algorithms now use the Monk Skin Tone (MST) scale to categorize skin tones rather than the Fitzpatrick scale. The concept of "coded bias," in which racism is embedded in technology, prompted this change. Google Photos incorrectly classifying Black people as gorillas is just one example; soap dispensers that fail to detect dark skin and stereotyped computer-generated images are others. Google's skin lesion detection algorithm also did not work well on people with dark skin, and research has shown that autonomous vehicles have more trouble identifying pedestrians of color than those with lighter skin tones. (I have since learned that much of this is still under research and that corrections have been continuously released.)
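To make the idea of a skin tone scale concrete, here is a minimal sketch of how an image pipeline might assign a sampled skin pixel to the nearest of ten reference shades, in the spirit of the 10-point MST scale. The hex values below follow commonly cited MST swatch codes but should be treated as illustrative rather than authoritative, and a real system would do far more careful color handling (face detection, illumination correction, perceptual color spaces).

```python
# Minimal sketch: map a sampled skin pixel to the nearest of ten
# reference shades, in the spirit of a 10-point scale like MST.
# NOTE: these hex values are illustrative approximations of the
# published MST swatches, not an authoritative reference.

REFERENCE_SHADES = [
    "#f6ede4", "#f3e7db", "#f7ead0", "#eadaba", "#d7bd96",
    "#a07e56", "#825c43", "#604134", "#3a312a", "#292420",
]

def hex_to_rgb(hex_code: str) -> tuple[int, int, int]:
    hex_code = hex_code.lstrip("#")
    return tuple(int(hex_code[i:i + 2], 16) for i in (0, 2, 4))

def nearest_shade(pixel_rgb: tuple[int, int, int]) -> int:
    """Return the 1-based index of the closest reference shade using
    Euclidean distance in RGB space (a real pipeline would likely use
    a perceptual color space such as CIELAB)."""
    distances = [
        sum((p - r) ** 2 for p, r in zip(pixel_rgb, hex_to_rgb(shade)))
        for shade in REFERENCE_SHADES
    ]
    return distances.index(min(distances)) + 1

# Example: a mid-brown skin pixel sampled from a photo
print(nearest_shade((160, 120, 90)))  # -> 6 with these placeholder shades
```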
I decided to do an experiment with Midjourney AI and a wide range of user-provided command prompts to learn more about the effect of colorism on AI. Although the findings are still being reviewed, they show that there is a significant amount of work to be done to eliminate colorism bias in AI.
Let's explore the various images generated by command prompts:
(The command prompt by @Fantancy 2022 on Midjourney AI was as follows: romantic full-length portrait of a woman, a stunning woman in a magical white flowing embellished dress, flowers, crystals, and a handsome modern man in evening wear...)
When I looked at the image generated from the prompt @Fantancy 2022 had provided on Midjourney AI, I couldn't help but note that it showed very little variety. The prompt, which asked for a romantic full-length portrait of a couple, gave no hint of the skin tone the generator should render. This left me with a question: why did the people in the picture only have white faces? Is the implication that only white people are beautiful men and women?
Let's now examine the same command prompt with "black" added to it.
Even then, a white face with light skin is included in the image that was generated for an additional upscale!
With DALL·E 2 (OpenAI), the result was not any better.
To the industry experts:
ChatGPT suggests the following as ways of dealing with biases:
Collect diverse and representative data: The data used to train an AI model should be diverse and representative of the population it will be used on. This is important to ensure that the model can perform well on a wide range of inputs and that it does not perpetuate existing biases.
Pre-processing: Data pre-processing is important to ensure that the data is cleaned and ready for training. It includes removing outliers, duplicates, and irrelevant data, as well as handling missing values.
Annotate data: Annotating data is critical for supervised learning. It is the process of adding labels or tags to the data, which the AI model uses to learn the relationship between inputs and outputs.
Fairness evaluation: Use fairness metrics and evaluation methods to evaluate the performance of the model on different subgroups of the data to ensure that it does not perpetuate existing biases (a small sketch of such a check follows this list).
Monitor and iterate: Monitor the model's performance during and after training and make adjustments as needed. This may include collecting more data, adjusting the model's architecture, or fine-tuning the model's parameters.
Explainability: Make sure that the model is interpretable and that its decisions can be explained. This is important to understand how the model makes its predictions and identify potential sources of bias.
Ethical considerations: Consider the ethical implications of using the model and how it could potentially harm certain groups of people.
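As a concrete illustration of the fairness evaluation step above, here is a minimal sketch in Python that compares a model's accuracy across skin tone subgroups and reports the gap between them. The group names and evaluation records are hypothetical; a real audit would rely on established fairness toolkits, additional metrics, and statistically meaningful sample sizes.

```python
# Minimal sketch of a subgroup fairness check: compare a model's
# accuracy across skin tone groups and surface large gaps.
# The records below are hypothetical, for illustration only.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (skin_tone_group, true_label, predicted_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical evaluation records: (group, truth, prediction)
records = [
    ("light", 1, 1), ("light", 0, 0), ("light", 1, 1), ("light", 0, 1),
    ("dark", 1, 0), ("dark", 0, 0), ("dark", 1, 0), ("dark", 1, 1),
]

scores = accuracy_by_group(records)
gap = max(scores.values()) - min(scores.values())
print(scores)                       # {'light': 0.75, 'dark': 0.5}
print(f"accuracy gap: {gap:.2f}")   # a large gap signals possible bias
```

In practice, this kind of per-subgroup comparison would feed back into the earlier steps: collecting more diverse data, re-annotating, or retraining until the gap narrows.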
As communicators, we have a responsibility to advocate for a fairer and more inclusive future in which technology accurately represents the diversity of the world we live in. It's time for accurate storytelling and the creation of images that include people of various races, ethnicities, and skin colors. Let us paint a bright future for ourselves.
Signed: