The Limitations of AI: Stereotypes in Image Generation

Artificial intelligence as an unbiased "miracle cure"? Midjourney tested for bias

Stereotypes significantly shape human thinking and behaviour. In this context, artificial intelligence is often portrayed as an unbiased “miracle cure”. A short experiment with the image generator Midjourney shows, among other things, that this attribution does not correspond to reality.

The human brain cannot cope with the constant flood of stimuli in the fast-paced modern world – (social) categorisation helps us understand and process this complexity.[1] While this coping mechanism is essential for finding one’s way in contemporary society, it also has negative effects: it plays a key role in the formation of stereotypes and prejudices, for instance.[2] Although these cognitive processes are largely automatic and subconscious, they can be influenced and even prevented.[3] Implicit/unconscious bias training, which aims to raise awareness of subconscious biases in order to reduce them, can be beneficial in this context.

Artificial intelligence as a miracle cure for biases?
A short experiment on the image generation platform Midjourney shows that AI tools could also use such training.

When the tool is prompted to create “a pr consultant from vienna austria”, the results are for the most part deceptively realistic and could hardly be more stereotypical – a young, fair-skinned, male-read person without any visible disability smiles from the screen, conforming in every way to the beauty ideals common in our part of the world. The results are similar for the prompt “a communications team from vienna austria” – although three of the six people are generated female-read, the focus is still on a male-read individual, and the group displays no other form of diversity.
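For anyone who wants to probe this pattern more systematically than a one-off prompt allows, a rough sketch of such a test is shown below. Midjourney does not offer an official public API, so the example falls back on OpenAI’s image endpoint as a stand-in; the prompts mirror the experiment above, while the model name, sample count and output handling are illustrative assumptions rather than part of the original setup.

```python
# Hypothetical sketch of a systematic bias probe. Midjourney has no official
# public API, so OpenAI's image endpoint stands in here; all parameters are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "a pr consultant from vienna austria",
    "a communications team from vienna austria",
]
SAMPLES_PER_PROMPT = 10  # more samples make patterns harder to dismiss as chance

for prompt in PROMPTS:
    for i in range(SAMPLES_PER_PROMPT):
        result = client.images.generate(
            model="dall-e-3",  # assumed stand-in model, not Midjourney
            prompt=prompt,
            n=1,
            size="1024x1024",
        )
        # Collect the image URLs for manual review: attributes such as perceived
        # gender, skin tone, age and visible disability still need human coding.
        print(f"{prompt!r} sample {i + 1}: {result.data[0].url}")
```

Repeating each prompt many times and having the outputs coded by human annotators turns a single anecdote into rough frequencies – for example, how often a fair-skinned, male-read person appears – which makes the bias easier to document and discuss.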

It becomes clear that even AI tools do not operate as impartially and “neutrally” as is often hoped. This is hardly surprising: such programmes are “fed” and trained with real-world data, and their development and training lie in the hands of people whose own thought patterns (often unconsciously) flow into the data sets and thus into the resulting creations.[4]

How to handle AI-generated images
So, would it be better not to use artificial intelligence in communication at all? Not quite – AI tools offer an excellent opportunity for assisted brainstorming. However, it is advisable to keep a watchful eye on the diversity, equity & inclusion (DE&I) aspects of the results and to choose words and images with particular care so as not to perpetuate harmful stereotypes.[5] This can contribute significantly to more inclusive, anti-discriminatory communication, which not only makes the intended messages appealing to a broader target group, but can also strengthen one’s own employer branding, enabling the recruitment of qualified talent from a more diverse pool.

Ultimately, everyone has a part to play when it comes to DE&I, but the communications industry, as an opinion-shaper, holds a particularly important position. There is certainly still considerable room for improvement in the industry – not only in the images generated by AI tools, but also in everyday working life, as a recent PRVA survey shows, among other things.[6] All too often, communication teams still resemble the Midjourney results described above – this is neither in keeping with the times nor sustainable, and it excludes a substantial number of innovative pioneers.

We actively work on promoting all dimensions of diversity – in our own team as well as in the work we do with our clients. We do so out of passion – with the aim of bringing together the best minds in our company, regardless of origin, appearance, and personal background, to craft future-proof communication for everyone.

[1] https://link.springer.com/chapter/10.1007/978-3-322-95434-3_2; https://archiv.ub.uni-marburg.de/es/2022/0052/pdf/35_Soziale_Kategorisierung.pdf

[2] https://archiv.ub.uni-marburg.de/es/2022/0052/pdf/35_Soziale_Kategorisierung.pdf

[3] https://archiv.ub.uni-marburg.de/es/2022/0052/pdf/35_Soziale_Kategorisierung.pdf

[4] https://medium.com/@zaida.rivai/when-ai-mirrors-our-flaws-unveiling-bias-in-midjourney-1d5ef73b8e99

[5] https://www.washingtonpost.com/technology/interactive/2023/ai-generated-images-bias-racism-sexism-stereotypes/

[6] https://prva.at/news/presse/3539-branche-hat-aufholungsbedarf-ins-sachen-diversitaet

Did we pique your interest in diversity, equity and inclusion in a communications context? Click here to find out more about our services.

Image generated with Midjourney