How Body Standards are Embedded in Generative AI Image Models

Aisha Sobey


Summary
When prompted to show a person, generative AI image models tend to show thin, able-bodied, beautiful people. This talk takes a whistle-stop tour around AI and data bias, fat liberation, and disability studies to ask: why is this happening? Why is this a problem? And what can we do about it?
Short Talk
English
Conference

While generative AI (genAI) image models are increasingly popular, they have drawn sustained criticism for their biased outputs. Building on assessments of Dall-E’s prejudiced and homogenising production of race, this talk explores how fat bodies are presented compared to straight-size bodies across nine different, free-to-use genAI image models. The images are examined through reflexive thematic analysis and presented to the audience. First, they show that unless explicitly prompted to depict larger bodies, none of the models create fatness or disability. Second, zooming in on what is created when the models are prompted to show a larger body size, the outputs are found to contravene content guidelines, show fewer positive facial expressions, and contain higher rates of mistakes and anomalies than images generated without a body-size prompt. This talk questions the social imaginaries created through genAI and argues that such systems, through the power of aesthetics, create new standards of personhood that explicitly exclude people who exist in socially deviant bodies. It closes by asking what alternative options exist, or could exist, to resist this co-option of the body.