re:publica 25
26th-28th May 2025
STATION Berlin
Massive efforts are being made to reduce biases in both data and algorithms in order to render AI applications fair. These efforts are propelled by various high-profile cases in which biased algorithmic decision-making caused harm to women, people of color, and other minorities. However, the AI fairness field still has a blind spot: its insensitivity to discrimination against animals. This talk is a critical commentary on current fairness research in AI. It introduces the concept of 'speciesist bias' and presents my empirical research in this field, which comprises several case studies in image recognition, word embeddings, and language models. During the talk, I will provide evidence for speciesist biases in all of these areas of AI. My take-home message is that AI technologies currently play a significant role in perpetuating and normalizing violence against animals. To change this, AI fairness frameworks must widen their scope and include mitigation measures for speciesist biases.