Do AI systems discriminate against animals, too?

Thilo Hagendorff


Summary
Considerable effort is being made to reduce biases in AI. Up to now, however, these efforts have been anthropocentric and exclude animals. In this talk, I elaborate on “speciesist bias” in many AI applications, especially language models and image recognition systems, and stress the importance of widening the scope of AI fairness frameworks.
Lightning Talk
English
Conference

Massive efforts are being made to reduce biases in both data and algorithms in order to render AI applications fair. These efforts are propelled by various high-profile cases in which biased algorithmic decision-making caused harm to women, people of color, minorities, and others. However, the field of AI fairness still has a blind spot: it is insensitive to discrimination against animals. This talk is a critical comment on current fairness research in AI. It describes the ‘speciesist bias’ and my empirical research on it, which comprises several case studies on image recognition systems, word embeddings, and language models. During the talk, I will provide evidence for speciesist biases in all of these areas of AI. My take-home message is that AI technologies currently play a significant role in perpetuating and normalizing violence against animals. To change this, AI fairness frameworks must widen their scope and include mitigation measures for speciesist biases.
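
To make concrete what a bias probe on word embeddings can look like, the sketch below shows one common operationalization: a WEAT-style association test in the spirit of Caliskan et al. (2017). This is an illustration only, not the talk’s actual methodology; the word lists and vectors are hypothetical toy data, and a real study would load pretrained embeddings such as GloVe or word2vec.

```python
# Illustrative WEAT-style association test for speciesist bias in word
# embeddings. The tiny random vectors below are hypothetical stand-ins;
# a real probe would use pretrained embeddings.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attrs_a, attrs_b):
    # Mean similarity to attribute set A minus mean similarity to set B.
    return (np.mean([cosine(word_vec, a) for a in attrs_a])
            - np.mean([cosine(word_vec, b) for b in attrs_b]))

# Hypothetical 4-dimensional embeddings, purely for demonstration.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=4) for w in
       ["pig", "cow", "person", "friend", "ugly", "dirty", "kind", "gentle"]}

targets_animal = ["pig", "cow"]
targets_human = ["person", "friend"]
attrs_negative = [emb["ugly"], emb["dirty"]]
attrs_positive = [emb["kind"], emb["gentle"]]

# A positive gap would indicate that animal terms sit closer to negative
# attributes than human terms do -- one way to operationalize the bias.
gap = (np.mean([association(emb[w], attrs_negative, attrs_positive)
                for w in targets_animal])
       - np.mean([association(emb[w], attrs_negative, attrs_positive)
                  for w in targets_human]))
print(f"speciesist association gap: {gap:.3f}")
```

With real embeddings, the same differential-association logic applies; the interesting empirical question is whether animal-denoting words systematically pattern with negative attributes relative to human-denoting words.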

Research Group Leader