"Moral AI?! Navigating Ethical Decisions with Large Language Models

Lara Kirfel

Abstract
Should we rely on AI for (ethical) guidance? Can Large Language Models understand and replicate the nuanced moral reasoning inherent to humans? In my talk, I examine the allure and the pitfalls of relying on AI for ethical decisions.
Lightning Box 1
Talk
English
Conference

People love to turn to the digital sphere with their ethical questions, as tens of thousands of posts in the subreddit r/AmItheAsshole demonstrate. These days, however, we've leveled up from seeking advice on internet forums to consulting Artificial Intelligence. We can now simply ask ChatGPT whether it's morally okay to cancel dinner plans at the last minute because we're just not feeling it.

This raises some intriguing questions: Should we rely on AI for ethical guidance? Can Large Language Models (LLMs) truly grasp the complexities of intuitive human morality? And, importantly, how do we maintain our own moral judgment in a world where AI can offer a quick fix? Human moral judgment is rich and complex, emerging through the interplay of reason and emotion. Can we expect the same from LLMs? In my talk, I will delve into what we know about the "moral psychology" of current state-of-the-art Large Language Models. We will take a scientific look at the allure and the pitfalls of seeking moral counsel from artificial intelligence, and explore current psychological research showing how ChatGPT, sophisticated as it is, falls short as a reliable moral sage because it lacks consistent ethical reasoning.

So, what does this mean for us? Ultimately, my talk invites the audience to reflect on a future in which AI assistants might just become our ethical advisers, for better or worse.

Lara Kirfel
Cognitive & Behavioral Scientist