Artificial Illusion: what it means to be human for the future of AI

1 August 2025 · Christoph Purschke · 2 minute read · #conversations

CuCo Lab Conversations | Dirk Hovy (Milano)

INFORMATION

Why do some AI models stand out in a sea of them, while others do not? Most of the models we interact with, for email sorting, movie recommendations, and schedule organization, go unnoticed. Only a few capture our attention, and those that do often resemble humans in their capabilities. Anthropomorphizing models, such as those in robots and text generators like ChatGPT, helps us understand and accept complex systems and provides parameters for interacting with them. However, it also results in exaggerated claims, overlooked issues, and misplaced fears. This talk examines the state of AI models, dispelling myths by exploring their actual capabilities and limitations. Through signing apes, flirtatious smart speakers, scheming octopuses, and algorithmic painters, we will explore why expecting AI to behave like humans can lead to challenges and misunderstandings.

Understanding AI objectively allows us to appreciate both the incredible feats these models achieve without human bodies and brains, and our complex, often underappreciated human abilities. This investigation also helps address current societal concerns: Will AI replace jobs? Could it pose a threat? How do AI's stereotyping tendencies reflect our own social biases, and how does this affect responsible technology use? Dipping into psychology, philosophy, machine learning, and AI history to separate fact from fiction, we will explore why AI is not inherently good or evil, but rather a tool that reflects our own complexities, and raises the questions: what do we mean by AI, and what does it mean to be human?
