PANEL

XR: Ethics, logics, representation

Ethical Principles in Symbiotic AI: Challenges and Formalization

As AI systems become increasingly integrated into human environments, the concept of symbiotic AI - in which humans and intelligent agents collaborate in dynamic, evolving relationships - raises pressing ethical questions. Ensuring that these systems act in alignment with human values requires not only normative guidance but also formal mechanisms capable of reasoning about ethical principles. This talk explores the role of formal argumentation as a promising framework for representing, learning, and implementing ethical norms in symbiotic AI. We discuss the challenges of formalizing ethical behavior and outline how argumentation can serve as a bridge between abstract ethical theory and concrete AI decision processes.
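To make the idea concrete, the sketch below (illustrative only, not the formalism presented in the talk) shows one common way ethical norms and candidate actions could be encoded as arguments in a Dung-style abstract argumentation framework, with the grounded extension computed as the set of arguments that survive all conflicts; the argument names and the attack relation are hypothetical.

# Illustrative sketch (assumed encoding, not the panelist's own framework):
# norms and actions as arguments, conflicts as attacks, and the grounded
# extension as the set of ultimately acceptable arguments.

def grounded_extension(arguments, attacks):
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments - accepted - rejected:
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= rejected:        # all attackers defeated -> accept
                accepted.add(a)
                changed = True
            elif attackers & accepted:       # some accepted attacker -> reject
                rejected.add(a)
                changed = True
    return accepted

# Hypothetical example: a norm attacks an action that violates it, and that
# action attacks the alternative it would replace.
args = {"respect_autonomy", "override_user_choice", "defer_to_user"}
atts = {("respect_autonomy", "override_user_choice"),
        ("override_user_choice", "defer_to_user")}
print(grounded_extension(args, atts))  # {'respect_autonomy', 'defer_to_user'}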

PANELIST

Fabio Aurelio D'Asaro

University of Salento, Italy

Fabio Aurelio D'Asaro is a postdoctoral researcher in Logic at the University of Salento. Trained in Computer Science, Mathematical Logic, and Artificial Intelligence, he has worked in both the UK and Italy, holding positions at University College London and at the Universities of Milan, Verona, and Naples. His research explores epistemic and probabilistic reasoning, logic programming, and formal argumentation, with publications in leading journals in AI and logic. In recent years, he has turned his focus to explainable and transparent AI, combining formal tools with ethical concerns to promote trustworthy systems, e.g., in medical and scientific contexts. His current work addresses issues in epistemology, logic, and the ethics of AI.

Ethics and Aesthetics of Virtual Humans

In recent years, the emergence of virtual humans – digital identities that simulate human appearance, behavior, and communication – has attracted increasing attention. Among them, a particularly prominent category is that of virtual influencers – artificially generated personas designed to be active on social media platforms to entertain audiences and influence the tastes, desires, and choices of their followers. Figures such as Lil Miquela, Aitana Lopez, Emily Pellegrini, and Rebecca Galani appear almost indistinguishable from real humans and are crafted with carefully constructed personalities capable of engaging in empathetic and persuasive conversations. This phenomenon raises significant ethical concerns. First, there is the risk that users may not be fully aware they are interacting with artificial agents, despite formal disclaimers. Second, we are witnessing a renewed form of the ELIZA effect – the tendency to anthropomorphize artificial entities by attributing thoughts and emotions to them – now amplified by the use of hyper-realistic images, human-like voices, and AI-generated videos. Another key issue concerns the predominantly female design of these virtual humans – often characterized by gentle voices and idealized beauty standards. This choice – rooted in market research – reflects and perpetuates gender stereotypes, suggesting that women, whether real or virtual, should be subservient, always available, and eager to please. In some cases, as developers and analysts have reported, interactions with chatbots become sexually explicit or abusive, pointing to the urgent need for ethical, cultural, and regulatory reflection on these new forms of digital representation and human-AI interaction. This talk offers a critical analysis of the phenomenon, combining aesthetic and moral insights in light of the growing capabilities of generative artificial intelligence.

PANELIST

Corrado Claverini

University of Salento, Italy

Corrado Claverini is a postdoctoral researcher in Moral Philosophy at the University of Salento. He obtained his PhD in Philosophy at the Vita-Salute San Raffaele University in Milan. He is a member of the Ethics in the Wild Research Lab, an interdisciplinary research unit within the Center for Digital Humanities at the University of Salento. He is currently working on the Horizon project AInCP – Clinical validation of Artificial Intelligence for providing a personalized motor clinical profile assessment and rehabilitation of upper limb in children with unilateral Cerebral Palsy (Grant agreement ID: 101057309).

Partnerships and Sponsors

[Sponsor and partner logos: Springer, University of Salento (unisalento), University of Naples (unina), pageso, AVR, XR Tech, arhemlab, CIRMIS, RES4NET]