Conversational Agents (CAs) are now widely prevalent, appearing in many aspects of our daily interactions, including computers (e.g., ChatGPT), smartphones (e.g., Siri), homes, and websites. These systems may be rule-based or built on statistical machine learning models; in the latter case, CAs are trained on conversation corpora that are either gathered in real-life settings or created using the Wizard-of-Oz method. When the corpora are gathered from real-life sources, such as online platforms like Reddit, the large variety of data promotes generalization in the CA responses, but it may also introduce implicit or explicit biases related to racial, sexual, political, or gender matters. The presence of such biases in the data poses a challenge to the construction of CAs, as it may amplify the risks that biases pose to society. It is therefore crucial to ensure that CAs behave responsibly and safely in their interactions with users.
The aim of this workshop is to address the challenges posed by biased data in both Machine Learning and society. We invite researchers to compare different chatbots and corpora on sexual, political, racial, or gender topics; study methods to create or mitigate bias during the construction of datasets; define approaches to assess and/or remove bias present in corpora; or handle bias at the chatbot level through NLP or Machine Learning techniques.
We also welcome submissions that take a theoretical approach to addressing the issue of bias in CAs, without necessarily involving the creation of a corpus, implementation of an agent, or exploration of Machine Learning techniques.
The workshop solicits contributions including, but not limited to, the following topics:
Authors are invited to submit original, previously unpublished research papers.
We encourage the submission of:
Abstracts and papers must be written in English and formatted according to the Springer LNCS guidelines. Author instructions, style files, and the copyright form can be downloaded here. All papers must be converted to PDF prior to electronic submission.
All papers must be ‘best-effort’ anonymized. We strongly encourage making code and data available anonymously (e.g., in an anonymous GitHub repository via Anonymous GitHub or in a Dropbox folder). The authors may have a (non-anonymous) pre-print published online, but it should not be cited in the submitted paper, in order to preserve anonymity; reviewers will be asked not to search for it.
At least one author of each accepted paper must have a full registration and present the paper in person. Papers without a full registration or an in-person presentation will not be included in the post-workshop Springer proceedings.
Biased Data in Conversational Agents
Gender Bias and Conversational Agents: an Ethical Perspective
In my talk, I intend to discuss ethical problems raised by the implementation of gender-related biases in the design of Embodied Conversational Agents (ECAs). Mainly, I argue that considerable moral risks are attached to this design practice, so that great caution is advised. As artificial conversational agents are increasingly adopted and their linguistic skills perfected, it is important to critically assess related design choices from an ethical point of view as well. In particular, it is pivotal to shed light on the ethical risks of deliberately exploiting pre-existing social biases in order to build technologies that successfully meet user expectations, engender trust, and blend in with their context of use. Accordingly, I address the question of whether it is ethically permissible to align the design of ECAs with gender biases in order to improve interactions and maximize user satisfaction. After some introductory considerations on the rationale underlying the design strategy of bias alignment, possible answers to doubts about its ethical permissibility are investigated, and their respective contributions to the effort of aligning ECA technology with relevant ethical standards are evaluated. Finally, some concluding remarks are drawn in terms of design ethics and possible policy recommendations.
Bio: Fabio Fossa is an assistant professor (RTDA) in moral philosophy at the Department of Mechanical Engineering of Politecnico di Milano, Italy. His main research areas are philosophy of technology, robot and AI ethics, applied ethics, and the philosophy of Hans Jonas. His current research deals with the philosophy of artificial agency and the ethics of social robotics and driving automation. He is Editor-in-Chief of the Italian journal InCircolo – Rivista di filosofia e culture and a member of META – Social Sciences and Humanities for Science and Technology. Among his publications: Ethics of Driving Automation. Artificial Agency and Human Values (Springer, 2023).
Room 1i - Main Campus - Politecnico of Turin (via Castelfidardo 39, Turin)
16:30 - 16:35 | Opening
16:35 - 17:20 | Keynote Speech: Fabio Fossa (Politecnico of Milan)
17:20 - 17:35 | How Prevalent is Gender Bias in ChatGPT? - Exploring German and English ChatGPT Responses
17:35 - 17:50 | Stars, Stripes, and Silicon: Unravelling the ChatGPT’s All-American, Monochrome, Cis-centric Bias
17:50 | Closing