1st Workshop on

Biased Data in Conversational Agents

ECML-PKDD | 22 September 2023 | Turin, Italy

News! The program is now available.



Conversational Agents (CAs) are now widely prevalent, appearing in many aspects of our daily interactions, including computers (e.g., ChatGPT), smartphones (e.g., Siri), homes, and websites. These systems may be rule-based or built on statistical machine learning models; in the latter case, CAs are trained on conversation corpora that are either gathered in real-life settings or created using the Wizard-of-Oz method. When corpora are gathered from real-life sources such as online platforms like Reddit, the large variety of data helps the CA's responses generalize, but it may also introduce implicit or explicit biases related to racial, sexual, political, or gender matters. The presence of such biases in the data poses a challenge to the construction of CAs, as it may amplify the risks that biases pose to society. It is therefore crucial to ensure that CAs behave responsibly and safely in their interactions with users.


Workshop Objectives


The aim of this workshop is to address the challenges posed by biased data in both Machine Learning and society. We invite researchers to compare different chatbots or corpora on sexual, political, racial, or gender topics; study methods to create or mitigate bias during the construction of datasets; define approaches to assess and/or remove bias present in corpora; or handle bias at the chatbot level through NLP or Machine Learning techniques.

We also welcome submissions that take a theoretical approach to addressing the issue of bias in CAs, without necessarily involving the creation of a corpus, implementation of an agent, or exploration of Machine Learning techniques.

Topics of Interest

The workshop solicits contributions including, but not limited to, the following topics:

  • Comparison and evaluation of corpora/conversational agents on biased data
  • Assessing and mitigating biased data in corpora
  • Personalized NLP and information retrieval
  • Dictionaries and ontologies for sexist, racist, political, and gender bias
  • NLP and Machine Learning methods to recognize and handle biased data
  • Impact of biased data on conversational agents
  • Topic recognition and repair strategies in biased conversations
  • Mental models for biased data
  • Corpus creation and annotation (automatic methods are accepted)

Submission Instructions

Authors are invited to submit original, previously unpublished research papers.
We encourage the submission of:

  • extended abstracts: 2 pages,
  • short papers: 5 to 7 pages,
  • regular papers: 8 to 14 pages.
The space for references is unlimited.

Abstracts and papers must be written in English and formatted according to the Springer LNCS guidelines. Author instructions, style files, and the copyright form can be downloaded here. All papers must be converted to PDF prior to electronic submission.

All papers need to be ‘best-effort’ anonymized. We strongly encourage making code and data available anonymously (e.g., in an anonymous GitHub repository via Anonymous GitHub or in a Dropbox folder). The authors may have a (non-anonymous) pre-print published online, but it should not be cited in the submitted paper, in order to preserve anonymity; reviewers will be asked not to search for it.

At least one author of each accepted paper must have a full registration and present the paper in person. Papers without a full registration or an in-person presentation will not be included in the post-workshop Springer proceedings.

When submitting, select the workshop “Biased Data in Conversational Agents”.

Important Dates

  • Paper Submission Deadline (extended): 23 June 2023
  • Author Notification: 12 July 2023
  • Camera-Ready Deadline: 1 October 2023

Keynote Speaker


Fabio Fossa

Politecnico di Milano, Italy
fabio.fossa@polimi.it

Gender Bias and Conversational Agents: an Ethical Perspective

In my talk, I intend to discuss ethical problems raised by the implementation of gender-related biases in the design of Embodied Conversational Agents (ECAs). Mainly, I argue that considerable moral risks are attached to this design practice, so that great caution is advised. As artificial conversational agents are increasingly adopted and their linguistic skills perfected, it is important to critically assess related design choices from an ethical point of view as well. In particular, it is pivotal to shed light on the ethical risks of deliberately exploiting pre-existing social biases in order to build technologies that successfully meet user expectations, engender trust, and blend in with their context of use. Accordingly, I address the question of whether it is ethically permissible to align the design of ECAs with gender biases in order to improve interactions and maximize user satisfaction. After some introductory considerations on the rationale underlying the design strategy of bias alignment, possible answers to doubts about its ethical permissibility are investigated, and their respective contributions to the effort of aligning ECA technology with relevant ethical standards are evaluated. Finally, some concluding remarks are drawn in terms of design ethics and possible policy recommendations.

Bio: Fabio Fossa is an assistant professor (RTDA) in moral philosophy at the Department of Mechanical Engineering of Politecnico di Milano, Italy. His main research areas are philosophy of technology, robot and AI ethics, applied ethics, and the philosophy of Hans Jonas. His current research deals with the philosophy of artificial agency and the ethics of social robotics and driving automation. He is Editor-in-Chief of the Italian journal InCircolo – Rivista di filosofia e culture and a member of META – Social Sciences and Humanities for Science and Technology. His publications include Ethics of Driving Automation: Artificial Agency and Human Values (Springer, 2023).

Organizing Committee

Francesca Grasso

University of Turin, Italy

fr.grasso@unito.it

Giovanni Siragusa

University of Turin, Italy

siragusa@di.unito.it

Program Committee Members

  • Kolawole Adebayo - ADAPT Centre, Ireland
  • Federica Cena - University of Turin, Italy
  • Luigi Di Caro - University of Turin, Italy
  • Shohreh Haddadan - Zortify, Luxembourg
  • Justin Edwards - University of Oulu, Finland
  • Michael Fell - Zortify, Luxembourg
  • Davide Liga - University of Luxembourg, Luxembourg
  • Alessandro Mazzei - University of Turin, Italy
  • Emmanuel Papadakis - University of Huddersfield, United Kingdom
  • Livio Robaldo - Swansea University, Wales
  • Marco Viviani - University of Milano-Bicocca, Italy

Program

Location:

Room 1i - Main Campus - Politecnico di Torino (via Castelfidardo 39, Turin)


Schedule:

16:30 - 16:35 Opening
16:35 - 17:20

Keynote Speech: Fabio Fossa (Politecnico di Milano)
Title: Gender Bias and Conversational Agents: an Ethical Perspective

17:20 - 17:35

How Prevalent is Gender Bias in ChatGPT? - Exploring German and English ChatGPT Responses
Stefanie Urchs, Veronika Thurner, Matthias Aßenmacher, Christian Heumann, and Stephanie Thiemichen

17:35 - 17:50

Stars, Stripes, and Silicon: Unravelling the ChatGPT’s All-American, Monochrome, Cis-centric Bias
Federico Torrielli

17:50 Closing