What makes you change your mind? An empirical investigation in online group decision-making conversations

dialogue systems
Karadzhov, Georgi and Stafford, Tom and Vlachos, Andreas
arXiv preprint arXiv:2207.12035, 2022
Publication year: 2022

DeliData: A dataset for deliberation in multi-party problem solving

dialogue systems
Karadzhov, Georgi and Stafford, Tom and Vlachos, Andreas
arXiv preprint arXiv:2108.05271, 2021
Publication year: 2021

Evaluating Variable-Length Multiple-Option Lists in Chatbots and Mobile Search

dialogue systems
Pepa Atanasova, Georgi Karadzhov, Yasen Kiprov, Preslav Nakov, Fabrizio Sebastiani
Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval
Publication year: 2019

In recent years, the proliferation of smart mobile devices has led to the gradual integration of search functionality within mobile platforms. This has created an incentive to move away from the "ten blue links" metaphor, as mobile users are less likely to click on them, expecting to get the answer directly from the snippets. In turn, this has revived the interest in Question Answering. Then, along came chatbots, conversational systems, and messaging platforms, where the user's needs could be better served with the system asking follow-up questions in order to better understand the user's intent. While typically a user would expect a single response to any utterance, a system could also return multiple options for the user to select from, based on different system understandings of the user's intent. However, this possibility should not be overused, as this practice could confuse and/or annoy the user. How to produce good variable-length lists, given the conflicting objectives of staying short while maximizing the likelihood of having a correct answer included in the list, is an underexplored problem. It is also unclear how to evaluate a system that tries to do that. Here we aim to bridge this gap. In particular, we define some necessary and some optional properties that an evaluation measure fit for this purpose should have. We further show that existing evaluation measures from the IR tradition are not entirely suitable for this setup, and we propose novel evaluation measures that address it satisfactorily.
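To make the trade-off in the abstract concrete, here is a minimal toy measure that rewards a variable-length option list for containing the correct answer while discounting longer lists. This is an illustrative sketch only, not one of the measures proposed in the paper; the function name, the length-discount form, and the `alpha` parameter are all assumptions made for illustration.

```python
# Toy measure for variable-length multiple-option lists.
# NOT the paper's proposed measure -- just a sketch of the tension
# between keeping the list short and covering the correct answer.

def toy_list_score(options, correct, alpha=0.5):
    """Return 0 if the correct answer is missing from the list;
    otherwise return a score that shrinks as the list grows.
    alpha (assumed parameter) controls how harshly length is penalized."""
    if correct not in options:
        return 0.0
    # Length discount: a singleton list containing the answer scores 1.0.
    return 1.0 / (len(options) ** alpha)

# A short list with the answer beats a long list with the answer,
# and any list containing the answer beats one that misses it.
print(toy_list_score(["a"], "a"))            # singleton hit
print(toy_list_score(["a", "b", "c"], "a"))  # longer list, same hit
print(toy_list_score(["b", "c"], "a"))       # miss
```

Under such a score, a system is pushed toward returning extra options only when they materially raise the chance of covering the user's intent, which is the behavior the abstract argues a good measure should encourage.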

BibTeX:

@inproceedings{Atanasova:2019:EVM:3331184.3331308,
 author = {Atanasova, Pepa and Karadzhov, Georgi and Kiprov, Yasen and Nakov, Preslav and Sebastiani, Fabrizio},
 title = {Evaluating Variable-Length Multiple-Option Lists in Chatbots and Mobile Search},
 booktitle = {Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval},
 series = {SIGIR'19},
 year = {2019},
 isbn = {978-1-4503-6172-9},
 location = {Paris, France},
 pages = {997--1000},
 numpages = {4},
 url = {http://doi.acm.org/10.1145/3331184.3331308},
 doi = {10.1145/3331184.3331308},
 acmid = {3331308},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {chatbots, evaluation measures, mobile search},
}