Clarification and User Simulation in Mixed-Initiative Conversational Search

Staff - Faculty of Informatics

Date: 27 February 2024 / 13:30 - 15:00

USI East Campus, Room D0.03

You are cordially invited to attend the PhD Dissertation Defence of Ivan Sekulic on Tuesday 27 February 2024 at 13:30 in room D0.03.

Abstract:
The primary goal of an information retrieval (IR) system is to satisfy the user's information need. With the recent rise of conversational assistants, conversational search, also referred to as conversational information retrieval (CIR), has gained significant attention from the research community. While the primary goal of a CIR system is the same, the conversational setting poses significant challenges compared to traditional ad hoc search. For example, a CIR system needs to keep track of the conversational context and previous user utterances, as the user can refer to the conversation history at any point in a conversation. Moreover, CIR is often carried out in limited-bandwidth scenarios, such as voice-only or mobile search, thus requiring appropriate user interfaces. The mixed-initiative paradigm, in which the search system can be proactive and take the initiative at any point in the conversation, offers potential solutions to the aforementioned challenges. This dissertation is presented in two parts. The first part deals with modeling specific tasks in mixed-initiative conversational search. Specifically, we address the issue of constructing clarifying questions with the purpose of elucidating the user's underlying information need. To this end, we propose a novel method for generating clarifying questions based on query facets. Further, we analyze the possibility of extracting the facets from a list of documents retrieved in response to the initial query. The findings show a promising direction for clarifying question construction in conversational search. We additionally address the issue of providing appropriate responses in a conversational setting. To this aim, we propose an entity-based response rewriting approach, which provides explanations of salient entities (or offers the user the opportunity to learn about them in a follow-up question), thus making the response self-contained. The second part of the dissertation concerns itself with user simulation.
Evaluation of conversational search systems is arduous. The challenge arises from the fact that expensive and time-consuming user studies are usually required to properly evaluate such systems. User simulation presents itself as a solution to this problem, as the simulator assumes the user's role in the interaction with the system. For example, a simulated user should be able to express its information need through queries, answer clarifying questions, and provide feedback on the system's responses. In this dissertation, we present a novel large language model (LLM)-based approach to user simulation for conversational search. Specifically, we first design a simulator capable of answering, given an information need description, potential clarifying questions posed by the search system. Moreover, we expand on this approach by proposing an extension capable of multi-turn interactions and able to provide explicit feedback. We show that our simulated user can interact with conversational search systems and help in their evaluation. Further, we demonstrate the effectiveness of our approach by conducting a human annotation study, showing that answers generated by the simulator are both useful and natural. Finally, we discuss applications of our approach and possible extensions for future work.

Dissertation Committee:
- Prof. Fabio Crestani, Università della Svizzera italiana, Switzerland (Research Advisor)
- Prof. Cesare Alippi, Università della Svizzera italiana, Switzerland (Internal Member)
- Prof. Laura Pozzi, Università della Svizzera italiana, Switzerland (Internal Member)
- Prof. Krisztian Balog, University of Stavanger, Norway (External Member)
- Prof. Emine Yilmaz, UCL, London, UK (External Member)