Collecting User Satisfaction Ratings for Dialogue Systems

2020/03/20

Mickey, a master's student I was co-supervising, recently published a paper based on his thesis work at the ACM Conference on Human Information Interaction and Retrieval (CHIIR). His work focused on interfaces for collecting high-quality user satisfaction ratings for dialogue systems.

User satisfaction is an important indicator in the design, evaluation, and adaptation of dialogue systems, yet collecting reliable satisfaction ratings for conversation-based systems remains challenging. User questionnaires may yield biased results and typically have low response rates (~1%). Third-party raters can be costly, and obtaining consistent, high-quality ratings from them is hard. Mickey identified the issues that arise here, developed and open-sourced a tool that addresses them, and evaluated it in a case study on a public dataset.
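To give a rough sense of what "consistent" means for third-party raters: a standard check is inter-rater agreement. The sketch below is not Mickey's tool, just a generic illustration with made-up ratings; it uses scikit-learn to compute a weighted Cohen's kappa between two hypothetical raters scoring the same conversations on a 1-5 satisfaction scale.

```python
# A minimal sketch (illustrative only, not the published tool) of checking
# whether two third-party raters assign consistent satisfaction ratings
# to the same set of conversations.
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-5 satisfaction ratings from two raters for ten conversations.
rater_a = [5, 4, 4, 2, 3, 5, 1, 4, 3, 2]
rater_b = [5, 4, 3, 2, 3, 4, 1, 5, 3, 2]

# Quadratic weights penalize large disagreements more than small ones,
# which suits ordinal scales like Likert satisfaction ratings.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")  # values near 1.0 = high agreement
```

A low kappa on a pilot batch is a common signal that the rating guidelines or the interface need refinement before collecting ratings at scale.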

The tool received positive feedback and can be used to obtain high-quality ratings for written conversation data. It should be useful to anyone interested in measuring user satisfaction for conversational interfaces.