Can You be More Polite and Positive? Infusing Social Language into Task-Oriented Conversational Agents

    Abstract

    Goal-oriented conversational agents are becoming ubiquitous in daily life for tasks ranging from personal assistants to customer support systems. For these systems to engage users and achieve their goals in a more natural manner, they need not only to provide informative replies and guide users through their problems but also to socialize with users. To this end, we extend the line of style transfer research on developing generative deep learning models that control for a specific style, such as sentiment or personality, which is especially relevant to dialogue generation for conversational agents. In this paper, we first apply statistical modeling techniques to understand human-human conversations. We find that the social language used by humans is related to user engagement and task completion. We then propose a conversational agent model that can inject social language into agent responses, given user messages as input, while still preserving content. This model is based on a state-of-the-art end-to-end dialogue model using a sequence-to-sequence deep learning architecture, extended with sentiment and politeness features. We evaluate the model in terms of content preservation and social language level using both human judgment and automatic linguistic measures. The results show that the model can generate social responses that enable agents to address users’ issues in a more socially conscious way.
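    The abstract describes extending a sequence-to-sequence dialogue model with sentiment and politeness features. As a minimal sketch of that general idea (not the authors' implementation — the dimensions, weights, and function names below are all illustrative assumptions), one way to condition a decoder on a target social-language level is to concatenate style feature values to the input of every decoder step:

    ```python
    import numpy as np

    # Hypothetical sketch: an RNN decoder step whose input is the previous token
    # embedding concatenated with [sentiment, politeness] feature values, so the
    # generated response can be steered toward a target social-language level.
    # All sizes and weights here are illustrative, not from the paper.

    EMB = 8      # token embedding size (assumed)
    HID = 16     # decoder hidden state size (assumed)
    STYLE = 2    # [sentiment, politeness] features appended to each decoder input

    rng = np.random.default_rng(0)
    W_in = rng.standard_normal((HID, EMB + STYLE)) * 0.1   # input-to-hidden weights
    W_h = rng.standard_normal((HID, HID)) * 0.1            # hidden-to-hidden weights

    def decoder_step(prev_emb, hidden, style):
        """One decoder step conditioned on style features via concatenation."""
        x = np.concatenate([prev_emb, style])   # inject style at every step
        return np.tanh(W_in @ x + W_h @ hidden)

    # Decode three steps with a "positive, polite" target style.
    style = np.array([0.9, 0.8])                # sentiment=0.9, politeness=0.8
    hidden = np.zeros(HID)
    for _ in range(3):
        prev_emb = rng.standard_normal(EMB)     # stand-in for a token embedding
        hidden = decoder_step(prev_emb, hidden, style)

    print(hidden.shape)
    ```

    Because the style vector is fed at every step rather than only at initialization, the conditioning signal persists through long responses; varying the feature values at inference time then shifts the social tone of the output while the encoder-side content signal is unchanged.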

    Authors

    Yi-Chia Wang, Runze Wang, Gokhan Tur, Hugh Williams

    Conference

    ConvAI @ NeurIPS 2018

    Full Paper

    ‘Can You be More Polite and Positive? Infusing Social Language into Task-Oriented Conversational Agents’ (PDF)

    Uber AI
