We propose an online, end-to-end, neural generative conversational model for open-domain dialogue. It is trained using a unique combination of offline two-phase supervised learning and online human-in-the-loop active learning. Whereas most existing work relies on offline supervision or hand-crafted reward functions for online reinforcement learning, we devise a novel interactive learning mechanism based on Hamming-diverse beam search for response generation and one-character user feedback at each step. Experiments show that our model inherently promotes the generation of semantically relevant and interesting responses, and can be used to train agents with customized personas, moods, and conversational styles.
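As a rough illustration of the diversity mechanism mentioned above, the sketch below implements group-wise beam search with a Hamming diversity penalty (in the spirit of diverse beam search): at each decoding step, a token's score is reduced in proportion to how many earlier beam groups already selected that token at the same position. The `score_fn` oracle and the toy scorer are hypothetical stand-ins for a trained decoder, not the paper's actual model.

```python
def hamming_diverse_beam_search(score_fn, vocab, num_groups, group_size,
                                max_len, diversity_strength=0.5):
    """Group-wise beam search with a Hamming diversity penalty.

    At step t, a token already chosen at step t by k earlier groups is
    penalized by k * diversity_strength, pushing groups toward distinct
    surface forms. `score_fn(prefix, token)` is a hypothetical stand-in
    for a decoder's log-probability of `token` given `prefix`.
    """
    # Each group holds (token_tuple, cumulative_score) hypotheses.
    groups = [[((), 0.0)] for _ in range(num_groups)]
    for t in range(max_len):
        token_counts = {}  # tokens chosen at this step by earlier groups
        new_groups = []
        for g in range(num_groups):
            candidates = []
            for prefix, s in groups[g]:
                for tok in vocab:
                    penalty = diversity_strength * token_counts.get(tok, 0)
                    candidates.append((prefix + (tok,),
                                       s + score_fn(prefix, tok) - penalty))
            candidates.sort(key=lambda c: c[1], reverse=True)
            kept = candidates[:group_size]
            # Record this group's step-t choices so later groups avoid them.
            for prefix, _ in kept:
                token_counts[prefix[-1]] = token_counts.get(prefix[-1], 0) + 1
            new_groups.append(kept)
        groups = new_groups
    return [hyp for grp in groups for hyp in grp]


# Toy scorer (hypothetical): slightly prefers token 'a' everywhere.
def toy_score(prefix, token):
    return 0.0 if token == 'a' else -0.1


hyps = hamming_diverse_beam_search(toy_score, ['a', 'b', 'c'],
                                   num_groups=3, group_size=1, max_len=2)
```

Without the penalty all three groups would decode the greedy sequence `('a', 'a')`; with it, each group starts with a different token, which is the behavior the one-character feedback loop exploits to elicit varied candidate responses.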
Published as a short paper in the Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM), 2017.
This work was done in collaboration with Xin Jiang and Hang Li from Huawei Noah's Ark Lab in Hong Kong, and Pascal Poupart.