BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Towards novel evaluation methods for neural dialog systems
DTSTART:20190709T140000
DTEND:20190709T160000
DTSTAMP:20260407T164047Z
UID:3d868836ffc2a878c7bdc6409dfb3cd85807d4b1a1dd24a3912a4a3a
CATEGORIES:Conferences - Seminars
DESCRIPTION:Ekaterina Svikhnushina\nEDIC candidacy exam\nExam president: D
 r. Martin Rajman\nThesis advisor: Dr. Pearl Pu Faltings\nCo-examiner: Prof
 . Robert West\n\nAbstract\nThe recent success of sequence-to-sequence neu
 ral networks has inspired intensive research on the human-like dialog-gen
 eration task. However\, evaluation of response-generation models remains 
 an impediment: a reliable automatic metric is unavailable\, while human e
 xperiments are expensive. As a result\, establishing a sound evaluation m
 etric for open-domain dialog systems is still an open research problem\, 
 which we aim to 
 address in our thesis. In this proposal\, we first introduce the context o
 f neural-based dialog generation. Then we examine why evaluation metrics f
 rom other natural language processing domains are inapplicable for this ta
 sk. Finally\, we discuss the strengths and weaknesses of a recently proposed a
 utomatic evaluation metric.\n\nBackground papers\nA Neural Conversational 
 Model. (2015)\, by O. Vinyals and Q. Le.\nHow NOT To Evaluate Your Dialogu
 e System: An Empirical Study of Unsupervised Evaluation Metrics for Dialog
 ue Response Generation. (2016)\, by C. Liu\, R. Lowe\, et al.\nRUBER: An 
 Unsupervised Method for Automatic Evaluation of Open-Domain Dialog System
 s. (2018)\, by C. Tao et al.
LOCATION:INR 212 https://plan.epfl.ch/?room=INR212
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
