BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Commonsense Fact Linking for Natural Language Inference and Unders
 tanding
DTSTART:20220824T160000
DTEND:20220824T180000
DTSTAMP:20260510T000028Z
UID:9095cf6778f3f77e5caeebd932abbdb2affa41e94a3f7d0e15c82811
CATEGORIES:Conferences - Seminars
DESCRIPTION:Silin Gao\nEDIC candidacy exam\nExam president: Prof. Karl Abe
 rer\nThesis advisor: Prof. Antoine Bosselut\nCo-examiner: Prof. Robert Wes
 t\n\nAbstract\nNatural language processing (NLP) systems require commonsen
 se knowledge to better infer from and understand the contexts (e.g.\, dial
 ogs and narratives) they process. However\, most NLP systems link facts fr
 om commonsense knowledge graphs using simple heuristics\, which perform po
 orly and leave commonsense fact linking under-explored. Our goal is to stu
 dy commonsense fact linking more rigorously to enable stronger NLP systems
 . In this proposal\, we first introduce three papers related to our resear
 ch topic. The first paper provides background on commonsense knowledge gra
 phs\, while the second and third papers illustrate how commonsense facts f
 rom knowledge graphs are applied in current NLP applications. We then pres
 ent our proposed commonsense fact linking benchmark\, which addresses curr
 ent challenges in this technique\, and finally point out potential directi
 ons for future work.\n\nBackground papers\n1. COMET-ATOMIC 2020: On Symbol
 ic and Neural Commonsense Knowledge Graphs\nJena D. Hwang\, Chandra Bhagav
 atula\, Ronan Le Bras\, Jeff Da\, Keisuke Sakaguchi\, Antoine Bosselut\, Y
 ejin Choi\nhttps://arxiv.org/pdf/2010.05953.pdf\n\n2. Think Before You Spe
 ak: Explicitly Generating Implicit Commonsense Knowledge for Response Gene
 ration\nPei Zhou\, Karthik Gopalakrishnan\, Behnam Hedayatnia\, Seokhwan K
 im\, Jay Pujara\, Xiang Ren\, Yang Liu\, Dilek Hakkani-Tur\nhttps://arxiv.
 org/pdf/2110.08501.pdf\n\n3. A Knowledge-Enhanced Pretraining Model for Co
 mmonsense Story Generation\nJian Guan\, Fei Huang\, Zhihao Zhao\, Xiaoya
 n Zhu\, Minlie Huang\nhttps://arxiv.org/pdf/2001.05139.pdf
LOCATION:BC 233 https://plan.epfl.ch/?room==BC%20233
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
