BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Commonsense injection in large language models
DTSTART:20230607T110000
DTEND:20230607T120000
DTSTAMP:20260407T164044Z
UID:febd9bc9443ce1af39976b4a704328abbcfef7d0f9e8f943b8d54e6d
CATEGORIES:Conferences - Seminars
DESCRIPTION:Niket Tandon is a Senior Research Scientist at the Allen Ins
 titute for AI in Seattle. His research interests are in commonsense reas
 oning and natural-language-guided reasoning. He works on the Aristo team
 \, which built the AI that aced science exams. He obtained his Ph.D. fro
 m the Max Planck Institute for Informatics in Germany in 2016\, supervis
 ed by Professor Gerhard Weikum. His dissertation produced WebChild\, the
  largest automatically extracted commonsense knowledge base at the time.
  He is also the founder of PQRS research\, an organization providing res
 earch opportunities to undergraduate students from underrepresented inst
 itutes. Homepage: https://niket.tandon.info/\nAbstract: Large LMs\, whil
 e powerful\, are not immune to mistakes that are obvious to humans\, yet
  they are prohibitively costly to retrain. Our goal is to correct langua
 ge model mistakes by injecting knowledge through user interactions with
  the system\, without retraining. Our approach is a memory-augmented arc
 hitecture in which user feedback is used to make the model generate a be
 tter answer or to correct errors post hoc so that the model does not rep
 eat similar mistakes. We will discuss efficient solutions for designing
  this memory of knowledge and leveraging it in the model. This is a step
  toward never-ending learning\, and we will present a roadmap of the ope
 n research problems that must be addressed to reach never-ending learnin
 g language models.
LOCATION:BC 420 https://plan.epfl.ch/?room==BC%20420 https://epfl.zoom.us/
 j/67891526358?pwd=VGJoRmZRQmlsZ0xrU091VVRiZG42QT09
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
