BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Extending Language Models toward more Human-Like Language Understa
 nding
DTSTART:20210615T130000
DTEND:20210615T150000
DTSTAMP:20260406T211402Z
UID:74553f9d70103d26aa2631801a065073ed7eddbdb1a383123944c570
CATEGORIES:Conferences - Seminars
DESCRIPTION:Martin Josifoski\nEDIC candidacy exam\nexam president: Pro
 f. Boi Faltings\nthesis advisor: Prof. Robert West\nco-examiner: Prof.
  Tanja Käser\n\nAbstract\nTransformer-based language models\, pretrained
  on large-scale data and fine-tuned on a specific task\, have establish
 ed state-of-the-art performance on many NLP benchmarks. Despite this su
 ccess\, recent studies identify many tasks with which they struggle due
  to their lack of reading comprehension\, common sense\, memory\, or ba
 sic reasoning. When such a task comes with a benchmark dataset\, task-s
 pecific solutions have been developed\, but crucially\, these solutions
  do not generalize: they solve the dataset without solving the general
  task. In contrast\, humans build on previous experience and can quickl
 y learn to solve a new language task from only a few examples or instru
 ctions. How can we make language understanding models more human-like?
  First\, the right level of abstraction needs to be employed -- humans
  reason in terms of situations\, not token correlations. Situations are
  representation models that specify the objects/entities of interest\,
  their properties\, and the relations between them. Second\, models sho
 uld follow a modular design that allows for learning independent mechan
 isms that can be flexibly reused\, composed\, and repurposed -- humans
  break down complex tasks into smaller\, more fundamental sub-tasks. Bu
 ilding on these observations\, we propose a knowledge-augmented neuro-s
 ymbolic system with a modular design. More specifically\, the modules w
 ill communicate via situations\, and the knowledge source will be organ
 ized around entities (objects)\, their properties\, and the known relat
 ions between them.\n\nBackground papers\nFacts as Experts: Adaptable an
 d Interpretable Neural Memory over Symbolic Knowledge\nAutoregressive E
 ntity Retrieval\nInvariant Risk Minimization
LOCATION:
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
