BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Learning models: Explainability for Black-Box Neural Network Learn
 ers in the Education Domain
DTSTART:20210726T160000
DTEND:20210726T180000
DTSTAMP:20260504T123226Z
UID:5bbfe5e7d1f26cd9a15dc40b9b038e84f2bd8b04aaedc9b2c19cded6
CATEGORIES:Conferences - Seminars
DESCRIPTION:Vinitra Swamy\nEDIC candidacy exam\nexam president: Prof. Pier
 re Dillenbourg\nthesis advisor: Prof. Tanja Käser\nthesis coadvisor: Prof
 . Martin Jaggi\nco-examiner: Prof. Antoine Bosselut\n\nAbstract\nDigital l
 earning environments are commonplace in the modern classroom. The rise of 
 educational technology has been subsequently mirrored by the rise of appli
 ed machine learning research in areas like student learner modeling\, auto
 grading\, dropout prediction\, and curriculum design. Although the amount 
 of research in this sphere has grown significantly\, the adoption of neura
 l network models in learning platforms has not yet become ubiquitous. Crit
 ics of machine learning in education are concerned about the interpretabil
 ity of black box models and the privacy of student data. Other educators d
 o not see how large scale models can have an impact on student performance
  prediction in their small\, ongoing\, or first-time course.\n\nIn this do
 ctoral candidacy proposal\, we present three papers aimed at overcoming th
 e gap between real-world educational data challenges and intelligent predi
 ctors of student performance. The first paper addresses the problem of unk
 nown student outcomes in ongoing courses using transfer learning to demons
 trate that knowledge can be shared across MOOCs for student dropout predic
 tion. The second paper addresses the small classroom size problem with an 
 active-learning approach\, showing that highly performant student affect d
 etectors can be trained using a minimal set of data points. The third rese
 arch paper is a landmark work in the explainable AI field\, focusing on tr
 aditionally interpretable local models (LIME) to explain black box model b
 ehavior. Although LIME has enjoyed great popularity in the ML research com
 munity\, there has not been much work in neural network explainability for
  education. We build upon these works to propose a research agenda overcom
 ing practical adoption concerns in the machine learning for education fiel
 d through transfer learning\, active learning\, and interpretability.\n\nB
 ackground papers\n\n	Transfer Learning using Representation Learning in Ma
 ssive Open Online Courses\n	\n		Authors: Mucong Ding\, Yanbang Wang\, Erik
  Hemberg\, Una-May O'Reilly\n		Link: https://arxiv.org/pdf/1812.05043.pdf\
 n	\n	\n	LIME: "Why Should I Trust You?": Explaining the Predictions of Any
  Classifier\n	\n		Authors: Marco Tulio Ribeiro\, Sameer Singh\, Carlos Gue
 strin\n		Link: https://arxiv.org/pdf/1602.04938.pdf\n	\n	\n	Active Learnin
 g for Student Affect Detection\n	\n		Authors: TY Yang\, RS Baker\, C Stude
 r\, N Heffernan\, AS Lan\n		Link: https://www.research-collection.ethz.ch/
 bitstream/handle/20.500.11850/461351/19EDM-al.pdf?sequence=1&isAllowed=y\n
 	\n	\n
LOCATION:
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
