Toward Trustworthy Large Language Models for Scalable, Personalized Education


Event details

Date 22.08.2025
Hour 13:00–15:00
Speaker Fares Mahmoud Fawzi
Location
Category Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Nicolas Flammarion
Thesis advisor: Prof. Tanja Käser
Co-examiner: Prof. Martin Jaggi

Abstract
Education at scale, in MOOCs and large classrooms, has expanded access to learning for thousands of students, but often at the expense of personalized guidance and feedback. Large Language Models (LLMs) offer a promising solution, with applications in automated question generation, interactive tutoring, and real-time feedback that adapts to individual learners. However, LLMs are prone to hallucinations, producing plausible yet incorrect or pedagogically misaligned responses. Their opaque decision-making processes make it difficult for educators to audit outputs or steer behavior, reinforcing perceptions of LLMs as "black boxes" and undermining trust. A dual strategy is needed to address these challenges. Grounding techniques, such as in-context learning augmented with external tool use, can constrain LLM outputs by anchoring them in external sources. Interpretability methods, including representation engineering and mechanistic interpretability, can reveal how LLMs encode and act on educational signals and enable targeted behavioral steering. Together, these approaches support the development of scalable, adaptive, and trustworthy AI-assisted education systems.
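
To make the interpretability direction concrete, the sketch below shows activation steering, one common representation-engineering technique: a "steering" direction is estimated as the difference of mean hidden states between two contrasting prompts and then added to the residual stream during generation. It is purely illustrative and not from the talk: it assumes the HuggingFace transformers and torch libraries, uses the small gpt2 checkpoint as a stand-in model, and the contrast prompts, layer index, and steering strength are arbitrary choices for demonstration.

```python
# Illustrative sketch only: activation steering in the style of representation
# engineering, with GPT-2 as a stand-in model. Layer, prompts, and strength are
# assumptions for demonstration, not choices from the talk.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6    # which transformer block to steer (illustrative)
ALPHA = 4.0  # steering strength; would need tuning in practice


def mean_hidden(prompt: str) -> torch.Tensor:
    """Mean hidden state after block LAYER, averaged over token positions."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block LAYER sits at index LAYER + 1.
    return out.hidden_states[LAYER + 1].mean(dim=1).squeeze(0)


# Difference-of-means direction from a single toy contrast pair.
steer = (mean_hidden("Answer strictly from the provided course material.")
         - mean_hidden("Invent a confident-sounding answer even if unsure."))


def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    hidden = output[0] + ALPHA * steer.to(output[0].dtype)
    return (hidden,) + output[1:]


handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
try:
    ids = tok("Student question: what is gradient descent?", return_tensors="pt")
    gen = model.generate(**ids, max_new_tokens=40, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so later calls are unsteered
```

In practice the steering direction would be estimated from many contrast pairs and validated on held-out prompts; a single pair as above only illustrates the mechanics.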

Selected papers
coming soon

Practical information

  • General public
  • Free

Tags

EDIC candidacy exam
