Improving the metacognitive skills of students using AI-based methods


Event details

Date 21.06.2024
Hour 11:00–13:00
Speaker Seyed Parsa Neshaei
Category Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Robert West
Thesis advisor: Prof. Tanja Käser
Co-examiner: Prof. Pierre Dillenbourg

Possessing metacognitive skills has been shown to improve the academic progress and achievement of students. However, students regularly encounter challenges in effectively employing metacognitive strategies, such as reflective writing, in their educational journeys. While machine learning models embedded in intelligent tutoring systems can support students' metacognition, they require extensive training data to perform well, necessitating lengthy and costly human annotation processes. Large Language Models (LLMs) can mitigate this issue; however, they come with challenges of their own, including low controllability and the possibility of hallucination, as well as the difficulty of communicating their outputs to users in a transparent and useful manner. Such challenges have limited their use in real-world educational environments.

This doctoral candidacy proposal integrates insights from three papers to address these challenges with the goal of helping students improve their metacognitive skills, specifically focusing on reflective writing.
The first paper proposes an adaptive knowledge distillation approach that uses the vast knowledge embedded in LLMs to improve the performance of small, efficient models on a task of educational interest.
The second paper introduces an approach that co-trains a text generation model with a task-specific knowledge tracing model to generate relevant exercises coordinated with students' current learning status.
The third paper conducts a design process for a reflective writing chatbot, including interviews with domain experts and a real-world field study, and identifies the key benefits and limitations of embedding LLMs directly, without smaller task-specific models, in a user-facing application.
Our proposed research plan, inspired by these papers, aims to use machine learning models to improve the metacognitive and reflective skills of students by A) using LLMs to improve the performance of smaller task-specific models, B) steering and controlling language model generations via the smaller models, and C) finding the best ways to communicate and visualize the generated suggestions and predictions to students and their instructors.
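To make direction (A) concrete, the standard knowledge-distillation recipe trains a small student classifier against the softened output distribution of a large teacher model in addition to the gold labels. The sketch below illustrates only this generic loss; the label set, temperature, and weighting are illustrative assumptions, not details from the proposal or its background papers:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature yields a softer,
    # more informative distribution over labels.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, gold_label,
                      temperature=2.0, alpha=0.5):
    # Blend of (a) KL divergence between the softened teacher and student
    # distributions and (b) ordinary cross-entropy on the gold label.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = sum(pt * math.log(pt / ps)
             for pt, ps in zip(p_teacher, p_student) if pt > 0)
    ce = -math.log(softmax(student_logits)[gold_label])
    # The T^2 factor restores the gradient scale of the soft-target term.
    return alpha * (temperature ** 2) * kl + (1 - alpha) * ce
```

For example, with three hypothetical reflection-quality labels, a student whose logits agree with the teacher and the gold label incurs a lower loss than one that contradicts both; in the adaptive variant described in the abstract, the weighting between the two terms would additionally depend on the example rather than being a fixed `alpha`.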

Background papers
Paper 1:
MindfulDiary: Harnessing Large Language Model to Support Psychiatric Patients’ Journaling
Authors: Taewan Kim, Seolyeong Bae, Hyun Ah Kim, Su-woo Lee, Hwajung Hong, Chanmo Yang, and Young-Ho Kim

Paper 2:
Let GPT be a Math Tutor: Teaching Math Word Problem Solvers with Customized Exercise Generation
Authors: Zhenwen Liang, Wenhao Yu, Tanmay Rajpurohit, Peter Clark, Xiangliang Zhang, Ashwin Kalyan

Paper 3:
Adaptive and Personalized Exercise Generation for Online Language Learning
Authors: Peng Cui, Mrinmaya Sachan

Practical information

  • General public
  • Free
