Talk by Richard Lee Davis, Stanford Graduate School of Education


Event details

Date and time 15.01.2020, 11:15–12:00
Place and room
Speaker Richard Lee Davis is a doctoral student in Learning Sciences and Technology Design at the Stanford Graduate School of Education. His research interests include using machine learning to understand student learning in open-ended, project-based learning environments. He received an M.S. in computer science at Stanford with a focus on artificial intelligence, and is currently working with the Piech Lab at Stanford on the development of variational Bayesian methods for estimating student knowledge. He is a recipient of the Stanford Interdisciplinary Graduate Fellowship, and holds a B.A. in Philosophy and Studio Art with a minor in Physics from the University of Virginia. Prior to joining Stanford, he worked as a software and firmware developer while co-directing an artist collective in Philadelphia.
Category Conferences - Seminars
Specialized Machine Learning Methods for Attacking the Small-Data Problem in Education

Over the past decade there have been a number of breakthroughs in machine learning, including human-level performance on image classification, speech recognition, strategic game playing, and text generation. These methods have enormous potential to transform education by making it easier for teachers to understand classroom dynamics, monitor student learning, grade more quickly and with fewer mistakes, provide feedback to students in real time, support collaboration, and measure learning during activities taking place in situated, active learning environments. However, a number of obstacles must be overcome before these promises can be realized. One of the most significant is the amount of data available: most education datasets are simply too small to properly train the types of deep neural networks that have led to recent breakthroughs. In this talk I will discuss promising approaches to overcoming this obstacle. I will describe how combining theoretical insights from the learning sciences with unsupervised learning methods made it possible to find meaningful structure in a small dataset of 40 students working on hands-on problems in a makerspace. I will also discuss how deep probabilistic programming languages can be used to design models of students based on domain knowledge, and then to perform inference about student knowledge and question characteristics in small-data regimes.
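The abstract does not spell out the speaker's models, but the kind of inference it describes — estimating student knowledge and question characteristics from very little data — can be illustrated with a classic item response theory (IRT) sketch. The snippet below is an assumption-laden toy, not the method presented in the talk: it uses a two-parameter logistic (2PL) model with a standard normal prior and a simple grid approximation to the posterior over one student's ability, with made-up item parameters.

```python
import math

def irt_2pl(ability, difficulty, discrimination):
    """Probability of a correct answer under a 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))

def posterior_ability(responses, items, grid=None):
    """Grid-approximate posterior over a student's ability.

    responses: list of 0/1 answers; items: list of (difficulty,
    discrimination) pairs, one per response. Uses a N(0, 1) prior.
    """
    if grid is None:
        grid = [i / 10.0 for i in range(-40, 41)]  # abilities -4.0 .. 4.0
    weights = []
    for a in grid:
        log_p = -0.5 * a * a  # log of unnormalized standard normal prior
        for r, (d, disc) in zip(responses, items):
            p = irt_2pl(a, d, disc)
            log_p += math.log(p if r else 1.0 - p)
        weights.append(math.exp(log_p))
    total = sum(weights)
    return grid, [w / total for w in weights]

# Hypothetical student: correct on three easier items, wrong on the hardest.
items = [(-1.0, 1.0), (0.0, 1.0), (0.5, 1.2), (1.5, 0.8)]
grid, post = posterior_ability([1, 1, 1, 0], items)
mean_ability = sum(a * p for a, p in zip(grid, post))
```

With only four observations, the prior keeps the estimate close to zero — a small-scale analogue of how a well-specified probabilistic model remains usable where a deep network would badly overfit.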

Practical information

  • Informed public
  • Free




Tags: machine learning, education, chili