IC Colloquium: General Architectures for Specialized AI Interactions
By: Michelle Lam - Stanford University
IC Faculty candidate
Abstract
Using a large language model is like talking to a personal assistant, confidant, topic expert, and copyeditor all at once, and that is a problem. When an LLM is one big mixed metaphor, people must juggle incompatible mental models of how it works, hindering their ability to interpret and control model behavior. In this talk, I argue that redesigning AI abstractions for end-user control can enable users to redirect generic AI systems toward their specific, distinctive goals. I demonstrate this concept by introducing just-in-time (JIT) objectives, which specialize outputs to a particular user by inducing user objectives from observed interaction traces. JIT objectives enable on-the-fly generation of software tools that visualize a research statement’s logical argument, test alternate color palettes for a figure, or provide feedback grounded in relevant academic experts. I then demonstrate how this approach also enables interventions into other AI systems, such as social media rankers (Societal Objective Functions) and concept induction from unstructured text data (LLooM). This work argues that user-controllable AI objectives are a viable strategy for combating the issues of generic AI interactions.
Bio
Michelle Lam is a Computer Science PhD candidate at Stanford University in the Human-Computer Interaction Group. She designs, builds, and deploys novel systems for user-controllable AI, shifting the work of defining and evaluating AI objectives closer to the time and place where end users interact with AI. Michelle publishes at top HCI venues such as ACM CHI, UIST, and CSCW, where she has received Best Paper Awards (CSCW ’24, CHI ’22), an Impact Recognition (CSCW ’24), and a Best Paper Honorable Mention (CSCW ’23). She has been recognized as a Rising Star in EECS, a Stanford Interdisciplinary Graduate Fellow, and a Siebel Scholar.
More information
Practical information
- General public
- Free
Contact
- Host: Bob West