"Machine learning in chemistry and beyond" (ChE-650) seminar by Bharath Ramsundar: "Language based Pre-training for Drug Discovery"
Event details
Date | 07.12.2021 |
Hour | 17:00 – 18:00 |
Speaker | Bharath received a BA and BS from UC Berkeley in EECS and Mathematics and was valedictorian of his graduating class in mathematics. He did his PhD in computer science at Stanford University, where he studied the application of deep learning to problems in drug discovery. At Stanford, Bharath created the deepchem.io open-source project to grow the deep drug discovery open-source community and co-created the moleculenet.ai benchmark suite to facilitate the development of molecular algorithms, among other contributions. Bharath's graduate education was supported by a Hertz Fellowship, the most selective graduate fellowship in the sciences. After his PhD, Bharath co-founded Computable, a startup that built better tools for collaborative dataset management. Bharath is currently working actively on growing the DeepChem community and on exploring a few early projects still in stealth. He is the lead author of "TensorFlow for Deep Learning: From Linear Regression to Reinforcement Learning", a developer's introduction to modern machine learning published by O'Reilly Media, and the lead author of "Deep Learning for the Life Sciences" |
Location | Online |
Category | Conferences - Seminars |
Event Language | English |
Language based Pre-training for Drug Discovery
Pretraining has taken the NLP world by storm as ever larger language models have broken successive benchmarks. In this talk, I'll review some recent work applying pretraining to scientific challenges, and in particular, will discuss the challenges of pretraining for molecular machine learning. I'll introduce our new architecture, ChemBERTa, which explores the use of BERT-style pretraining for machine learning problems inspired by drug discovery applications.
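To make the idea of BERT-style pretraining on molecules concrete, here is a minimal sketch of the masked-token objective applied to a SMILES string. This is an illustrative assumption, not the actual ChemBERTa implementation: ChemBERTa uses a learned subword tokenizer and the full BERT masking scheme, whereas this sketch masks individual characters with a fixed probability.

```python
import random

MASK = "[MASK]"

def mask_smiles(smiles, mask_prob=0.15, seed=1):
    """Randomly replace tokens with [MASK]; the model must recover them.

    Character-level tokenization and a flat 15% masking rate are
    simplifying assumptions (BERT also keeps/randomizes some masked
    tokens; ChemBERTa uses a subword vocabulary).
    """
    rng = random.Random(seed)
    tokens = list(smiles)          # character-level tokens (assumption)
    labels = [None] * len(tokens)  # only masked positions carry a label
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok        # target the model should predict
            tokens[i] = MASK
    return tokens, labels

# Aspirin, written as a SMILES string
tokens, labels = mask_smiles("CC(=O)Oc1ccccc1C(=O)O")
print("".join(t if t != MASK else "_" for t in tokens))
```

During pretraining, a transformer encoder sees the masked sequence and is trained to predict the original token at each masked position, which lets it learn chemical structure from unlabeled molecules alone.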
Practical information
- Informed public
- Free
Contact
- Kevin Maik Jablonka, Solène Oberli, Puck van Gerwen