Uncertainty quantification in atomistic modeling: From uncertainty-aware density functional theory to machine learning
Event details
| Date | 25.11.2025 – 28.11.2025 |
| Category | Conferences - Seminars |
| Event Language | English |
You can apply to participate and find all the relevant information (speakers, abstracts, program,...) on the event website: https://www.cecam.org/workshop-details/uncertainty-quantification-in-atomistic-modeling-from-uncertainty-aware-density-functional-theory-to-machine-learning-1380.
Registration is required to attend the full event, take part in the social activities and present a poster at the poster session (if any). However, the EPFL community is welcome to attend specific lectures without registration if the topic is of interest to their research. Do not hesitate to contact the CECAM Event Manager if you have any questions.
Description
Uncertainty quantification (UQ) is a standard, widespread practice in the experimental sciences. However, rigorous uncertainty analysis in atomistic modeling, from density functional theory (DFT) calculations to machine learning (ML) models trained on DFT results, remains relatively underdeveloped, and scientific results in the field are frequently reported without any uncertainty or error quantification. This poses a significant challenge for innovation and progress in materials science, especially given the crucial role of multiscale numerical simulations in the contemporary research landscape.
Our workshop first encompasses UQ in DFT calculations, whose errors are later inherited by ML models trained on DFT data. Numerical parameters such as basis set sizes, energy tolerances, convergence criteria, and many other preconditioning parameters require careful selection. In practice, these parameters are often chosen heuristically, especially in high-throughput contexts, which can lead to inconsistent and unsystematic errors that make data difficult to compare. Error balancing strategies can improve parameter tuning in DFT simulations [1-5], but comprehensive error bounds for generic chemistry codes and fully integrated models remain lacking.
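To make this kind of analysis concrete, the following is a minimal sketch of a basis-set convergence study; the choice of ASE with the GPAW plane-wave calculator, the silicon test system, and the cutoff grid are all illustrative assumptions on our part, not recommendations from the workshop.

```python
# Minimal sketch of a basis-set convergence study, as commonly used to
# attach a discretization error bar to a DFT result. The system, code
# (ASE + GPAW) and cutoff values are illustrative assumptions.
from ase.build import bulk
from gpaw import GPAW, PW

atoms = bulk("Si", "diamond", a=5.43)
energies = {}
for ecut in [300, 400, 500, 600, 700]:  # plane-wave cutoffs in eV
    atoms.calc = GPAW(mode=PW(ecut), kpts=(4, 4, 4), txt=None)
    energies[ecut] = atoms.get_potential_energy()

# Take the largest cutoff as the reference and report the residual
# basis-set error of the cheaper settings against it.
e_ref = energies[700]
for ecut, e in energies.items():
    print(f"ecut={ecut} eV: E={e:.6f} eV, |error| ≈ {abs(e - e_ref):.2e} eV")
```

Error-balancing strategies go one step further than such one-parameter scans, trading off the errors from several numerical parameters against each other at fixed cost.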
Our workshop also spans UQ for atomistic ML. Atomistic ML has extended materials modeling at ab initio accuracy beyond the conventionally accessible length and time scales. ML models are, however, intrinsically statistical, and UQ is essential to their use. Various UQ methods have been devised that allow atomistic ML models to be deployed with uncertainty estimates [6, 16], enabling error propagation all the way up to the physical observables [10]. Although UQ for deep neural network (NN)-based models poses a greater challenge, researchers have demonstrated various approaches by which reliable uncertainty estimates can be obtained for these models as well [11, 14-16], with more recent efforts focusing on making UQ cheap and efficient for such NN-based models [7, 8, 12, 13]. On-the-fly ML uncertainty estimates can also be leveraged to construct robust datasets for model training via active learning strategies [9].
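As a simple illustration of one such method, the sketch below implements an ensemble-style uncertainty estimate in the spirit of deep ensembles [16]: several independently trained regressors are queried, and the spread of their predictions serves as the error bar. The scikit-learn MLPs, the synthetic 1D data, and the ensemble size of five are stand-in assumptions, not a specific model from the workshop.

```python
# Minimal sketch of ensemble-based uncertainty estimation: train M
# independently initialized regressors and report the mean and standard
# deviation of their predictions. Synthetic data stands in for a real
# atomistic dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)  # noisy 1D target

ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                 random_state=seed).fit(X, y)
    for seed in range(5)  # different initializations -> different models
]

X_test = np.linspace(-4, 4, 9).reshape(-1, 1)  # includes extrapolation
preds = np.stack([m.predict(X_test) for m in ensemble])
mean, std = preds.mean(axis=0), preds.std(axis=0)
for x, mu, s in zip(X_test[:, 0], mean, std):
    print(f"x = {x:+.1f}: prediction = {mu:+.3f} ± {s:.3f}")
```

The predicted spread typically grows outside the training range; active learning strategies [9] exploit exactly this signal to decide which new configurations to label.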
Our workshop aims to bring together researchers focused on UQ in both the DFT and atomistic ML domains, allowing the respective research communities to take a collective first step towards the "holy grail" of UQ in atomistic modeling: a comprehensive approach that links the errors of the DFT calculations to those stemming from the statistical inference of ML models. We invite researchers from both communities and beyond to join us, share the latest developments in the loosely defined areas of (i) UQ in DFT, (ii) UQ in atomistic ML, and (iii) applications of UQ in atomistic modeling, and partake in stimulating discussions that will shape the future of UQ in atomistic modeling.
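Purely as an illustration of this goal (our own sketch, not a result presented at the workshop): if the error inherited from the DFT reference data and the statistical error of the ML model could be treated as independent, a first error budget for a predicted observable A would read

```latex
% Illustrative error budget, assuming independent error sources:
% the total uncertainty on an ML-predicted observable combines the
% error inherited from the DFT reference data with the statistical
% error of the ML model itself.
\sigma_{A,\mathrm{total}}^{2} \approx \sigma_{A,\mathrm{DFT}}^{2} + \sigma_{A,\mathrm{ML}}^{2}
```

with the first term estimated by convergence and error-balancing analyses [1-5] and the second by the statistical UQ methods above [6-16]; making such a decomposition rigorous, in particular when the two error sources are correlated, is precisely the open problem.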
Note that we encourage all prospective participants to present their research at our workshop, either as a poster or a contributed talk. Please see the detailed application instructions for more information.
The DAEMON COST Action CA22154 acts as a co-organizer of the event. COST (European Cooperation in Science and Technology) is a funding agency for research and innovation networks. Our Actions help connect research initiatives across Europe and enable scientists to grow their ideas by sharing them with their peers. This boosts their research, career and innovation. www.cost.eu
References
[1] J. Mortensen, K. Kaasbjerg, S. Frederiksen, J. Nørskov, J. Sethna, K. Jacobsen, Phys. Rev. Lett., 95, 216401 (2005)
[2] G. Dusson, Y. Maday, IMA J. Numer. Anal., 37, 94-137 (2016)
[3] K. Lejaeghere, The uncertainty pyramid for electronic-structure methods (2020)
[4] G. Houchins, D. Krishnamurthy, V. Viswanathan, MRS Bull., 44, 204-212 (2019)
[5] M. Herbst, A. Levitt, E. Cancès, Faraday Discuss., 224, 227-246 (2020)
[6] C. E. Rasmussen, C. K. I. Williams, Gaussian Processes for Machine Learning, MIT Press (2006)
[7] M. Kellner, M. Ceriotti, Mach. Learn.: Sci. Technol., 5, 035006 (2024)
[8] F. Bigi et al., arXiv:2403.02251 (2024)
[9] D. Holzmüller et al., J. Mach. Learn. Res., 24(164), 1-81 (2023)
[10] G. Imbalzano, Y. Zhuang, V. Kapil, K. Rossi, E. Engel, F. Grasselli, M. Ceriotti, J. Chem. Phys., 154 (2021)
[11] J. Lee, L. Xiao, S. Schoenholz, Y. Bahri, R. Novak, J. Sohl-Dickstein, J. Pennington, J. Stat. Mech., 2020, 124002 (2020)
[12] A. Jacot, F. Gabriel, C. Hongler, Adv. Neural Inf. Process. Syst., 31 (2018)
[13] E. Daxberger et al., Adv. Neural Inf. Process. Syst., 34, 20089-20103 (2021)
[14] Y. Gal, Z. Ghahramani, Proc. 33rd Int. Conf. on Machine Learning, 1050-1059 (2016)
[15] A. Zhu, S. Batzner, A. Musaelian, B. Kozinsky, J. Chem. Phys., 158 (2023)
[16] B. Lakshminarayanan, A. Pritzel, C. Blundell, Adv. Neural Inf. Process. Syst., 30 (2017)
Practical information
- Informed public
- Registration required
Organizer
- Sanggyu Chong, EPFL ; Genevieve Dusson, CNRS & Université Bourgogne Franche-Comté ; Federico Grasselli, University of Modena and Reggio Emilia ; Michael Herbst, EPFL ; Julia Maria Westermayr, Leipzig University
Contact
- Cornelia Bujenita, CECAM Events and Operations Manager