LRS² - Daniel J. Siefman, Criticality Validation with Bayesian Analysis

Event details
Date: 30.05.2018
Time: 11:00 – 12:00
Speaker: Daniel J. Siefman
Category: Conferences - Seminars
LRS Lab Research Seminar
Season 2
Daniel J. Siefman, Criticality Validation with Bayesian Analysis
A central question in modeling physical systems is “how reliable are our calculations?” If we say a reactor is safe based on a simulation, we had better be confident about it. The easiest way to check, or validate, a simulation is to compare a calculated value with an experimental value. In reactor physics, for example, we simulate a reactor, calculate keff, and then compare the calculated value with the keff actually measured on the system. But what do we do when we have no experimental values? If we are designing a fast molten salt reactor with a thorium fuel cycle, there are no experimental reactors against which to validate our simulations, and building a research reactor to test them would require an investment of millions of dollars. We would want to be confident that the reactor is a good idea before investing in the research reactor, right?
When we do have experimental values with which to validate a simulation, we see that there is a bias between our calculation and the experiment (a small numerical illustration follows the list below). The bias can come from a number of sources:
- Modeling approximations: How well do we know the fuel composition and the geometry of the core?
- Methodology: How are we approximating a solution to the neutron transport equation? Diffusion theory? The Monte Carlo method? Approximations cause a bias.
- Nuclear data: The cross sections input into our neutron transport code have inherent uncertainties that cause a bias and uncertainty in the calculation.
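To make the notion of bias concrete, here is a minimal sketch in Python. The keff values are made up for illustration; the bias is expressed in pcm (per cent mille, 1 pcm = 1e-5), as is customary in reactor physics.

```python
# Minimal, illustrative sketch: quantifying the bias between a calculated
# and a measured keff. The numbers below are hypothetical.
k_calc = 1.00250   # keff from the neutron transport simulation
k_exp = 1.00000    # keff measured on the experimental facility

bias = k_calc - k_exp       # absolute bias (calculation minus experiment)
bias_pcm = bias * 1e5       # expressed in pcm (1 pcm = 1e-5)
print(f"bias = {bias_pcm:.0f} pcm")
```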
When designing a new reactor, we have to quantify the uncertainty coming from modeling, methodology, and nuclear data and see how these uncertainties affect economic and safety analyses. We can then improve our predictions by performing a Bayesian update with experiments that are similar to our new reactor. With the updated calculated values, we can revise our safety and economic analyses and be that much more confident in our approach.
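As a rough illustration of that Bayesian update step, the sketch below applies a generalized linear least squares (GLLS) adjustment, one common formulation of the Bayesian update in criticality validation. All numbers, array sizes, and variable names are hypothetical; in practice the sensitivities and covariance matrices would come from a sensitivity/uncertainty code and evaluated nuclear data libraries.

```python
import numpy as np

# Illustrative GLLS (Bayesian) update of calculated keff values using
# benchmark experiments. Every input below is a placeholder.

C = np.array([1.0025, 0.9980])   # calculated keff of two benchmark experiments
E = np.array([1.0000, 1.0002])   # measured keff of the same benchmarks

# Sensitivity of each calculated keff to two nuclear-data parameters
# (relative sensitivities, dk/k per dsigma/sigma).
S = np.array([[0.30, 0.10],
              [0.25, 0.15]])

M = np.diag([0.02**2, 0.05**2])      # prior nuclear-data covariance
VE = np.diag([0.001**2, 0.001**2])   # experimental (benchmark) covariance

# GLLS update: adjust the nuclear-data parameters so the calculations move
# toward the experiments, weighted by the two covariance matrices.
K = M @ S.T @ np.linalg.inv(S @ M @ S.T + VE)   # "gain" matrix
d_sigma = K @ (E - C)                           # posterior data adjustment
M_post = M - K @ S @ M                          # reduced posterior covariance

C_post = C + S @ d_sigma                        # updated calculated keff
print("prior bias [pcm]:    ", (C - E) * 1e5)
print("posterior bias [pcm]:", (C_post - E) * 1e5)
```

The gain matrix weighs how much to trust the experiments relative to the prior nuclear data: precise, relevant benchmarks pull the adjusted calculations (and their uncertainties) strongly toward the measurements, while poorly known or dissimilar ones change them little.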
Practical information
- Informed public
- Free
- This event is internal
Organizer
- Vincent Lamirand, Daniel Siefman
Contact
- Vincent Lamirand