Emphasising extreme events when evaluating probabilistic forecasts

Event details

Date 17.04.2026
Hour 15:15–16:15
Speaker Sam Allen, KIT
Location
Category Conferences - Seminars
Event Language English

It is becoming increasingly common to issue forecasts that are probabilistic, in the form of probability distributions over all possible outcomes. To generate good probabilistic forecasts, we must first be able to evaluate how good a forecast is. The evaluation of probabilistic forecasts focuses on two aspects of forecast performance: forecast accuracy and forecast calibration. Forecast accuracy refers to how 'close' the forecast is to the corresponding observation, which can be quantified using proper scoring rules. Forecast calibration considers whether probabilistic forecasts are trustworthy, in the sense that the observations are statistically consistent with the forecast distributions. While most scoring rules and calibration checks treat all possible outcomes equally, some outcomes are often of more interest than others when evaluating forecasts, and one could argue that these outcomes should therefore be emphasised during forecast evaluation. For example, extreme outcomes typically lead to the largest impacts on forecast users, making accurate and calibrated forecasts for these outcomes particularly valuable. In this talk, we discuss methods to focus on particular outcomes when evaluating probabilistic forecasts. We review weighted scoring rules, which allow practitioners to incorporate a weight function into conventional scoring rules when calculating forecast accuracy, and we demonstrate that the theory underlying weighted scoring rules can readily be extended to forecast calibration. Using this, we introduce methods to evaluate the calibration of probabilistic forecasts for extreme events.
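
The abstract mentions weighted scoring rules, in which a weight function is inserted into a conventional proper scoring rule. As a minimal illustration, not taken from the talk itself, the sketch below implements one common member of this family: a threshold-weighted continuous ranked probability score (CRPS) for an ensemble forecast, using an indicator weight that emphasises outcomes above a chosen threshold. The function name, grid resolution, threshold and example numbers are illustrative assumptions.

    import numpy as np

    def tw_crps(ensemble, obs, threshold):
        """Threshold-weighted CRPS for an ensemble forecast.

        Numerically integrates  w(z) * (F(z) - 1{obs <= z})^2  over z,
        with the indicator weight w(z) = 1{z >= threshold}, so that only
        outcomes above `threshold` contribute to the score.
        """
        ensemble = np.sort(np.asarray(ensemble, dtype=float))
        lo = min(ensemble.min(), obs) - 1.0
        hi = max(ensemble.max(), obs) + 1.0
        grid = np.linspace(lo, hi, 2001)
        # Empirical CDF of the ensemble, evaluated on the grid
        F = np.searchsorted(ensemble, grid, side="right") / ensemble.size
        # Step function of the observation: 1{obs <= z}
        H = (grid >= obs).astype(float)
        # Indicator weight emphasising the upper tail
        w = (grid >= threshold).astype(float)
        return np.trapz(w * (F - H) ** 2, grid)

    # Example: a 50-member ensemble evaluated against a large observation
    rng = np.random.default_rng(0)
    ens = rng.normal(20.0, 3.0, size=50)
    print(tw_crps(ens, obs=28.0, threshold=25.0))     # weight only on outcomes >= 25
    print(tw_crps(ens, obs=28.0, threshold=-np.inf))  # weight of one everywhere

Setting the threshold to minus infinity makes the weight identically one, so the score reduces to the ordinary unweighted CRPS, which provides a simple sanity check on the implementation.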

Practical information

  • Informed public
  • Free

Organizer

  • Rajita Chandak

Contact

  • Maroussia Schaffner
