Solving differential equations in reduced- and mixed-precision

Event details

Date 28.02.2022
Hour 16:15 – 17:15
Speaker Dr. Matteo Croci (Oden Institute, UT Austin)
Location Online
Category Conferences - Seminars
Event Language English

Motivated by the advent of machine learning, hardware-supported reduced-precision computing has made a return in the last few years. Computations with fewer digits are faster and more memory- and energy-efficient, but careful implementation and rounding error analysis are required to ensure that sensible results can still be obtained.

This talk is divided into two parts, focusing on reduced- and mixed-precision algorithms respectively. Reduced-precision algorithms compute a solution that is as accurate as the working precision allows while avoiding catastrophic rounding error accumulation. Mixed-precision algorithms, on the other hand, combine low- and high-precision computations in order to benefit from the performance gains of reduced precision while retaining good accuracy.

In the first part of the talk we study the accumulation of rounding errors in the solution of the heat equation, a proxy for parabolic PDEs, in reduced precision using round-to-nearest (RtN) and stochastic rounding (SR). We demonstrate how to implement the numerical scheme so as to reduce rounding errors, and we present \emph{a priori} estimates for local and global rounding errors. While the RtN solution leads to rapid rounding error accumulation and stagnation, SR leads to much more robust implementations for which the error remains at roughly the same level as the working precision.
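To make the contrast concrete, here is a minimal sketch (not the speaker's code) of the kind of experiment described above: explicit Euler time stepping for the 1D heat equation with the solution stored in IEEE half precision, comparing NumPy's default round-to-nearest cast against a software-emulated stochastic rounding. The SR emulation below handles subnormals and binade boundaries only approximately.

```python
import numpy as np

rng = np.random.default_rng(42)

def sr_to_f16(x):
    # Emulate stochastic rounding of float64 values to float16: round down
    # to a multiple of the local float16 spacing, then round up with
    # probability equal to the fractional remainder, so the rounding is
    # unbiased in expectation (subnormals/binade edges handled only roughly).
    m, e = np.frexp(x)                  # x = m * 2**e with 0.5 <= |m| < 1
    ulp = np.ldexp(1.0, e - 11)         # float16 spacing near x (10 fraction bits)
    q = np.floor(x / ulp)
    frac = x / ulp - q                  # position between the two neighbours
    q += rng.random(x.shape) < frac     # round up with probability frac
    return (q * ulp).astype(np.float16)

# Explicit Euler for u_t = u_xx on [0, 1] with homogeneous Dirichlet data.
n, steps = 64, 2000
xs = np.linspace(0.0, 1.0, n)
dx = xs[1] - xs[0]
dt = 0.25 * dx**2                       # well inside the stability limit dx**2 / 2
u_rtn = np.sin(np.pi * xs).astype(np.float16)   # round-to-nearest run
u_sr = u_rtn.copy()                             # stochastic-rounding run

for _ in range(steps):
    for u, use_sr in ((u_rtn, False), (u_sr, True)):
        v = u.astype(np.float64)        # exact arithmetic, rounded once per step
        v[1:-1] += dt / dx**2 * (v[2:] - 2.0 * v[1:-1] + v[:-2])
        u[:] = sr_to_f16(v) if use_sr else v.astype(np.float16)
```

The intuition: with RtN, the update is discarded entirely once it falls below half an ulp of the stored solution, so the computed solution stagnates; SR retains the update in expectation, which is the robustness the a priori estimates quantify.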

In the second part of the talk we focus on mixed-precision explicit stabilised Runge-Kutta methods. We show that a naive mixed-precision implementation harms convergence and leads to error stagnation, and we present a more accurate alternative. We introduce new Runge-Kutta-Chebyshev schemes that use only $q\in\{1,2\}$ high-precision function evaluations to achieve a limiting convergence order of $O(\Delta t^{q})$, leaving the remaining evaluations in low precision. These methods are essentially as cheap as their fully low-precision equivalents, and they are as accurate and (almost) as stable as their high-precision counterparts.
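For intuition, here is a hedged sketch of the $q = 1$ idea applied to the undamped $s$-stage first-order Chebyshev method (stability polynomial $T_s(1 + z/s^2)$, stable on $[-2s^2, 0]$): one high-precision right-hand-side evaluation per step, with the internal-stage evaluations replaced by cheap low-precision corrections. The correction form $f(y) \approx f(u) + [f_{\mathrm{low}}(y) - f_{\mathrm{low}}(u)]$ is one plausible reading of the construction and may differ in detail from the schemes presented in the talk.

```python
import numpy as np

def rkc1_mixed(f, u, h, s, f_low):
    # One step of the undamped s-stage first-order Chebyshev method.
    # q = 1 mixed precision: f is evaluated in high precision once per step;
    # the internal stages reuse it plus a low-precision difference.
    fu = f(u)                            # the single high-precision evaluation
    flu = f_low(u)
    y_prev, y = u, u + (h / s**2) * fu   # first internal stage
    for _ in range(2, s + 1):
        fy = fu + (f_low(y) - flu)       # f(y) ~ f(u) + low-precision correction
        y_prev, y = y, 2.0 * y - y_prev + (2.0 * h / s**2) * fy
    return y

# Illustrative use: stiff test problem u' = -50 u. With s = 8 the step
# h = 0.1 is stable (50 * 0.1 = 5 <= 2 * 8**2), far beyond forward Euler's
# limit h <= 2/50. Here "low precision" is simply single precision.
f = lambda v: -50.0 * v
f_low = lambda v: f(v.astype(np.float32)).astype(np.float64)
u = np.array([1.0])
for _ in range(100):
    u = rkc1_mixed(f, u, h=0.1, s=8)
```

Because the low-precision evaluations only enter through a difference that vanishes at $y = u$, the leading error of each stage is set by the single high-precision evaluation, which is the mechanism behind the limiting $O(\Delta t^{q})$ convergence order.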

Practical information

  • Informed public
  • Free

Organizer

  • Giacomo Garegnani, Nicolas Boumal

Contact

  • Nicolas Boumal
