BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Solving differential equations in reduced- and mixed-precision
DTSTART:20220228T161500
DTEND:20220228T171500
DTSTAMP:20260407T045502Z
UID:5f587befc9524102c9bd7b40dc470b2bb4afaf0a0f5f88b5422c0a73
CATEGORIES:Conferences - Seminars
DESCRIPTION:Dr. Matteo Croci (Oden Institute\, UT Austin)\nMotivated by t
 he advent of machine learning\, the last few years saw the return of hardw
 are-supported reduced-precision computing. Computations with fewer digits a
 re faster and more memory- and energy-efficient\, but careful implementatio
 n and rounding error analysis are required to ensure that sensible results
  can still be obtained.\n\nThis talk is divided into two parts\, focusing o
 n reduced- and mixed-precision algorithms respectively. Reduced-precision a
 lgorithms obtain as accurate a solution as the precision allows while avoid
 ing catastrophic rounding error accumulation. Mixed-p
 recision algorithms\, on the other hand\, combine low- and high-precision 
 computations in order to benefit from the performance gains of reduced-pre
 cision while retaining good accuracy.\n\nIn the first part of the talk we 
 study the accumulation of rounding errors in the solution of the heat equa
 tion\, a proxy for parabolic PDEs\, in reduced precision using round-to-ne
 arest (RtN) and stochastic rounding (SR). We demonstrate how to implement 
 the numerical scheme to reduce rounding errors\, and we present a priori es
 timates for local and global rounding errors. While the RtN solution leads
  to rapid rounding error accumulation and stagnation\, SR leads to much mor
 e robust implementations for which the error remains at roughly the level o
 f the working precision.\n\nIn the second part of the talk we focus on mixe
 d-precision explicit stabilised Runge-Kutta methods. We sh
 ow that a naive mixed-precision implementation harms convergence and leads
  to error stagnation\, and we present a more accurate alternative. We intr
 oduce new Runge-Kutta-Chebyshev schemes that only use $q\\in\\{1\,2\\}$ hi
 gh-precision function evaluations to achieve a limiting convergence order 
 of $O(\\Delta t^{q})$\, leaving the remaining evaluations in low precision
 . These methods are essentially as cheap as their fully low-precision equiv
 alents\, and they are as accurate and (almost) as stable as their high-prec
 ision counterparts.
LOCATION:https://epfl.zoom.us/j/84030108577?pwd=bHh2Z3J2YllvTWdteHA3MHhVcn
 IyUT09
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
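
A minimal Python sketch of the rounding modes contrasted in the first part
of the abstract: round-to-nearest (RtN) versus stochastic rounding (SR) on
an explicit-Euler solve of the 1D heat equation. Low precision is emulated
in software by rounding every intermediate result to a BITS-bit significand;
BITS, heat_step and all parameters are illustrative assumptions, not the
speaker's implementation.

import numpy as np

rng = np.random.default_rng(0)
BITS = 10   # emulated significand bits (roughly half precision)

def round_to_nearest(x):
    # Snap x to the nearest number with a BITS-bit significand.
    m, e = np.frexp(x)                  # x = m * 2**e with 0.5 <= |m| < 1
    return np.ldexp(np.round(m * 2.0**BITS) / 2.0**BITS, e)

def stochastic_round(x):
    # Round up with probability equal to the fractional distance, so the
    # rounded value is correct in expectation and errors tend to cancel.
    m, e = np.frexp(x)
    scaled = m * 2.0**BITS
    low = np.floor(scaled)
    up = rng.random(np.shape(x)) < (scaled - low)
    return np.ldexp((low + up) / 2.0**BITS, e)

def heat_step(u, lam, rnd):
    # One explicit-Euler step of u_t = u_xx with Dirichlet boundaries,
    # rounding each intermediate; lam = dt/dx**2 <= 1/2 for stability.
    incr = rnd(lam * (u[:-2] - 2.0 * u[1:-1] + u[2:]))
    out = u.copy()
    out[1:-1] = rnd(u[1:-1] + incr)
    return out

n, lam, steps = 128, 0.25, 2000
u_rtn = u_sr = np.sin(np.pi * np.linspace(0.0, 1.0, n))
for _ in range(steps):
    u_rtn = heat_step(u_rtn, lam, round_to_nearest)
    u_sr = heat_step(u_sr, lam, stochastic_round)
# On this grid the per-step updates fall below half an ulp of the solution,
# so the RtN iterates stop changing (stagnation), while SR still makes
# progress in expectation and keeps the error near the emulated precision.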
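
And a toy sketch of the mixed-precision idea from the second part, under the
simplifying assumption of a linear problem y' = A y: each step makes one
high-precision evaluation of the right-hand side at the current state and
evaluates only the (small) stage increments in low precision, so rounding
errors enter at the size of the increment rather than of the solution. The
undamped first-order Runge-Kutta-Chebyshev recurrence below is a standard
textbook form, and rkc1_step is a guess in the spirit of the q = 1 schemes
described above, not the new methods from the talk.

import numpy as np

def make_laplacian(n):
    # Dense 1D Laplacian with Dirichlet boundaries, scaled by (n+1)**2.
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    return (n + 1) ** 2 * A

def rkc1_step(y, A, dt, s, low=np.float32):
    # One undamped s-stage RKC1 step for y' = A y; stability polynomial
    # T_s(1 + z/s**2), real stability interval of length about 2 s**2.
    A_lo = A.astype(low)
    f_hi = A @ y                        # the single high-precision f-eval
    def f_mixed(Y):                     # low precision on the increment only;
        return f_hi + (A_lo @ (Y - y).astype(low)).astype(np.float64)
    mu = dt / s**2                      # undamped Chebyshev coefficients
    Yjm2, Yjm1 = y, y + mu * f_hi
    for _ in range(2, s + 1):
        Yjm2, Yjm1 = Yjm1, 2.0 * Yjm1 - Yjm2 + 2.0 * mu * f_mixed(Yjm1)
    return Yjm1

n, s, T = 50, 10, 0.1
A = make_laplacian(n)
y = np.sin(np.pi * (np.arange(1, n + 1) / (n + 1)))
dt_max = 1.9 * s**2 / (4.0 * (n + 1) ** 2)   # keep dt*rho(A) within ~2 s**2
steps = int(np.ceil(T / dt_max))
for _ in range(steps):
    y = rkc1_step(y, A, T / steps, s)
# Forming only increments in low precision limits accuracy at O(dt) instead
# of stagnating at the low-precision unit roundoff, while almost all matrix-
# vector products run at low-precision cost.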
