Promises and Challenges of Machine Learning in Scientific Computing


Event details

Date 15.04.2021
Hour 16:15 - 17:15
Speaker Prof. Éric Vanden-Eijnden (Courant Institute, NYU)
Location Online
Category Conferences - Seminars

The recent success of machine learning suggests that neural networks may be capable of approximating high-dimensional functions with controllably small errors. As a result, they could outperform standard function-interpolation methods, which have been the workhorses of scientific computing but do not scale well with dimension. In support of this prospect, I will first discuss recent advances on the trainability and accuracy of shallow neural networks, which offer the simplest instance of nonlinear learning in function spaces that are fundamentally different from classic approximation spaces. The dynamics of training in these spaces can be analyzed using tools from optimal transport and statistical mechanics, which reveal when and how shallow neural networks can overcome the curse of dimensionality, and hint at ways to analyze deep architectures as well.
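A standard instance of this setting, stated here only as an illustration, is the mean-field parametrization of a shallow network of width n,

f_n(x) = \frac{1}{n} \sum_{i=1}^{n} c_i \, \sigma(a_i \cdot x + b_i),

where \sigma is the activation function and (c_i, a_i, b_i) are the parameters of the i-th neuron. In this picture, gradient-descent training is viewed as the evolution of the empirical distribution of these parameters, which in the limit n \to \infty becomes a transport equation amenable to the tools mentioned above.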

I will then discuss how scientific computing problems in high dimension, once thought intractable, can be revisited through the lens of these results, focusing on two applications: the solution of partial differential equations in variational form, and the sampling of high-dimensional distributions. In these scientific applications, the main issue to address is one that does not arise in traditional supervised learning, namely how to acquire the data needed to estimate the loss and to ensure that the trained networks have good generalization properties.
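For the first application, one concrete formulation (given here only as an example, not necessarily the one treated in the talk) is the Poisson problem -\Delta u = f on a domain \Omega with suitable boundary conditions, whose solution minimizes the Ritz energy

E(u) = \int_\Omega \left( \tfrac{1}{2} |\nabla u(x)|^2 - f(x)\, u(x) \right) dx.

Replacing u by a neural-network ansatz u_\theta turns the PDE into an optimization problem over the network parameters, and the integral must be estimated from sample points drawn in \Omega; deciding where to draw those samples is precisely the data-acquisition issue described above.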

Practical information

  • General public
  • Free

Organizer

  • Prof. Assyr Abdulle
