Fall 2023 SDSC PhD fellows workshop

Event details

Date 28.11.2023
Hour 10:25 - 16:20
Speaker Martin Josifoski, Laura Manduchi, Clément Vignac, Andreas Schlaginhaufen, Aditya Varre
Location BC 420
Category Conferences - Seminars
Event Language English

Since 2019, the Swiss Data Science Center (SDSC) has awarded fellowships to 19 PhD students working on various aspects of data science theory and methods at EPFL and ETH Zürich, from statistics to computer vision, including machine learning, statistical signal processing, and natural language processing.

We are pleased to invite you to the PhD fellows' workshop of Fall 2023. These workshops are organized once per semester, alternating between EPFL and ETH Zürich, to provide a forum for exchange on methods in data science and an opportunity for five of these fellows to present their work in person. The afternoon program will also feature four short presentations of SDSC collaborative projects. Everyone from the EPFL community is welcome to attend.

Date and Time: Tuesday 28 November 2023, 10:25am
Room: BC 420


Program (please find the abstracts below):

  • 10:25 - Foreword and session opening – Dr. Guillaume Obozinski, Deputy Chief Data Scientist, SDSC
  • 10:30 - Martin Josifoski, Data Science Lab, EPFL: 
    • Symbolic Intermediate Representations for Reasoning and Collaborating AI
  • 11:10 - Laura Manduchi, Medical Data Science group, ETH Zürich: 
    • Tree Variational Autoencoders
  • 12:00 - Lunch
  • 13:00 - Clément Vignac, LTS4, EPFL: 
    • Discrete denoising diffusion models for graph generation
  • 13:40 - Short presentations of SDSC collaborative projects:
    • Transformer Models for the Estimation of Transit Time in Watersheds, Quentin Duchemin, SDSC.
    • Self-guided ML Algorithms for Real-Time Assimilation, Interpolation and Rendering of Flow Data, Victor Cohen, SDSC.
    • Probabilistic State Space Models for Predicting Turbulence on Pitching Airfoils, Christian Donner, SDSC.
    • A Graph Matching Approach to Tracking Neurons in Freely-Moving C. elegans, Corinne Jones, SDSC.
  • 14:30 - Coffee break
  • 14:50 - Andreas Schlaginhaufen, Sycamore, EPFL:
    • Identifiability and Transferability of Rewards in Regularized Inverse Reinforcement Learning
  • 15:30 - Aditya Varre, TML, EPFL:
    • On the spectral bias of two-layer linear networks
  • 16:10 - Closing words

Abstracts:
  • Tree Variational Autoencoders, Laura Manduchi, Medical Data Science group, ETH Zürich
    • Abstract: In this talk, I will present the Tree Variational Autoencoder (TreeVAE), a new generative hierarchical clustering model that learns a flexible tree-based posterior distribution over latent variables. It adapts its architecture to discover the optimal tree for encoding dependencies between latent variables and hierarchically divides samples according to their intrinsic characteristics, shedding light on hidden structures in the data. (A toy sketch follows after the abstracts.)
  • Discrete denoising diffusion models for graph generation, Clément Vignac, LTS4, EPFL
    • Abstract: Graph generation has posed significant challenges in recent years. Traditional methods, such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Normalizing Flows, have been primarily limited to generating small graphs and are often evaluated on molecules composed of no more than 9 atoms. In this talk, we introduce DiGress, a denoising diffusion model that scales to substantially larger graphs, as well as its extension MiDi for molecule generation in 3D. These models leverage a discrete diffusion process that progressively corrupts graphs by adding or removing edges and changing node features. In contrast to Gaussian noise, a discrete diffusion process preserves the sparsity of graphs, which enables various architectural improvements. We show experimentally the effectiveness of this approach and propose hypotheses to explain why these models outperform GANs and VAEs by such a large margin. (A toy sketch of the forward noising step follows after the abstracts.)
  • Identifiability and Transferability of Rewards in Regularized Inverse Reinforcement Learning, Andreas Schlaginhaufen, Sycamore, EPFL
    • Abstract: A common reason for favoring inverse reinforcement learning (IRL) over imitation learning is that rewards offer a more succinct and transferable description of a task. However, due to the ill-posedness of the inverse problem, it is largely unclear whether rewards learned via IRL are indeed transferable to new environments. In this talk, we focus on regularized IRL and provide sufficient conditions for identifiability and transferability to new environments. (A toy illustration of the identifiability issue follows after the abstracts.)
  • On the spectral bias of two-layer linear networks, Aditya Varre, TML, EPFL
    • Abstract: In this talk, I will present the behaviour of two-layer fully connected networks with linear activations trained with gradient flow on the square loss. We discuss how the optimization process carries an implicit bias on the parameters that depends on the scale of the initialization. The main result is a variational characterization of the loss minimizers retrieved by the gradient flow for a specific initialization shape. This characterization reveals that, in the small-scale initialization regime, the linear neural network's hidden layer is biased toward having a low-rank structure. To complement our results, we present a hidden mirror flow that tracks the dynamics of the singular values of the weight matrices and describe their time evolution. We support our findings with numerical experiments illustrating the phenomena. (A toy numerical sketch follows after the abstracts.)
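Illustrative sketches:

A minimal toy sketch in the spirit of "Tree Variational Autoencoders", assuming a fixed two-leaf tree, a single root latent, and one decoder per leaf. The actual TreeVAE grows its tree adaptively and places latent variables at every node; all names and dimensions below are illustrative assumptions, not the presenter's implementation.

# Toy tree-structured VAE sketch (illustrative assumptions throughout;
# the real TreeVAE learns the tree structure and has latents at each node).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTreeVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8, n_leaves=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)           # root latent mean
        self.logvar = nn.Linear(128, z_dim)       # root latent log-variance
        self.router = nn.Linear(z_dim, n_leaves)  # soft routing to leaves
        # one decoder per leaf: samples routed to a leaf share its decoder
        self.decoders = nn.ModuleList(
            [nn.Linear(z_dim, x_dim) for _ in range(n_leaves)]
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        route = F.softmax(self.router(z), dim=-1)  # leaf probabilities = clustering
        # mixture of leaf reconstructions, weighted by routing probabilities
        recons = torch.stack([dec(z) for dec in self.decoders], dim=1)
        x_hat = (route.unsqueeze(-1) * recons).sum(dim=1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        rec = F.mse_loss(x_hat, x)
        return rec + kl, route  # ELBO surrogate and soft cluster assignments

x = torch.randn(16, 784)
loss, route = ToyTreeVAE()(x)
loss.backward()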
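A hedged sketch of the forward noising step behind "Discrete denoising diffusion models for graph generation": each edge transitions through a Markov kernel whose stationary distribution matches the data's edge density, so graphs stay sparse, unlike under Gaussian noise. The ring graph, the flip schedule, and the two-state (edge/no-edge) simplification are assumptions; DiGress also noises node types and learns a denoising network for the reverse process.

# Forward discrete diffusion on a graph's adjacency matrix (toy version).
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(adj, beta, p_edge):
    """One discrete noising step toward the data's edge marginal."""
    u = rng.random(adj.shape)
    # Absent edges appear w.p. beta*p_edge; present edges vanish w.p.
    # beta*(1-p_edge). The stationary edge density is p_edge: sparsity is kept.
    flip = np.where(adj == 1, u < beta * (1 - p_edge), u < beta * p_edge)
    flip = np.triu(flip, 1)
    flip = flip | flip.T          # keep the graph undirected
    return np.where(flip, 1 - adj, adj)

n = 12
adj = np.zeros((n, n), dtype=int)             # toy ring graph
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1

p_edge = adj.sum() / (n * (n - 1))            # empirical edge density
noisy = adj
for _ in range(300):                          # later steps destroy more structure
    noisy = forward_noise(noisy, beta=0.05, p_edge=p_edge)
print("density before:", p_edge, "after:", noisy.sum() / (n * (n - 1)))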
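A toy illustration of the identifiability issue from "Identifiability and Transferability of Rewards in Regularized Inverse Reinforcement Learning": two rewards that differ by potential-based shaping induce the same entropy-regularized optimal policy, so behavior alone cannot pin down the reward. The random 3-state MDP and the potential are arbitrary assumptions made for exposition.

# Potential-based shaping leaves the soft-optimal policy unchanged.
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))     # P[s, a] = next-state distribution
r = rng.normal(size=(S, A))                    # "true" reward
phi = rng.normal(size=S)                       # arbitrary potential
r_shaped = r + gamma * P @ phi - phi[:, None]  # shaped reward, same task

def soft_policy(reward, iters=2000):
    """Entropy-regularized (soft) value iteration; returns pi(a|s)."""
    V = np.zeros(S)
    for _ in range(iters):
        Q = reward + gamma * P @ V
        V = np.log(np.exp(Q).sum(axis=1))      # soft Bellman backup
    Q = reward + gamma * P @ V
    pi = np.exp(Q - Q.max(axis=1, keepdims=True))
    return pi / pi.sum(axis=1, keepdims=True)

# Different rewards, identical behavior: the inverse problem is ill-posed.
print(np.allclose(soft_policy(r), soft_policy(r_shaped)))  # True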
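A numerical sketch for "On the spectral bias of two-layer linear networks": gradient descent (a discrete-time stand-in for gradient flow) trains a two-layer linear network from a small-scale initialization, and the hidden layer's singular values end up concentrated at the teacher's rank. The dimensions, the rank-2 teacher, and the step size are illustrative assumptions.

# Low-rank bias of the hidden layer under small-scale initialization.
import numpy as np

rng = np.random.default_rng(0)
d, h, k, n = 10, 16, 5, 200
X = rng.normal(size=(n, d))
M_star = rng.normal(size=(k, 2)) @ rng.normal(size=(2, d))  # rank-2 teacher
Y = X @ M_star.T

alpha = 1e-3                                  # small-scale initialization
W1 = alpha * rng.normal(size=(h, d))          # hidden layer
W2 = alpha * rng.normal(size=(k, h))          # output layer

lr = 0.01
for _ in range(20000):                        # gradient descent ~ gradient flow
    E = X @ W1.T @ W2.T - Y                   # (n, k) residual
    G2 = E.T @ (X @ W1.T) / n                 # dL/dW2
    G1 = W2.T @ E.T @ X / n                   # dL/dW1
    W1 -= lr * G1
    W2 -= lr * G2

# Expect two singular values well above the rest: a low-rank hidden layer.
print(np.round(np.linalg.svd(W1, compute_uv=False), 3))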