# Difference between revisions of "Applied/ACMS/absS22"


## Latest revision as of 19:49, 27 April 2022

# ACMS Abstracts: Spring 2022

### Jacob Notbohm (UW)

Title: Collective Cell Migration: Rigidity Transition and the Eyes of the Cell

Abstract: Collective cell migration is an essential process in development, regeneration, and disease. The motion results from a physical balance of cell-generated forces, but the relationships between cell force and motion are challenging to study, because cell forces are actively generated within each cell and balanced by complicated interactions at the cell-substrate and cell-cell interfaces. In complex, multi-body physical systems such as this one, mathematical models can provide essential insights into the underlying mechanisms of collective cell force generation, transmission, and, ultimately, motion. This presentation will describe an experimentalist’s perspective on a class of models for collective cell migration based on the vertex model, wherein the cells are polygons that tessellate a two-dimensional plane. The models are discussed in the context of experiments performed by my research group to measure cell forces and velocities, which enable quantitative comparison between model predictions and experimental results. The presentation will focus on two specific examples. The first is a fluid-to-solid rigidity transition predicted by the models to depend on cell shape. The second is the experimental finding that cells align their propulsive forces with those of their neighbors, analogous to how birds within a flock or fish within a school use visual cues for alignment. These two examples illustrate how our experiments have led to a clearer understanding of the underlying factors within the cell that correspond to the different model parameters and have uncovered new phenomena not yet accounted for in the recent models.
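As a minimal illustration of the vertex-model setup mentioned above (not the speaker's actual model or code), each cell's energy is typically taken as quadratic in its deviation from a target area and target perimeter; the dimensionless shape index p = P / sqrt(A) then controls the fluid-to-solid rigidity transition, with a commonly quoted critical value near 3.81 in the literature. The sketch below computes the shape index of a regular hexagon, the ground-state tiling shape:

```python
import numpy as np

def polygon_area_perimeter(v):
    """Shoelace area and perimeter of a polygon given as an (n, 2) vertex array."""
    x, y = v[:, 0], v[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perim = np.sum(np.linalg.norm(v - np.roll(v, -1, axis=0), axis=1))
    return area, perim

def vertex_model_energy(v, A0=1.0, P0=3.81, KA=1.0, KP=1.0):
    """Per-cell energy of a standard vertex model: area plus perimeter elasticity."""
    A, P = polygon_area_perimeter(v)
    return KA * (A - A0) ** 2 + KP * (P - P0) ** 2

# Regular hexagon with unit area (side length chosen so A = 1).
theta = np.linspace(0, 2 * np.pi, 7)[:-1]
s = np.sqrt(2.0 / (3.0 * np.sqrt(3.0)))
hexagon = s * np.column_stack([np.cos(theta), np.sin(theta)])

A, P = polygon_area_perimeter(hexagon)
print(f"shape index p = P / sqrt(A) = {P / np.sqrt(A):.3f}")   # ~3.722 for a hexagon
print(f"energy with default targets: {vertex_model_energy(hexagon):.4f}")
```

A hexagon's shape index (about 3.72) sits below the reported rigidity threshold, consistent with the picture that more elongated cell shapes are needed for the tissue to fluidize.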

### Alex Townsend (Cornell)

Title: What networks of oscillators spontaneously synchronize?

Abstract: Consider a network of identical phase oscillators with sinusoidal coupling. How likely are the oscillators to spontaneously synchronize, starting from random initial phases? One expects that dense networks of oscillators have a strong tendency to pulse in unison. But, how dense is dense enough? In this talk, we use techniques from numerical linear algebra, computational algebraic geometry, and dynamical systems to derive the densest known networks that do not synchronize and the sparsest ones that do. We will find that there is a critical network density above which spontaneous synchrony is guaranteed regardless of the network's topology, and prove that synchrony is omnipresent for random networks above a lucid threshold. This is joint work with Martin Kassabov, Steven Strogatz, and Mike Stillman.
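The model in question is the homogeneous Kuramoto model: identical oscillators on a graph with sinusoidal coupling. A small sketch (illustrative only; the talk's results concern much subtler network topologies) simulates a dense network from random initial phases and tracks the standard order parameter r = |mean of e^{i theta}|, which approaches 1 at synchrony:

```python
import numpy as np

def kuramoto_step(theta, adj, dt=0.05):
    """One explicit Euler step of identical Kuramoto oscillators on a graph.

    dtheta_i/dt = (1/n) * sum_j A_ij * sin(theta_j - theta_i)
    """
    n = len(theta)
    diff = theta[None, :] - theta[:, None]
    return theta + dt * (adj * np.sin(diff)).sum(axis=1) / n

def order_parameter(theta):
    """|mean of e^{i theta}|: equals 1 exactly at full phase synchrony."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 20
adj = 1 - np.eye(n)                     # complete graph: known to synchronize
theta = rng.uniform(0, 2 * np.pi, n)    # random initial phases
for _ in range(4000):
    theta = kuramoto_step(theta, adj)
print(f"order parameter r = {order_parameter(theta):.3f}")   # close to 1
```

On the complete graph almost every initial condition synchronizes; the interesting question the talk addresses is how much of that density can be removed before stable non-synchronous states (e.g. twisted states on ring-like graphs) appear.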

Prof. Alex Townsend is an associate professor at Cornell University in the Mathematics Department. His research is in Applied Mathematics and mainly focuses on spectral methods, low-rank techniques, fast transforms, and theoretical aspects of deep learning. Prior to Cornell, he was an Applied Math instructor at MIT (2014-2016) and a DPhil student at the University of Oxford (2010-2014). He was awarded an NSF CAREER in 2021, a SIGEST paper award in 2019, the SIAG/LA Early Career Prize in applicable linear algebra in 2018, and the Leslie Fox Prize in numerical analysis in 2015.

### Geoffrey Vasil (Sydney)

Title: The mechanics of a large pendulum chain

Abstract: I’ll discuss a particular high-dimensional system that displays subtle behaviour found in the continuum limit. The only catch is that it formally shouldn’t, which raises a few questions. When is a discrete system large enough to be called continuous? When are approximate (broken) symmetries good enough to be treated like the real thing? When and why does a fluid approximation work as well as we like to assume? What does all this say about observables and the approach to equilibria? The particular system I have in mind is a large ideal pendulum chain, and its cousin the continuous flexible string. I propose that the pendulum chain is a perfect model system to study notoriously difficult phenomena such as vortical turbulence, waves, cascades and thermalisation, but with many fewer degrees of freedom than a three-dimensional fluid.
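The discrete-versus-continuum correspondence can be seen already in the linearized problem. For a hanging chain of N equal masses (pivot at the top, free at the bottom), the small-oscillation frequencies approach those of the continuous flexible string, which are set by zeros of the Bessel function J0: omega_n = (j_{0,n}/2) sqrt(g/L). A sketch, not taken from the talk:

```python
import numpy as np

# Linearized hanging chain: N equal masses on massless links, pivot at the top.
# The link above mass k (0-indexed) carries the N - k masses below it.
N, g, L, m = 200, 1.0, 1.0, 1.0
ell = L / N
T = (N - np.arange(N + 1)) * m * g      # link tensions; T[N] = 0 at the free end

K = np.zeros((N, N))                    # stiffness matrix of m x'' = -K x
for k in range(N):
    K[k, k] = (T[k] + T[k + 1]) / ell
    if k + 1 < N:
        K[k, k + 1] = K[k + 1, k] = -T[k + 1] / ell

omega = np.sqrt(np.linalg.eigvalsh(K / m))   # normal-mode frequencies, ascending
# Continuum limit (flexible string): omega_1 = (j_{0,1}/2) sqrt(g/L),
# with j_{0,1} = 2.404826 the first zero of J0.
print(f"lowest discrete frequency {omega[0]:.4f} vs continuum {2.404826 / 2:.4f}")
```

Already at N = 200 the lowest discrete frequency agrees with the string value to well under a percent, even though the chain is a finite mechanical system with no formal continuum.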

### Xiangxiong Zhang (Purdue)

Title: Recent Progress on Q^k Spectral Element Method: Accuracy, Monotonicity and Applications

Abstract: In the literature, spectral element methods usually refer to finite element methods with high-order polynomial bases. The Q^k spectral element method has been a popular high-order method for solving second-order PDEs, e.g., wave equations, for more than three decades; it is obtained from the continuous finite element method with tensor-product polynomials of degree k, using at least (k+1)-point Gauss-Lobatto quadrature. In this talk, I will present some new results on this classical scheme, including its accuracy, monotonicity (stability), and examples of using monotonicity to construct high-order bound-preserving schemes in various applications, including the Allen-Cahn equation coupled with an incompressible velocity field, the Keller-Segel equation for chemotaxis, and the nonlinear eigenvalue problem for the Gross–Pitaevskii equation.

1) Accuracy: when the least accurate (k+1)-point Gauss-Lobatto quadrature is used, the spectral element method is also a finite difference (FD) scheme, and this FD scheme can sometimes be (k+2)-th order accurate for k>=2. This has been observed in practice but never proven before in terms of rigorous error estimates. We are able to prove it for linear elliptic, wave, parabolic and Schrödinger equations with Dirichlet boundary conditions. For Neumann boundary conditions, (k+2)-th order can be proven if there is no mixed second-order derivative; otherwise, only (k+3/2)-th order can be proven, and some order loss is indeed observed in numerical tests. The accuracy result also applies to the spectral element method on any curvilinear mesh that can be smoothly mapped to a rectangular mesh, e.g., solving a wave equation on an annular region with a curvilinear mesh generated by polar coordinates.

2) Monotonicity: consider solving the Poisson equation; a scheme is called monotone if the inverse of the stiffness matrix is entrywise non-negative. It is well known that the second-order centered difference and P1 finite element methods form M-matrices and are thus monotone, while high-order accurate schemes in general are not M-matrices and thus not monotone. But there are exceptions. In particular, we have proven that the fourth-order accurate FD scheme (Q^2 spectral element method) is a product of two M-matrices, thus monotone, for a variable-coefficient diffusion operator: this is the first time that a high-order accurate scheme has been proven monotone for a variable-coefficient operator. We have also proven that the fifth-order accurate FD scheme (Q^3 spectral element method) is a product of three M-matrices, thus monotone, for the Poisson equation: this is the first time that a fifth-order accurate discrete Laplacian has been proven monotone in two dimensions (all previously known high-order monotone discrete Laplacians in 2D are fourth-order accurate).
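The monotonicity notion itself is easy to check numerically. The sketch below (a 1D illustration of the definition, not the talk's 2D Q^2/Q^3 results) builds the second-order centered-difference Laplacian, which is an M-matrix, and verifies that its inverse is entrywise non-negative:

```python
import numpy as np

# Monotone scheme for the Poisson equation: the inverse of the stiffness
# matrix is entrywise non-negative.  The 1D second-order centered difference
# (equivalently P1 finite element) Laplacian is an M-matrix, hence monotone.
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

Ainv = np.linalg.inv(A)
print("entrywise non-negative inverse:", bool((Ainv >= 0).all()))
```

Monotonicity is what makes bound-preserving limiters possible: a non-negative inverse maps non-negative data to non-negative solutions, i.e. the scheme obeys a discrete maximum principle.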

### Pengchuan Zhang (Microsoft Research)

Title: Multiscale Invertible Generative Networks for High-Dimensional Bayesian Inference

Abstract: Sampling from high-dimensional posterior distributions in Bayesian inference is a long-standing challenging problem, especially when there are multiple modes in the posterior. For a wide class of Bayesian inference problems, low-dimensional coarse-scale surrogates can be constructed to approximate the original high-dimensional fine-scale problem, and we propose to train a Multiscale Invertible Generative Network (MsIGN) to learn such high-dimensional posteriors with multiscale structure. A novel prior conditioning layer is designed to bridge networks of different resolutions, enabling coarse-to-fine multi-stage training. The double KL divergence is also utilized as the loss function to avoid mode dropping. When applied to two Bayesian inverse problems, MsIGN clearly captures multiple modes in the high-dimensional posterior and approximates the posterior accurately, demonstrating superior performance compared with previous deep generative network approaches. When applied to natural image synthesis on standard datasets, MsIGN achieves superior performance in terms of bits-per-dimension and yields great interpretability of its neurons in intermediate layers. This is a joint work with Shumao Zhang and Thomas Y. Hou.
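The mode-dropping issue that motivates the "double KL" loss can be seen in a toy problem (mine, not the paper's): fit a single Gaussian q to a bimodal target p. Minimizing the mode-seeking direction KL(q || p) collapses onto one mode, while the mass-covering direction KL(p || q) spreads over both, which is why combining the two directions helps:

```python
import numpy as np

x = np.linspace(-8, 8, 1601)
dx = x[1] - x[0]

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

p = 0.5 * gauss(x, -2, 0.5) + 0.5 * gauss(x, 2, 0.5)   # bimodal target

def kl(a, b):
    """KL(a || b) by Riemann sum, skipping points with negligible mass in a."""
    mask = a > 1e-300
    terms = np.where(mask, a * np.log(np.maximum(a, 1e-300) / b), 0.0)
    return terms.sum() * dx

# Grid search over single-Gaussian fits q = N(mu, sig) for both KL directions.
best = {"fwd": (np.inf, None), "rev": (np.inf, None)}
for mu in np.linspace(-3, 3, 61):
    for sig in np.linspace(0.3, 3, 28):
        q = gauss(x, mu, sig)
        for name, val in (("fwd", kl(p, q)), ("rev", kl(q, p))):
            if val < best[name][0]:
                best[name] = (val, mu)

print(f"forward-KL optimal mean: {best['fwd'][1]:+.2f}")  # near 0: covers both modes
print(f"reverse-KL optimal mean: {best['rev'][1]:+.2f}")  # near +/-2: picks one mode
```

A generative model trained only on the reverse direction behaves like the second fit: it can report high likelihood while silently ignoring entire posterior modes.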

Bio: Pengchuan Zhang is a principal researcher at the Microsoft Research Redmond lab. He obtained his PhD degree in Computational and Mathematical Sciences from Caltech in 2017, and then joined Microsoft Research for machine learning research. His research interests are mainly in the areas of deep learning, mathematical optimization, and their applications in vision-language intelligence. On the theoretical side, he is developing more efficient and/or robust machine learning algorithms. On the application side, he is working on vision-language (VL) multi-modal intelligence, including vision-language pretraining and various downstream CV and VL tasks. His work has been published in top-tier machine learning conferences (CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, …), has appeared in several media outlets (Wired, TechCrunch, GeekWire, …), and has been shipped into multiple Microsoft products (Azure Cognitive Services, Bing multi-media search, …).

### Mark Taylor (Sandia)

Title: An overview of the numerical methods in the atmospheric component of the Energy Exascale Earth System Model

Abstract: I will first give an overview of modern Earth system models and how they are used to study the Earth's climate. I will then describe the DOE's Energy Exascale Earth System Model (E3SM) project, including Sandia’s role in developing the "dynamical core" of the atmospheric component model and porting it to upcoming exascale computers. The dynamical core is the component which solves the differential equations of motion in the Earth's atmosphere over the entire globe, and it is coupled to a suite of parametrizations modeling the many unresolved processes. I'll go through several of the design choices made in state-of-the-art global dynamical cores: the choice of grid, equation formulation, and discretization methods. These choices are motivated by preserving the most important properties of the continuum equations, such as conservation and geostrophic balance. In E3SM we have had good success with Hamiltonian structure-preserving methods, where the equations are kept in Hamiltonian form and then discretized with appropriate numerical methods which lead to a discrete Hamiltonian system.
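The payoff of structure-preserving discretization is easiest to see on a toy Hamiltonian system (a pendulum, not E3SM's equations): a symplectic integrator such as Störmer-Verlet keeps the energy error bounded for arbitrarily long runs, while a generic scheme of similar cost drifts without bound:

```python
import numpy as np

# Pendulum in Hamiltonian form: H(q, p) = p^2 / 2 - cos(q).
def H(q, p):
    return 0.5 * p**2 - np.cos(q)

def leapfrog(q, p, dt):
    """Stoermer-Verlet: a symplectic, structure-preserving discretization."""
    p -= 0.5 * dt * np.sin(q)
    q += dt * p
    p -= 0.5 * dt * np.sin(q)
    return q, p

def euler(q, p, dt):
    """Explicit Euler: consistent, but preserves no Hamiltonian structure."""
    return q + dt * p, p - dt * np.sin(q)

dt, steps = 0.1, 5000
q1 = q2 = 1.0
p1 = p2 = 0.0
E0 = H(1.0, 0.0)
for _ in range(steps):
    q1, p1 = leapfrog(q1, p1, dt)
    q2, p2 = euler(q2, p2, dt)
print(f"energy drift after {steps} steps: "
      f"leapfrog {abs(H(q1, p1) - E0):.2e}, Euler {abs(H(q2, p2) - E0):.2e}")
```

For a climate model integrated over decades of simulated time, the same principle is what keeps conserved quantities from drifting into unphysical regimes.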

### Stephan Wojtowytsch (TAMU)

Title: Neural network approximation in shallow and deep learning

Abstract: We will discuss some fundamentals of neural network approximation: universal approximation theorems, the superiority of shallow neural networks over linear methods of approximation, and depth separation phenomena, i.e., functions that can be approximated efficiently using deep, but not shallow, neural networks. Time permitting, we will discuss some aspects of optimization for overparametrized function classes.
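One constructive route to universal approximation in 1D, sketched below on my own toy example: a one-hidden-layer ReLU network sum_i a_i * relu(x - t_i) + b represents any continuous piecewise-linear function exactly, so interpolating a smooth target on a grid of knots gives an O(h^2) approximation with one hidden unit per knot:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def shallow_interpolant(f, knots):
    """Weights of a one-hidden-layer ReLU net interpolating f at the knots."""
    y = f(knots)
    slopes = np.diff(y) / np.diff(knots)
    # Coefficient a_i is the slope change at knot t_i (a_0 is the first slope).
    a = np.concatenate([[slopes[0]], np.diff(slopes)])
    t = knots[:-1]
    return lambda x: y[0] + relu(x[:, None] - t[None, :]) @ a

f = np.sin
knots = np.linspace(0, np.pi, 21)            # 20 hidden ReLU units
net = shallow_interpolant(f, knots)

x = np.linspace(0, np.pi, 1000)
err = np.abs(net(x) - f(x)).max()
print(f"max error approximating sin with 20 ReLUs: {err:.2e}")
```

The interesting part of the talk is what this construction cannot explain: regimes where shallow networks beat every linear method, and targets where only depth gives efficient approximation.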

### Ruhui Jin (UT-Austin)

Title: Tensor dimensionality reduction and applications

Abstract: Many scientific applications utilize a tensor format for data representations. However, memory and computation costs can be prohibitive due to the exponentially growing size of higher-order arrays. To address these barriers, we focus on both computational and theoretical aspects of tensor-related dimension reduction methods. We adopt classical reduction approaches, for instance random projections and principal component analysis (PCA), and pursue scalable algorithms tailored to solving tensor programs. On top of that, we discuss theoretical guarantees for reduced data.
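One concrete instance of a tensor-structured random projection (an illustrative sketch, not the speaker's algorithm): compress each mode of a third-order tensor with an independent Gaussian matrix, a Kronecker-structured Johnson-Lindenstrauss map. With N(0, 1/m) entries the squared norm is preserved in expectation, at a small fraction of the original memory:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 10
T = rng.standard_normal((n, n, n))       # 3rd-order data tensor

# Contract each mode with its own m x n Gaussian sketch matrix.
S = T
for mode in range(3):
    G = rng.standard_normal((m, S.shape[0])) / np.sqrt(m)
    S = np.tensordot(G, S, axes=([1], [0]))   # sketch the leading mode
    S = np.moveaxis(S, 0, -1)                 # rotate so the next mode leads

print("compressed shape:", S.shape)           # (10, 10, 10)
print(f"memory ratio: {S.size / T.size:.4f}")  # 1000 / 27000
ratio = np.linalg.norm(S) / np.linalg.norm(T)
print(f"norm ratio ||S|| / ||T|| = {ratio:.3f}")   # near 1 in expectation
```

The trade-off the theory quantifies: the Kronecker structure never forms the full (m^3 x n^3) projection matrix, but its norm-preservation variance is larger than that of an unstructured Gaussian map of the same output size.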

### Eftychios Sifakis (UW)

Title: Perspectives on the use of scalable physics simulation in inverse problems for machine learning and design

Abstract: In this talk I will attempt to tie together two recent threads of investigation that might appear thematically distinct, but are actually connected in bringing together simulation, optimization and inverse problems; sometimes with the objective of learning, or alternatively for the purpose of computational design. The first thread involves neural network architectures that incorporate large-scale differentiable simulators for elastic deformable models, and their application to learning mechanisms of actuation for “active” objects such as human bodies and faces, using as input just a rich corpus of demonstrations of such an object’s function. This is contrasted with more “traditional” approaches to building such models from first principles, such as the knowledge of localized active musculature and the governing laws of elastic flesh tissue. The second topic stems from topology optimization tasks and explores two important directions of evolution: (a) the accommodation of hundreds of millions (or even billions) of design degrees of freedom, and (b) the application of computational design optimization to fluidic devices, such as pumps, valves, propellers, pneumatically actuated soft robots or fluidic circuits.
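A minimal cartoon of a "differentiable simulator" used inside an inverse problem (my toy, far from the talk's large-scale elastic models): propagate forward sensitivities through a time stepper for x'' = -k x, then recover the stiffness k from an observed end state by gradient descent on the misfit:

```python
# Forward-sensitivity differentiation through a symplectic Euler integrator,
# used to solve a one-parameter inverse problem by gradient descent.
def simulate(k, dt=0.01, steps=100):
    """Return final position of x'' = -k x and its derivative d(x_final)/dk."""
    x, v = 1.0, 0.0          # initial condition
    dx, dv = 0.0, 0.0        # sensitivities dx/dk, dv/dk
    for _ in range(steps):
        v, dv = v - dt * k * x, dv - dt * (x + k * dx)
        x, dx = x + dt * v, dx + dt * dv   # uses the just-updated v, dv
    return x, dx

k_true = 4.0
x_obs, _ = simulate(k_true)   # synthetic observation from the same simulator

k = 1.0                       # initial guess for the stiffness
for _ in range(300):
    x, dx = simulate(k)
    grad = 2.0 * (x - x_obs) * dx    # d/dk of the squared misfit (x - x_obs)^2
    k -= 2.0 * grad                  # gradient descent
print(f"recovered stiffness k = {k:.3f} (true {k_true})")
```

Scaling this pattern to millions of state variables is exactly where the adjoint method and carefully engineered differentiable simulators replace the naive forward sensitivities used here.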

### Peter Hinow (UWM)

Title: Tiny Giants - Mathematics Looks at Zooplankton

Abstract: Zooplankton are an immensely diverse group of organisms occupying every corner of the oceans, seas, and freshwater bodies on Earth. They form a crucial link between autotrophic phytoplankton and higher trophic levels such as crustaceans, mollusks, fish, and marine mammals. Changing water temperatures, salinities and decreasing pH values currently create monumental challenges to their well-being. A significant subgroup of zooplankton are crustaceans of sizes between 1 and 10 mm. They have extremely acute senses that allow them to navigate their surroundings, escape predators, find food and mate. In a series of works with Rudi Strickler (Department of Biological Sciences, University of Wisconsin - Milwaukee) we have investigated various behaviors of crustacean zooplankton. These include the visualization of the feeding current of the calanoid copepod Leptodiaptomus sicilis and the communication by sex pheromones in the copepod Temora longicornis. In these studies, we use tools from optics, ecology, computational fluid dynamics, and computational neuroscience.

### Thomas Powers (Brown)

Title: Mechanics of Colloidal Membranes

Abstract: Colloidal membranes are unique 2D assemblages consisting of a liquid-like monolayer of aligned rod-like viruses that are held together by osmotic pressure. Although they are a few hundred times thicker, colloidal monolayer membranes share many properties with lipid bilayers, such as in-plane fluidity and resistance to bending. However, they also display distinctive properties, such as a propensity to have exposed edges, as well as shapes with negative Gaussian curvature. Furthermore, colloidal membranes commonly have liquid crystalline properties because the rods twist near the edge of the membrane. Accounting for both the liquid crystalline degrees of freedom and the three-dimensional shape is challenging. Therefore, we develop an effective theory in which the liquid crystalline degrees of freedom are described by geometrical properties of the edge, such as edge length, curvature, and geodesic torsion. Using this theory we predict when flat membranes are unstable to a twisted shape, calculate the power spectrum for the edge fluctuations, and compute the force vs. extension curve for a membrane subject to a stretching force. Our results give insight into the nature of the handedness of the ribbons as well as the sign of the Gaussian curvature modulus.