Applied Algebra Seminar Fall 2022
Link to current semester: https://wiki.math.wisc.edu/index.php/Applied_Algebra_Seminar
When: 1pm Central, Thursdays
Where: Van Vleck Hall 901 or virtual: Zoom link (passcode: Bucky)
Email List: subscribe to our Google group or email JIR to be added.
Contacts: Jose Israel Rodriguez (Lead), Shamgar Gurevich
How: Virtual talks are usually 40-45 minutes long with informal discussion after; in-person talks are 40-50 minutes long.
Inclusiveness: We welcome participants with applied algebra interests from other institutions and departments. We have social time 5-10 minutes before the talk starts.
Fall 2022 Schedule
- As of 8/3/22, dates may be pushed up or back a week to accommodate a speaker when necessary.
date | Location | speaker | title | host(s) |
---|---|---|---|---|
August 18 | Social Time | --- | | |
September 22 | VV 901 | Aravind Baskar | Optimal synthesis of robotic mechanisms using homotopy continuation | Rodriguez |
October 6 | VV 901 | Kaitlyn Phillipson | What can algebra and geometry tell us about a stimulus space? | (Local) |
October 20 | VV 901 | Arthur Bik | Strength of (infinite) polynomials | Rodriguez |
October 27 | VV 901 | Colby Long | Algebraic Invariants in Phylogenetics | Rodriguez |
November 3 | VV 901 | Zvi Rosen | Neural Codes and Oriented Matroids *(Matroids Day preview) | Rodriguez |
November 10 | VV 901 | Jiaxin Hu | Beyond matrices: Statistical learning with tensor data | (Local) |
Rescheduling | | Dave Anderson | TBD | (Local) |
Rescheduling | VV 901 | Hanbaek Lyu | Phase transition in contingency tables with two linear margins | (Local) |
Spring 2022 Schedule
date | Location | speaker | title | host(s) |
---|---|---|---|---|
Aug 18 | Social Time | --- | | |
February 10 | Zoom | Diego Cifuentes (Georgia Tech) | Polynomial time guarantees for the Burer-Monteiro method | |
February 24 | Zoom | Ola Sobieska (Wisconsin) | Identifiability of Linear Compartmental Models: The Impact of Removing Leaks and Edges | |
March 11 (Friday, 10am) | Zoom | Chung Pang Mok (Soochow) | Pseudorandom Vectors Generation Using Elliptic Curves And Applications to Wiener Processes | |
March 24 | Zoom | Daniel Pimentel Alarcon (Wisconsin) | Intersecting Schubert Varieties for Subspace Clustering | |
April 14 | Zoom | Abeer Al Ahmadieh (Washington) | Determinantal Representations and the Image of the Principal Minor Map | |
April 28 | Zoom | Kathryn Heal (Harvard) | A Lighting-Invariant Point Processor for Shape from Shading | |
Fall 2021 Schedule
date | speaker | title | host(s) |
---|---|---|---|
September 16 | Jeroen Zuiddam (University of Amsterdam) | Geometric rank of tensors and applications | Virtual |
October 7 | Timo de Wolff (TU Braunschweig) | Certificates of Nonnegativity and Maximal Mediated Sets | Virtual |
October 21 | Jesús A. De Loera (UC Davis) | A Combinatorial Geometry View of Inference and Learning Problems | Virtual |
November 4 | Kevin Shu (Georgia Tech) | Nonnegative Quadratics over Stanley-Reisner Varieties | Virtual |
November 18 | Angélica Torres (KTH Stockholm) | The line multi-view variety in computer vision | Virtual |
December 2 | Papri Dey (Missouri) | Hyperbolic Polynomials, and Helton-Vinnikov curves | Virtual |
Spring 2021 Schedule
date | speaker | title | host(s) |
---|---|---|---|
February 25 | Greg Blekherman (Georgia Tech) | Locally Positive Semidefinite Matrices | Virtual |
March 25 | James Saunderson (Monash University) | Lifting For Simplicity: Concise Descriptions of Convex Sets | Virtual |
April 22 (At the new 11am time!) | Hamza Fawzi (Cambridge) | Semidefinite representations, and the set of separable states | Virtual |
May 13 (At the new 11am time!) | Anna Seigal (University of Oxford) | Invariant Theory for Maximum Likelihood Estimation | Virtual |
Spring 2020 Schedule
date | speaker | title | host(s) |
---|---|---|---|
February 20 | Carla Michini (UW Madison) | Short simplex paths in lattice polytopes | Local |
March 5 | Alisha Zachariah (UW Madison) | Efficient Estimation of a Sparse Delay-Doppler Channel | Local |
March 19 | Spring Break | | |
March 26 | (Seminar on Hiatus because of Covid-19) | | |
Fall 2022 Abstracts
Aravind Baskar (Notre Dame): Optimal synthesis of robotic mechanisms using homotopy continuation
Kinematic synthesis aims to find the dimensions of a mechanism after desired constraints have been posed on its motion. For exact synthesis, the number of constraints and dimensional design variables are equal. For approximate synthesis, the former exceeds the latter. The current research tackles approximate synthesis via an optimization formulation that is invariant to the number of constraint specifications. The objective function, which is polynomial in nature, is a sum of squares of the error in the kinematic equations at the task specifications. Using the first-order necessary conditions of optimality, solving the optimization problem boils down to finding the roots of a system of polynomial equations. Although these systems are of a larger total degree than their respective exact synthesis systems, homotopy continuation techniques based on multi-homogeneous structure and monodromy-based methods can be used to solve these systems completely. The roots of such systems comprise all the critical points including minima, saddles, and maxima which characterize the design space. Since secondary considerations further dictate whether a minimum is feasible for a practical design, some low index saddles are also found to lead to suitable designs. Investigating gradient descent paths starting from these low index saddles into the adjoining minima leads to useful families of solutions. This provides the designer with parameterized sets of design candidates enabled by the computation of all possible critical points of the objective. Two design case studies, namely, the design of a humanoid finger and the design of a robot leg for energetic hops, are presented.
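For illustration, here is a minimal sketch (ours, not the speaker's code) of the critical-point formulation in one design variable: the objective is a sum of squared kinematic errors over hypothetical task specifications, and the first-order condition is a polynomial whose roots are all the critical points. Real synthesis problems are multivariate and are solved with homotopy continuation rather than symbolic solving.

```python
# Toy sketch: critical points of a polynomial least-squares objective.
# The single design variable r and the task values are made up for
# illustration; sympy's symbolic solve stands in for homotopy continuation.
import sympy as sp

r = sp.symbols('r')                     # one design variable, e.g. a link length
tasks = [1.0, 1.2, 2.0]                 # hypothetical task specifications
F = sum((r**2 - d)**2 for d in tasks)   # sum of squares of kinematic errors
critical_points = sp.solve(sp.diff(F, r), r)  # first-order optimality condition
print(critical_points)                  # minima, maxima, and saddles together
```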
Kaitlyn Phillipson (UW Madison): What can algebra and geometry tell us about a stimulus space?
Neural activity can be stored as collections of binary strings, called neural codes, which can provide information about the stimulus that the brain is receiving. In 1971, O'Keefe and Dostrovsky [OD71] found hippocampal neurons, called place cells, which fire when the animal is in a specific location in a stimulus space, called the receptive field of the corresponding place cell. When the animal leaves the receptive field, the neuron goes silent, and when the animal returns to the receptive field, the neuron begins firing again, thus making a memory map of the space. Furthermore, it was discovered that these receptive fields were roughly convex sets in the stimulus space. A natural mathematical question, then, is: can we determine whether or not a general neural code can represent a convex receptive field diagram from the combinatorial structure of the code itself? While this question is still open in general, this talk will introduce techniques from algebra and geometry that, when applied to this problem, give some results that partially answer this question.
Arthur Bik (IAS): Strength of (infinite) polynomials
Usually, when presented with a huge set of data in the form of a tensor, one seeks to decompose this tensor as a sum of a low number of pure tensors in order to find hidden structure. While this often works in practice, it is not clear why such a thing should be possible. Define the strength of a tensor T as the minimal number r such that T can be written as a sum of r tensors with a rank-1 flattening. For this notion of rank, we have the following result: Any nontrivial property of tensors that is preserved under changing coordinates in each direction implies bounded strength (independent of the dimensions of the tensor). In other words, given a huge tensor with structure we can expect it to have low strength. This motivates us to study this rank function. In this talk, we focus on the strength of homogeneous polynomials (i.e. symmetric tensors).
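As a standard illustration of the definition (ours, not from the talk): the strength of a polynomial $f$ is the least $r$ such that $f = g_1h_1 + \dots + g_rh_r$ with each $g_i, h_i$ of positive degree. For example, over $\mathbb{C}$,

```latex
% Illustrative example, not from the talk.
\[
  f = x_1 x_2 + x_3 x_4 \qquad\Longrightarrow\qquad \operatorname{str}(f) = 2,
\]
```

since $f$ has rank 4 as a quadratic form and so is not a single product of two linear forms, while the displayed expression already writes it as a sum of two such products.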
Ananyan and Hochster defined the strength (before that known as Schmidt rank) of a polynomial in their paper proving Stillman's conjecture and it has appeared in various works since. I will state what is known and not known about it. Afterwards, we go to the infinite setting, where I will discuss how strength and other notions derived from it can be used to understand when infinite polynomial series degenerate to each other.
Colby Long: Algebraic Invariants in Phylogenetics
Phylogenetics is the study of the evolutionary relationships among organisms. A central challenge in the field is to use biological data for a given set of organisms in order to determine the evolutionary tree that describes their history. One approach for doing so is to build tree-based models of DNA sequence evolution and then to select the model that best fits the data according to some criteria. Once a model is selected, the tree parameter of the model is assumed to describe the history of the taxa under consideration. However, in order for such a method to be reliable, the tree parameter of the model must be identifiable. In this talk, we will discuss how phylogenetic invariants, algebraic relationships among DNA site-pattern frequencies, can be used to establish model identifiability. We’ll also discuss some invariants-based methods for phylogenetic inference.
Zvi Rosen: Neural Codes and Oriented Matroids
In the 1970's, neuroscientists discovered that the hippocampus of a rat contains a virtual map of its environment. When the rat is in region 1, neuron 1 fires an electric impulse, when it moves to region 2, neuron 2 fires, and in the intersection, both neurons fire. In the decades since, algebraists have begun to model this situation abstractly as follows: Fix n convex subsets of Euclidean space, representing stimuli; then, record all possible subsets of [n] whose intersection is nonempty and not covered by other sets. This combinatorial object is called a "convex neural code."
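A minimal computational sketch of this construction (with made-up receptive fields in one dimension): sample the stimulus space and record, for each sample point, which sets contain it; the distinct patterns are the codewords.

```python
# Toy example: the convex neural code of three intervals on the line.
# The intervals are hypothetical receptive fields chosen for illustration.
import numpy as np

intervals = [(0, 2), (1, 3), (2.5, 4)]
xs = np.linspace(-1, 5, 2001)                     # sample the stimulus space
code = {tuple(i for i, (a, b) in enumerate(intervals) if a < x < b)
        for x in xs}
print(sorted(code))   # codewords: (), (0,), (0,1), (1,), (1,2), (2,)
```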
In this talk, we relate the emerging theory of convex neural codes to the established theory of oriented matroids, which generalize systems of signed linear relations. By way of this connection, we prove that all convex codes are related to some representable oriented matroid, and we show that deciding whether a neural code is convex is NP-hard.
Jiaxin Hu: Beyond matrices: Statistical learning with tensor data
Tensors efficiently represent multiway data and serve as the foundation of higher-order data analysis in various fields such as social networks, neuroscience, and genetics. Identifying a low-dimensional representation from noisy tensor observations is the primary goal in many statistical learning problems, including interpretable tensor decomposition, tensor regression, and multiway clustering. Though one may unfold tensors to matrices and apply matrix methods, the unfolding strategy ignores the higher-order structure and leads to sub-optimal results in several tensor problems. The gap between matrix and tensor analyses motivates the development of tensor tools.
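As a concrete picture of the unfolding the abstract warns about (a toy example of ours): flattening a 3-way array into a matrix keeps the entries but forgets part of the multiway structure.

```python
# Mode-0 unfolding of a 2 x 3 x 4 tensor into a 2 x 12 matrix (toy example).
import numpy as np

T = np.arange(24).reshape(2, 3, 4)    # a small 3-way tensor
unfold_0 = T.reshape(T.shape[0], -1)  # rows index mode 0; modes 1 and 2 are mixed
print(unfold_0.shape)                 # (2, 12)
```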
In this talk, I will focus on statistical methods for higher-order tensors. First, I will introduce basic tensor concepts and overview low-dimensional tensor methods with various applications. Second, I will present my work on generalized tensor decomposition with multiple features and on a degree-corrected tensor block model for multiway clustering. We will see the difference between matrix and tensor analyses from both the modelling and the theoretical perspective.
Hanbaek Lyu: Phase transition in contingency tables with two linear margins
Contingency tables are matrices of nonnegative integer entries with prescribed row sums and column sums (margins). They are fundamental objects in statistics for studying the dependence structure between two or more variables, and they also correspond to bipartite multi-graphs with given degrees and play an important role in combinatorics and graph theory. Both counting the number of contingency tables and sampling one uniformly at random are fundamental problems concerning contingency tables, with many connections and applications to other fields (e.g., testing hypotheses on the co-occurrence of species in ecology).
A historic guiding principle for these problems is the independence heuristic, introduced by I. J. Good as far back as 1950. The heuristic states that the constraints for the rows and columns of the table are asymptotically independent as the size of the table grows to infinity. This yields a simple yet surprisingly accurate formula for the count of contingency tables, as well as a sampling heuristic using the hypergeometric (or Fisher-Yates) distribution. In this talk, we will discuss how the independence heuristic captures only one part of the picture. Specifically, for n by n contingency tables with two linear margins $BCn$ and $Cn$, we establish a sharp phase transition at $B = B_{c} = 1+\sqrt{1+1/C}$. This shows that the independence heuristic may capture the structure of contingency tables well for near-homogeneous margins (when $B<B_{c}$), but if the margins are not quite homogeneous ($B>B_{c}$), positive correlation between rows and columns may emerge and the heuristic fails dramatically.
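For a quick numerical feel of the threshold (our arithmetic, using the formula above):

```latex
% Numerical illustration, ours, plugging C = 1 into the stated formula.
\[
  C = 1 \quad\Longrightarrow\quad B_c = 1 + \sqrt{1 + 1/C} = 1 + \sqrt{2} \approx 2.414,
\]
```

so margins $(BCn, Cn)$ with $B$ below roughly $2.414$ fall in the regime where the independence heuristic is expected to remain accurate.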
Earlier Abstracts
Kathryn Heal
A Lighting-Invariant Point Processor for Shape from Shading
Under the conventional diffuse shading model with unknown directional lighting, the set of quadratic surface shapes that are consistent with the spatial derivatives of intensity at a single image point is a two-dimensional algebraic variety embedded in the five-dimensional space of quadratic shapes. In this talk I will describe the geometry of this variety, and introduce a concise feedforward model that computes an explicit, differentiable approximation of the variety from the intensity and its derivatives at any single image point. The result is a parallelizable processor that operates at each image point and produces a lighting-invariant descriptor of the continuous set of compatible surface shapes at the point. I will describe two applications of this processor: two-shot uncalibrated photometric stereo and quadratic-surface shape from shading. This talk will touch on both theory and application.
Abeer Al Ahmadieh
Determinantal Representations and the Image of the Principal Minor Map
The principal minor map takes an n by n matrix to the length-$2^n$ vector of its principal minors. A basic question is to give necessary and sufficient conditions that characterize the image of various spaces of matrices under this map. In this talk I will describe the image of the space of complex matrices using a characterization of determinantal representations of multiaffine polynomials, based on the factorization of their Rayleigh differences. Using these techniques I will give equations and inequalities characterizing the images of the spaces of real and complex symmetric and Hermitian matrices. For complex symmetric matrices this recovers a result of Oeding from 2011. This is based on joint work with Cynthia Vinzant.
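A small sketch of the map itself (our toy 3 by 3 example): the principal minor map records the determinant of every principal submatrix, one for each subset of the index set, with the empty minor conventionally set to 1.

```python
# Principal minor map: a 3 x 3 matrix to its 2^3 = 8 principal minors.
# The matrix A is an arbitrary example chosen for illustration.
import itertools
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]
minors = {S: float(np.linalg.det(A[np.ix_(S, S)])) if S else 1.0
          for k in range(n + 1)
          for S in itertools.combinations(range(n), k)}
print(minors)   # keys are index subsets, values the corresponding minors
```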
Daniel Pimentel Alarcon
Intersecting Schubert Varieties for Subspace Clustering
Subspace clustering aims to assign a collection of data points to the smallest possible union of subspaces. This is particularly challenging when data is missing. In this talk I will describe a new approach to overcome missing data. The main idea is to search for subspaces in the Grassmannian that simultaneously fit multiple incomplete-data points. This boils down to identifying the intersection of the Schubert varieties defined by the observed data, which is easier said than done. I will tell you what we have achieved so far, and the remaining challenges.
Chung Pang Mok
Pseudorandom Vectors Generation Using Elliptic Curves And Applications to Wiener Processes
Using the arithmetic of elliptic curves over finite fields, we present an algorithm for the efficient generation of sequences of uniform pseudorandom vectors in high dimension with long period, simulating a sample sequence of independent, identically distributed random variables with values in the hypercube $[0,1]^d$ under the uniform distribution. As an application, we obtain an efficient discrete-time algorithm to simulate uniformly distributed sample path sequences of independent standard Wiener processes.
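A heavily simplified toy version of the underlying idea (our sketch, far cruder than the construction in the talk): walk the multiples $P, 2P, 3P, \dots$ of a point on an elliptic curve over a prime field and map the x-coordinates into $[0,1]$.

```python
# Toy pseudorandom values from the elliptic curve y^2 = x^3 + 2 over F_p.
# The prime, curve, and starting point are illustrative choices only.
p, a = 10007, 0                     # curve y^2 = x^3 + a*x + 2 with a = 0
P = (p - 1, 1)                      # the point (-1, 1): 1^2 = (-1)^3 + 2

def ec_add(P1, P2):
    """Chord-tangent addition; None encodes the point at infinity."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

Q, vals = None, []
for _ in range(8):
    Q = ec_add(Q, P)                # Q runs through P, 2P, 3P, ...
    vals.append(Q[0] / p)           # crude map of x-coordinates into [0, 1)
print(vals)
```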
Ola Sobieska
Identifiability of Linear Compartmental Models: The Impact of Removing Leaks and Edges
In biology, pharmacokinetics, and various other fields, linear compartmental models are used to represent the transfer of substances between compartments in a system, e.g. the interchange of a drug between organs in the body. Such a model is identifiable if we can recover the flow parameters from generic input and output data. In this talk, I analyze changes to identifiability resulting from deleting a pathway between two compartments or from adding or deleting an internal leak. In particular, I will present some criteria for identifiability dependent on the combinatorics of the parent model, as well as a restatement and partial answer to a conjecture of Gross, Harrington, Meshkat, and Shu. This is joint work with Patrick Chan, Katherine Johnston, Anne Shiu, and Clare Spinner.
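For readers new to the model class, a standard two-compartment example (ours, not from the talk), with exchange rates $a_{21}, a_{12}$, a leak $a_{01}$ from compartment 1, and input $u$ and output $y$ both at compartment 1:

```latex
% Standard two-compartment linear model, shown for illustration only.
\[
  \dot{x} \;=\;
  \begin{pmatrix} -a_{01}-a_{21} & a_{12} \\ a_{21} & -a_{12} \end{pmatrix} x
  \;+\; \begin{pmatrix} u \\ 0 \end{pmatrix},
  \qquad y = x_1 .
\]
```

Identifiability asks whether the rates $a_{01}, a_{21}, a_{12}$ are determined by generic input-output data $(u, y)$; removing the leak or an edge can change the answer.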
Diego Cifuentes
Polynomial time guarantees for the Burer-Monteiro method
The Burer-Monteiro method is one of the most widely used techniques for solving large-scale semidefinite programs (SDP). The basic idea is to solve a nonconvex program in Y, where Y is an n×p matrix such that $X = YY^T$. In this work, we show that this method can solve SDPs in polynomial time in a smoothed analysis setting. More precisely, we consider an SDP whose domain satisfies some compactness and smoothness assumptions, and slightly perturb the cost matrix and the constraints. We show that if $p \gtrsim \sqrt{(2+2\eta)m}$, where m is the number of constraints and η>0 is any fixed constant, then the Burer-Monteiro method can solve SDPs to any desired accuracy in polynomial time. Our bound on p approaches the celebrated Barvinok-Pataki bound in the limit as η goes to zero, beneath which it is known that the nonconvex program can be suboptimal.
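A minimal numerical sketch of the Burer-Monteiro idea (our toy instance, not the paper's method): for the SDP $\min \langle C, X\rangle$ with $\mathrm{diag}(X) = 1$, substitute $X = YY^T$ and run projected gradient descent on the rows of Y.

```python
# Burer-Monteiro sketch: factor X = Y Y^T and keep each row of Y on the
# unit sphere so that diag(Y Y^T) = 1 throughout. Toy data, fixed step size.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 7                            # m = n constraints; p just above sqrt(2m)
C = rng.standard_normal((n, n)); C = (C + C.T) / 2
Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
for _ in range(2000):
    G = 2 * C @ Y                       # gradient of <C, Y Y^T> in Y
    Y -= 0.01 * G
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # re-project rows to the sphere
print(np.trace(C @ Y @ Y.T))            # objective value at the local optimum
```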
Papri Dey
Hyperbolic Polynomials, and Helton-Vinnikov curves
In this talk, I shall give a panoramic view of my research work. I shall introduce the notion of hyperbolic polynomials and discuss an algebraic method, proposed in my research work, to test hyperbolicity of a multivariate polynomial with respect to some fixed point via sum-of-squares relaxation. An important class of hyperbolic polynomials is that of definite determinantal polynomials. Helton-Vinnikov curves in the projective plane are cut out by hyperbolic polynomials in three variables. This leads to the computational problem of explicitly producing a symmetric positive definite linear determinantal representation for a given curve. I shall focus on two approaches to this problem proposed in my research work: an algebraic approach via solving polynomial equations, and a geometric-combinatorial approach via scalar product representations of the coefficients and its connection with the polytope membership problem. The algorithms to solve determinantal representation problems are implemented in Macaulay2 as the software package DeterminantalRepresentations.m2.
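The smallest worked example of such a representation (standard, not specific to the talk): a definite determinantal representation of a quadratic,

```latex
% Classical example shown for illustration.
\[
  f(x,y,z) \;=\; \det\begin{pmatrix} x+y & z \\ z & x-y \end{pmatrix}
  \;=\; x^2 - y^2 - z^2,
\]
```

which is hyperbolic with respect to $e = (1,0,0)$: for every fixed $(y,z)$, the univariate polynomial $t \mapsto f(t,y,z)$ has only real roots $t = \pm\sqrt{y^2+z^2}$.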
Angélica Torres
The line multi-view variety in computer vision
Given an arrangement of cameras, the line multi-view variety is the Zariski closure of the image of world lines in the cameras. This is a similar definition to the multi-view variety studied extensively in computer vision. The goal of this talk is to present a description of the ideal of the line multi-view variety, and discuss some geometric properties of this variety. This is joint work with Felix Rydell, Elima Shehu, Paul Breiding, and Kathlén Kohn.
Kevin Shu
Nonnegative Quadratics over Stanley-Reisner Varieties
Nonnegative polynomials are of fundamental interest in the field of real algebraic geometry. We will discuss the convex geometry of nonnegative polynomials over an interesting class of algebraic varieties known as Stanley-Reisner varieties. Stanley-Reisner varieties are fairly simple, as they are unions of coordinate planes, but they have surprising connections to algebraic topology that we can use to obtain a detailed understanding of the convex geometry of these nonnegative quadratics. We will be particularly interested in the extreme nonnegative quadratic forms over such varieties. Though this will not be emphasized in the talk, this subject also has some direct applications to optimization theory.
Jesús A. De Loera
A Combinatorial Geometry View of Inference and Learning Problems
In statistical inference we wish to learn the properties or parameters of a distribution through sufficiently many samples. A famous example is logistic regression, a popular non-linear model in multivariate statistics and supervised learning. Users often rely on maximum likelihood estimation, but how much training data do we need, as a function of the dimension of the covariates of the data, before we expect an MLE to exist with high probability? Similarly, for unsupervised learning and non-parametric statistics, one wishes to uncover the shape and patterns from samples of a measure or measures, using only the intrinsic geometry and topology of the sample. A famous example of this type of method is the $k$-means clustering algorithm. A fascinating challenge is to explain the variability of behavior of $k$-means algorithms with distinct random initializations and the shapes of the clusters.
In this talk we present new stochastic combinatorial theorems that give bounds on the probability of existence of maximum likelihood estimators in multinomial logistic regression and also relate to the variability of clustering initializations. Along the way we will see fascinating connections to the coupon collector problem, topological data analysis, and the computation of Tukey centerpoints of data clouds (a high-dimensional generalization of the median). This is joint work with T. Hogan, R. D. Oliveros, E. Jaramillo-Rodriguez, and A. Torres-Hernandez.
Timo de Wolff
Certificates of Nonnegativity and Maximal Mediated Sets
In science and engineering, we regularly face polynomial optimization problems, that is: minimize a real, multivariate polynomial under polynomial constraints. Solving these problems is essentially equivalent to certifying nonnegativity of real polynomials -- a key problem in real algebraic geometry since the 19th century. Since this problem is notoriously hard to solve, one is interested in certificates that imply nonnegativity and are easier to check than nonnegativity itself. In particular, a polynomial is nonnegative if it is a sum of squares (SOS) of other polynomials. In 2014, Iliman and I introduced a new certificate of nonnegativity based on sums of nonnegative circuit polynomials (SONC), which I have since developed further, both in theory and in practice, jointly with different coauthors.
A canonical question is: when is a nonnegative circuit polynomial also a sum of squares? It turns out that this question is purely combinatorial and is governed by sets of particular (integral) lattice points called maximal mediated sets, which were first introduced by Reznick. In this talk, I will give a brief introduction to nonnegativity, SOS, and SONC, and then discuss recent results on maximal mediated sets.
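A classical example separating the two certificates (Motzkin's polynomial, standard and not specific to the talk): by the AM-GM inequality applied to the monomials $x^4y^2$, $x^2y^4$, and $1$,

```latex
% Classical Motzkin example, shown for illustration.
\[
  m(x,y) \;=\; x^4 y^2 + x^2 y^4 - 3x^2 y^2 + 1 \;\ge\; 0 ,
\]
```

yet $m$ is famously not a sum of squares of polynomials, so nonnegativity certificates built from circuit polynomials can reach polynomials that SOS certificates cannot.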
Jeroen Zuiddam
Geometric rank of tensors and applications
There are several important problems in computational complexity and extremal combinatorics that can naturally be phrased in terms of tensors. This includes the matrix multiplication problem, the cap set problem and the sunflower problem. We present a new geometric method in this area, called Geometric Rank (Kopparty–Moshkovitz–Zuiddam, 2020). Geometric Rank is a natural extension of matrix rank to tensors. As an application, we use Geometric Rank to solve a problem of Strassen about matrix multiplication tensors.
Anna Seigal
Invariant Theory for Maximum Likelihood Estimation
Maximum likelihood estimation is an optimization problem over a statistical model, to obtain the parameters that best fit observed data. I will focus on two settings: log-linear models and Gaussian group models. I will describe connections between maximum likelihood estimation and notions of stability from invariant theory. This talk is based on joint work with Carlos Améndola, Kathlén Kohn and Philipp Reichenbach.
Hamza Fawzi
Semidefinite representations, and the set of separable states
The convex set of separable states plays a fundamental role in quantum information theory as it corresponds to the set of non-entangled states. In this talk I will discuss the question of semidefinite programming representations for this convex set. Using connections with nonnegative polynomials and sums of squares, I will show that except in low dimensions, this semialgebraic set has no semidefinite representation. As a consequence this set furnishes a counter-example to the Helton-Nie conjecture. The proof I will present is elementary and relies only on basic results about semialgebraic sets and functions.
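For reference, the set in question (standard definition, in our notation):

```latex
% Standard definition of the separable states, shown for reference.
\[
  \mathrm{SEP} \;=\; \operatorname{conv}\bigl\{\, \rho_A \otimes \rho_B \;:\;
  \rho_A, \rho_B \succeq 0,\ \operatorname{tr}\rho_A = \operatorname{tr}\rho_B = 1 \,\bigr\},
\]
```

the convex hull of product states; the talk's result is that this semialgebraic set admits no semidefinite representation outside of low dimensions.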
James Saunderson
Lifting For Simplicity: Concise Descriptions of Convex Sets
This talk will give a selective overview of the theory and applications of lifts of convex sets. A lift of a convex set is a higher-dimensional convex set that projects onto the original set. Many interesting convex sets have lifts that are dramatically simpler to describe than the original set. Finding such simple lifts has significant algorithmic implications, particularly for associated optimization problems. We will consider both the classical case of polyhedral lifts, which are described by linear inequalities, as well as spectrahedral lifts, which are defined by linear matrix inequalities. The talk will include discussion of ways to construct lifts, ways to find obstructions to the existence of lifts, and a number of interesting examples from a variety of mathematical contexts. (Based on joint work with H. Fawzi, J. Gouveia, P. Parrilo, and R. Thomas).
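A textbook example of the phenomenon (ours, not from the talk): the cross-polytope $B_1 = \{x \in \mathbb{R}^n : \sum_i |x_i| \le 1\}$ needs $2^n$ linear inequalities in the variables $x$ alone, but it is the projection of a polytope with only $2n+1$ inequalities:

```latex
% Standard polyhedral lift of the cross-polytope, shown for illustration.
\[
  B_1 \;=\; \Bigl\{\, x \;:\; \exists\, y \in \mathbb{R}^n,\;
  -y_i \le x_i \le y_i \ (1 \le i \le n),\ \textstyle\sum_i y_i \le 1 \,\Bigr\}.
\]
```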
Greg Blekherman
Locally Positive Semidefinite Matrices
The cone of positive semidefinite matrices plays a prominent role in optimization, and many hard computational problems have well-performing semidefinite relaxations. In practice, enforcing the constraint that a large matrix is positive semidefinite can be expensive. We introduce the cone of k-locally positive semidefinite matrices, which consists of matrices all of whose k by k principal submatrices are positive semidefinite. We consider the distance between the cones of positive and locally positive semidefinite matrices, and possible eigenvalues of locally positive semidefinite matrices. Hyperbolic polynomials play a role in some of the proofs. Joint work with Santanu Dey, Marco Molinaro, Kevin Shu and Shengding Sun.
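A small example of the gap between the two cones (ours, not from the talk): every $1\times 1$ and $2\times 2$ principal submatrix of

```latex
% Small example of ours, easily checked by hand.
\[
  A \;=\; \begin{pmatrix} 1 & 1 & -1 \\ 1 & 1 & 1 \\ -1 & 1 & 1 \end{pmatrix}
\]
```

is positive semidefinite, so $A$ is 2-locally positive semidefinite, yet $\det(A) = -4 < 0$, so $A$ itself is not positive semidefinite.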
Carla Michini
Short simplex paths in lattice polytopes
We consider the problem of optimizing a linear function over a lattice polytope P contained in [0,k]^n and defined via m linear inequalities. We design a simplex algorithm that, given an initial vertex, reaches an optimal vertex by tracing a path along the edges of P of length at most O(n^6 k log k). The length of this path is independent of m and is the best possible up to a polynomial function, since it is only polynomially far from the worst-case diameter. The number of arithmetic operations needed to compute the next vertex in the path is polynomial in n, m and log k. If k is polynomially bounded by n and m, the algorithm runs in strongly polynomial time. This is a joint work with Alberto Del Pia.
Alisha Zachariah
Efficiently Estimating a Sparse Delay-Doppler Channel
Multiple wireless sensing tasks, e.g., radar detection for driver safety, involve estimating the "channel" or relationship between signal transmitted and received. In this talk, I will focus on a certain type of channel known as the delay-doppler channel. This channel model starts to be applicable in high frequency carrier settings, which are increasingly common with recent developments in mmWave technology. Moreover, in this setting, both the channel model and existing technologies are amenable to working with signals of large bandwidth, and using such signals is a standard approach to achieving high resolution channel estimation. However, when high resolution is desirable, this approach creates a tension with the desire for efficiency because, in particular, it immediately implies that the signals in play live in a space of very high dimension N (e.g., ~10^6 in some applications), as per the Shannon-Nyquist sampling theorem.
To address this, I will propose a randomized algorithm for channel estimation in the k-sparse setting (e.g., k objects in radar detection), with sampling and space complexity both on the order of k(log N)^2, and arithmetic complexity on the order of k(log N)^3+k^2, for N sufficiently large.
While this algorithm seems to be extremely efficient -- to the best of our knowledge, the first of this nature in terms of complexity -- it is just a simple combination of three ingredients, two of which are well-known and widely used, namely digital chirp signals and discrete Gaussian filter functions, and the third being recent developments in Sparse Fast Fourier Transform algorithms.
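To see the scale of the savings, a back-of-envelope computation (ours, with base-2 logarithms assumed):

```latex
% Back-of-envelope arithmetic of ours, using the complexities stated above.
\[
  N = 10^6 \;\Longrightarrow\; \log_2 N \approx 20, \qquad
  k(\log N)^2 \approx 400\,k \ \text{samples} \quad\text{vs.}\quad
  N = 10^6 \ \text{Nyquist-rate samples},
\]
```

so for small sparsity $k$ the algorithm touches only a vanishing fraction of the ambient dimension.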
Related events to note
date | event/title | location/speaker | info |
---|---|---|---|
Postponed | Applied Algebra Days 4 - Tensors | Several talks on tensors | |
1:30pm, 2nd Thursday of the month | Informal Seminar: Algebra in Statistics and Computation | Virtual | |
3:30pm | SIAM Student Chapter | Virtual | |
10am, 2nd Tuesday of the month | SIAM SAGA | Virtual: Recordings | Registration needed once. |
10am, Most Tuesdays | Nonlinear algebra seminar online | Virtual: Recordings | Registration required once |
Biweekly Mondays | Algebraic Statistics Online Seminar (ASOS) | Virtual: Recordings | Mailing list sign-up for Zoom-links |
January 26th-29th, 2021 | Sanya Workshop on Algebraic Geometry and Machine Learning | Virtual: Recordings | |
July 29-30, 2021 | Real algebraic geometry and convex geometry Conference | TBD: TU Braunschweig, Germany or Online | |