**Albert Fathi**

Georgia Tech

Friday, March 2, 3:30 pm, Martin Hall M-102

**Title**: What is Weak KAM Theory?

**Abstract**: The goal of this lecture is to explain and motivate the connection between Aubry-Mather theory (Dynamical Systems) and viscosity solutions of the Hamilton-Jacobi equation (PDE). This connection is the content of weak KAM Theory. The talk should be accessible to all mathematicians; no a priori knowledge of either subject is assumed.

**Jesus De Loera**

UC Davis

Tuesday, January 16, 3:30 pm, Martin Hall M-102

**Title**: On the Geometry of the Simplex Method and other Simplex-like Linear Optimization Algorithms

**Abstract**: Linear programs (LPs) and their associated convex polyhedra are, without any doubt, at the core of both the theory and the practice of mathematical optimization (e.g., in discrete optimization, LPs drive practical branch-and-bound computations and the rounding schemes of approximation algorithms). Despite their key importance, many simple, easy-to-state mathematical properties of LPs and their polyhedra remain unknown. My talk overviews the state of the art on the computational complexity of the famous simplex method and some recent variations.

All results are joint work with S. Borgwardt, E. Finhold, Raymond Hemmecke, and Jon Lee.
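The geometric picture behind the simplex method is that a linear program attains its optimum at a vertex of its feasible polyhedron, and the method walks from vertex to vertex along edges. This can be illustrated on a toy 2D instance (the LP data below is an illustrative textbook-style example, not from the talk); here we simply brute-force all vertices rather than pivoting:

```python
from itertools import combinations

# Toy LP: maximize 3x + 5y subject to A x <= b.
# Constraints (one row per inequality a1*x + a2*y <= b_i):
A = [(1, 0), (0, 2), (3, 2), (-1, 0), (0, -1)]
b = [4, 12, 18, 0, 0]

def intersect(i, j):
    """Intersection of the boundary lines of constraints i and j (Cramer's rule)."""
    (a1, a2), (c1, c2) = A[i], A[j]
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:            # parallel boundaries: no vertex
        return None
    x = (b[i] * c2 - a2 * b[j]) / det
    y = (a1 * b[j] - b[i] * c1) / det
    return (x, y)

def feasible(p, tol=1e-9):
    return all(a1 * p[0] + a2 * p[1] <= bi + tol for (a1, a2), bi in zip(A, b))

# A vertex is a feasible intersection of two active constraints.  The simplex
# method visits only neighboring vertices; brute force over all pairs already
# shows the optimum sits at one of them.
vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])   # (2.0, 6.0) 36.0
```

Enumerating all vertices takes exponential time in general, which is exactly why the pivoting rules and complexity questions discussed in the talk matter.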

**Amnon Meir**

March 10, 2016 at 3:30 pm

**Title**: A Rotator's View of the NSF: Everything You Ever Wanted To Know About The NSF (But Were Afraid To Ask)

**Abstract**: During the first part of the talk I will provide an overview of the NSF and, in particular, MPS/DMS from the perspective of a faculty rotator. I will also describe proposal handling at the NSF and the merit review process, and suggest some "dos and don'ts" for preparing and submitting a proposal. I will devote the second part of the talk to answering your questions about the NSF, DMS, and the review process (even those you were/are afraid to ask, so please come prepared).

**Dr. Laura Miller**

UNC Chapel Hill

March 28, 2016, 3:30pm, Kinard 101

**Wotao Yin**

UCLA

March 30, 2016, 4 pm

**Title**: ARock: an Asynchronous Parallel Algorithmic Framework

**Abstract**: The performance of the single CPU core stopped improving around 2005. Moore's law, however, continues to apply -- not to single-thread performance, but to the number of cores in each computer. Today, at affordable prices, we can buy 64-core workstations, thousand-core GPUs, and even eight-core cellphones. To take advantage of multiple cores, we must parallelize our algorithms -- otherwise, they won't run any faster on newer computers. For iterative parallel algorithms to achieve strong performance, asynchrony is critical: removing the synchronization among different cores eliminates core idling and reduces memory-access congestion. However, some of those cores may no longer compute with the latest information. We study fixed-point iterations with out-of-date information and show that randomized async-parallel iterations of a nonexpansive operator will almost surely converge to a fixed point, provided that a fixed point exists and the step size is properly chosen. As special cases, novel algorithms for linear equation systems, machine learning, and distributed and decentralized optimization are introduced, and numerical performance will be presented for sparse logistic regression and other problems. This is joint work with Zhimin Peng (UCLA), Yangyang Xu (IMA), and Ming Yan (Michigan State).
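The flavor of these randomized updates with out-of-date information can be sketched in a few lines. The following is a serial toy simulation, not the ARock implementation: the test problem, the step size, and the "stale snapshot" staleness model are all illustrative assumptions.

```python
import random

# Fixed-point view of a Jacobi-type iteration for a diagonally dominant
# system A x = b: the map T(x) = (I - D^{-1}A)x + D^{-1}b is a max-norm
# contraction, so coordinate updates tolerate bounded staleness.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 5.0]]
b = [1.0, 2.0, 3.0]
n = len(b)

x = [0.0] * n
stale = list(x)                  # out-of-date copy that "other cores" read
random.seed(0)
for it in range(3000):
    i = random.randrange(n)      # a random core updates one coordinate
    # Jacobi coordinate step computed from the stale snapshot:
    xi = (b[i] - sum(A[i][j] * stale[j] for j in range(n) if j != i)) / A[i][i]
    x[i] += 0.5 * (xi - x[i])    # relaxed (averaged) update, step size 1/2
    if it % 5 == 0:              # the snapshot refreshes only occasionally
        stale = list(x)

residual = max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i]) for i in range(n))
print(x, residual)               # residual shrinks to (near) zero
```

Despite every update being computed from information up to five steps old, the iterates converge, which is the phenomenon the talk's almost-sure convergence results make precise for general nonexpansive operators.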

**Panos Pardalos**

December 1, 2015 at 5 pm

**Title**: Computational Models and Challenging Global Optimization Problems

**Abstract**: Most conventional computer models are based on the von Neumann architecture and the Turing machine model. However, quantum computers (several versions!), analog computers, DNA computers, and several other exotic models have been proposed in an attempt to deal with intractable problems. We will give a brief overview of different computing models and discuss several classes of optimization problems that remain very difficult to solve. Such problems include graph problems, nonlinear assignment problems, and global optimization problems. We will start with the historical development, then address several complexity and computational issues, and finally discuss heuristics and techniques for their evaluation.

**Bernd Sturmfels**

March 23, 2015 at 4 pm

**Title**: The Euclidean Distance Degree

**Abstract**: The nearest point map of a real algebraic variety with respect to Euclidean distance is an algebraic function. The Euclidean distance degree is the number of critical points of this optimization problem. We focus on projective varieties seen in engineering applications, and we discuss tools for exact computation. Our running example is the Eckart-Young Theorem, which relates the nearest point map for low-rank matrices to the singular value decomposition. This is joint work with Jan Draisma, Emil Horobet, Giorgio Ottaviani, and Rekha Thomas.
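The Eckart-Young statement mentioned in the abstract can be checked by hand on a tiny instance. Below is a minimal sketch for a symmetric 2x2 matrix, where the SVD has a closed form; the matrix is an illustrative choice, not an example from the talk.

```python
import math

# For a symmetric matrix [[a, c], [c, a]] the eigenvalues are a + c and
# a - c, with eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2); singular
# values are the absolute eigenvalues.
M = [[3.0, 1.0], [1.0, 3.0]]
a, c = M[0][0], M[0][1]
s1, s2 = a + c, a - c                     # singular values 4 and 2
v1 = (1 / math.sqrt(2), 1 / math.sqrt(2))

# Eckart-Young: the nearest rank-1 matrix in Frobenius norm keeps only
# the top singular triple, R = s1 * v1 v1^T.
R = [[s1 * v1[i] * v1[j] for j in range(2)] for i in range(2)]

# The distance to R equals the discarded singular value s2.
dist = math.sqrt(sum((M[i][j] - R[i][j]) ** 2 for i in range(2) for j in range(2)))
print(R, dist, s2)
```

Here the nearest rank-1 matrix is [[2, 2], [2, 2]], at Frobenius distance 2 from M, matching the discarded singular value; the talk's Euclidean distance degree counts all critical points of this nearest-point problem, not just the minimizer.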