Seminar @ Cornell Tech: Tom Goldstein
Truth or Backpropaganda: Putting Neural Network Theory to the Test with Empirical Studies
In this talk, I will explore the theory of neural networks from an empirical perspective with the goal of determining what theoretical assumptions are reasonable, and whether existing theories really describe the behaviors of neural networks that are important in practice. I’ll begin by looking at the loss landscapes of neural networks, and exploring what factors in neural network design impact optimization of neural loss functions. Then, I’ll look at generalization theories, and examine what empirical studies can tell us about why SGD finds “good” minima that generalize well. Finally, I’ll look at the theory and practice of adversarial examples, which break the generalization behavior of neural nets.
Tom Goldstein is the Pier Perotto Associate Professor of Computer Science at the University of Maryland. His research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. He works at the boundary between theory and practice, leveraging mathematical foundations, complex models, and efficient hardware to build practical, high-performance systems. He designs optimization methods for a wide range of platforms, from powerful cluster/cloud computing environments to resource-limited integrated circuits and FPGAs. Before joining the faculty at Maryland, Tom completed his PhD in Mathematics at UCLA and was a research scientist in Electrical Engineering at Rice University and Stanford University. He has received several awards, including SIAM's DiPrima Prize, a DARPA Young Faculty Award, and a Sloan Fellowship. Tom also serves as the Chief Science Officer of Portmanteau Industries, a company building accelerators for machine learning applications.