Dimitris Papailiopoulos, Assistant Professor, University of Wisconsin-Madison
Finding Everything within Random Binary Networks
Recent experimental work by Ramanujan et al. provides significant empirical evidence for a surprising fact: randomly initialized neural networks contain subnetworks that achieve state-of-the-art accuracy on several predictive tasks. A follow-up line of theoretical work justifies these findings by proving that overparameterized neural networks with commonly used continuous-valued initializations can indeed be pruned to approximate any target network. In this work we show that even the values of those random weights matter little: we prove that the performance of any real-valued target classification network can be matched by simply pruning a random binary network that is only a logarithmic factor wider and deeper than the target network.
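The core idea, that pruning alone can recover real-valued behavior from fixed binary weights, can be illustrated with a toy sketch. Below, a single real-valued target weight is approximated by masking a small pool of fixed ±1 weights; the explicit 1/N rescaling and the exhaustive search over masks are simplifications for illustration (in the talk's setting, scaling emerges from the network structure itself), and the particular ±1 values stand in for one random draw.

```python
from itertools import product

# One fixed draw standing in for random binary weights (7 positives, 5 negatives).
b = [1, -1, 1, 1, -1, 1, -1, -1, 1, 1, -1, 1]
N = len(b)
target = 0.4142  # the real-valued weight we want to recover by pruning alone

best_mask, best_err = None, float("inf")
for mask in product([0, 1], repeat=N):  # brute-force over all 2^N prunings
    # A pruned "subnetwork" keeps the weights where mask is 1; rescale by 1/N.
    approx = sum(m * w for m, w in zip(mask, b)) / N
    err = abs(target - approx)
    if err < best_err:
        best_mask, best_err = mask, err

print(best_err)  # the best pruning lands on the 1/N grid point nearest the target
```

A larger pool (bigger N) shrinks the 1/N grid spacing and hence the approximation error, loosely mirroring how modest overparameterization buys accuracy in the stated result; the brute-force search is purely illustrative.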
About the speaker
Dimitris is an Assistant Professor of ECE at the University of Wisconsin-Madison. His research interests span machine learning, information theory, and optimization. Before coming to UW-Madison, Dimitris was a postdoc at UC Berkeley and a member of the AMPLab. He earned his Ph.D. from UT Austin in 2014, under the supervision of Alex Dimakis. He received his ECE Diploma and M.Sc. degree from the Technical University of Crete. Dimitris is a recipient of the NSF CAREER Award (2019), two Sony Faculty Innovation Awards (2019 and 2020), a joint IEEE ComSoc/ITSoc Best Paper Award (2020), an IEEE Signal Processing Society Young Author Best Paper Award (2015), the Vilas Associate Award (2021), the Emil Steiger Distinguished Teaching Award (2021), and the Benjamin Smith Reynolds Award for Excellence in Teaching (2019). He is a co-founder of MLSys, a new conference that targets research at the intersection of machine learning and systems. In 2018 and 2020 Dimitris was program co-chair for MLSys, and in 2019 he co-chaired the 3rd Midwest Machine Learning Symposium.
Meeting ID: 993 8781 3186