

Talk by Assistant Professor A. Kyrillidis (Rice University)

  • Posted 27-06-2022 10:03

    Updated: -

    Location: Λ - Sciences Building / ECE, room 145Π-58
    Start: 05/07/2022 14:30
    End: 05/07/2022 16:00

    Title:
    A Tale of Sparsity in Deep Learning: Lottery Tickets, Subset Selection, and Efficiency in Distributed Learning

    Abstract:
    Neural network pruning is useful for discovering efficient, high-performing subnetworks within pre-trained, dense network architectures. Yet, more often than not, it involves a computationally expensive procedure, as the dense model must be pre-trained, at least for a number of iterations/epochs, to achieve top-notch performance. Most existing works in this area remain empirical or impractical, relying on heuristic rules about when, how, and how much one needs to pre-train in order to recover meaningful sparse subnetworks. In this talk, we will scratch the surface of open questions in the general area of "pruning techniques for neural network training", with special focus on what can be theoretically characterized, in order to move from heuristics to provable protocols.
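
    As a rough illustration of the pruning operation the abstract refers to, here is a minimal Python/NumPy sketch of one-shot global magnitude pruning; the function name, shapes, and sparsity level are hypothetical choices for illustration, not the speaker's method.

      # Illustrative sketch: one-shot global magnitude pruning.
      # Keep the largest-magnitude weights across all layers and zero out the rest.
      import numpy as np

      def magnitude_prune(weights, sparsity=0.9):
          """Return binary masks keeping the (1 - sparsity) fraction of
          largest-magnitude entries across all weight arrays."""
          all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
          k = int((1.0 - sparsity) * all_mags.size)        # number of entries to keep
          threshold = np.sort(all_mags)[-k] if k > 0 else np.inf
          return [(np.abs(w) >= threshold).astype(w.dtype) for w in weights]

      # Toy usage: prune two random "layers" to roughly 90% sparsity.
      rng = np.random.default_rng(0)
      layers = [rng.standard_normal((64, 32)), rng.standard_normal((32, 10))]
      masks = magnitude_prune(layers, sparsity=0.9)
      sparse_layers = [w * m for w, m in zip(layers, masks)]
      print([round(m.mean(), 3) for m in masks])           # fraction of weights kept per layer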

    If time permits, the talk is split into the following three themes/questions:
    - Can we theoretically characterize how much SGD-based pre-training is sufficient to identify meaningful sparse subnetworks?
    - Can we draw connections between pruning methods and classical sparse recovery, in order to leverage decades of knowledge on the theory of subset selection? (see the sketch after this list)
    - From a practical standpoint, can we combine the above ideas with existing efficient distributed protocols in order to achieve end-to-end sparse neural network training, even avoiding heavy full-model pre-training phases?
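
    The second question above alludes to classical sparse-recovery tools; below is a minimal, self-contained sketch of one such tool, Iterative Hard Thresholding (IHT). The step size, problem sizes, and iteration count are illustrative assumptions, not settings from the talk.

      # Illustrative sketch: Iterative Hard Thresholding (IHT) for recovering a
      # k-sparse vector x from linear measurements y ≈ A x.
      import numpy as np

      def hard_threshold(x, k):
          """Zero out all but the k largest-magnitude entries of x."""
          out = np.zeros_like(x)
          idx = np.argpartition(np.abs(x), -k)[-k:]
          out[idx] = x[idx]
          return out

      def iht(A, y, k, iters=200):
          """Projected gradient descent onto the set of k-sparse vectors."""
          step = 1.0 / np.linalg.norm(A, 2) ** 2           # conservative step size
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              x = hard_threshold(x + step * A.T @ (y - A @ x), k)
          return x

      # Toy usage: recover a 5-sparse signal from 60 Gaussian measurements.
      rng = np.random.default_rng(1)
      n, m, k = 200, 60, 5
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      x_hat = iht(A, A @ x_true, k)
      print(np.linalg.norm(x_hat - x_true))                # small if recovery succeeds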

    Bio:
    Anastasios Kyrillidis is a Noah Harding Assistant Professor in the Computer Science Department at Rice University. Prior to that, he was a Goldstine postdoctoral fellow at the IBM T. J. Watson Research Center (NY) and a Simons Foundation postdoctoral member at the University of Texas at Austin. He completed his PhD at the CS Department of EPFL (Switzerland) under the supervision of Volkan Cevher, and received his M.Sc. and Diploma from the Electronic and Computer Engineering Department of the Technical University of Crete (Chania). His research interests include (but are not limited to): optimization for machine learning, convex and non-convex algorithms and analysis, large-scale optimization, and any problem that involves a math-driven criterion and requires an efficient method for its solution.

    https://akyrillidis.github.io

     



  • Posted 05-07-2022 14:53

    Updated: -

    The talk started a few minutes ago, in room 145.Π58. Anyone interested is welcome to come by...


  • Posted 08-07-2022 21:00

    Updated: -

    The video of the talk is available (unfortunately the audio level is low) ...

    https://www.youtube.com/watch?v=ha15_DYcZbc 

