On Friday, 15/02/2019 at 10:00 a.m., Christos Kozanitis, research collaborator at the Institute of Computer Science, Foundation for Research and Technology – Hellas (FORTH-ICS), will give a talk titled "A walk on the intersection of systems for Machine Learning, and Machine Learning for Systems", in room 137.Π.39 of the Sciences Building, at the Polytechnic Campus.
Despite incredible recent advances in machine learning, building machine learning applications remains prohibitively time-consuming and expensive for all but the best-trained, best-funded engineering organizations. For instance, data collection, extraction, and cleaning require tedious trial-and-error software development. Moreover, model training requires expensive infrastructure that runs for days. And in the absence of better hardware, models deployed in production do not always run on the best possible platforms.
For pipelines with completely different compute requirements between data acquisition and model fitting, my work introduces abstractions that modify Apache Spark to work with clusters of heterogeneous executors. To reduce infrastructure costs, my work uses Machine Learning techniques to accommodate more workloads on the same hardware. And on the prediction front, in collaboration with TUC we have been working on hardware implementations that dramatically reduce the latency and power consumption of predictions over Convolutional Neural Networks.
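To make the workload-consolidation idea concrete, here is a minimal illustrative sketch, not the speaker's actual system: if a model predicts each workload's true peak resource demand instead of using its nominal worst-case reservation, a simple placement policy can fit more workloads onto the same machines. The workload names, capacities, and the first-fit policy below are all hypothetical.

```python
# Hypothetical sketch: pack workloads by predicted peak usage (in GB)
# rather than by their nominal reservations. With an 8 GB nominal
# reservation per workload, each 8 GB machine would host only one;
# predicted peaks let three of the four fit on two machines.

def pack_workloads(workloads, machine_capacity, num_machines):
    """First-fit placement of (name, predicted_peak) pairs onto machines."""
    machines = [machine_capacity] * num_machines  # remaining capacity per machine
    placement = {}
    for name, predicted_peak in workloads:
        for i, free in enumerate(machines):
            if predicted_peak <= free:
                machines[i] -= predicted_peak
                placement[name] = i
                break
        else:
            placement[name] = None  # no machine can host this workload
    return placement

# Each workload nominally reserves 8 GB, but a model predicts its true peak.
predicted = [("etl", 3.0), ("train", 7.5), ("serve", 2.0), ("index", 4.0)]
print(pack_workloads(predicted, machine_capacity=8.0, num_machines=2))
# → {'etl': 0, 'train': 1, 'serve': 0, 'index': None}
```

A production system would of course also need to handle mispredictions (e.g. by monitoring actual usage and evicting or migrating workloads), which is where the learning component earns its keep.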
Christos Kozanitis is a research collaborator at FORTH-ICS. He received his M.S. and Ph.D. in Computer Science and Engineering from the University of California, San Diego in 2009 and 2013, respectively. Parts of his Ph.D. work influenced products from companies such as Cisco and Illumina. He also held a two-year postdoctoral appointment at the AMP Lab of the University of California, Berkeley, where he used and adapted state-of-the-art big data technologies, such as Apache Spark SQL, Apache Parquet, and Apache Avro, to process large amounts of DNA sequencing data. His current research interests involve improving modern datacenters at the software, storage, and hardware levels in order to speed up the processing of big data workloads.