Seminar @ Cornell Tech: Mohamed Abdelfattah
Rethinking Deep Learning Computations: From AutoML to Hardware
The explosion in the availability of data has created a new computation paradigm based on deep learning. Consequently, we need to rethink both software and hardware to make these computations possible and to keep up with ever-increasing computation demands. In this talk, I will describe how we use automated machine learning (AutoML) to enable on-device AI through neural architecture search, compression, hardware modeling, and efficient search algorithms. Next, I will give an overview of my experience designing hardware and compilers for deep learning acceleration, focusing on a new automated codesign methodology that simultaneously improves efficiency and accuracy. Finally, I will turn to reconfigurable devices such as FPGAs and describe how an embedded network-on-chip can transform them into a general-purpose computation platform well suited for deep learning.
Mohamed is a research team lead at the Samsung AI Center in Cambridge, UK, working on the codesign of deep learning algorithms and hardware. Before that, he was at Intel, where he built an FPGA-based accelerator and compiler for deep neural networks. Mohamed earned his PhD at the University of Toronto, during which he was awarded the Vanier Canada Graduate Scholarship and three best paper awards for his work on embedded networks-on-chip for FPGAs.