Speaker: Prof. Bei Yu (The Chinese University of Hong Kong)
Accelerating Deep Convolutional Network Inference
Deep neural networks (DNNs) have achieved significant success in a variety of real-world applications. However, the enormous number of parameters in these networks limits their efficiency, owing to large model sizes and intensive computation. To address this issue, various compression and acceleration techniques have been investigated. In this talk I will introduce state-of-the-art DNN acceleration techniques from two perspectives: 1) how we can accelerate accurate DNN inference; 2) how we can accelerate inaccurate DNN inference.
Prof. Bei Yu received his Ph.D. degree from the University of Texas at Austin in 2014. He is currently an Assistant Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He has served on the editorial boards of Integration, the VLSI Journal and IET Cyber-Physical Systems: Theory & Applications, and as Editor-in-Chief of the IEEE TCCPS Newsletter. He has received five Best Paper Awards, from Integration, the VLSI Journal in 2018, ISPD 2017, the SPIE Advanced Lithography Conference 2016, ICCAD 2013, and ASPDAC 2012; four other Best Paper Award nominations, at ASPDAC 2019, DAC 2014, ASPDAC 2013, and ICCAD 2011; and five ICCAD/ISPD contest awards.