Innovating and Interpreting Neural Networks

Date Posted: 7 September 2022

Date: Friday, September 9, 2022, 15:30-17:00
Venue: LSB 219
Speaker: Dr. Fenglei Fan
Position: Postdoctoral Fellow, CUHK

Biography: Dr. Fenglei Fan received his Bachelor's degree from Harbin Institute of Technology (China) in 2017 and his PhD from Rensselaer Polytechnic Institute (USA) in 2021, supervised by Dr. Ge Wang, a world-leading researcher in medical imaging. During his PhD, he completed internships at the GE Global Research Center and the MIT-IBM Watson AI Lab. From August 2021 to August 2022, he was a postdoctoral associate at Weill Cornell Medicine, Cornell University. His research interests include deep learning theory, methodology, and applications in healthcare data analytics, image processing, and manufacturing. He has published 20 peer-reviewed papers in prestigious AI and image-processing journals such as IEEE TNNLS, IEEE TMI, IEEE TCI, and IEEE TAI. His representative works are quadratic neuron-based deep learning and the width-depth quasi-equivalence of neural networks. His research has been widely recognized by domain experts. In recognition of his excellent performance, his PhD study was funded by the IBM/RPI AI Horizon Scholarship, totaling 200,000 US dollars. He is a recipient of the 2021 International Neural Network Society (INNS) Dissertation Award.

Title: Innovating and Interpreting Neural Networks

Abstract: It is widely recognized that machine learning, especially deep learning, has brought a paradigm shift to many important fields. However, many challenges remain in deep learning research. On the one hand, over the past years, major efforts have been devoted to architecture innovations, leading to many advanced models such as ResNet and DenseNet. Yet although deep learning is inspired by the computation of biological neural systems, current deep learning systems almost exclusively use a single type of neuron, failing to reflect the neuronal diversity that is a key characteristic of biological neural systems. On the other hand, although deep learning performs quite well in practice, it is difficult to explain its underlying mechanisms and understand its behaviors. In particular, the success of deep learning is not well underpinned by effective mathematical theory, and this lack of interpretability has become a primary obstacle to the widespread translation of deep learning. In this talk, we propose quadratic neurons to promote neuronal diversity in deep learning: the inner product in the conventional neuron is replaced with a quadratic counterpart whose nonlinearity enhances the neuron's expressive ability. Furthermore, we develop principled theories to explain the success of shortcut connections and to characterize the relationship between the width and depth of a neural network.
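
For readers unfamiliar with the idea, the sketch below illustrates one published parameterization of a quadratic neuron (two interacting linear terms plus a squared-input term) in PyTorch. This is a minimal illustration, not necessarily the exact formulation presented in the talk; the class name, layer sizes, and usage are chosen for the example only.

```python
import torch
import torch.nn as nn

class QuadraticLayer(nn.Module):
    """A layer of quadratic neurons: the conventional inner product
    w^T x + b is replaced by a quadratic function of the input,
    (w_r^T x + b_r) * (w_g^T x + b_g) + w_b^T (x * x) + b_b.
    This parameterization is one formulation from the literature and
    may differ from the talk's exact definition."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear_r = nn.Linear(in_features, out_features)  # first linear factor
        self.linear_g = nn.Linear(in_features, out_features)  # second linear factor
        self.linear_b = nn.Linear(in_features, out_features)  # weights on squared inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Product of two linear maps plus a term in the squared inputs.
        return self.linear_r(x) * self.linear_g(x) + self.linear_b(x * x)

# Hypothetical usage: the layer is a drop-in replacement for nn.Linear,
# with the usual activation applied afterwards.
layer = QuadraticLayer(8, 4)
y = torch.relu(layer(torch.randn(2, 8)))
print(y.shape)  # torch.Size([2, 4])
```

Because the pre-activation is quadratic rather than linear in the input, a single such neuron can already represent decision boundaries (e.g., ellipses) that a conventional neuron cannot, which is the sense in which the abstract says the nonlinearity enhances expressive ability.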