Title: A Brief Look at Deep Learning: Why Does Deep Learning Work?
Speaker: Prof. Dapeng Oliver Wu, Electrical and Computer Engineering, University of Florida
Host: Prof. Shengjie Zhao (赵生捷)
Speaker Bio: Dapeng Oliver Wu received his Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University, Pittsburgh, PA, in 2003. He is a professor in the Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL. He received the University of Florida Research Foundation Professorship Award in 2009, the AFOSR Young Investigator Program (YIP) Award in 2009, the ONR Young Investigator Program (YIP) Award in 2008, the NSF CAREER Award in 2007, the IEEE Transactions on Circuits and Systems for Video Technology (CSVT) Best Paper Award for 2001, the Best Paper Award at Globecom 2011, and the Best Paper Award at QShine 2006. Currently, he serves on the editorial boards of IEEE Transactions on Communications, IEEE Transactions on Signal and Information Processing over Networks, and IEEE Signal Processing Magazine. He is the founder of IEEE Transactions on Network Science and Engineering. He was the founding Editor-in-Chief of the Journal of Advances in Multimedia from 2006 to 2008, and an Associate Editor for IEEE Transactions on Wireless Communications, IEEE Transactions on Vehicular Technology, and IEEE Transactions on Circuits and Systems for Video Technology. He has served as General Chair of IEEE GlobalSIP 2015, Technical Program Committee (TPC) Chair of IEEE INFOCOM 2012, and TPC Chair of the Signal Processing for Communications Symposium at the IEEE International Conference on Communications (ICC 2008). He has also chaired the Award Committee of the Technical Committee on Multimedia Communications, IEEE Communications Society, and was elected a Distinguished Lecturer by the IEEE Vehicular Technology Society. He is an IEEE Fellow.
Abstract: Deep learning has attracted enormous attention in both academia and industry and has achieved remarkable success. But why is deep learning able to achieve such unprecedented performance? One possible reason that has been identified is the flattening of manifold-shaped data in the higher layers of a neural network. However, it is not clear how to measure the flattening of manifold-shaped data, nor what degree of flattening a deep neural network can achieve. In this talk, I will present quantitative evidence for this flattening hypothesis. Specifically, I will introduce three quantities for measuring manifold entanglement and show experimental results on both synthetic and real-world data. Our results validate the flattening hypothesis and lead to new insights into deep learning and the design of better deep learning algorithms.
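The abstract does not specify the three entanglement quantities, but the flattening idea itself is easy to illustrate. A minimal sketch, assuming a simple PCA-based proxy (not the speaker's actual metrics): measure how much of a point cloud's variance is captured by a low-dimensional linear subspace. A "flat" manifold concentrates its variance in few directions; a curved one does not.

```python
import numpy as np

def linear_flatness(X, k=1):
    """Fraction of total variance captured by the top-k principal directions.

    X is an (n_samples, n_features) array of layer activations or raw data.
    A value near 1.0 means the points lie close to a k-dimensional linear
    subspace, i.e. the manifold is nearly flat. This is only an illustrative
    proxy for flattening, not the measure used in the talk.
    """
    Xc = X - X.mean(axis=0)                     # center the point cloud
    s = np.linalg.svd(Xc, compute_uv=False)     # singular values
    var = s ** 2                                # variance per principal direction
    return var[:k].sum() / var.sum()

rng = np.random.default_rng(0)
t = rng.uniform(0.0, np.pi, 500)

# A curved 1-D manifold: an arc of the unit circle embedded in 2-D.
curved = np.column_stack([np.cos(t), np.sin(t)])
# The same parameter mapped onto a straight line: a perfectly flat 1-D manifold.
flat = np.column_stack([t, 0.1 * t])

print(linear_flatness(curved, k=1))  # noticeably below 1: the arc is curved
print(linear_flatness(flat, k=1))    # essentially 1.0: a line is flat
```

Under the flattening hypothesis, applying such a measure to activations at successive layers of a trained network should show the score increasing with depth for class-conditional point clouds.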