Title: Random Sketch and Validate for Learning from Large-Scale Data
Speaker: Georgios B. Giannakis
Time: March 15, 2015, 15:00-16:00
Venue: Room 403, Telecommunications Building
Host: Qingwen Liu
Speaker Biography:
Georgios B. Giannakis received his Diploma in Electrical Engineering from the National Technical University of Athens, Greece, in 1981. From 1982 to 1986 he was with the University of Southern California (USC), where he received his M.Sc. in Electrical Engineering (1983), M.Sc. in Mathematics (1986), and Ph.D. in Electrical Engineering (1986). Since 1999 he has been a professor with the University of Minnesota, where he now holds the ADC Chair in Wireless Telecommunications in the ECE Department and serves as director of the Digital Technology Center. His general interests span the areas of communications, networking, and statistical signal processing, subjects on which he has published more than 380 journal papers, 650 conference papers, 20 book chapters, two edited books, and two research monographs (h-index 115). His current research focuses on big data analytics, wireless cognitive radios, and network science with applications to social, brain, and power networks with renewables. He is the (co-)inventor of 22 issued patents and the (co-)recipient of 8 best paper awards from the IEEE Signal Processing (SP) and Communications Societies, including the G. Marconi Prize Paper Award in Wireless Communications. He has also received Technical Achievement Awards from the SP Society (2000) and from EURASIP (2005), a Young Faculty Teaching Award, the G. W. Taylor Award for Distinguished Research from the University of Minnesota, and the IEEE Fourier Technical Field Award (2015). He is a Fellow of the IEEE and EURASIP, and has served the IEEE in a number of posts, including that of a Distinguished Lecturer for the IEEE-SPS.
Abstract:
We live in an era of data deluge. Pervasive sensors collect massive amounts of information on every bit of our lives, churning out enormous streams of raw data in various formats. Mining information from unprecedented volumes of data promises to limit the spread of epidemics and diseases, identify trends in financial markets, learn the dynamics of emergent social-computational systems, and protect critical infrastructure, including the smart grid and the Internet's backbone network. While Big Data can definitely be perceived as a big blessing, big challenges also arise with large-scale datasets. This talk will put forth novel algorithms and present analysis of their performance in extracting computationally affordable yet informative subsets of massive datasets. Extraction will be effected through innovative tools, namely adaptive censoring, random subset sampling (a.k.a. sketching), and validation. The impact of these tools will be demonstrated in machine learning tasks as fundamental as (non)linear regression, classification, and clustering of high-dimensional, large-scale, and dynamic datasets.
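For readers unfamiliar with the sketching idea mentioned in the abstract, the following is a minimal Python sketch of the generic sketch-and-validate pattern for linear regression, not the speaker's actual algorithm: fit a least-squares model on a small, uniformly sampled row subset of the data, then validate it on an independent random subset. The synthetic data, subset sizes, and uniform-sampling choice are all illustrative assumptions.

```python
# Illustrative sketch-and-validate for least-squares regression.
# NOT the speaker's algorithm; a generic example of random subset
# sampling ("sketching") followed by validation on held-out rows.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic large-scale data (hypothetical sizes for illustration).
n, d = 100_000, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# --- Sketch: fit on a small uniformly sampled row subset (m << n). ---
m = 2_000
idx = rng.choice(n, size=m, replace=False)
w_hat, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)

# --- Validate: check the fit on an independent random subset. ---
val = rng.choice(n, size=m, replace=False)
mse = np.mean((X[val] @ w_hat - y[val]) ** 2)
print(f"validation MSE on random subset: {mse:.4f}")
```

The point of the pattern is computational: the sketch solves a regression on m rows instead of n, and the validation step gives an inexpensive check on whether the subsampled model generalizes to the full dataset.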
All faculty and students are warmly welcome to attend!