Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University


"Zhixin Forum" (智信讲坛) Academic Lecture, Session 54


Title: Random Sketch and Validate for Learning from Large-Scale Data
  
Speaker: Georgios B. Giannakis
  
Time: March 15, 2015, 15:00-16:00
  
Venue: Room 403, 电信大楼 (Telecom Building)
  
Host: 刘庆文
  
Speaker Biography:
  
Georgios B. Giannakis received his Diploma in Electrical Engineering from the National Technical University of Athens, Greece, in 1981. From 1982 to 1986 he was with the University of Southern California (USC), where he received his M.Sc. in Electrical Engineering (1983), M.Sc. in Mathematics (1986), and Ph.D. in Electrical Engineering (1986). Since 1999 he has been a professor with the University of Minnesota, where he now holds an ADC Chair in Wireless Telecommunications in the ECE Department and serves as director of the Digital Technology Center. His general interests span the areas of communications, networking, and statistical signal processing, subjects on which he has published more than 380 journal papers, 650 conference papers, 20 book chapters, two edited books, and two research monographs (h-index 115). His current research focuses on big data analytics, wireless cognitive radios, and network science with applications to social, brain, and power networks with renewables. He is the (co-)inventor of 22 issued patents and the (co-)recipient of 8 best paper awards from the IEEE Signal Processing (SP) and Communications Societies, including the G. Marconi Prize Paper Award in Wireless Communications. He also received Technical Achievement Awards from the SP Society (2000) and from EURASIP (2005), a Young Faculty Teaching Award and the G. W. Taylor Award for Distinguished Research from the University of Minnesota, and the IEEE Fourier Technical Field Award (2015). He is a Fellow of the IEEE and EURASIP, and has served the IEEE in a number of posts, including that of a Distinguished Lecturer for the IEEE-SPS.
  
Abstract:
  
We live in an era of data deluge. Pervasive sensors collect massive amounts of information on every bit of our lives, churning out enormous streams of raw data in various formats. Mining information from unprecedented volumes of data promises to limit the spread of epidemics and diseases, identify trends in financial markets, learn the dynamics of emergent social-computational systems, and also protect critical infrastructure, including the smart grid and the Internet's backbone network. While Big Data can certainly be perceived as a big blessing, big challenges also arise with large-scale datasets. This talk will put forth novel algorithms and present analysis of their performance in extracting computationally affordable yet informative subsets of massive datasets. Extraction will be effected through innovative tools, namely adaptive censoring, random subset sampling (a.k.a. sketching), and validation. The impact of these tools will be demonstrated in machine learning tasks as fundamental as (non)linear regression, classification, and clustering of high-dimensional, large-scale, and dynamic datasets.
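For readers unfamiliar with the sketch-and-validate idea mentioned above, the following Python snippet is a minimal illustrative example, not the speaker's algorithm: it fits ordinary least squares on a few uniformly sampled row subsets ("sketches") of a large regression problem and keeps the fit that validates best on the held-out rows. The function name sketch_and_validate and the parameters sketch_size and n_trials are hypothetical, and plain uniform sampling is assumed in place of the adaptive censoring and sampling schemes discussed in the talk.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for a large-scale dataset.
n, d = 100_000, 50
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

def sketch_and_validate(X, y, sketch_size, n_trials=5):
    """Fit least squares on random row subsets ("sketches") and keep the
    solution that validates best on the rows left out of the sketch."""
    best_w, best_err = None, np.inf
    for _ in range(n_trials):
        mask = np.zeros(len(y), dtype=bool)
        mask[rng.choice(len(y), size=sketch_size, replace=False)] = True
        w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)   # solve the small sketched problem
        err = np.mean((X[~mask] @ w - y[~mask]) ** 2)           # held-out validation error
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

w_hat, val_mse = sketch_and_validate(X, y, sketch_size=2_000)
print(f"validation MSE: {val_mse:.4f}")

In this toy setting each trial solves a 2,000 x 50 least-squares problem instead of the full 100,000 x 50 one, and the validation step on the unused rows is what guards against an unlucky draw of the sketch.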
  
All faculty and students are warmly welcome to attend!
 
 

Posted: 2016-03-09
