
Dr. Yunwen Lei (University of Birmingham): Statistical Learning with Stochastic Gradient Descent

Guanghua Forum: Celebrities and Entrepreneurs Forum, No. 5918

(Online Lecture)

Statistical Learning with Stochastic Gradient Descent

Speaker: Dr. Yunwen Lei, University of Birmingham

Host: Professor Huazhen Lin, School of Statistics

Time: 4:00-5:00 PM, Monday, November 16, 2020

Platform and Meeting ID: Tencent Meeting, 674 913 064

Organizers: Center of Statistical Research and School of Statistics; Office of Scientific Research

About the Speaker:

Dr. Yunwen Lei received his PhD from Wuhan University in 2014. He is now a Lecturer at the School of Computer Science, University of Birmingham. Previously, he was a Humboldt Research Fellow at the University of Kaiserslautern, a Research Assistant Professor at Southern University of Science and Technology, and a Postdoctoral Research Fellow at City University of Hong Kong. Dr. Lei's research interests include learning theory and optimization. He has published papers at ICML and NeurIPS and in prestigious journals including Neural Computation, IEEE Transactions on Information Theory, Journal of Machine Learning Research, and Applied and Computational Harmonic Analysis.


Abstract:

Stochastic gradient descent (SGD) has become the workhorse behind many machine learning problems. Optimization and sampling errors are two competing factors that determine the statistical behavior of SGD. In this talk, we report our generalization analysis of SGD, considering the optimization and sampling errors simultaneously. We remove some restrictive assumptions made in the literature and significantly improve the existing generalization bounds. Our results help to understand how to choose the step size and the stopping time of SGD to achieve the best statistical behavior.

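The trade-off described in the abstract can be seen in a small numerical experiment. The sketch below (an illustration, not the speaker's analysis) runs plain SGD on a synthetic least-squares problem and records the test risk after each pass over the data: the optimization error keeps shrinking, but past some point the sampling (estimation) error dominates, so the best test risk is attained at an intermediate stopping time. All problem sizes, the step size, and the noise level are arbitrary choices for the demonstration.

```python
# Minimal sketch of early stopping for SGD on synthetic least squares.
# Not the speaker's method; all constants below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = x @ w_true + Gaussian noise.
d, n_train, n_test = 20, 100, 10_000
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
y_train = X_train @ w_true + rng.normal(scale=2.0, size=n_train)
X_test = rng.normal(size=(n_test, d))
y_test = X_test @ w_true + rng.normal(scale=2.0, size=n_test)

def mse(w, X, y):
    """Empirical least-squares risk of the linear model w."""
    return np.mean((X @ w - y) ** 2)

w = np.zeros(d)
eta = 0.01                 # constant step size (a tuning choice)
n_epochs = 200
test_risk = []
for epoch in range(n_epochs):
    # One SGD step per training example, in random order.
    for i in rng.permutation(n_train):
        grad = (X_train[i] @ w - y_train[i]) * X_train[i]
        w -= eta * grad
    test_risk.append(mse(w, X_test, y_test))

# The early-stopping time is the epoch with the smallest test risk;
# running longer only lets the sampling error accumulate.
best_t = int(np.argmin(test_risk))
```

In practice the test risk is unavailable, so the stopping time is chosen from a held-out validation set or, as in the theory discussed in the talk, from bounds that balance the two error terms.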


