
Researcher Zhang Xinyu, Chinese Academy of Sciences: Optimal parameter-transfer learning by semiparametric model averaging

Topic: Optimal parameter-transfer learning by semiparametric model averaging

Speaker: Researcher Zhang Xinyu, Chinese Academy of Sciences

Host: Professor Lin Huazhen, School of Statistics

Time: 10:30-11:30 a.m., Thursday, August 4, 2022

Platform and Meeting ID: Tencent Meeting, ID: 717-802-666

Organizers: Center of Statistical Research and School of Statistics; Office of Research Affairs


About the speaker:

Zhang Xinyu is a researcher at the Academy of Mathematics and Systems Science and the Center for Forecasting Science, Chinese Academy of Sciences, and a jointly appointed professor at the School of Management, University of Science and Technology of China. He received his Ph.D. from the Institute of Systems Science, Chinese Academy of Sciences, in 2010, and was subsequently a postdoctoral researcher at TAMU and a Research Fellow at PSU. He serves as an Area Editor of the Journal of Systems Science and Complexity, an Associate Editor of Statistical Analysis and Data Mining, and an editorial board member of the journals 系统科学与数学 (Journal of Systems Science and Mathematical Sciences) and 应用概率统计 (Chinese Journal of Applied Probability and Statistics). He is Vice Chair of the Data Science Branch of the Chinese Society of Optimization, Overall Planning and Economic Mathematics and an Elected Member of the International Statistical Institute. He has led projects funded by the Excellent Young Scientists Fund and the National Science Fund for Distinguished Young Scholars of the National Natural Science Foundation of China, and his honors include the China Youth Award in Management and the CAS Excellent Doctoral Dissertation Award. His research covers theory and applications in statistics and econometrics, with particular interests in model averaging, machine learning, forecast combination, and health statistics. He has published more than 50 academic papers, over 20 of which appeared in the Annals of Statistics, Biometrika, JASA, JRSSB, the Journal of Econometrics, and Econometric Theory.


Abstract:

In this article, we focus on prediction for a target model by transferring information from source models. To be flexible, we use semiparametric additive frameworks for the target and source models. Inheriting the spirit of parameter-transfer learning, we assume that different models possibly share common knowledge across parametric components that is helpful to the target predictive task. Unlike existing parameter-transfer approaches, which need to construct auxiliary source models by parameter similarity with the target model and then adopt a regularization procedure, we propose a frequentist model averaging strategy with a J-fold cross-validation criterion, so that auxiliary parameter information from different models can be adaptively utilized through data-driven weight assignments. The asymptotic optimality and weight convergence of the proposed method are established under some regularity conditions. Extensive numerical results demonstrate the superiority of the proposed method over competing methods.
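The weight-assignment idea described above can be illustrated with a small sketch: candidate models are fit, their J-fold out-of-fold predictions are collected, and weights on the simplex are chosen to minimize the cross-validation squared error of the averaged prediction. This is not the authors' implementation; plain linear regressions stand in for the semiparametric additive target/source models of the talk, and all function names and the toy data (fit_ls, cv_predictions, cv_weights, etc.) are hypothetical.

```python
# Minimal sketch of model averaging with J-fold cross-validation weights.
# Assumption: linear least-squares candidates replace the talk's
# semiparametric additive models; names and data are illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def fit_ls(X, y):
    """Least-squares fit; returns the coefficient vector."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def cv_predictions(X, y, candidates, J=5):
    """J-fold out-of-fold predictions for each candidate column set."""
    n = len(y)
    folds = np.array_split(rng.permutation(n), J)
    P = np.zeros((n, len(candidates)))            # out-of-fold predictions
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        for k, cols in enumerate(candidates):
            beta = fit_ls(X[np.ix_(train, cols)], y[train])
            P[test, k] = X[np.ix_(test, cols)] @ beta
    return P

def cv_weights(P, y):
    """Simplex weights minimizing the CV squared error of the average."""
    K = P.shape[1]
    obj = lambda w: np.mean((y - P @ w) ** 2)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(obj, np.full(K, 1.0 / K), bounds=[(0.0, 1.0)] * K,
                   constraints=cons, method='SLSQP')
    return res.x

# Toy target data and three nested candidate specifications.
n, X = 200, rng.normal(size=(200, 3))
y = 1.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)
X1 = np.column_stack([np.ones(n), X])
candidates = [[0, 1], [0, 1, 2], [0, 1, 2, 3]]
P = cv_predictions(X1, y, candidates, J=5)
w = cv_weights(P, y)
print("CV-selected averaging weights:", np.round(w, 3))
```

In the transfer setting of the talk, each candidate would instead incorporate parameter information borrowed from a different source model, and the data-driven weights determine how much each source contributes to the target prediction.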

