
Ph.D. student Ganghua Wang, University of Minnesota, Twin Cities: Pruning deep neural networks from a sparsity perspective


Topic: Pruning deep neural networks from a sparsity perspective


Speaker: Ganghua Wang, Ph.D. student, University of Minnesota, Twin Cities

Host: Professor Huazhen Lin, School of Statistics

Time: 4:00-5:00 p.m., Monday, June 19, 2023

Venue: Conference Room 408, Hongyuan Building, Liulin Campus

Organizers: Center of Statistical Research, School of Statistics, and the Office of Scientific Research

About the speaker:

Ganghua Wang received the B.S. degree from Peking University, Beijing, China, in 2019. Since 2019, he has been a Ph.D. student with the School of Statistics, University of Minnesota, Twin Cities, MN, USA. His research interests include the foundations of machine learning theory and trustworthy machine learning.

A fourth-year student, he is advised by Professors Jie Ding and Yuhong Yang, and he received his B.S. from the School of Mathematical Sciences at Peking University. His work covers the theoretical foundations of deep neural networks, data privacy, and the stability, reliability, and fairness of machine learning models.


Abstract

Recently, deep network pruning has attracted significant attention as a way to enable the rapid deployment of AI on small devices with computation and memory constraints. Many deep pruning algorithms have been proposed with impressive empirical success. However, a theoretical understanding of model compression is still limited. One problem is to understand whether a network is more compressible than another of the same structure. Another problem is to quantify how much one can prune a network with a theoretically guaranteed accuracy degradation. This talk addresses these two fundamental problems by using the sparsity-sensitive ℓq-norm (0 < q < 1) to characterize compressibility and provides a relationship between the soft sparsity of the network weights and the degree of compression with a controlled accuracy degradation bound. Next, we propose the PQ Index (PQI) to measure the potential compressibility of deep neural networks and use it to develop a Sparsity-informed Adaptive Pruning (SAP) algorithm. Our experiments demonstrate that the proposed adaptive pruning algorithm, with a proper choice of hyper-parameters, is superior to iterative pruning algorithms such as lottery-ticket-based pruning methods, in terms of both compression efficiency and robustness.
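
To make the PQ Index concrete, below is a minimal NumPy sketch of one plausible form of such a measure. This is an illustration, not the speaker's implementation: the closed form I(w) = 1 - d^(1/q - 1/p) * ||w||_p / ||w||_q for 0 < p < q, and the example values p = 0.5 and q = 1.0, are assumptions consistent with the quantities named in the abstract.

import numpy as np

def pq_index(w, p=0.5, q=1.0, eps=1e-12):
    # PQ-Index-style sparsity measure (assumed form, for 0 < p < q):
    # it is 0 for a perfectly uniform vector and approaches 1 as all
    # magnitude concentrates on one coordinate, i.e. larger values
    # suggest a more compressible set of weights.
    w = np.abs(np.ravel(w)) + eps            # magnitudes; eps guards against an all-zero input
    d = w.size
    norm_p = np.sum(w ** p) ** (1.0 / p)     # sparsity-sensitive quasi-norm (p < 1)
    norm_q = np.sum(w ** q) ** (1.0 / q)
    return 1.0 - d ** (1.0 / q - 1.0 / p) * norm_p / norm_q

# Sanity checks at the two extremes:
uniform = np.ones(1000)                      # maximally dense -> index near 0
one_hot = np.zeros(1000)
one_hot[0] = 1.0                             # maximally sparse -> index near 1
print(pq_index(uniform), pq_index(one_hot))

In an adaptive pruning loop of the kind the abstract describes, a larger index for a layer would justify a more aggressive pruning ratio for that layer; the precise rule used by SAP is given in the talk.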

