
Professor Xiao Wang, Purdue University: Inferential Wasserstein Generative Adversarial Networks

Guanghua Forum: Distinguished Figures and Entrepreneurs Series, No. 5776

Inferential Wasserstein Generative Adversarial Networks

Speaker: Professor Xiao Wang, Purdue University

Host: Professor Huazhen Lin, School of Statistics

Time: Monday, May 24, 2021, 10:00-11:00 a.m.

Platform and Meeting ID: Tencent Meeting, 386 661 287

Organizers: Center of Statistical Research, School of Statistics, and the Office of Research Affairs

About the Speaker:

Dr. Xiao Wang is Professor of Statistics at Purdue University. His research interests focus on machine learning, deep learning, nonparametric statistics, and functional data analysis. Dr. Wang has published over fifty peer-reviewed papers in journals including AOS, JASA, and Biometrika, and at top conferences such as NeurIPS, ICLR, AAAI, IJCAI, and AISTATS. He is an elected Fellow of the Institute of Mathematical Statistics (IMS) and currently serves as an Associate Editor of JASA, Technometrics, and Lifetime Data Analysis.



Abstract:

Generative Adversarial Networks (GANs) have been impactful on many problems and applications but suffer from unstable training. The Wasserstein GAN (WGAN) leverages the Wasserstein distance to avoid the pitfalls of the minimax two-player training of GANs, but it has other defects, such as mode collapse and the lack of a metric to detect convergence. We introduce a novel inferential Wasserstein GAN (iWGAN) model, a principled framework that fuses auto-encoders and WGANs. The iWGAN model jointly learns an encoder network and a generator network, motivated by the iterative primal-dual optimization process. The encoder network maps the observed samples to the latent space, and the generator network maps samples from the latent space to the data space. We establish a generalization error bound for iWGANs to theoretically justify their performance. We further provide a rigorous probabilistic interpretation of our model under the framework of maximum likelihood estimation. The iWGAN, with a clear stopping criterion, has many advantages over other autoencoder GANs. Empirical experiments show that the iWGAN greatly mitigates the symptoms of mode collapse, speeds up convergence, and provides a quality-check measurement for each individual sample. We illustrate the ability of iWGANs by obtaining competitive and stable performance on benchmark datasets.
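As a rough illustration of the autoencoding structure described in the abstract (and not the authors' implementation), the sketch below pairs a toy affine encoder Q with a toy affine generator G in one dimension, trains them jointly by plain gradient descent on reconstruction error, and uses per-sample reconstruction error as the kind of individual quality check the abstract mentions. A real iWGAN uses deep networks and a Wasserstein primal-dual objective; all names and parameters here are hypothetical.

```python
# Minimal sketch of the encoder/generator pair in an autoencoder GAN.
# Q: data space -> latent space; G: latent space -> data space.
# Both maps are 1-D affine functions; training minimizes the mean
# squared reconstruction error |G(Q(x)) - x|^2 (the Wasserstein dual
# critic of the actual iWGAN is omitted).

def encoder(x, a, b):
    """Q: map an observed sample x to a latent code z."""
    return a * x + b

def generator(z, c, d):
    """G: map a latent code z back to the data space."""
    return c * z + d

def train(data, steps=5000, lr=0.01):
    """Jointly fit Q and G by full-batch gradient descent."""
    a, b, c, d = 0.5, 0.1, 0.5, 0.0   # arbitrary non-identity init
    n = len(data)
    for _ in range(steps):
        ga = gb = gc = gd = 0.0
        for x in data:
            z = encoder(x, a, b)
            err = generator(z, c, d) - x
            # gradients of 0.5 * err^2 w.r.t. each parameter
            ga += err * c * x
            gb += err * c
            gc += err * z
            gd += err
        a -= lr * ga / n
        b -= lr * gb / n
        c -= lr * gc / n
        d -= lr * gd / n
    return a, b, c, d

def quality_score(x, params):
    """Per-sample reconstruction error, usable as a quality check:
    samples the model reconstructs poorly get a large score."""
    a, b, c, d = params
    return abs(generator(encoder(x, a, b), c, d) - x)
```

After training on a small dataset, `quality_score` is near zero for samples the learned pair reconstructs well, which mirrors how the iWGAN's duality gap yields a convergence signal and a per-sample quality measure.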



