Southwestern University of Finance and Economics, Center of Statistical Research Seminar Series (No. 423)

Associate Professor Linjun Zhang, Rutgers University: Evaluating LLMs When They Do Not Know the Answer: Statistical Evaluation of Mathematical Reasoning via Comparative Signals


Topic: Evaluating LLMs When They Do Not Know the Answer: Statistical Evaluation of Mathematical Reasoning via Comparative Signals

Speaker: Associate Professor Linjun Zhang, Rutgers University

Host: Professor Huazhen Lin, School of Statistics and Data Science

Time: 10:00-11:00 a.m., Friday, April 17, 2026

Venue: Tencent Meeting, ID: 191-923-586

Organizers: School of Statistics and Data Science, Center of Statistical Research, and the Office of Research Affairs


About the speaker:


Linjun Zhang is an Associate Professor in the Department of Statistics at Rutgers University. He obtained his Ph.D. in Statistics at the Wharton School, the University of Pennsylvania, in 2019, and upon graduation received the J. Parker Bursk Memorial Prize and the Donald S. Murray Prize for excellence in research and teaching, respectively. He also received the NSF CAREER Award, the Rutgers Presidential Teaching Award in 2024, and the Warren I. Susman Award for Excellence in Teaching in 2025. His current research interests include statistical foundations of large language models, algorithmic fairness, privacy-preserving data analysis, and deep learning theory.




Abstract:

Evaluating mathematical reasoning in LLMs is constrained by limited benchmark sizes and inherent model stochasticity, yielding high-variance accuracy estimates and unstable rankings across platforms. On difficult problems, an LLM may fail to produce a correct final answer, yet still provide reliable pairwise comparison signals indicating which of two candidate solutions is better. We leverage this observation to design a statistically efficient evaluation framework that combines standard labeled outcomes with pairwise comparison signals obtained by having models judge auxiliary reasoning chains. Treating these comparison signals as control variates, we develop a semiparametric estimator based on the efficient influence function (EIF) for the setting where auxiliary reasoning chains are observed. This yields a one-step estimator that achieves the semiparametric efficiency bound, guarantees strict variance reduction over naive sample averaging, and admits asymptotic normality for principled uncertainty quantification. Across simulations, our one-step estimator substantially improves ranking accuracy, with gains increasing as model output noise grows. Experiments on GPQA Diamond, AIME 2025, and GSM8K further demonstrate more precise performance estimation and more reliable model rankings, especially in small-sample regimes where conventional evaluation is unstable.
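To illustrate the control-variate idea the abstract builds on, the sketch below simulates a labeled correctness outcome `y` alongside a correlated auxiliary signal `x` (standing in for the pairwise comparison judgments) and compares the naive sample average with a control-variate-corrected estimator. This is a generic Monte Carlo illustration under an assumed data-generating process, not the talk's EIF-based one-step estimator; the variables and the known control-variate mean are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n=200):
    """Simulate n problems: y is the labeled correctness of the final answer,
    x is a correlated comparison signal whose mean (0 here) is known."""
    latent = rng.normal(size=n)                        # per-problem difficulty
    y = (latent + rng.normal(scale=1.0, size=n) > 0).astype(float)
    x = latent + rng.normal(scale=0.5, size=n)         # auxiliary signal
    naive = y.mean()                                   # plain sample average
    # Control-variate correction with the optimal coefficient
    # beta = Cov(y, x) / Var(x), which minimizes the estimator's variance.
    beta = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)
    cv = naive - beta * (x.mean() - 0.0)
    return naive, cv

results = np.array([one_trial() for _ in range(2000)])
var_naive, var_cv = results.var(axis=0)
print(f"variance: naive={var_naive:.5f}, control variate={var_cv:.5f}")
```

With an optimally chosen coefficient, the control-variate estimator's variance shrinks by roughly the factor 1 - rho^2, where rho is the correlation between the labeled outcome and the auxiliary signal, which is why gains grow as the comparison signal becomes more informative relative to noisy final answers.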

