2022-03-07
Abstract (translation):
We provide non-asymptotic excess risk guarantees for statistical learning in a setting where the population risk with respect to which we evaluate the target parameter depends on an unknown nuisance parameter that must be estimated from data. We analyze a two-stage sample-splitting meta-algorithm that takes as input two arbitrary estimation algorithms: one for the target parameter and one for the nuisance parameter. We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order. Our theorem is agnostic to the particular algorithms used for the target and the nuisance and only makes assumptions on their individual performance. This enables the use of a wealth of existing results from statistical learning and machine learning to give new guarantees for learning with a nuisance component. Moreover, by focusing on excess risk rather than parameter estimation, we can give guarantees under weaker assumptions than in previous works and accommodate settings in which the target parameter belongs to a complex nonparametric class. We provide conditions on the metric entropy of the nuisance and target classes under which oracle rates (rates of the same order as if we knew the nuisance parameter) are achieved. We also derive new rates for specific estimation algorithms such as variance-penalized empirical risk minimization, neural network estimation, and sparse high-dimensional linear model estimation. We highlight the applicability of our results in four settings of central importance: 1) heterogeneous treatment effect estimation, 2) offline policy optimization, 3) domain adaptation, and 4) learning with missing data.
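
The two-stage procedure described above can be pictured with a minimal sketch. Everything here (the names two_stage_meta, fit_nuisance, fit_target and their interfaces) is a hypothetical illustration, not the paper's code: it only assumes that stage one runs an arbitrary nuisance estimator on one half of the sample, and stage two runs an arbitrary target estimator on the other half with the nuisance estimate plugged in.

```python
import numpy as np

def two_stage_meta(data, fit_nuisance, fit_target, seed=0):
    """Sketch of a two-stage sample-splitting meta-algorithm.

    data         : array of observations (rows), assumed indexable by numpy fancy indexing
    fit_nuisance : callable, fold -> g_hat (any nuisance estimation algorithm)
    fit_target   : callable, (fold, g_hat) -> theta_hat, e.g. minimizing the
                   plug-in empirical risk over the target class
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    half = len(data) // 2
    fold1, fold2 = data[idx[:half]], data[idx[half:]]

    g_hat = fit_nuisance(fold1)           # stage 1: estimate the nuisance parameter
    theta_hat = fit_target(fold2, g_hat)  # stage 2: estimate the target with g_hat plugged in
    return theta_hat
```

When the population risk is Neyman orthogonal, the abstract's claim is that the error of g_hat enters the excess risk of theta_hat only at second order, so each stage can be analyzed with off-the-shelf statistical learning guarantees.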
---
English title:
《Orthogonal Statistical Learning》
---
Authors:
Dylan J. Foster and Vasilis Syrgkanis
---
Latest submission year:
2020
---
Classification:

Primary category: Mathematics
Secondary category: Statistics Theory
Category description: Applied, computational and theoretical statistics: e.g. statistical inference, regression, time series, multivariate analysis, data analysis, Markov chain Monte Carlo, design of experiments, case studies
--
Primary category: Computer Science
Secondary category: Machine Learning
Category description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on), including also robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
Primary category: Economics
Secondary category: Econometrics
Category description: Econometric Theory, Micro-Econometrics, Macro-Econometrics, Empirical Content of Economic Relations discovered via New Methods, Methodological Aspects of the Application of Statistical Inference to Economic Data.
--
Primary category: Statistics
Secondary category: Machine Learning
Category description: Covers machine learning papers (supervised, unsupervised, semi-supervised learning, graphical models, reinforcement learning, bandits, high dimensional inference, etc.) with a statistical or theoretical grounding
--
Primary category: Statistics
Secondary category: Statistics Theory
Category description: stat.TH is an alias for math.ST. Asymptotics, Bayesian Inference, Decision Theory, Estimation, Foundations, Inference, Testing.
--

---
English abstract:
  We provide non-asymptotic excess risk guarantees for statistical learning in a setting where the population risk with respect to which we evaluate the target parameter depends on an unknown nuisance parameter that must be estimated from data. We analyze a two-stage sample splitting meta-algorithm that takes as input two arbitrary estimation algorithms: one for the target parameter and one for the nuisance parameter. We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order. Our theorem is agnostic to the particular algorithms used for the target and nuisance and only makes an assumption on their individual performance. This enables the use of a plethora of existing results from statistical learning and machine learning to give new guarantees for learning with a nuisance component. Moreover, by focusing on excess risk rather than parameter estimation, we can give guarantees under weaker assumptions than in previous works and accommodate settings in which the target parameter belongs to a complex nonparametric class. We provide conditions on the metric entropy of the nuisance and target classes such that oracle rates---rates of the same order as if we knew the nuisance parameter---are achieved. We also derive new rates for specific estimation algorithms such as variance-penalized empirical risk minimization, neural network estimation and sparse high-dimensional linear model estimation. We highlight the applicability of our results in four settings of central importance: 1) heterogeneous treatment effect estimation, 2) offline policy optimization, 3) domain adaptation, and 4) learning with missing data.
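
As a concrete instance of the first application, heterogeneous treatment effect estimation, a commonly used Neyman-orthogonal square loss is the residual-on-residual ("R-learner"-style) loss, shown below as an illustration rather than as the paper's exact formulation. It treats the outcome regression m(X) = E[Y | X] and the propensity e(X) = P(T = 1 | X) as the nuisance parameter; the function names are hypothetical.

```python
import numpy as np

def orthogonal_hte_risk(theta, X, T, Y, m_hat, e_hat):
    """Plug-in empirical version of the orthogonal square loss
        E[(Y - m(X) - (T - e(X)) * theta(X))^2],
    where m_hat and e_hat are first-stage estimates of the nuisance
    functions and theta is a candidate treatment-effect function.
    theta, m_hat, e_hat are callables mapping covariates to predictions."""
    residual = Y - m_hat(X) - (T - e_hat(X)) * theta(X)
    return float(np.mean(residual ** 2))
```

Around the true nuisance functions, perturbing m_hat or e_hat changes this population risk only at second order, which is the Neyman orthogonality property the abstract relies on.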
---
PDF link:
https://arxiv.org/pdf/1901.09036