Abstract (translation):
Conformal prediction uses past experience to determine precise levels of confidence in new predictions. Given an error probability $\epsilon$ and a method that makes a prediction $\hat{y}$ of a label $y$, it produces a set of labels, typically containing $\hat{y}$, that also contains $y$ with probability $1-\epsilon$. Conformal prediction can be applied to any method for producing $\hat{y}$: nearest-neighbor methods, support-vector machines, ridge regression, and so on. It is designed for the on-line setting, in which labels are predicted successively and each label is revealed before the next is predicted. The most novel and valuable feature of conformal prediction is that if successive examples are sampled independently from the same distribution, then the successive predictions will be correct $1-\epsilon$ of the time, even though they are based on an accumulating dataset rather than on independent datasets. Besides the model in which successive examples are sampled independently, other on-line compression models can also use conformal prediction; the widely used Gaussian linear model is one of these. This tutorial presents a self-contained account of the theory of conformal prediction and works through several numerical examples. A more comprehensive treatment of the subject is given in "Algorithmic Learning in a Random World" by Vladimir Vovk, Alex Gammerman, and Glenn Shafer (Springer, 2005).
---
Title (English):
A tutorial on conformal prediction
---
Authors:
Glenn Shafer and Vladimir Vovk
---
Year of latest submission:
2007
---
Classification:
Primary category: Computer Science
Secondary category: Machine Learning (cs.LG)
Category description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on), including robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
Primary category: Statistics
Secondary category: Machine Learning
Category description: Covers machine learning papers (supervised, unsupervised, semi-supervised learning, graphical models, reinforcement learning, bandits, high-dimensional inference, etc.) with a statistical or theoretical grounding.
---
Abstract (English):
Conformal prediction uses past experience to determine precise levels of confidence in new predictions. Given an error probability $\epsilon$, together with a method that makes a prediction $\hat{y}$ of a label $y$, it produces a set of labels, typically containing $\hat{y}$, that also contains $y$ with probability $1-\epsilon$. Conformal prediction can be applied to any method for producing $\hat{y}$: a nearest-neighbor method, a support-vector machine, ridge regression, etc. Conformal prediction is designed for an on-line setting in which labels are predicted successively, each one being revealed before the next is predicted. The most novel and valuable feature of conformal prediction is that if the successive examples are sampled independently from the same distribution, then the successive predictions will be right $1-\epsilon$ of the time, even though they are based on an accumulating dataset rather than on independent datasets. In addition to the model under which successive examples are sampled independently, other on-line compression models can also use conformal prediction. The widely used Gaussian linear model is one of these. This tutorial presents a self-contained account of the theory of conformal prediction and works through several numerical examples. A more comprehensive treatment of the topic is provided in "Algorithmic Learning in a Random World", by Vladimir Vovk, Alex Gammerman, and Glenn Shafer (Springer, 2005).
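As an illustrative sketch only (not code from the paper), the procedure described in the abstract can be written in a few lines of Python. For each candidate label, the new object is tentatively given that label, every example in the augmented dataset receives a nonconformity score, and the label is kept if its p-value exceeds $\epsilon$. The nearest-neighbor nonconformity measure below (distance to the nearest same-label example divided by distance to the nearest different-label example) is in the spirit of the tutorial's running examples, but the function names and data here are hypothetical.

```python
import math

def nonconformity(examples, i):
    """Nearest-neighbor nonconformity score for example i in `examples`
    (a list of (x, label) pairs with scalar x): distance to the nearest
    same-label example divided by distance to the nearest different-label
    example. Larger scores mean the example looks stranger."""
    x_i, y_i = examples[i]
    d_same = min((abs(x_i - x) for j, (x, y) in enumerate(examples)
                  if j != i and y == y_i), default=math.inf)
    d_diff = min((abs(x_i - x) for j, (x, y) in enumerate(examples)
                  if j != i and y != y_i), default=math.inf)
    if d_diff == 0 or d_same == math.inf:
        return math.inf
    return d_same / d_diff

def conformal_set(train, x_new, labels, epsilon):
    """Return the conformal prediction set for x_new: all labels y whose
    p-value exceeds epsilon when (x_new, y) is added to the training bag."""
    prediction = set()
    for y in labels:
        augmented = train + [(x_new, y)]
        scores = [nonconformity(augmented, i) for i in range(len(augmented))]
        a_new = scores[-1]
        # p-value: fraction of examples at least as nonconforming as the new one
        p_value = sum(1 for a in scores if a >= a_new) / len(augmented)
        if p_value > epsilon:
            prediction.add(y)
    return prediction

# Toy usage: two well-separated clusters of scalar observations.
train = [(1.0, 'a'), (1.1, 'a'), (5.0, 'b'), (5.2, 'b')]
print(conformal_set(train, 1.05, ['a', 'b'], epsilon=0.2))
```

In the on-line setting of the abstract, one would call `conformal_set`, observe the true label, append the new example to `train`, and repeat; the validity guarantee (coverage $1-\epsilon$) holds under exchangeability despite the accumulating dataset.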
---
PDF link:
https://arxiv.org/pdf/0706.3188