Posted: 2022-03-19
Abstract (translated):
We propose Seraph (SEmi-supervised metRic leArning Paradigm with Hyper-sparsity), a general information-theoretic approach to metric learning that does not rely on the manifold assumption. Given a probability parameterized by a Mahalanobis distance, we follow entropy regularization and maximize the entropy of that probability on labeled data while minimizing it on unlabeled data, so that the supervised and unsupervised parts are integrated in a natural and meaningful way. In addition, Seraph is regularized by encouraging a low-rank projection induced by the metric. Its optimization problem is solved efficiently and stably by an EM-like scheme that combines an analytical E-step with a convex M-step. Experiments show that Seraph compares favorably with many well-known global and local metric learning methods.
---
English title:
《SERAPH: Semi-supervised Metric Learning Paradigm with Hyper Sparsity》
---
Authors:
Gang Niu, Bo Dai, Makoto Yamada and Masashi Sugiyama
---
Latest submission year:
2012
---
Classification:

Primary category: Statistics
Secondary category: Machine Learning
Category description: Covers machine learning papers (supervised, unsupervised, semi-supervised learning, graphical models, reinforcement learning, bandits, high-dimensional inference, etc.) with a statistical or theoretical grounding.
--
Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--

---
English abstract:
  We propose a general information-theoretic approach called Seraph (SEmi-supervised metRic leArning Paradigm with Hyper-sparsity) for metric learning that does not rely upon the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize the entropy of that probability on labeled data and minimize it on unlabeled data following entropy regularization, which allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Furthermore, Seraph is regularized by encouraging a low-rank projection induced from the metric. The optimization of Seraph is solved efficiently and stably by an EM-like scheme with the analytical E-Step and convex M-Step. Experiments demonstrate that Seraph compares favorably with many well-known global and local metric learning methods.
---
PDF link:
https://arxiv.org/pdf/1105.0167
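Since the abstract is compact, here is a minimal illustrative sketch in Python of the quantities it mentions: a Mahalanobis distance parameterized by a positive semi-definite matrix, a pairwise probability built from it, and the Bernoulli entropy that the entropy-regularization terms act on. The sigmoid parameterization, the threshold eta, and all function names are assumptions made for illustration, not the authors' exact formulation; see the PDF above for the actual objective and the EM-like solver.

import numpy as np

def mahalanobis_sq(x_i, x_j, A):
    """Squared Mahalanobis distance d_A(x_i, x_j) = (x_i - x_j)^T A (x_i - x_j),
    where A is assumed positive semi-definite."""
    d = x_i - x_j
    return float(d @ A @ d)

def pair_probability(x_i, x_j, A, eta=1.0):
    """Assumed sigmoid model (illustrative only): probability that x_i and x_j
    share a label, decreasing in their Mahalanobis distance; eta is a threshold."""
    return 1.0 / (1.0 + np.exp(mahalanobis_sq(x_i, x_j, A) - eta))

def bernoulli_entropy(p, eps=1e-12):
    """Entropy of a Bernoulli variable with success probability p."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

# Toy usage: labeled pairs feed the supervised term, while on unlabeled pairs
# entropy regularization pushes the predicted probabilities away from 0.5,
# i.e. toward confident same/different decisions.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
A = np.eye(3)  # start from the Euclidean metric
p_unlabeled = pair_probability(X[0], X[1], A)
print("p(same label) =", p_unlabeled)
print("entropy to be minimized on unlabeled pairs =", bernoulli_entropy(p_unlabeled))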
