2022-03-05
Abstract (translated):
This paper introduces the framework of kernel machines with two layers, generalizing classical kernel methods. The new learning methodology provides a formal connection between multi-layer computational architectures and the theme of kernel learning in standard regularization methods. First, a representer theorem for two-layer networks is presented, showing that finite linear combinations of kernels on each layer are the optimal architecture whenever the corresponding functions solve suitable variational problems in reproducing kernel Hilbert spaces (RKHS). The input-output map expressed by these architectures is shown to be equivalent to a suitable single-layer kernel machine in which the kernel function is also learned from the data. In recent years, so-called multiple kernel learning methods have attracted considerable attention in the machine learning literature; this paper shows that they are specific cases of two-layer kernel machines in which the second layer is linear. Finally, a simple and effective multiple kernel learning method, RLS2 (regularized least squares with two layers), is introduced, and its performance on several learning problems is analyzed in depth. An open-source MATLAB toolbox with a graphical user interface is available for training and validating RLS2 models.
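For readers who want the structure in symbols, here is a sketch of the linear-second-layer case described above, with notation chosen for this post rather than taken from the paper:

f(x) = \sum_{j=1}^{m} d_j \, g_j(x), \qquad g_j(x) = \sum_{i=1}^{n} c_i \, K_j(x, x_i),

so that

f(x) = \sum_{i=1}^{n} c_i \sum_{j=1}^{m} d_j \, K_j(x, x_i),

i.e., a single-layer kernel machine whose kernel \sum_{j} d_j K_j is itself learned from the data, which is exactly the multiple kernel learning setting mentioned in the abstract.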
---
English title:
Kernel machines with two layers and multiple kernel learning
---
Author:
Francesco Dinuzzo
---
Latest submission year:
2010
---
Classification:

Primary category: Computer Science
Secondary category: Machine Learning (cs.LG)
Category description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on), including robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
---
English abstract:
In this paper, the framework of kernel machines with two layers is introduced, generalizing classical kernel methods. The new learning methodology provides a formal connection between computational architectures with multiple layers and the theme of kernel learning in standard regularization methods. First, a representer theorem for two-layer networks is presented, showing that finite linear combinations of kernels on each layer are optimal architectures whenever the corresponding functions solve suitable variational problems in reproducing kernel Hilbert spaces (RKHS). The input-output map expressed by these architectures turns out to be equivalent to a suitable single-layer kernel machine in which the kernel function is also learned from the data. Recently, so-called multiple kernel learning methods have attracted considerable attention in the machine learning literature. In this paper, multiple kernel learning methods are shown to be specific cases of kernel machines with two layers in which the second layer is linear. Finally, a simple and effective multiple kernel learning method called RLS2 (regularized least squares with two layers) is introduced, and its performance on several learning problems is extensively analyzed. An open source MATLAB toolbox to train and validate RLS2 models with a graphical user interface is available.
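The abstract states that multiple kernel learning is the special case of a two-layer kernel machine whose second layer is linear, and that RLS2 fits such a model with regularized least squares. Below is a minimal Python sketch of that idea under assumptions of my own (alternating regularized least squares for the first-layer coefficients and nonnegative least squares for the kernel weights); it is an illustration of the setting, not the paper's RLS2 algorithm or its MATLAB toolbox.

import numpy as np
from scipy.optimize import nnls

def fit_two_layer_rls(kernels, y, lam=0.1, n_iter=10):
    # Alternating scheme for a two-layer kernel machine whose second
    # layer is linear with nonnegative weights (an MKL-style toy
    # illustration, not the paper's exact RLS2 algorithm).
    n = y.shape[0]
    m = len(kernels)
    d = np.ones(m) / m          # second-layer kernel weights
    c = np.zeros(n)             # first-layer coefficients
    for _ in range(n_iter):
        # With d fixed, the model collapses to a single-layer kernel
        # machine with kernel K = sum_j d_j K_j: regularized least squares.
        K = sum(dj * Kj for dj, Kj in zip(d, kernels))
        c = np.linalg.solve(K + lam * np.eye(n), y)
        # With c fixed, each kernel yields the "feature" z_j = K_j c;
        # fit nonnegative linear weights d over these features.
        Z = np.column_stack([Kj @ c for Kj in kernels])
        d, _ = nnls(Z, y)
    return c, d

def gaussian_kernel(X, width):
    # Pairwise Gaussian (RBF) kernel matrix for a 2-D data array X.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * width ** 2))

# Toy usage: learn which of two candidate kernel widths fits the data.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
kernels = [gaussian_kernel(X, 0.5), gaussian_kernel(X, 2.0)]
c, d = fit_two_layer_rls(kernels, y)
print("learned kernel weights:", d)

In this toy scheme the learned weights d play the role of the linear second layer, while c parameterizes the first layer; the actual RLS2 formulation and its solver are described in the paper linked below.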
---
PDF link:
https://arxiv.org/pdf/1001.2709