Abstract (translated):
Evolutionary learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as the final result. Meanwhile, ensemble learning, one of the most effective approaches in supervised machine learning over the last decade, proceeds by building a population of diverse classifiers. Ensemble learning based on evolutionary computation has therefore received increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper makes two contributions. First, a new fitness function, inspired by co-evolution, is proposed to enforce classifier diversity. Building on this, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble either from the final population only (off-line) or incrementally along the evolution (on-line). Experiments on a set of benchmark problems show that the off-line variant outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting, and generates smaller classifier ensembles.
---
Title:
Ensemble Learning for Free with Evolutionary Algorithms?
---
Authors:
Christian Gagné (INFORMATIQUE WGZ INC.), Michèle Sebag (INRIA Futurs), Marc Schoenauer (INRIA Futurs), Marco Tomassini (ISI)
---
Latest submission year:
2007
---
Classification:
Primary category: Computer Science
Secondary category: Artificial Intelligence
分类描述:Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
---
Abstract:
Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as the final result. Meanwhile, Ensemble Learning, one of the most effective approaches in supervised Machine Learning over the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-line) or incrementally along evolution (On-line). Experiments on a set of benchmark problems show that Off-line outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting, and generates smaller classifier ensembles.
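To make the margin-based extraction step concrete, the following is a minimal illustrative sketch, not the paper's exact EEL algorithm: it assumes a majority-vote ensemble, defines the classification margin of an example as (votes for the true label minus the largest vote count for any other label) divided by ensemble size, and greedily extracts from a final population only those classifiers that improve the mean margin (an "off-line"-style extraction). All names and the greedy scheme are assumptions for illustration.

```python
# Hypothetical sketch of margin-based off-line ensemble extraction.
# predictions[c][i] holds classifier c's predicted label for example i.

def vote_margin(ensemble, predictions, labels):
    """Mean classification margin of a majority-vote ensemble, in [-1, 1]."""
    margins = []
    for i, y in enumerate(labels):
        votes = {}
        for c in ensemble:
            p = predictions[c][i]
            votes[p] = votes.get(p, 0) + 1
        correct = votes.get(y, 0)
        # Strongest competing (wrong) label, 0 if every vote was correct.
        wrong = max((v for p, v in votes.items() if p != y), default=0)
        margins.append((correct - wrong) / len(ensemble))
    return sum(margins) / len(margins)

def extract_ensemble(population, predictions, labels):
    """Greedily grow an ensemble from the final population, keeping a
    classifier only if it improves the ensemble's mean margin."""
    # Seed with the single classifier of highest margin.
    ensemble = [max(population,
                    key=lambda c: vote_margin([c], predictions, labels))]
    for c in population:
        if c in ensemble:
            continue
        if (vote_margin(ensemble + [c], predictions, labels)
                > vote_margin(ensemble, predictions, labels)):
            ensemble.append(c)
    return ensemble
```

An "on-line" variant would call the same margin-based acceptance test on each generation's population during evolution, rather than once on the final one.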
---
PDF link:
https://arxiv.org/pdf/0704.3905