Title:
Teraflop-scale Incremental Machine Learning
---
Author:
Eray Özkural
---
Latest submission year:
2011
---
Classification:
Primary: Computer Science
Secondary: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
---
Abstract:
We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on a stochastic context-free grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution over programs. The update algorithms include adjusting production probabilities, re-using previous solutions, learning programming idioms and discovery of frequent subprograms. Experiments with two training sequences demonstrate that our approach to incremental learning is effective.
---
PDF link:
https://arxiv.org/pdf/1103.1003