2022-03-21
Translated abstract:
We study a class of sparse learning problems in a high-dimensional feature space, regularized by a structured sparsity-inducing norm that incorporates prior knowledge of the group structure of the features. Due to the non-smoothness and non-separability of the regularization term, such problems often pose a considerable challenge to optimization algorithms. This paper focuses on two commonly adopted sparsity-inducing regularizers: the overlapping group Lasso penalty ($l_1/l_2$-norm) and the $l_1/l_\infty$-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be solved efficiently. As the core building block of this framework, we develop new algorithms using an alternating partial-linearization/splitting technique, and we prove that the accelerated versions of these algorithms require $O(\frac{1}{\sqrt{\epsilon}})$ iterations to obtain an $\epsilon$-optimal solution. To demonstrate the efficiency and relevance of our algorithms, we test them on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms.
---
English title:
《Structured Sparsity via Alternating Direction Methods》
---
Authors:
Zhiwei Qin and Donald Goldfarb
---
Latest submission year:
2011
---
Classification:

Primary category: Mathematics
Secondary category: Optimization and Control
Description: Operations research, linear programming, control theory, systems theory, optimal control, game theory
--
Primary category: Computer Science
Secondary category: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
Primary category: Statistics
Secondary category: Machine Learning
Description: Covers machine learning papers (supervised, unsupervised, semi-supervised learning, graphical models, reinforcement learning, bandits, high dimensional inference, etc.) with a statistical or theoretical grounding
--

---
English abstract:
  We consider a class of sparse learning problems in high dimensional feature space regularized by a structured sparsity-inducing norm which incorporates prior knowledge of the group structure of the features. Such problems often pose a considerable challenge to optimization algorithms due to the non-smoothness and non-separability of the regularization term. In this paper, we focus on two commonly adopted sparsity-inducing regularization terms, the overlapping Group Lasso penalty $l_1/l_2$-norm and the $l_1/l_\infty$-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As the core building-block of this framework, we develop new algorithms using an alternating partial-linearization/splitting technique, and we prove that the accelerated versions of these algorithms require $O(\frac{1}{\sqrt{\epsilon}})$ iterations to obtain an $\epsilon$-optimal solution. To demonstrate the efficiency and relevance of our algorithms, we test them on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms.
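For readers who want a concrete feel for the alternating-direction idea, the following is a minimal NumPy sketch of the overlapping group Lasso objective $\frac{1}{2}\|Ax-b\|^2 + \lambda\sum_g \|x_g\|_2$ solved by plain ADMM variable splitting. This is not the paper's accelerated partial-linearization algorithm, only an illustration of the splitting trick it builds on: each (possibly overlapping) group gets its own copy $z_g$, the $x$-update is a linear solve, and the $z$-update is a group-wise block soft-threshold. All function names and parameters here are illustrative.

```python
import numpy as np

def block_soft_threshold(v, t):
    """Prox of t * ||.||_2: shrink the whole block toward zero."""
    nrm = np.linalg.norm(v)
    if nrm <= t:
        return np.zeros_like(v)
    return (1.0 - t / nrm) * v

def overlapping_group_lasso_admm(A, b, groups, lam, rho=1.0, n_iter=200):
    """ADMM sketch for min_x 0.5*||Ax-b||^2 + lam * sum_g ||x[g]||_2
    with (possibly overlapping) index groups, via per-group copies z_g."""
    m, n = A.shape
    # Count how many groups each coordinate belongs to (overlap multiplicity).
    counts = np.zeros(n)
    for g in groups:
        counts[g] += 1.0
    z = [np.zeros(len(g)) for g in groups]
    u = [np.zeros(len(g)) for g in groups]  # scaled dual variables
    AtA = A.T @ A
    Atb = A.T @ b
    M = AtA + rho * np.diag(counts)
    for _ in range(n_iter):
        # x-update: least squares coupled quadratically to the group copies.
        rhs = Atb.copy()
        for g, zg, ug in zip(groups, z, u):
            rhs[g] += rho * (zg - ug)
        x = np.linalg.solve(M, rhs)
        for i, g in enumerate(groups):
            # z-update: block soft-thresholding (prox of the l2 norm).
            z[i] = block_soft_threshold(x[g] + u[i], lam / rho)
            # u-update: dual ascent on the constraint x[g] = z_g.
            u[i] += x[g] - z[i]
    return x

if __name__ == "__main__":
    # Toy problem: two overlapping groups sharing coordinate 2.
    A = np.eye(6)
    b = np.array([3.0, 3.0, 0.1, 0.1, 0.0, 0.0])
    groups = [np.array([0, 1, 2]), np.array([2, 3, 4, 5])]
    x = overlapping_group_lasso_admm(A, b, groups, lam=0.5)
```

Handling the $l_1/l_\infty$-norm in the same framework would only change the $z$-update, whose prox involves a projection onto an $l_1$ ball rather than block soft-thresholding.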
---
PDF link:
https://arxiv.org/pdf/1105.0728