2022-03-19
Abstract (translation):
For the general problem of minimizing a convex function over a compact convex domain, we investigate a simple iterative approximation algorithm, based on the method of Frank & Wolfe (1956), that needs no projection steps to remain inside the optimization domain. Instead of a projection step, the linearized problem defined by the current subgradient is solved, which yields a step direction that naturally stays in the domain. Our framework generalizes the sparse greedy algorithm of Frank & Wolfe and its primal-dual analysis by Clarkson (2010) (as well as the low-rank SDP approach of Hazan (2008)) to arbitrary convex domains. We give a convergence proof guaranteeing an ε-small duality gap after O(1/ε) iterations. The method allows us to understand the sparsity of approximate solutions to any l1-regularized convex optimization problem (and to optimization over the simplex), expressed as a function of the approximation quality, and we obtain matching upper and lower bounds of Θ(1/ε) on the sparsity for l1-problems. The same bounds apply to low-rank semidefinite optimization with bounded trace, showing that rank O(1/ε) is also best possible there. As a further application, we obtain sparse matrices with O(1/ε) non-zero entries as ε-approximate solutions when optimizing any convex function over diagonally dominant symmetric matrices. We show that the proposed first-order method also applies to nuclear-norm and max-norm matrix optimization problems. For nuclear-norm-regularized optimization, such as matrix completion and low-rank recovery, we demonstrate the practical efficiency and scalability of our algorithm on large matrix problems such as the Netflix dataset. For general convex optimization over a bounded matrix max-norm, our algorithm is the first with a convergence guarantee.
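The projection-free step described in the abstract can be sketched in a few lines. Below is a minimal Python sketch for minimizing a convex function over the l1-ball, one of the domains the paper analyzes; the function name `frank_wolfe_l1` and the least-squares example are illustrative assumptions, not code from the paper:

```python
import numpy as np

def frank_wolfe_l1(grad_f, radius, dim, eps=1e-3, max_iter=10000):
    """Projection-free minimization of a convex f over the l1-ball
    {x : ||x||_1 <= radius}. grad_f(x) returns a (sub)gradient of f at x."""
    x = np.zeros(dim)                     # feasible starting point
    for k in range(max_iter):
        g = grad_f(x)
        # Linearized subproblem: min <g, s> over the l1-ball is attained
        # at a signed vertex, on a coordinate of maximal |g_i| -- no
        # projection is ever needed.
        i = int(np.argmax(np.abs(g)))
        s = np.zeros(dim)
        s[i] = -radius * np.sign(g[i])
        gap = g @ (x - s)                 # duality gap: certifies eps-optimality
        if gap <= eps:
            break
        step = 2.0 / (k + 2)              # standard step size, no line search
        x = (1.0 - step) * x + step * s   # convex combination stays feasible
    return x

# Illustrative use: least squares over the unit l1-ball.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x = frank_wolfe_l1(lambda x: 2.0 * A.T @ (A @ x - b), radius=1.0, dim=10)
```

Since the iterate starts at zero and each step touches one coordinate, after k iterations it has at most k non-zero entries, which is how the Θ(1/ε) sparsity bound in the abstract arises.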
---
English title:
《Convex Optimization without Projection Steps》
---
Author:
Martin Jaggi
---
Latest submission year:
2011
---
Subject classification:

Primary category: Mathematics
Secondary category: Optimization and Control
Description: Operations research, linear programming, control theory, systems theory, optimal control, game theory
--
Primary category: Computer Science
Secondary category: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
Primary category: Computer Science
Secondary category: Systems and Control
Description: cs.SY is an alias for eess.SY. This section includes theoretical and experimental research covering all facets of automatic control systems. The section is focused on methods of control system analysis and design using tools of modeling, simulation and optimization. Specific areas of research include nonlinear, distributed, adaptive, stochastic and robust control in addition to hybrid and discrete event systems. Application areas include automotive and aerospace control systems, network control, biological systems, multiagent and cooperative control, robotics, reinforcement learning, sensor networks, control of cyber-physical and energy-related systems, and control of computing systems.
--

---
English abstract (original):
For the general problem of minimizing a convex function over a compact convex domain, we will investigate a simple iterative approximation algorithm based on the method by Frank & Wolfe 1956, that does not need projection steps in order to stay inside the optimization domain. Instead of a projection step, the linearized problem defined by a current subgradient is solved, which gives a step direction that will naturally stay in the domain. Our framework generalizes the sparse greedy algorithm of Frank & Wolfe and its primal-dual analysis by Clarkson 2010 (and the low-rank SDP approach by Hazan 2008) to arbitrary convex domains. We give a convergence proof guaranteeing ε-small duality gap after O(1/ε) iterations.

The method allows us to understand the sparsity of approximate solutions for any l1-regularized convex optimization problem (and for optimization over the simplex), expressed as a function of the approximation quality. We obtain matching upper and lower bounds of Θ(1/ε) for the sparsity for l1-problems. The same bounds apply to low-rank semidefinite optimization with bounded trace, showing that rank O(1/ε) is best possible here as well. As another application, we obtain sparse matrices of O(1/ε) non-zero entries as ε-approximate solutions when optimizing any convex function over a class of diagonally dominant symmetric matrices.

We show that our proposed first-order method also applies to nuclear norm and max-norm matrix optimization problems. For nuclear norm regularized optimization, such as matrix completion and low-rank recovery, we demonstrate the practical efficiency and scalability of our algorithm for large matrix problems, as e.g. the Netflix dataset. For general convex optimization over bounded matrix max-norm, our algorithm is the first with a convergence guarantee, to the best of our knowledge.
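The bounded-trace semidefinite case mentioned above admits an equally short sketch: over the spectrahedron {X ⪰ 0, tr(X) = 1}, the linearized subproblem reduces to an extreme eigenvector computation, so each iteration adds at most one to the rank, mirroring the rank-O(1/ε) bound. The function `frank_wolfe_spectrahedron` and the nearest-point example below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def frank_wolfe_spectrahedron(grad_f, n, eps=1e-3, max_iter=5000):
    """Projection-free minimization of a convex f over the spectrahedron
    {X : X symmetric PSD, tr(X) = 1}. The linearized subproblem
    min <grad_f(X), S> over this domain is solved by S = v v^T, where v
    is a unit eigenvector for the smallest eigenvalue of the gradient."""
    X = np.zeros((n, n))
    X[0, 0] = 1.0                          # feasible rank-1 start
    for k in range(max_iter):
        G = grad_f(X)
        w, V = np.linalg.eigh(G)           # eigenvalues in ascending order
        v = V[:, 0]                        # extreme eigenvector of G
        S = np.outer(v, v)                 # rank-1 vertex of the domain
        gap = float(np.sum(G * (X - S)))   # duality gap
        if gap <= eps:
            break
        step = 2.0 / (k + 2)
        X = (1.0 - step) * X + step * S    # rank grows by at most 1 per step
    return X

# Illustrative use: nearest spectrahedron point to a symmetric matrix M,
# with f(X) = ||X - M||_F^2 and gradient 2 (X - M).
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
M = (B + B.T) / 2.0
X = frank_wolfe_spectrahedron(lambda X: 2.0 * (X - M), n=6)
```

Every update is a convex combination of trace-1 PSD matrices, so the iterates stay feasible with no projection, and after k steps the solution is a sum of at most k + 1 rank-1 terms.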
---
PDF link:
https://arxiv.org/pdf/1108.1170