Translated abstract:
We introduce the Reduced-Rank Hidden Markov Model (RR-HMM), a generalization of HMMs that can model smooth state evolution as in Linear Dynamical Systems (LDSs) as well as non-log-concave predictive distributions as in continuous-observation HMMs. RR-HMMs assume an m-dimensional latent state and n discrete observations, with a transition matrix of rank k <= m. This implies that the dynamics evolve in a k-dimensional subspace, while the shape of the set of predictive distributions is determined by m. The latent state belief is represented with a k-dimensional state vector, and inference is carried out entirely in R^k, making RR-HMMs as computationally efficient as k-state HMMs yet more expressive. To learn RR-HMMs, we relax the assumptions of a recently proposed spectral learning algorithm for HMMs (Hsu, Kakade and Zhang 2009) and apply it to learn k-dimensional observable representations of rank-k RR-HMMs. The algorithm is consistent and free of local optima, and we extend its performance guarantees to the RR-HMM case. We show how this algorithm can be combined with a kernel density estimator to efficiently model high-dimensional multivariate continuous data. We also relax the assumption that a single observation suffices to disambiguate state, and extend the algorithm accordingly. Experiments on synthetic data and a toy video, as well as on a difficult robot vision modeling problem, yield accurate models that compare favorably with standard alternatives in simulation quality and prediction capability.
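The efficiency claim is concrete enough to sketch. Assuming the column-stochastic transition matrix factors as T = RS with R of size m x k and S of size k x m, the Bayes filter never has to leave R^k: a k-dimensional statistic, pushed through precomputed k x k per-symbol operators, carries the same information as the m-dimensional belief. The numpy sketch below illustrates this point under those assumptions; the parameters and variable names are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 50, 5, 10  # latent dimension m, transition rank k, alphabet size n

# Illustrative rank-k transition model T = R @ S, column-stochastic:
# T[i, j] = P(s_{t+1} = i | s_t = j).
R = rng.dirichlet(np.ones(m), size=k).T  # m x k, each column a distribution
S = rng.dirichlet(np.ones(k), size=m).T  # k x m, each column a distribution
O = rng.dirichlet(np.ones(n), size=m).T  # n x m, O[x, j] = P(x_t = x | s_t = j)

def step_m(b, x):
    # Full filter over the m-dimensional belief: b' ~ T diag(O[x]) b.
    b = R @ (S @ (O[x] * b))
    return b / b.sum()

# Reduced filter: track c_t = S diag(O[x_t]) b_t in R^k, updated by the
# precomputed k x k per-symbol operators A[x] = S diag(O[x]) R.
A = np.stack([S @ (O[x][:, None] * R) for x in range(n)])

def step_k(c, x):
    c = A[x] @ c
    return c / c.sum()

# The two filters agree: the m-dim belief is recovered as b = R @ c.
b0 = np.full(m, 1.0 / m)
xs = rng.integers(0, n, size=25)
b = step_m(b0, xs[0])
c = S @ (O[xs[0]] * b0)
c /= c.sum()
for x in xs[1:]:
    b, c = step_m(b, x), step_k(c, x)
    assert np.allclose(R @ c, b)
```

Once the n operators A[x] are precomputed, each filtering step costs O(k^2) rather than the O(m^2) of a dense m-state HMM, which is the sense in which an RR-HMM is as computationally efficient as a k-state HMM.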
---
English title:
Reduced-Rank Hidden Markov Models
---
Authors:
Sajid M. Siddiqi, Byron Boots, Geoffrey J. Gordon
---
Latest submission year:
2009
---
Classification:
Primary category: Computer Science
Secondary category: Machine Learning
Description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on), including robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
Primary category: Computer Science
Secondary category: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
---
English abstract:
We introduce the Reduced-Rank Hidden Markov Model (RR-HMM), a generalization of HMMs that can model smooth state evolution as in Linear Dynamical Systems (LDSs) as well as non-log-concave predictive distributions as in continuous-observation HMMs. RR-HMMs assume an m-dimensional latent state and n discrete observations, with a transition matrix of rank k <= m. This implies the dynamics evolve in a k-dimensional subspace, while the shape of the set of predictive distributions is determined by m. Latent state belief is represented with a k-dimensional state vector and inference is carried out entirely in R^k, making RR-HMMs as computationally efficient as k-state HMMs yet more expressive. To learn RR-HMMs, we relax the assumptions of a recently proposed spectral learning algorithm for HMMs (Hsu, Kakade and Zhang 2009) and apply it to learn k-dimensional observable representations of rank-k RR-HMMs. The algorithm is consistent and free of local optima, and we extend its performance guarantees to cover the RR-HMM case. We show how this algorithm can be used in conjunction with a kernel density estimator to efficiently model high-dimensional multivariate continuous data. We also relax the assumption that single observations are sufficient to disambiguate state, and extend the algorithm accordingly. Experiments on synthetic data and a toy video, as well as on a difficult robot vision modeling problem, yield accurate models that compare favorably with standard alternatives in simulation quality and prediction capability.
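For intuition about the learning step, the spectral construction of Hsu, Kakade and Zhang that the abstract refers to can be written down from the low-order moments P_1, P_{2,1}, and P_{3,x,1} alone. The numpy sketch below forms the k-dimensional observable representation (b_1, b_inf, {B_x}) from exact moments of a synthetic rank-k model and checks that it reproduces joint probabilities; it is only a schematic of the idea under generic rank assumptions (in practice the moments are empirical estimates, and the paper's kernel-density and multiple-observation extensions are not shown). Parameters and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, k, n = 20, 4, 8  # latent states, transition rank, discrete observations

# A synthetic ground-truth RR-HMM (illustrative parameters only).
R = rng.dirichlet(np.ones(m), size=k).T  # m x k
S = rng.dirichlet(np.ones(k), size=m).T  # k x m
T = R @ S                                # rank-k, column-stochastic transition
O = rng.dirichlet(np.ones(n), size=m).T  # n x m observation matrix
pi = R @ rng.dirichlet(np.ones(k))       # initial state kept in range(R), as the
                                         # stationary distribution of T would be;
                                         # the exact checks below rely on this

# Exact low-order moments (in practice: empirical estimates from triples).
P1 = O @ pi                              # P1[i]    = P(x1 = i)
P21 = O @ T @ np.diag(pi) @ O.T          # P21[j,i] = P(x2 = j, x1 = i)
P3x1 = np.stack([O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T
                 for x in range(n)])     # P3x1[x][j,i] = P(x3=j, x2=x, x1=i)

# Spectral step: top-k left singular vectors of P21, then the observable
# representation. Everything below is k-dimensional.
U = np.linalg.svd(P21)[0][:, :k]         # n x k
b1 = U.T @ P1
binf = np.linalg.pinv(P21.T @ U) @ P1
B = np.stack([U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21) for x in range(n)])

# Joint probabilities come from products of k x k operators:
# P(x1, ..., xt) = binf @ B[xt] @ ... @ B[x1] @ b1.
for x1 in range(n):
    assert np.isclose(binf @ B[x1] @ b1, P1[x1])
    for x2 in range(n):
        assert np.isclose(binf @ B[x2] @ B[x1] @ b1, P21[x2, x1])
```

Note that b1, binf, and each B[x] live in R^k, so filtering and prediction with the learned representation also happen entirely in R^k, matching the abstract's efficiency claim.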
---
PDF link:
https://arxiv.org/pdf/0910.0902