By Benjamin B. Risk, David S. Matteson, David Ruppert, via stat updates on arXiv.org
Independent component analysis (ICA) is popular in many applications, including cognitive neuroscience and signal processing. Due to computational constraints, principal component analysis is often used for dimension reduction prior to ICA (PCA+ICA), which can remove important information: interesting independent components (ICs) may be mixed into several principal components that are discarded, after which those ICs cannot be recovered. To address this issue, we propose likelihood component analysis (LCA), a novel methodology in which dimension reduction and latent variable estimation are achieved simultaneously by maximizing a likelihood with Gaussian and non-Gaussian components. We present a parametric LCA model using the logistic density and a semi-parametric LCA model using tilted Gaussians with cubic B-splines. We implement an algorithm scalable to datasets common in applications (e.g., hundreds of thousands of observations across hundreds of variables with dozens of latent components). In simulations, our methods recover latent components that are discarded by PCA+ICA methods. We apply our method to dependent multivariate data and demonstrate that LCA is a useful data visualization and dimension reduction tool that reveals features not apparent from PCA or PCA+ICA. We also apply our method to an experiment from the Human Connectome Project with state-of-the-art temporal and spatial resolution and identify an artifact using LCA that was missed by PCA+ICA. We present theoretical results on identifiability of the LCA model and consistency of our estimator.
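The failure mode that motivates LCA can be illustrated with a minimal sketch (this is not the authors' LCA implementation, just the standard PCA+ICA pipeline via scikit-learn on synthetic data): a low-variance non-Gaussian source is absorbed into discarded principal components, so PCA-based reduction to too few dimensions loses it, while ICA on an adequate number of components recovers it. The mixing matrix and variance scales below are hypothetical choices for the demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n = 2000

# Latent sources: a high-variance Gaussian component and a
# low-variance non-Gaussian (uniform) independent component.
gauss = rng.normal(scale=5.0, size=n)
ic = rng.uniform(-1.0, 1.0, size=n)
sources = np.column_stack([gauss, ic])

# Hypothetical mixing into 3 observed variables; the IC loads weakly.
A = np.array([[1.0, 0.1],
              [0.8, 0.1],
              [0.9, 0.1]])
X = sources @ A.T

# Aggressive PCA reduction: keep only the first principal component.
pc1 = PCA(n_components=1).fit_transform(X)[:, 0]
corr_gauss = abs(np.corrcoef(pc1, gauss)[0, 1])  # near 1: PC1 tracks the Gaussian
corr_ic = abs(np.corrcoef(pc1, ic)[0, 1])        # near 0: the IC is discarded

# With enough retained components, ICA can still recover the IC.
S_hat = FastICA(n_components=2, random_state=0, max_iter=2000).fit_transform(X)
best = max(abs(np.corrcoef(S_hat[:, k], ic)[0, 1]) for k in range(2))
print(corr_gauss, corr_ic, best)
```

Here the retained principal component is essentially the Gaussian noise direction, while the non-Gaussian source survives only in the low-variance directions; choosing the reduced dimension by explained variance alone is exactly what LCA's joint likelihood formulation avoids.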