2021-02-05

Do it best, economy and management.
Author: Daniel tulips liu. Copyright © tulipsliu.
Also, copyright © Sebastian Ankargren and Yukai Yang.

Abstract

Time series are often sampled at different frequencies, which leads to mixed-frequency data. Mixed frequencies are often neglected in applications, as high-frequency series are aggregated to lower frequencies. In the mfbvar package, we introduce the possibility to estimate Bayesian vector autoregressive (VAR) models when the set of included time series consists of monthly and quarterly variables. We provide a user-friendly interface for model estimation and forecasting. The capabilities of the package are illustrated in an application.
Keywords: vector autoregression, steady-state prior

1. Introduction

Vector autoregressive (VAR) models constitute an important tool for multivariate time series analysis. In their original form they are easy to fit and to use, and they have hence been employed for various types of policy analysis as well as for forecasting. A major obstacle in applied VAR modeling is the curse of dimensionality: the number of parameters grows quadratically in the number of variables, and models with several hundred or even thousands of parameters are not uncommon. VAR models estimated by maximum likelihood therefore usually suffer from poor precision.
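The quadratic growth is easy to verify with a quick count (a Python sketch of the standard parameter count for a VAR(p); not specific to mfbvar):

```python
# Free parameters in a VAR(p) with n variables:
# n^2 coefficients per lag, plus n intercepts,
# plus n(n+1)/2 free elements of the error covariance matrix.
def var_param_count(n, p):
    return n * n * p + n + n * (n + 1) // 2

for n, p in [(4, 4), (20, 4), (40, 4)]:
    print(n, p, var_param_count(n, p))
# Even a moderate 20-variable VAR(4) already has 1830 free parameters.
```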

2. Mixed-frequency models

Suppose that the system evolves at the monthly frequency. Let $x_t$ be an $n\times 1$ monthly process. Decompose $x_t=(x_{m,t}^\top, x_{q,t}^\top)^\top$ into $n_m$ monthly variables and an $n_q$-dimensional latent process for the quarterly variables. Letting $y_t=(y_{m,t}^\top, y_{q,t}^\top)^\top$ denote the observations, it follows that $y_{m,t}=x_{m,t}$, as the monthly part is always observed. For the remaining quarterly variables, we instead observe a weighted average of $x_q$. Two aggregation schemes are common in the literature: intra-quarterly averaging and triangular aggregation. The former assumes that, in the final month of each quarter, the observed quarterly value is the average of the latent monthly values, $y_{q,t}=\frac{1}{3}(x_{q,t}+x_{q,t-1}+x_{q,t-2})$, whereas the latter applies the triangular weights $(\frac{1}{3}, \frac{2}{3}, 1, \frac{2}{3}, \frac{1}{3})$ to the current and four preceding months, which is appropriate for growth rates.
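The two aggregation schemes can be sketched numerically (a Python illustration using the standard weights from the mixed-frequency literature; the `aggregate` helper is for exposition only and is not part of mfbvar):

```python
import numpy as np

# Weights mapping the latent monthly process to a quarterly observation,
# applied at the last month of each quarter.
avg_weights = np.array([1.0, 1.0, 1.0]) / 3      # intra-quarterly average
tri_weights = np.array([1.0, 2.0, 3.0, 2.0, 1.0]) / 3  # triangular aggregation

def aggregate(x, weights):
    """Quarterly observation as a weighted sum of monthly values.

    x[0] is the current month, x[1] the previous month, and so on.
    """
    return float(weights @ x)

# A constant monthly series aggregates to itself under averaging:
print(aggregate(np.array([2.0, 2.0, 2.0]), avg_weights))  # 2.0
```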
As the system is assumed to evolve at the monthly frequency, we specify a VAR($p$) model for $x_t$:
$$x_t=\phi+\Phi_1 x_{t-1}+\cdots+\Phi_p x_{t-p}+\epsilon_t, \quad \epsilon_t \sim \operatorname{N}(0, \Sigma).\tag{1.1}$$
The VAR($p$) model can be written in companion form, where we let $z_t=(x_t^\top, x_{t-1}^\top, \dots, x_{t-p+1}^\top)^\top$. Thus, we obtain
$$z_t=\pi+\Pi z_{t-1}+u_t, \quad u_t \sim \operatorname{N}(0, \Omega), \tag{1.2}$$
where $\pi$, $\Pi$ and $\Omega$ are the corresponding companion-form matrices constructed from $(\phi, \Phi_1, \dots, \Phi_p, \Sigma)$.
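Constructing $\pi$ and $\Pi$ from $(\phi, \Phi_1, \dots, \Phi_p)$ can be sketched as follows (an illustrative Python helper, not mfbvar code):

```python
import numpy as np

def companion(Phi_list, phi):
    """Stack VAR(p) coefficient matrices (each n x n) and intercept phi (n,)
    into the companion-form pair (pi, Pi) of equation (1.2)."""
    n = Phi_list[0].shape[0]
    p = len(Phi_list)
    Pi = np.zeros((n * p, n * p))
    Pi[:n, :] = np.hstack(Phi_list)       # top block row: [Phi_1 ... Phi_p]
    Pi[n:, :-n] = np.eye(n * (p - 1))     # identity blocks shift the lags down
    pi = np.concatenate([phi, np.zeros(n * (p - 1))])
    return pi, Pi

# Example: bivariate VAR(2)
Phi1 = np.array([[0.5, 0.1], [0.0, 0.4]])
Phi2 = np.array([[0.2, 0.0], [0.1, 0.1]])
pi, Pi = companion([Phi1, Phi2], phi=np.array([1.0, 0.5]))
print(Pi.shape)  # (4, 4)
```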
It is now possible to specify the observation equation as
$$y_t = M_t\Lambda z_t \tag{1.3}$$
where $M_t$ is a deterministic selection matrix and $\Lambda$ an aggregation matrix based on the weighting scheme employed. $M_t$ yields a time-varying observation vector by selecting the rows corresponding to variables that are observed at time $t$, whereas $\Lambda$ aggregates the underlying latent process.
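A toy version of (1.3) makes the two roles concrete. The sketch below (illustrative Python, not mfbvar internals) assumes one monthly and one quarterly variable, $p = 5$ lags in the companion state, and triangular aggregation; months are indexed so that every third month closes a quarter:

```python
import numpy as np

n, p = 2, 5                                # n_m = 1 monthly, n_q = 1 quarterly
w = np.array([1.0, 2.0, 3.0, 2.0, 1.0]) / 3  # triangular weights

# Lambda: the monthly row reads off the current value; the quarterly row
# forms the weighted sum of the current and four lagged latent months.
Lambda = np.zeros((n, n * p))
Lambda[0, 0] = 1.0
for lag in range(p):
    Lambda[1, lag * n + 1] = w[lag]

def M(t):
    """Selection matrix: both rows in the last month of a quarter,
    only the monthly row otherwise (months indexed from 1)."""
    return np.eye(n) if t % 3 == 0 else np.eye(n)[:1]

z = np.arange(n * p, dtype=float)          # a dummy companion-state vector
print(M(3) @ Lambda @ z)                   # full observation vector in month 3
print(M(2) @ Lambda @ z)                   # only the monthly variable in month 2
```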

2.1 The steady-state prior

The steady-state prior proposed by \cite{Villani2009} reformulates (1.1) into the mean-adjusted form
$$\Phi(L)(x_t-\Psi d_t)=\epsilon_t,$$
where $\Phi(L)=(I_{n}-\Phi_1 L -\cdots - \Phi_p L^p)$ is an invertible lag polynomial. The intercept $\phi$ in (1.1) can be replaced by the more general deterministic term $\Phi_0 d_t$, where $\Phi_0$ is $n\times m$ and $d_t$ is $m\times 1$. The steady-state parameters $\Psi$ in the mean-adjusted form relate to $\Phi_0$ through $\Psi=[\Phi(L)]^{-1}\Phi_0$. By this reformulation, we obtain parameters $\Psi$ that immediately yield the unconditional mean of $x_t$, that is, the steady state. The rationale is that while it is potentially difficult to express prior beliefs about $\Phi_0$, eliciting prior beliefs about $\Psi$ is often easier.
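For a constant deterministic term $d_t = 1$, the mapping reduces to $\Psi = \Phi(1)^{-1}\Phi_0 = (I - \Phi_1 - \cdots - \Phi_p)^{-1}\Phi_0$, which a few lines of Python can verify numerically (the coefficient values are illustrative):

```python
import numpy as np

# Illustrative bivariate VAR(2) coefficients and intercept.
Phi1 = np.array([[0.5, 0.1], [0.0, 0.4]])
Phi2 = np.array([[0.2, 0.0], [0.1, 0.1]])
phi0 = np.array([1.0, 0.5])

# Steady state: psi = Phi(1)^{-1} phi0.
Phi_of_1 = np.eye(2) - Phi1 - Phi2
psi = np.linalg.solve(Phi_of_1, phi0)

# psi is the unconditional mean: it satisfies psi = phi0 + (Phi1 + Phi2) @ psi.
assert np.allclose(psi, phi0 + (Phi1 + Phi2) @ psi)
print(psi)
```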
A hierarchical prior is placed on the steady-state parameters:
$$\begin{aligned} \psi_j|\omega_{\psi,j}&\sim \operatorname{N}(\underline{\psi}_j, \omega_{\psi,j})\\ \omega_{\psi,j}|\phi_\psi, \lambda_\psi &\sim \operatorname{G}(\phi_\psi, 0.5\phi_\psi\lambda_\psi)\\ \phi_\psi &\sim \operatorname{Exp}(1)\\ \lambda_\psi &\sim \operatorname{G}(c_0, c_1),\\ j&=1, \dots, nm, \end{aligned}\tag{1.4}$$

where $\operatorname{G}(a,b)$ denotes the gamma distribution in the shape-rate parametrization, and $\operatorname{Exp}(c)$ denotes the exponential distribution.
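Drawing from this hierarchy is straightforward. The sketch below simulates the implied marginal prior for a single $\psi_j$ (Python, with hypothetical hyperparameters $c_0 = c_1 = 1$ and prior mean $\underline{\psi}_j = 0$; note that numpy's gamma sampler is shape-scale, so the rate must be inverted):

```python
import numpy as np

rng = np.random.default_rng(0)
c0, c1, psi_bar = 1.0, 1.0, 0.0   # hypothetical hyperparameters
n_draws = 10_000

lam = rng.gamma(shape=c0, scale=1.0 / c1, size=n_draws)      # lambda ~ G(c0, c1)
phi = rng.exponential(scale=1.0, size=n_draws)               # phi ~ Exp(1)
omega = rng.gamma(shape=phi, scale=1.0 / (0.5 * phi * lam))  # omega ~ G(phi, 0.5*phi*lam)
psi = rng.normal(loc=psi_bar, scale=np.sqrt(omega))          # psi | omega ~ N(psi_bar, omega)

# The marginal draws are symmetric around the prior mean psi_bar.
print(np.median(psi))
```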
The common stochastic volatility specification presented by \cite{Carriero2016} assumes that the covariance structure in the model is constant over time, but adds a factor that enables time-dependent scaling of the error covariance matrix. More specifically, it is assumed that
$$\operatorname{Var}(\epsilon_t|f_t, \Sigma)=f_t\Sigma,$$
where $f_t$ is a scalar, $\Sigma$ has an inverse Wishart prior, and
$$\begin{aligned} \log f_t&=\rho\log f_{t-1}+v_t\\ v_t&\sim \operatorname{N}(0, \sigma^2)\\ \rho&\sim \operatorname{N}(\underline{\mu}_\rho, \underline{\Omega}_\rho; |\rho|<1)\\ \sigma^2&\sim \operatorname{IG}(\underline{d}\cdot\underline{\sigma}^2, \underline{d}), \end{aligned}$$
where $\operatorname{N}(a, b; |x|<c)$ denotes the truncated normal distribution with support $(-c, c)$, and $\operatorname{IG}(a,b)$ is the inverse gamma distribution with parameters $(a, b)$.
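A short simulation illustrates how the common volatility factor evolves (Python, with hypothetical values $\rho = 0.9$ and $\sigma = 0.2$; this is a sketch of the state equation, not mfbvar's sampler):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma, T = 0.9, 0.2, 500   # hypothetical persistence, innovation sd, length

# log f_t = rho * log f_{t-1} + v_t,  v_t ~ N(0, sigma^2)
log_f = np.zeros(T)
for t in range(1, T):
    log_f[t] = rho * log_f[t - 1] + rng.normal(0.0, sigma)

# f_t scales the error covariance: Var(eps_t | f_t, Sigma) = f_t * Sigma,
# and exponentiating guarantees f_t > 0.
f = np.exp(log_f)

# Sample variance of log f_t vs. the stationary variance sigma^2 / (1 - rho^2).
print(log_f.var(), sigma**2 / (1 - rho**2))
```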

