2020-12-06
I ran a new test today. From now on, both MATLAB and R programs will be released as PDF files and published as courses.
I have not yet checked the English spelling in today's file.

This is only a test, so this one costs no forum coins; the courses released later will. Because today's test used an `echo` command, you cannot see the source code. Future PDF files will include the source code, which should make them easier to use.


$$
{\varepsilon _t} = H_t^{1/2}{z_t},
$$
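The equation above is the standard innovation decomposition used in multivariate volatility models: $H_t$ is the conditional covariance matrix and $z_t$ an i.i.d. standard normal vector. A minimal NumPy sketch (Python is used here only for illustration, the matrix square root is taken as a Cholesky factor, and the 2×2 $H$ is a made-up example):

```python
import numpy as np

# Simulate eps_t = H_t^{1/2} z_t for a fixed 2x2 conditional covariance H.
# H^{1/2} is taken as the Cholesky factor; z_t ~ N(0, I).
rng = np.random.default_rng(42)
H = np.array([[1.0, 0.3],
              [0.3, 0.5]])
L = np.linalg.cholesky(H)            # H = L @ L.T
z = rng.standard_normal((10000, 2))
eps = z @ L.T                        # each row is one draw of eps_t

# The sample covariance of eps should be close to H.
print(np.cov(eps, rowvar=False).round(2))
```

In a GARCH-type model $H_t$ would change every period; holding it fixed only keeps the sketch short.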



Attachment: firstlession.pdf (439.09 KB)




[Figure 1: a plot produced by Bayesian estimation]


A PDF file with the code included will be uploaded later.

Figure 2. After I add a PDF file with R source code tomorrow, this post will not be edited again.
This figure was produced with Bayesian econometrics in R. It is very difficult to use, roughly the level of a university professor or a PhD; the one person in my group who can do WinBUGS Bayesian estimation is a postdoc, a younger friend I have known for about ten years.
Code first:
With the second image inserted, this post has become quite large, so I will not add the PDF file with code to it. Instead, future posts will use color-highlighted code. To stress the important point once more: PDF files with colored R code will be uploaded to the forum regularly, and you can test the code on your own computer.
[Figure 2: 图片二.png]

All replies
2020-12-6 16:39:22
$$
\begin{aligned}
y_t(\tau) ~ &= ~ \beta_{1t} ~ + \beta_{2t} \left( \frac{1 ~ - ~ e^{-\lambda\tau}}{\lambda\tau}  \right) ~ + ~ \beta_{3t} \left( \frac{1 ~ - ~ e^{-\lambda\tau}}{\lambda\tau} ~ - ~ e^{-\lambda\tau}  \right) ~ + ~ \varepsilon_t(\tau), &\varepsilon_t ~ &\sim ~ N(0, ~ \sigma^2I_p), \\
\beta_{t+1} ~ &= ~ \left( I ~ - ~ \Phi\right)\mu ~ + ~ \Phi\beta_t + \eta_t, &\eta_t ~ &\sim ~ N(0, ~ \Sigma_{\eta}), \\
  & &\beta_1 ~ &\sim ~ N(\mu, ~ P_{\beta}),
\end{aligned}
$$
where $y_t(\tau)$ is the interest rate at time $t$ for maturity $\tau$, $\lambda$ is a constant, $p$ is the number of maturities (dependent variables), $\beta_t$ is the unobserved vector containing the factors $\beta_{1t}$, $\beta_{2t}$, $\beta_{3t}$, $\mu$ is a vector containing the means of the factors, $\Phi$ is a vector autoregressive coefficient matrix corresponding to a stationary process, and $P_\beta$ is the initial uncertainty of $\beta_1$, chosen such that $P_\beta ~ - ~ \Phi P_\beta \Phi^{\top} ~ = ~ \Sigma_\eta$. The factors in $\beta_t$ have an interesting interpretation: $\beta_{1t}$ can be seen as the level of all interest rates, $\beta_{2t}$ identifies the slope of the yield curve, and $\beta_{3t}$ represents the shape (curvature) of the yield curve.
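The measurement equation maps the three latent factors to the whole curve through maturity-dependent loadings. A minimal Python sketch of those loadings (the value $\lambda = 0.0609$ and the maturities are illustrative assumptions, not taken from the post):

```python
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Nelson-Siegel factor loadings for a vector of maturities tau (months)."""
    tau = np.asarray(tau, dtype=float)
    slope = (1 - np.exp(-lam * tau)) / (lam * tau)   # loading on beta_2t
    curvature = slope - np.exp(-lam * tau)           # loading on beta_3t
    return np.column_stack([np.ones_like(tau), slope, curvature])

# Fitted yields for one hypothetical draw of beta_t = (level, slope, curvature).
tau = np.array([3, 12, 36, 60, 120])   # maturities in months
beta_t = np.array([5.0, -1.5, 0.5])
y_t = ns_loadings(tau) @ beta_t
print(y_t.round(3))
```

The level loading is identically 1, while the slope loading decays from 1 toward 0 as maturity grows, which is why $\beta_{1t}$ moves all rates and $\beta_{2t}$ mainly moves the short end.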

2020-12-6 16:42:52
$$
\begin{aligned}
y_t ~ &= ~ \mu_t ~ + ~ \varepsilon_t, ~~~~~~ \varepsilon_t ~ \sim ~ N(0, ~ \sigma_\varepsilon^2), \\
\mu_{t+1} ~ &= ~ \mu_t ~ + ~ \eta_t, ~~~~~~ \eta_t ~ \sim ~ N(0, ~ \sigma_\eta^2),
\end{aligned}
$$


where $y_t$ is the dependent variable at time $t$, $\mu_t$ is the unobserved level at time $t$, and $\varepsilon_t$ and $\eta_t$ are disturbances. This model has two parameters, $\sigma_\varepsilon^2$ and $\sigma_\eta^2$, which are estimated by maximising the log-likelihood. In addition, the level $\mu_t$ is a state parameter that is estimated with the Kalman filter.
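For this local level model the Kalman filter recursions collapse to scalars. A minimal Python sketch (the data and variance values are made up for illustration):

```python
import numpy as np

def local_level_filter(y, sigma2_eps, sigma2_eta, a1=0.0, p1=1e7):
    """Kalman filter for the local level model.

    Returns the filtered level estimates a_{t|t}.
    a1, p1: near-diffuse initial guess for the level and its variance.
    """
    a, p = a1, p1
    filtered = []
    for yt in y:
        v = yt - a                 # one-step prediction error
        f = p + sigma2_eps         # prediction-error variance
        k = p / f                  # Kalman gain
        a = a + k * v              # filtering (update) step
        p = p * (1 - k)
        filtered.append(a)
        p = p + sigma2_eta         # transition step: the level is a random walk
    return np.array(filtered)

y = np.array([4.8, 5.1, 5.0, 5.6, 5.4])
print(local_level_filter(y, sigma2_eps=0.1, sigma2_eta=0.05).round(3))
```

The ratio $\sigma_\eta^2/\sigma_\varepsilon^2$ controls the smoothing: a large ratio makes the filtered level track the observations closely, a small one makes it close to a constant mean.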

2020-12-6 16:44:52
$$
\begin{aligned}
y_t ~ &= ~ Z_t\alpha_t ~ + ~ \varepsilon_t, &\varepsilon_t ~ &\sim ~ N(0, ~ H_t), \\
\alpha_{t+1} ~ &= ~ T_t\alpha_t ~ + ~ R_t\eta_t, &\eta_t ~ &\sim ~ N(0, ~ Q_t), \\
& &\alpha_1 ~ &\sim ~ N(a_1, ~ P_1),
\end{aligned}
$$

where $y_t$ is the *observation vector*, a $p ~ \times ~ 1$ vector of dependent variables at time $t$, $\alpha_t$ is the unobserved *state vector*, an $m ~ \times ~ 1$ vector of state variables at time $t$, and $\varepsilon_t$ and $\eta_t$ are the disturbance vectors of the observation equation and the state equation, respectively. To initialise the model, $a_1$ is used as the initial guess of the state vector, and $P_1$ is the corresponding uncertainty of that guess. The matrices $Z_t$, $H_t$, $T_t$, $R_t$, and $Q_t$ are called the *system matrices* of the state space model. Different specifications of these system matrices lead to different interpretations of the model at hand.
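The matrix form of the Kalman filter follows the same two steps as the scalar case. A minimal Python sketch, holding the system matrices constant over time (the general model allows them to vary with $t$), with the local level model recovered as the special case where every system matrix is 1×1:

```python
import numpy as np

def kalman_filter(y, Z, H, T, R, Q, a1, P1):
    """Kalman filter for y_t = Z a_t + eps_t, a_{t+1} = T a_t + R eta_t."""
    a, P = a1.astype(float), P1.astype(float)
    filtered = []
    for yt in y:
        v = yt - Z @ a                    # prediction error
        F = Z @ P @ Z.T + H               # prediction-error variance
        K = P @ Z.T @ np.linalg.inv(F)    # Kalman gain
        a_f = a + K @ v                   # filtered state a_{t|t}
        P_f = P - K @ Z @ P
        filtered.append(a_f)
        a = T @ a_f                       # predict the state for t+1
        P = T @ P_f @ T.T + R @ Q @ R.T
    return filtered

# The local level model as a special case: all system matrices are 1x1.
Z = np.eye(1); T = np.eye(1); R = np.eye(1)
H = np.array([[0.1]]); Q = np.array([[0.05]])
y = [np.array([4.8]), np.array([5.1]), np.array([5.0])]
states = kalman_filter(y, Z, H, T, R, Q,
                       a1=np.zeros(1), P1=np.array([[1e7]]))
```

With the large $P_1$, the first filtered state is pulled almost entirely to the first observation, which mimics a diffuse initialisation.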

2020-12-7 07:18:29
## Definition of the GMVAR model
The Gaussian mixture vector autoregressive (GMVAR) model with $M$ mixture components and autoregressive order $p$, referred to as the GMVAR($p,M$) model, is a mixture of $M$ stationary Gaussian VAR($p$) models with time-varying mixing weights. At each point in time $t$, a GMVAR process generates an observation from one of its mixture components (also called regimes) according to the probabilities given by the mixing weights.

Let $y_t=(y_{1t},...,y_{dt})$, $t=1,2,...$ be the $d$-dimensional vector-valued time series of interest, and let $\mathcal{F}_{t-1}$ be the $\sigma$-algebra generated by the random variables $\lbrace y_{t-j},j>0 \rbrace$ (containing the information on the past of $y_t$). Let $M$ be the number of mixture components, and let $\boldsymbol{s}_t=(s_{t,1},...,s_{t,M})$ be a sequence of (non-observable) random vectors such that at each $t$ exactly one of its components takes the value one and the others take the value zero. The component that takes the value one is selected according to the probabilities $Pr(s_{t,m}=1|\mathcal{F}_{t-1})\equiv\alpha_{m,t}$, $m=1,...,M$, with $\sum_{m=1}^M\alpha_{m,t}=1$ for all $t$.
Then, for a GMVAR($p,M$) model, we have
$$
y_t = \sum_{m=1}^M s_{t,m}(\mu_{m,t}+\Omega_{m}^{1/2}\varepsilon_t), \quad \varepsilon_t\sim\text{NID}(0,I_d),
$$

2020-12-7 07:18:50
where $\Omega_{m}$ is a conditional positive definite covariance matrix, NID stands for "normally and independently distributed", and
$$
\mu_{m,t} = \varphi_{m,0} + \sum_{i=1}^p A_{m,i}y_{t-i}
$$
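Putting the two equations together, one step of the generating mechanism can be sketched as follows in Python. All parameter values are hypothetical, and the mixing weights are held fixed here; in the actual GMVAR model the $\alpha_{m,t}$ are functions of the past observations through $\mathcal{F}_{t-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical GMVAR(1, 2) parameters for a d = 2 series.
phi0 = [np.array([0.5, 0.2]), np.array([-0.3, 0.1])]        # varphi_{m,0}
A = [np.array([[0.4, 0.1], [0.0, 0.3]]),
     np.array([[0.2, 0.0], [0.1, 0.5]])]                    # A_{m,1}
Omega_sqrt = [np.linalg.cholesky(np.array([[1.0, 0.2], [0.2, 0.8]])),
              np.linalg.cholesky(np.array([[0.5, 0.0], [0.0, 0.5]]))]
alpha_t = np.array([0.7, 0.3])    # mixing weights, held fixed in this sketch

def gmvar_step(y_prev):
    """Draw y_t: s_t picks regime m with prob alpha_{m,t},
    then y_t = mu_{m,t} + Omega_m^{1/2} eps_t with eps_t ~ N(0, I_d)."""
    m = rng.choice(2, p=alpha_t)
    mu_mt = phi0[m] + A[m] @ y_prev
    return mu_mt + Omega_sqrt[m] @ rng.standard_normal(2)

y = np.zeros(2)
for _ in range(5):
    y = gmvar_step(y)
print(y.round(3))
```

Only one indicator $s_{t,m}$ is one per period, so the sum over $m$ in the definition reduces to drawing the regime index and then generating from that regime's VAR.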
