2020-12-16 19:28:27
$$\textbf{y}_t \sim \mathcal{N}(\mu_t, \sigma^2_t), \quad t=2,\dots,T$$
$$\mu_t = \alpha + \sum^P_{p=1} \phi_p \textbf{y}_{t-p}, \quad t=(P+1),\dots,T$$
$$\epsilon = \textbf{y} - \mu$$
$$\delta = (\epsilon > 0) \times 1$$
$$\sigma^2_t = \omega + \sum^Q_{q=1}\left[\theta_{q,1}\,\delta_{t-q}\,\epsilon^2_{t-q} + \theta_{q,2}\,(1-\delta_{t-q})\,\epsilon^2_{t-q}\right]$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\phi_p \sim \mathcal{N}(0, 1000), \quad p=1,\dots,P$$
$$\omega \sim \mathcal{HC}(25)$$
$$\theta_{q,j} \sim \mathcal{U}(0, 1), \quad q=1,\dots,Q, \quad j=1,\dots,2$$

2020-12-16 19:29:02
$$\textbf{y}_t \sim \mathcal{N}(\mu_t, \sigma^2_t), \quad t=2,\dots,T$$
$$\mu_t = \alpha + \sum^P_{p=1} \phi_p \textbf{y}_{t-p} + \delta_{t-1} \gamma_1 \sigma^2_{t-1} + (1 - \delta_{t-1}) \gamma_2 \sigma^2_{t-1}, \quad t=(P+1),\dots,T$$
$$\epsilon = \textbf{y} - \mu$$
$$\delta = (\epsilon > 0) \times 1$$
$$\sigma^2_t = \omega + \sum^Q_{q=1}\left[\theta_{q,1}\,\delta_{t-q}\,\epsilon^2_{t-q} + \theta_{q,2}\,(1-\delta_{t-q})\,\epsilon^2_{t-q}\right]$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\gamma_k \sim \mathcal{N}(0, 1000), \quad k=1,\dots,2$$
$$\phi_p \sim \mathcal{N}(0, 1000), \quad p=1,\dots,P$$
$$\omega \sim \mathcal{HC}(25)$$
$$\theta_{q,j} \sim \mathcal{U}(0, 1), \quad q=1,\dots,Q, \quad j=1,\dots,2$$

2020-12-16 19:30:34
$$\textbf{y} \sim \mathcal{N}(\mu, \sigma^2)$$
$$\mu_t = \alpha + \phi \textbf{y}_{t-1} + \sum^K_{k=1} \textbf{X}_{t,k} \beta \lambda^{\textbf{L}[t,k]}, \quad k=1,\dots,K, \quad t=2,\dots,T$$
$$\mu_1 = \alpha + \sum^K_{k=1} \textbf{X}_{1,k} \beta \lambda^{\textbf{L}[1,k]}, \quad k=1,\dots,K$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\beta \sim \mathcal{N}(0, 1000)$$
$$\lambda \sim \mathcal{U}(0, 1)$$
$$\phi \sim \mathcal{N}(0, 1000)$$
$$\sigma \sim \mathcal{HC}(25)$$

2020-12-16 19:31:10
The \pkg{LaplacesDemon} package is capable of estimating any Bayesian model for which the likelihood is specified\footnote{Examples of more than 100 Bayesian models may be found in the ``Examples'' vignette that comes with the \pkg{LaplacesDemon} package. Likelihood-free estimation is also possible by approximating the likelihood, such as in Approximate Bayesian Computation (ABC).}. To use the \pkg{LaplacesDemon} package, the user must specify a model. Let's consider a linear regression model, which is often denoted as:

$$\textbf{y} \sim \mathcal{N}(\mu, \sigma^2)$$
$$\mu = \textbf{X}\beta$$

The dependent variable, $\textbf{y}$, is normally distributed according to expectation vector $\mu$ and scalar variance $\sigma^2$, and expectation vector $\mu$ is equal to the inner product of design matrix \textbf{X} and transposed parameter vector $\beta$.

For a Bayesian model, the notation for the residual variance, $\sigma^2$, has often been replaced with the inverse of the residual precision, $\tau^{-1}$. Here, $\sigma^2$ will be used. Prior probabilities are specified for $\beta$ and $\sigma$ (the standard deviation, rather than the variance):

$$\beta_j \sim \mathcal{N}(0, 1000), \quad j=1,\dots,J$$
$$\sigma \sim \mathcal{HC}(25)$$

Each of the $J$ $\beta$ parameters is assigned a vague\footnote{Traditionally, a vague prior would be considered to be under the class of uninformative or non-informative priors. `Non-informative' may be more widely used than `uninformative', but here that is considered poor English, such as saying something is `non-correct' when there is a word for that \dots `incorrect'. In any case, uninformative priors do not actually exist \citep{irony97}, because all priors are informative in some way. These priors are described here as vague, but not as uninformative.} prior probability distribution that is normally distributed according to $\mu=0$ and $\sigma^2=1000$. The large variance (equivalently, small precision) indicates considerable uncertainty about each $\beta$, which is what makes the distribution vague. The residual standard deviation $\sigma$ is half-Cauchy-distributed according to its hyperparameter, scale $= 25$. When exploring new prior distributions, the user is encouraged to use the \code{is.proper} function to check for prior propriety.
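
As a quick illustration (not part of the quoted vignette text), the two priors above can be checked for propriety with \code{is.proper}, which numerically integrates a density and tests whether it integrates to one:

# Sketch only: checking the priors above with LaplacesDemon's is.proper().
library(LaplacesDemon)

# Vague normal prior N(0, 1000) for each beta_j (dnormv is parameterized by variance)
is.proper(function(x) dnormv(x, 0, 1000), -Inf, Inf)

# Half-Cauchy prior with scale 25 for sigma
is.proper(function(x) dhalfcauchy(x, 25), 0, Inf)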

2020-12-16 19:34:29
A numerical approximation algorithm iteratively maximizes the logarithm of the unnormalized joint posterior density as specified in this \code{Model} function. In Bayesian inference, the logarithm of the unnormalized joint posterior density is proportional to the sum of the log-likelihood and logarithm of the prior densities:

$$\log[p(\Theta|\textbf{y})] \propto \log[p(\textbf{y}|\Theta)] + \log[p(\Theta)]$$

where $\Theta$ is a set of parameters, $\textbf{y}$ is the data, $\propto$ means `proportional to'\footnote{For those unfamiliar with $\propto$, this symbol simply means that two quantities are proportional if they vary in such a way that one is a constant multiplier of the other. This is due to an unspecified constant of proportionality in the equation. Here, this can be treated as `equal to'.}, $p(\Theta|\textbf{y})$ is the joint posterior density, $p(\textbf{y}|\Theta)$ is the likelihood, and $p(\Theta)$ is the set of prior densities.
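
For concreteness, here is a minimal sketch of such a \code{Model} function for the linear regression above, in the style the \pkg{LaplacesDemon} documentation describes. The names in the \code{Data} list (\code{y}, \code{X}, \code{pos.beta}, \code{pos.sigma}) are illustrative assumptions, not necessarily the exact names used in the vignette:

# Sketch of a Model function returning the log of the unnormalized joint
# posterior density (LP) plus the list components LaplacesDemon expects.
Model <- function(parm, Data) {
  # Parameters
  beta  <- parm[Data$pos.beta]
  sigma <- interval(parm[Data$pos.sigma], 1e-100, Inf)  # keep sigma positive
  parm[Data$pos.sigma] <- sigma
  # Log of the prior densities
  beta.prior  <- sum(dnormv(beta, 0, 1000, log = TRUE)) # vague normal priors
  sigma.prior <- dhalfcauchy(sigma, 25, log = TRUE)     # half-Cauchy(25)
  # Log-likelihood: mu is the inner product of X and beta
  mu <- as.vector(tcrossprod(Data$X, t(beta)))
  LL <- sum(dnorm(Data$y, mu, sigma, log = TRUE))
  # Log of the unnormalized joint posterior density
  LP <- LL + beta.prior + sigma.prior
  list(LP = LP, Dev = -2 * LL, Monitor = LP,
       yhat = rnorm(length(mu), mu, sigma), parm = parm)
}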

2020-12-16 19:37:12
Finally, everything is put together to calculate the logarithm of the unnormalized joint posterior density. The expectation vector $\mu$ (the inner product of the design matrix $\textbf{X}$ and the transposed parameter vector $\beta$) and the standard deviation $\sigma$ are used to evaluate the sum of the log-likelihoods, where:

$$\textbf{y} \sim \mathcal{N}(\mu, \sigma^2)$$

and as noted before, the logarithm of the unnormalized joint posterior density is:

$$\log[p(\Theta|\textbf{y})] \propto \log[p(\textbf{y}|\Theta)] + \log[p(\Theta)]$$
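
Assuming a \code{Model} function like the sketch above and a data list \code{MyData} (containing at least \code{y}, \code{X}, \code{mon.names}, \code{parm.names}, and the position vectors), fitting could then look roughly like this; the object names are placeholders:

# Iterative maximization of the log unnormalized joint posterior density
Fit <- LaplaceApproximation(Model, parm = Initial.Values, Data = MyData)

# Or sample from the joint posterior with MCMC instead
Fit2 <- LaplacesDemon(Model, Data = MyData, Initial.Values = Initial.Values,
                      Iterations = 10000, Status = 1000, Thinning = 10)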

2020-12-16 19:42:11
$$\textbf{y} \sim \mathcal{N}(\mu, \sigma^2)$$
$$\mu = \alpha + \beta_1 \textbf{x} + \beta_2 (\textbf{x} - \theta)[(\textbf{x} - \theta) > 0]$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\beta \sim \mathcal{N}(0, 1000)$$
$$\sigma \sim \mathcal{HC}(25)$$
$$\theta \sim \mathcal{U}(-1.3, 1.1)$$

2020-12-16 20:02:31
This form can be stated as the unnormalized joint posterior being proportional to the likelihood times the prior. However, the goal in model-based Bayesian inference is usually not to summarize the unnormalized joint posterior distribution, but to summarize the marginal distributions of the parameters. The full parameter set $\Theta$ can typically be partitioned into

$$\Theta = \{\Phi, \Lambda\}$$

where $\Phi$ is the sub-vector of interest, and $\Lambda$ is the complementary sub-vector of $\Theta$, often referred to as a vector of nuisance parameters. In a Bayesian framework, the presence of nuisance parameters does not pose any formal, theoretical problems. A nuisance parameter is a parameter that exists in the joint posterior distribution of a model, though it is not a parameter of interest. The marginal posterior distribution of $\Phi$, the parameter of interest, can simply be written as

$$p(\Phi | \textbf{y}) = \int p(\Phi, \Lambda | \textbf{y}) d\Lambda$$

In model-based Bayesian inference, Bayes' theorem is used to estimate the unnormalized joint posterior distribution, and finally the user can assess and make inferences from the marginal posterior distributions.

2020-12-16 20:03:29
$$\textbf{Y}_{t,j} \sim \mathcal{N}(\mu_{t,j}, \sigma^2_j), \quad t=1,\dots,T, \quad j=1,\dots,J$$
$$\mu_{t,j} = \alpha_j + \sum^P_{p=1} \Gamma_{1:J,j,p}\Phi_{1:J,j,p}\textbf{Y}_{t-p,j}$$
$$\alpha_j \sim \mathcal{N}(0, 1000)$$
$$\Gamma_{i,k,p} \sim \mathcal{BERN}(0.5), \quad i=1,\dots,J, \quad k=1,\dots,J, \quad p=1,\dots,P$$
$$(\Phi_{i,k,p} | \Gamma_{i,k,p}) \sim (1 - \Gamma_{i,k,p})\mathcal{N}(0, 0.01) + \Gamma_{i,k,p}\mathcal{N}(0, 10), \quad i=1,\dots,J, \quad k=1,\dots,J, \quad p=1,\dots,P$$
$$\sigma_j \sim \mathcal{HC}(25)$$

2020-12-16 20:04:55
For example, suppose one asks the question: what is the probability of going to Hell, conditional on consorting (or given that a person consorts) with Laplace's Demon\footnote{This example is, of course, intended with humor.}. By replacing $A$ with $Hell$ and $B$ with $Consort$, the question becomes

$$\Pr(\mathrm{Hell} | \mathrm{Consort}) = \frac{\Pr(\mathrm{Consort} | \mathrm{Hell}) \Pr(\mathrm{Hell})}{\Pr(\mathrm{Consort})}$$

Note that a common fallacy is to assume that $\Pr(A | B) = \Pr(B | A)$, which is called the conditional probability fallacy.

2020-12-16 20:05:26
Another way to state Bayes' theorem is

$$\Pr(A_i | B) = \frac{\Pr(B | A_i)\Pr(A_i)}{\Pr(B | A_i)\Pr(A_i) +...+ \Pr(B | A_n)\Pr(A_n)}$$

Let's examine our \textit{burning} question by replacing $A_i$ with Hell or Heaven, and replacing $B$ with Consort (a worked numeric sketch follows the list):

\begin{itemize}
\item $\Pr(A_1) = \Pr(\mathrm{Hell})$
\item $\Pr(A_2) = \Pr(\mathrm{Heaven})$
\item $\Pr(B) = \Pr(\mathrm{Consort})$
\item $\Pr(A_1 | B) = \Pr(\mathrm{Hell} | \mathrm{Consort})$
\item $\Pr(A_2 | B) = \Pr(\mathrm{Heaven} | \mathrm{Consort})$
\item $\Pr(B | A_1) = \Pr(\mathrm{Consort} | \mathrm{Hell})$
\item $\Pr(B | A_2) = \Pr(\mathrm{Consort} | \mathrm{Heaven})$
\end{itemize}
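
With some made-up probabilities (chosen only to illustrate the arithmetic, not taken from the source), the expanded form can be evaluated directly in R:

# Hypothetical probabilities for illustration only
p.hell           <- 0.75   # Pr(A1) = Pr(Hell)
p.heaven         <- 0.25   # Pr(A2) = Pr(Heaven)
p.consort.hell   <- 0.60   # Pr(B | A1) = Pr(Consort | Hell)
p.consort.heaven <- 0.30   # Pr(B | A2) = Pr(Consort | Heaven)

# Denominator: Pr(B) = Pr(B | A1)Pr(A1) + Pr(B | A2)Pr(A2)
p.consort <- p.consort.hell * p.hell + p.consort.heaven * p.heaven  # 0.525

# Bayes' theorem: Pr(Hell | Consort)
p.consort.hell * p.hell / p.consort   # approximately 0.857 with these numbers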


2020-12-16 20:05:51
The basis for Bayesian inference is derived from Bayes' theorem. Here is Bayes' theorem, equation \ref{bayestheorem}, again

$$\Pr(A | B) = \frac{\Pr(B | A)\Pr(A)}{\Pr(B)}$$

Replacing $B$ with observations $\textbf{y}$, $A$ with parameter set $\Theta$, and probabilities $\Pr$ with densities $p$ (or sometimes $\pi$ or function $f$), results in the following

$$
p(\Theta | \textbf{y}) = \frac{p(\textbf{y} | \Theta)p(\Theta)}{p(\textbf{y})}$$

where $p(\textbf{y})$ will be discussed below, $p(\Theta)$ is the set of prior distributions of parameter set $\Theta$ before $\textbf{y}$ is observed, $p(\textbf{y} | \Theta)$ is the likelihood of $\textbf{y}$ under a model, and $p(\Theta | \textbf{y})$ is the joint posterior distribution, sometimes called the full posterior distribution, of parameter set $\Theta$, which expresses uncertainty about $\Theta$ after taking both the prior and data into account. Since there are usually multiple parameters, $\Theta$ represents a set of $j$ parameters, and may be considered hereafter in this article as

$$\Theta = \theta_1,...,\theta_j$$

The denominator

$$p(\textbf{y}) = \int p(\textbf{y} | \Theta)p(\Theta) d\Theta$$

defines the ``marginal likelihood'' of $\textbf{y}$, or the ``prior predictive distribution'' of $\textbf{y}$, and may be set to an unknown constant $\textbf{c}$. The prior predictive distribution\footnote{The predictive distribution was introduced by \citet{jeffreys61}.} indicates what $\textbf{y}$ should look like, given the model, before $\textbf{y}$ has been observed. Only the set of prior probabilities and the model's likelihood function are used for the marginal likelihood of $\textbf{y}$. The presence of the marginal likelihood of $\textbf{y}$ normalizes the joint posterior distribution, $p(\Theta | \textbf{y})$, ensuring it is a proper distribution and integrates to one.
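
A toy single-parameter example (an illustration, not from the source) makes the role of $p(\textbf{y})$ as a normalizing constant concrete: for $y \sim \mathcal{N}(\theta, 1)$ with prior $\theta \sim \mathcal{N}(0, 10^2)$, the product of likelihood and prior integrates to the marginal likelihood, and dividing by it yields a proper posterior density.

# Unnormalized posterior = likelihood x prior, evaluated over theta
set.seed(1)
y <- rnorm(10, mean = 1.5, sd = 1)
unnorm.post <- function(theta)
  sapply(theta, function(th) prod(dnorm(y, th, 1)) * dnorm(th, 0, 10))

# Marginal likelihood p(y): integrate the unnormalized posterior over theta
p.y <- integrate(unnorm.post, -Inf, Inf)$value

# Normalizing by p(y) gives a proper density that integrates to one
theta.grid <- seq(-2, 5, length.out = 500)
posterior  <- unnorm.post(theta.grid) / p.y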

2020-12-17 09:39:42
The core of models implemented in \pkg{brms} is the prediction of the response $y$ through predicting all parameters $\theta_p$ of the response distribution $D$, which is also called the model \code{family} in many R packages. We write
$$y_i \sim D(\theta_{1i}, \theta_{2i}, ...)$$
to stress the dependency on the $i$\textsuperscript{th} observation. Every parameter $\theta_p$ may be regressed on its own predictor term $\eta_p$ transformed by the inverse link function $f_p$, that is, $\theta_{pi} = f_p(\eta_{pi})$\footnote{A parameter can also be assumed constant across observations so that a linear predictor is not required.}. Such models are typically referred to as \emph{distributional models}\footnote{The models described in \citet{brms1} are a sub-class of the here described models.}. Details about the parameterization of each \code{family} are given in \code{vignette("brms\_families")}.

Suppressing the index $p$ for simplicity, a predictor term $\eta$ can generally be written as
$$
\eta = \mathbf{X} \beta + \mathbf{Z} u + \sum_{k = 1}^K s_k(x_k)
$$
In this equation, $\beta$ and $u$ are the population-level and group-level coefficients, respectively, and $\mathbf{X}, \mathbf{Z}$ are the corresponding design matrices. The terms $s_k(x_k)$ symbolize optional smooth functions of unspecified form based on covariates $x_k$ fitted via splines (see \citet{wood2011} for the underlying implementation in the \pkg{mgcv} package) or Gaussian processes \citep{williams1996}. The response $y$ as well as $\mathbf{X}$, $\mathbf{Z}$, and $x_k$ make up the data, whereas $\beta$, $u$, and the smooth functions $s_k$ are the model parameters being estimated. The coefficients $\beta$ and $u$ may be more commonly known as fixed and random effects, but I avoid these terms following the recommendations of \citet{gelmanMLM2006}. Details about prior distributions of $\beta$ and $u$ can be found in \citet{brms1} and under \code{help("set\_prior")}.
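
A minimal sketch of such a distributional model in \pkg{brms} syntax (the variable names \code{y}, \code{x}, \code{z}, \code{g} and the data set \code{mydata} are placeholders): both the mean and the residual standard deviation $\sigma$ of a Gaussian response receive their own predictor term, combining population-level, smooth, and group-level effects.

library(brms)
fit <- brm(
  bf(y ~ x + s(z) + (1 | g),   # population-level term, spline smooth, group-level term
     sigma ~ x),               # distributional regression on the sigma parameter
  data = mydata, family = gaussian()
)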

2020-12-17 09:40:15
As an alternative to the strictly additive formulation described above, predictor terms may also have any form specifiable in Stan. We call it a \emph{non-linear} predictor and write
$$\eta = f(c_1, c_2, ..., \phi_1, \phi_2, ...)$$
The structure of the function $f$ is given by the user, $c_r$ are known or observed covariates, and $\phi_s$ are non-linear parameters each having its own linear predictor term $\eta_{\phi_s}$ of the form specified above. In fact, we should think of non-linear parameters as placeholders for linear predictor terms rather than as parameters themselves. A frequentist implementation of such models, which inspired the non-linear syntax in \pkg{brms}, can be found in the \pkg{nlme} package \citep{nlme2016}.

2020-12-17 09:40:42
While some non-linear relationships, such as quadratic relationships, can be expressed within the basic R formula syntax, other more complicated ones cannot. For this reason, it is possible in \pkg{brms} to fully specify non-linear predictor terms similar to how it is done in \pkg{nlme}, but fully compatible with the extended multilevel syntax described above. Suppose, for instance, we want to model the non-linear growth curve
$$
y = b_1 (1 - \exp(-(x / b_2)^{b_3}))
$$
between $y$ and $x$ with parameters $b_1$, $b_2$, and $b_3$ (see Example 3 in this paper for an implementation of this model with real data). Furthermore, we want all three parameters to vary by a grouping variable $g$ and model those group-level effects as correlated. Additionally $b_1$ should be predicted by a covariate $z$. We can express this in \pkg{brms} using multiple formulas, one for the non-linear model itself and one per non-linear parameter:
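
The code listing itself was not preserved in this post; a sketch consistent with the description (placeholder names \code{y}, \code{x}, \code{z}, \code{g}, with \code{ID} linking the group-level effects of the three parameters so they are modeled as correlated) might look as follows:

growth_formula <- bf(
  y  ~ b1 * (1 - exp(-(x / b2)^b3)),   # the non-linear model itself
  b1 ~ z + (1 | ID | g),               # b1 predicted by z, varying by g
  b2 ~ (1 | ID | g),                   # b2 varying by g
  b3 ~ (1 | ID | g),                   # b3 varying by g
  nl = TRUE                            # mark the first formula as non-linear
)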

2020-12-17 10:45:05
Model
The Bayesian one-sample t-test makes the assumption that the observations are normally distributed with mean $\mu$ and variance $\sigma^2$. The model is then reparametrized in terms of the standardized effect size $\delta = \mu/\sigma$. For the standardized effect size, a Cauchy prior with location zero and scale $r = 1/\sqrt{2}$ is used. For the variance $\sigma^2$, Jeffreys's prior is used: $p(\sigma^2) \propto 1/\sigma^2$.
  
In this example, we are interested in comparing the null model $\mathcal{H}_0$, which posits that the effect size $\delta$ is zero, to the alternative hypothesis $\mathcal{H}_1$, which assigns $\delta$ the above described Cauchy prior.
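
The source does not show code at this point. As one way to obtain a Bayes factor for exactly this prior setup, the \pkg{BayesFactor} package's \code{ttestBF} uses the same Cauchy prior on $\delta$ with a user-supplied scale; this is a substitution for illustration, not necessarily the approach taken in the original vignette:

library(BayesFactor)
set.seed(123)
x <- rnorm(30, mean = 0.2, sd = 1)   # made-up data for illustration
ttestBF(x, rscale = 1/sqrt(2))       # Bayes factor comparing H1 against H0: delta = 0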
  

2020-12-17 10:46:43
Model and Data
The model that we will use assumes that each of the $n$ observations $y_i$ (where $i$ indexes the observation, $i = 1,2,...,n$) is normally distributed with corresponding mean $\theta_i$ and a common known variance $\sigma^2$: $y_i \sim \mathcal{N}(\theta_i, \sigma^2)$. Each $\theta_i$ is drawn from a normal group-level distribution with mean $\mu$ and variance $\tau^2$: $\theta_i \sim \mathcal{N}(\mu, \tau^2)$. For the group-level mean $\mu$, we use a normal prior distribution of the form $\mathcal{N}(\mu_0, \tau^2_0)$. For the group-level variance $\tau^2$, we use an inverse-gamma prior of the form $\text{Inv-Gamma}(\alpha, \beta)$.

In this example, we are interested in comparing the null model $\mathcal{H}_0$, which posits that the group-level mean $\mu = 0$, to the alternative model $\mathcal{H}_1$, which allows $\mu$ to be different from zero. First, we generate some data from the null model:
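
The data-generation code was not preserved in this post; a minimal sketch under the null model ($\mu = 0$), with illustrative values for the remaining constants, is:

set.seed(12345)
mu     <- 0      # group-level mean under H0
tau2   <- 0.5    # group-level variance (illustrative value)
sigma2 <- 1      # known observation variance (illustrative value)
n      <- 20     # number of observations

theta <- rnorm(n, mean = mu, sd = sqrt(tau2))      # group-level effects
y     <- rnorm(n, mean = theta, sd = sqrt(sigma2)) # observed data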

2020-12-17 10:47:25
In this vignette, we explain how bridge sampling can be performed when we are faced with parameter spaces that are in some way non-standard. We will look at simplex parameters and circular parameters.

Simplex parameters are encountered often, in particular in mixture models or when modeling compositional data, where a set of parameters $\theta_1, \dots, \theta_k$ is used that is constrained by $0 \leq \theta_j \leq 1$ and $\sum_{j=1}^k \theta_j = 1$. This happens often when we use relative weights of several components, or when we model proportions or probabilities.

Circular parameters are angles that lie on the circle, that is, the parameters are given in degrees ($0^\circ - 360^\circ$) or radians ($0 - 2\pi$). The core property of this type of parameter space is that it is periodical, that is, for example $\theta = 0^\circ = 360^\circ.$ Another way to think of such parameters is as two-dimensional unit vectors, $\boldsymbol{x} = \{x_1, x_2\}$, which are constrained by $\sqrt{x_1^2 + x_2^2} = 1$.
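
A small illustration (not from the source) of the two constraint types: unconstrained values can be mapped to a simplex with a softmax transform, and an angle can be represented as a two-dimensional unit vector.

softmax <- function(v) exp(v) / sum(exp(v))
theta <- softmax(c(0.3, -1.2, 2.0))   # each theta_j in [0, 1] and sum(theta) == 1
sum(theta)                            # 1

angle <- 5.5                          # radians in [0, 2*pi)
x <- c(cos(angle), sin(angle))        # unit-vector representation
sqrt(sum(x^2))                        # 1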

2020-12-17 18:21:56
$$
CI_{s} = \sum_{u=u_1}^{u_{NC}} \sum_{i=i_1}^{i_N} \frac{UR_{ui}}{N}.
$$

2020-12-17 18:22:34
$$
RFC_s = \frac{FC_s}{N} = \frac{\sum_{i=i_1}^{i_N} UR_i}{N}
$$

2020-12-17 18:23:42
$$
CV_{e} = {Uc_{e}}  \cdot{IC_{e}}  \cdot \sum {IUc_{e}}
$$

2020-12-17 18:29:42
$$
p(M_\gamma | y, X) \; = \; \frac{p(y |M_\gamma, X) p(M_\gamma)}{p(y|X)} \;  = \frac{p(y |M_\gamma, X) p(M_\gamma)        }{\sum_{s=1}^{2^K} p(y| M_s, X) p(M_s)}
$$

2020-12-17 18:32:25
$$ p(M_\gamma) = \theta^{k_\gamma} (1-\theta)^{K-k_\gamma} $$
Since the expected model size is $\bar{m}= K \theta$, the researcher's prior choice reduces to eliciting a prior expected model size $\bar{m}$ (which defines $\theta$ via the relation $\theta=\bar{m}/K$). Choosing a prior model size of $K/2$ yields $\theta=\frac{1}{2}$ and thus exactly the uniform model prior $p(M_\gamma)=2^{-K}$. Therefore, putting the prior model size at a value $< K/2$ tilts the prior distribution toward smaller model sizes, and vice versa.
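
In the \pkg{BMS} package this corresponds (as a hedged sketch; \code{mydata} is a placeholder data frame with the dependent variable in its first column) to choosing \code{mprior = "fixed"} together with a prior expected model size:

library(BMS)
fit <- bms(mydata, burn = 10000, iter = 50000,
           mprior = "fixed", mprior.size = 3)   # prior expected model size of 3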

2020-12-17 18:49:22
Let $y_{i}$ be a $(1 \times T)$ matrix containing a single endogenous variable from $Y$ for $i = 1, 2, ..., n$. Let $x_{i}$ be an $((L+1) \times T)$ matrix of $L$ lags of $y_{i}$ and a constant for $i = 1, 2, ..., n$. $\phi_{i}$ is a $(1 \times (L+1))$ matrix containing the lagged coefficients from the reduced form univariate Autoregression (AR) model for $i = 1, 2, ..., n$. $e_{i}$ is a $(1 \times T)$ matrix of the residuals from the univariate AR model for $i = 1, 2, ..., n$. $e$ is an $(n \times T)$ matrix of residuals from the univariate AR models. $\Sigma$ is an $(n \times n)$ symmetric covariance matrix of the residuals from the univariate AR models. $\Sigma_{i}$ is the $(i,i)$ element of $\Sigma$ for $i = 1, 2, ..., n$.
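
A rough R sketch (not from the source) of building the lag matrix and the reduced-form univariate AR fit for one variable; note that for \code{lm} the objects are stored as $(T-L) \times (L+1)$ rather than in the transposed orientation used in the text:

L   <- 2
y_i <- as.numeric(arima.sim(list(ar = 0.5), n = 100))  # placeholder series
T0  <- length(y_i)

# Columns: L lags of y_i plus a constant, aligned with y_i[(L+1):T0]
lags <- sapply(1:L, function(l) y_i[(L + 1 - l):(T0 - l)])
x_i  <- cbind(lags, const = 1)

fit   <- lm(y_i[(L + 1):T0] ~ x_i - 1)  # reduced-form univariate AR
phi_i <- coef(fit)                      # lagged coefficients and constant
e_i   <- resid(fit)                     # residuals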

2020-12-17 18:52:29
$p(A|Y)$ is the posterior density of $A|Y$. The products of $p(det(A))$ and/or $p(H)$ are multiplied by $p(A|Y)$ when priors are chosen for $det(A)$ and/or $H$ ($A^{-1}$), respectively. $p(A)$, $p(det(A))$, and $p(H)$ are products of the prior densities for $A$, $det(A)$, and $H$ ($A^{-1}$), respectively. $det(A)$ and $det(A \Omega A^{\top})$ are the determinants of the matrices $A$ and $A \Omega A^{\top}$, respectively.

2020-12-17 19:05:45
$$
\mathbf{Q}_t = \left(\bar{\mathbf{Q}} - \mathbf{A}'\bar{\mathbf{Q}}\mathbf{A} - \mathbf{B}'\bar{\mathbf{Q}}\mathbf{B} - \mathbf{G}'\bar{\mathbf{Q}}^{-}\mathbf{G}\right) + \mathbf{A}'\mathbf{z}_{t-1}\mathbf{z}'_{t-1}\mathbf{A} + \mathbf{B}'\mathbf{Q}_{t-1}\mathbf{B} + \mathbf{G}'\mathbf{z}^{-}_{t}\mathbf{z}'^{-}_{t}\mathbf{G}
$$

2020-12-17 19:06:21
$$
{Q_{t + n}} = \left( {1 - \alpha  - \beta } \right)\bar Q + \alpha {E_t}\left[ {{z_{t + n - 1}}{{z'}_{t + n - 1}}} \right] + \beta {Q_{t + n - 1}}
$$

2020-12-17 19:07:08
$$
E_t\left[R_{t+n}\right] = \sum_{i=0}^{n-2}\left(1 - \alpha - \beta\right)\bar{R}\left(\alpha + \beta\right)^{i} + \left(\alpha + \beta\right)^{n-1} R_{n+1}
$$