Forum › Data Science and Artificial Intelligence › Data Analysis and Data Science › MATLAB and Mathematical Software Board
2020-12-25 17:26:10
$$
\begin{align}
y_t &= \begin{bmatrix} 1 & \phi \end{bmatrix} \bm{x}_{t-1} + \varepsilon_t\label{obseq}\\
\bm{x}_t &= \begin{bmatrix}
1 & \phi\\
0 & \phi
\end{bmatrix}\bm{x}_{t-1}
+ \begin{bmatrix}
\alpha\\
\beta
\end{bmatrix}\varepsilon_t.\label{stateeq}
\end{align}
$$
QR code

Scan the code to add me and I will pull you into the group

Please note: Name - Company - Position

so your request can be vetted; requests without this note will be declined

2020-12-25 17:27:49
The model is fully specified once we state the distribution of the
error term $\varepsilon_t$. Usually we assume that these are
independent and identically distributed, following a normal
distribution with mean 0 and variance $\sigma^2$, which we write as
$\varepsilon_t \sim\mbox{NID}(0, \sigma^2)$.

$$
\begin{align*}
y_t &= (\ell_{t-1} + \phi b_{t-1})(1 + \varepsilon_t)\\
\ell_t &= (\ell_{t-1} + \phi b_{t-1})(1 + \alpha \varepsilon_t)\\
b_t &= \phi b_{t-1} + \beta(\ell_{t-1}+\phi b_{t-1})\varepsilon_t,
\end{align*}
$$
$$
\begin{align*}
y_t &= \begin{bmatrix} 1 & \phi \end{bmatrix} \bm{x}_{t-1}(1 + \varepsilon_t)\\
\bm{x}_t &= \begin{bmatrix}
1 & \phi \\
0 & \phi
\end{bmatrix}\bm{x}_{t-1}
+ \begin{bmatrix}
\alpha\\
\beta
\end{bmatrix}
\begin{bmatrix} 1 & \phi \end{bmatrix} \bm{x}_{t-1}\,\varepsilon_t.
\end{align*}
$$

2020-12-25 17:28:27
We constrain the initial states $\bm{x}_0$ so that the seasonal
indices add to zero for additive seasonality, and add to $m$ for
multiplicative seasonality. There have been several suggestions for
restricting the parameter space for  $\alpha$, $\beta$ and $\gamma$.
The traditional approach is to ensure that the various equations can
be interpreted as weighted averages, thus requiring $\alpha$,
$\beta^*=\beta/\alpha$, $\gamma^*=\gamma/(1-\alpha)$ and $\phi$ to
all lie within $(0,1)$. This suggests
$$0<\alpha<1,\qquad 0<\beta<\alpha,\qquad 0<\gamma <
1-\alpha,\qquad\mbox{and}\qquad 0<\phi<1.
$$
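These constraints are easy to check programmatically. A minimal sketch in Python (the function name and the illustrative parameter values below are my own, not from the text):

```python
def in_traditional_region(alpha, beta, gamma, phi):
    """Check the traditional (weighted-average) parameter constraints:
    0 < alpha < 1, 0 < beta < alpha, 0 < gamma < 1 - alpha, 0 < phi < 1."""
    return (0 < alpha < 1
            and 0 < beta < alpha
            and 0 < gamma < 1 - alpha
            and 0 < phi < 1)

# Illustrative values: a typical damped-trend parameterization passes,
# while beta > alpha violates the weighted-average interpretation.
print(in_traditional_region(0.3, 0.1, 0.2, 0.9))   # True
print(in_traditional_region(0.3, 0.5, 0.2, 0.9))   # False
```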

2020-12-25 17:29:11
$$\mbox{AIC} = L^*(\hat{\theta},\hat{\bm{x}}_0) + 2q,
$$

2020-12-25 17:29:52
Choosing the model order using unit root tests and the AIC

A non-seasonal ARIMA($p,d,q$) process is given by
$$
\phi(B)(1-B)^{d}y_{t} = c + \theta(B)\varepsilon_t
$$

where $\{\varepsilon_t\}$ is a white noise process with mean zero
and variance $\sigma^2$, $B$ is the backshift operator, and
$\phi(z)$ and $\theta(z)$ are polynomials of order $p$ and $q$
respectively. To ensure causality and invertibility, it is assumed
that $\phi(z)$ and $\theta(z)$ have no roots for $|z|<1$.

2020-12-25 17:30:36
The seasonal ARIMA$(p,d,q)(P,D,Q)_m$ process is given by
$$
\Phi(B^m)\phi(B)(1-B^{m})^D(1-B)^dy_{t} = c + \Theta(B^m)\theta(B)\varepsilon_t
$$
where $\Phi(z)$ and $\Theta(z)$ are polynomials of orders $P$ and
$Q$ respectively, each containing no roots inside the unit circle.
If $c\ne0$, there is an implied polynomial of order $d+D$ in the
forecast function.

The main task in automatic ARIMA forecasting is selecting an
appropriate model order, that is, the values $p$, $q$, $P$, $Q$, $D$
and $d$. If $d$ and $D$ are known, we can select the orders $p$, $q$,
$P$ and $Q$ via an information criterion such as the AIC:
$$\mbox{AIC} = -2\log(L) + 2(p+q+P+Q+k)$$
where $k=1$ if $c\ne0$ and 0 otherwise, and $L$ is the maximized
likelihood of the model fitted to the \emph{differenced} data
$(1-B^m)^D(1-B)^dy_t$.  The likelihood of the full model for $y_t$
is not actually defined, so the AIC values for different levels of
differencing are not comparable.
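The AIC bookkeeping above can be sketched directly. A small helper (the function name and the example log-likelihood values are illustrative assumptions, not from the text):

```python
def arima_aic(loglik, p, q, P, Q, c_nonzero):
    """AIC = -2 log(L) + 2(p + q + P + Q + k), with k = 1 if c != 0 else 0.
    `loglik` must come from the model fitted to the differenced data,
    so AICs are only comparable across models with the same d and D."""
    k = 1 if c_nonzero else 0
    return -2.0 * loglik + 2 * (p + q + P + Q + k)

# Illustrative comparison of two candidate orders on the same differenced series
aic1 = arima_aic(-512.3, p=2, q=1, P=0, Q=0, c_nonzero=True)
aic2 = arima_aic(-514.0, p=1, q=1, P=0, Q=0, c_nonzero=True)
print(aic1 < aic2)  # True: the first candidate order would be preferred
```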

2020-12-25 17:30:59
For seasonal data, we consider ARIMA$(p,d,q)(P,D,Q)_m$ models where
$m$ is the seasonal frequency and $D=0$ or $D=1$ depending on an
extended Canova-Hansen test [@CH95]. Canova and Hansen only
provide critical values for $2<m<13$. In our implementation of their
test, we allow any value of $m>1$. Let $C_m$ be the critical value
for seasonal period $m$. We plotted $C_m$ against $m$ for values of
$m$ up to 365 and noted that they fit the line $C_m = 0.269
m^{0.928}$ almost exactly. So for $m>12$, we use this simple
expression to obtain the critical value.
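The fitted expression for the critical value can be evaluated directly; a quick sketch (the function name is mine, and the example periods are illustrative):

```python
def ch_critical_value(m):
    """Approximate Canova-Hansen critical value for seasonal period m > 12,
    using the fitted curve C_m = 0.269 * m**0.928 from the text."""
    return 0.269 * m ** 0.928

# Example seasonal periods: half-hourly-in-day, weekly-in-year, daily-in-year
for m in (24, 52, 365):
    print(m, round(ch_critical_value(m), 3))
```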

2020-12-25 17:51:39
$$
\begin{align*}
\frac{dy_1}{dt} &= -k_1 \cdot y_1 + k_2 \cdot y_2 \cdot y_3 \\
\frac{dy_2}{dt} &= k_1 \cdot y_1 - k_2 \cdot y_2 \cdot y_3 - k_3 \cdot y_2^2 \\
\frac{dy_3}{dt} &= k_3 \cdot y_2^2
\end{align*}
$$
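The kinetic system above can be integrated numerically. A minimal fixed-step RK4 sketch; the rate constants, step size, and initial condition are illustrative assumptions (the classic stiff parameterization of this system would need an implicit solver):

```python
def rhs(y, k1, k2, k3):
    """Right-hand side of the three-species kinetic system."""
    y1, y2, y3 = y
    return (-k1 * y1 + k2 * y2 * y3,
            k1 * y1 - k2 * y2 * y3 - k3 * y2 * y2,
            k3 * y2 * y2)

def rk4(y, h, n, k1, k2, k3):
    """Integrate n steps of size h with the classical Runge-Kutta method."""
    f = lambda s: rhs(s, k1, k2, k3)
    for _ in range(n):
        a = f(y)
        b = f(tuple(yi + 0.5 * h * ai for yi, ai in zip(y, a)))
        c = f(tuple(yi + 0.5 * h * bi for yi, bi in zip(y, b)))
        d = f(tuple(yi + h * ci for yi, ci in zip(y, c)))
        y = tuple(yi + h / 6 * (ai + 2 * bi + 2 * ci + di)
                  for yi, ai, bi, ci, di in zip(y, a, b, c, d))
    return y

# Illustrative, non-stiff rate constants; the three derivatives sum to zero,
# so total mass y1 + y2 + y3 is conserved along the trajectory.
y = rk4((1.0, 0.0, 0.0), h=0.01, n=1000, k1=0.5, k2=0.3, k3=0.2)
print(sum(y))  # stays (numerically) at 1.0
```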

2020-12-25 17:54:57
$$
\begin{align*}
    \frac{dX}{dt} &=  a \cdot X + Y \cdot Z \\
    \frac{dY}{dt} &=  b \cdot (Y - Z) \\
    \frac{dZ}{dt} &=  - X \cdot Y + c \cdot Y - Z
\end{align*}
$$

2020-12-25 18:07:46
$$
D(Y | X) = D(Y | X_1, \dots, X_m) = D(Y | f( X_1, \dots,
X_m)),
$$

2020-12-25 18:13:57
The model that we will use assumes that each of the $n$ observations $y_i$ (where $i$ indexes the observation, $i = 1,2,...,n$) is normally distributed with corresponding mean $\theta_i$ and a common known variance $\sigma^2$: $y_i \sim \mathcal{N}(\theta_i, \sigma^2)$. Each $\theta_i$ is drawn from a normal group-level distribution with mean $\mu$ and variance $\tau^2$: $\theta_i \sim \mathcal{N}(\mu, \tau^2)$. For the group-level mean $\mu$, we use a normal prior distribution of the form $\mathcal{N}(\mu_0, \tau^2_0)$. For the group-level variance $\tau^2$, we use an inverse-gamma prior of the form $\text{Inv-Gamma}(\alpha, \beta)$.

In this example, we are interested in comparing the null model $\mathcal{H}_0$, which posits that the group-level mean $\mu = 0$, to the alternative model $\mathcal{H}_1$, which allows $\mu$ to be different from zero. First, we generate some data from the null model:
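Generating data from the null model can be sketched with the standard library alone. The sample size and the $\tau$, $\sigma$ values below are illustrative assumptions, not from the text:

```python
import random

random.seed(42)          # reproducible draws

n = 20                   # number of observations (assumed)
mu = 0.0                 # group-level mean under H0
tau = 0.5                # group-level standard deviation (assumed)
sigma = 1.0              # known observation-level standard deviation (assumed)

# theta_i ~ N(mu, tau^2), then y_i ~ N(theta_i, sigma^2)
theta = [random.gauss(mu, tau) for _ in range(n)]
y = [random.gauss(t, sigma) for t in theta]

print(len(y))  # 20
```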

2020-12-25 19:12:02
$$
\begin{align*}
\frac{\partial O_2}{\partial t} &= -\frac{\partial \mathrm{Flux}}{\partial x} - \mathrm{cons} \cdot \frac{O_2}{O_2 + k_s}\\
\mathrm{Flux} &= -D \cdot \frac{\partial O_2}{\partial x}\\
O_2(x = 0) &= \mathrm{upO2}
\end{align*}
$$
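A minimal method-of-lines sketch of this diffusion-consumption model. The grid, the parameter values, and the zero-gradient bottom boundary are all illustrative assumptions (only $O_2(x{=}0)=\mathrm{upO2}$ is given in the text):

```python
# Discretize 0 <= x <= L into nx interior nodes; O2 is fixed to upO2 at x = 0,
# zero-gradient at x = L; Flux = -D dO2/dx, consumption is Monod-type.
D, cons, ks, upO2 = 1.0, 1.0, 0.5, 10.0   # assumed parameter values
L, nx = 1.0, 50
dx = L / nx
dt = 0.4 * dx * dx / D                     # within explicit-Euler stability limit

O2 = [upO2] + [0.0] * nx                   # node 0 holds the boundary value
for _ in range(2000):
    new = O2[:]
    for i in range(1, nx + 1):
        right = O2[i + 1] if i < nx else O2[i]      # zero-gradient at the bottom
        diff = D * (O2[i - 1] - 2 * O2[i] + right) / dx**2
        new[i] = O2[i] + dt * (diff - cons * O2[i] / (O2[i] + ks))
    O2 = new

print(min(O2) >= 0.0, max(O2) <= upO2)  # concentrations stay in [0, upO2]
```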

2020-12-25 20:37:29
A bivariate copula function describes the dependence structure between two random variables. Two random variables $X_1$ and $X_2$ are joined by a copula function $C$ if their joint cumulative distribution function can be written as
$$F(x_1, x_2) = C(F_1(x_1), F_2(x_2)), \qquad -\infty \le x_1, x_2 \le +\infty$$

2020-12-25 20:37:45
There exists, for every bivariate (multivariate, by extension) distribution, a copula representation $C$ which is unique for continuous random variables. If the joint cumulative distribution function and the two marginals are known, the copula function can be written as
$$C(u, v) = F(F_1^{-1}(u), F_2^{-1}(v)), \qquad 0 \le u, v \le 1$$

2020-12-25 20:38:02
To obtain the joint probability density, the joint cumulative distribution $F(x_1, x_2)$ is differentiated, yielding
$$f(x_1, x_2) = f_1(x_1)\, f_2(x_2)\, c(F_1(x_1), F_2(x_2))$$
where $f_1$ and $f_2$ denote the marginal density functions and $c$ the copula density function corresponding to the copula cumulative distribution function $C$. Therefore from

2020-12-25 20:38:25
a bivariate probability density can be expressed using the marginal and the copula density, given that the copula function is absolutely continuous and twice differentiable.

When the functional forms of the marginal and the joint densities are known, the copula density can be derived as follows
$$c(F_1(x_1), F_2(x_2)) = \frac{f(x_1, x_2)}{f_1(x_1)\, f_2(x_2)}$$

While our interest does not lie in finding the copula function, the equations above show how one can move from the copula function to the bivariate density, or vice versa, given that the marginal densities are known. These decompositions allow the construction of other, possibly better, models for the variables than would be possible if we limited ourselves to existing standard bivariate distributions.

2020-12-25 20:38:55
Since there are two sources of heterogeneity in the data, the within- and between-study variability, the parameters involved in a meta-analysis of diagnostic accuracy studies vary at two levels. For each study $i$, $i = 1, \dots, n$, let $Y_{i} = (Y_{i1}, Y_{i2})$ denote the true positives and true negatives, $N_{i} = (N_{i1}, N_{i2})$ the diseased and healthy individuals, and $\pi_{i} = (\pi_{i1}, \pi_{i2})$ the `unobserved' sensitivity and specificity, respectively.

Given study-specific sensitivity and specificity, two separate binomial distributions describe the distribution of true positives and true negatives among the diseased and the healthy individuals as follows:
$$Y_{ij} \mid \pi_{ij}, \mathbf{x}_i \sim \mathrm{bin}(\pi_{ij}, N_{ij}), \qquad i = 1, \dots, n, \quad j = 1, 2$$

2020-12-25 20:39:19
where $\mathbf{x}_i$ generically denotes one or more covariates, possibly affecting $\pi_{ij}$. The binomial model above forms the higher level of the hierarchy and models the within-study variability. The second level of the hierarchy models the between-study variability of sensitivity and specificity, while accounting for their inherent negative correlation, with a bivariate distribution as follows:
$$
\begin{pmatrix}
g(\pi_{i1})\\
g(\pi_{i2})
\end{pmatrix} \sim f(g(\pi_{i1}), g(\pi_{i2})) = f(g(\pi_{i1}))\, f(g(\pi_{i2}))\, c(F_1(g(\pi_{i1})), F_2(g(\pi_{i2}))),
$$
where $g(\cdot)$ denotes a transformation that might be used to map the $(0, 1)$ range to the whole real line. While it is critical to ensure that the studies included in the meta-analysis satisfy the specified entry criterion, study-specific characteristics such as different test thresholds and other unobserved differences give rise to the second source of variability, the between-study variability. It is indeed the difference in test thresholds between studies that gives rise to the correlation between sensitivity and specificity. Including study-level covariates allows us to model part of the between-study variability; this covariate information can and should be used to model the mean as well as the correlation between sensitivity and specificity.

2020-12-25 20:39:56
In the next section we give more details on different bivariate distributions $f(g(\pi_{i1}), g(\pi_{i2}))$ constructed using the logit or identity link function $g(\cdot)$, different marginal densities and/or different copula densities $c$. We discuss their implications and demonstrate their application in meta-analysis of diagnostic accuracy studies. An overview of suitable parametric families of copulas for mixed models for diagnostic test accuracy studies was recently given by Nikoloulopoulos (2015). Here, we consider five copula functions that can model negative correlation.

Given the density and the distribution function of the univariate and bivariate standard normal distribution with correlation parameter $\rho \in (-1, 1)$, the bivariate Gaussian copula function and density are expressed [@Meyer] as
$$C(u, v, \rho) = \Phi_2(\Phi^{-1}(u), \Phi^{-1}(v), \rho),$$
$$c(u, v, \rho) = \frac{1}{\sqrt{1 - \rho^2}}\,\exp\!\bigg(\frac{2\rho\,\Phi^{-1}(u)\,\Phi^{-1}(v) - \rho^2\big(\Phi^{-1}(u)^2 + \Phi^{-1}(v)^2\big)}{2(1 - \rho^2)}\bigg)$$
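The Gaussian copula density can be evaluated with the standard library alone, using `statistics.NormalDist` for $\Phi^{-1}$. A sketch (the function name is mine):

```python
import math
from statistics import NormalDist

def gaussian_copula_density(u, v, rho):
    """c(u, v, rho) as in the formula above, with x = Phi^{-1}(u), y = Phi^{-1}(v)."""
    x = NormalDist().inv_cdf(u)
    y = NormalDist().inv_cdf(v)
    num = 2 * rho * x * y - rho * rho * (x * x + y * y)
    return math.exp(num / (2 * (1 - rho * rho))) / math.sqrt(1 - rho * rho)

# Sanity checks: independence (rho = 0) gives density 1 everywhere,
# and the density is symmetric in its first two arguments.
print(gaussian_copula_density(0.3, 0.8, 0.0))  # 1.0
print(abs(gaussian_copula_density(0.3, 0.8, -0.5)
          - gaussian_copula_density(0.8, 0.3, -0.5)) < 1e-12)  # True
```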

2020-12-25 20:41:03
The logit transformation is often used in binary logistic regression to relate the probability of "success" (coded as 1, failure as 0) of a binary response variable to a linear predictor that can, in theory, take values over the whole real line. In diagnostic test accuracy studies, the `unobserved' sensitivities and specificities range from 0 to 1, whereas their logits, $\log\big(\frac{\pi_{ij}}{1 - \pi_{ij}}\big)$, can take any real value, allowing the use of the normal distribution as follows:
$$
\mathrm{logit}(\pi_{ij}) \sim N(\mu_j, \sigma_j) \iff \mathrm{logit}(\pi_{ij}) = \mu_j + \varepsilon_{ij},
$$
where $\mu_j$ is the mean sensitivity ($j = 1$) or specificity ($j = 2$) for a study with zero random effects, and $\varepsilon_{ij}$ is the random effect associated with study $i$.
Now $u$ is the normal distribution function of $\mathrm{logit}(\pi_{i1})$ with parameters $\mu_1$ and $\sigma_1$, $v$ is the normal distribution function of $\mathrm{logit}(\pi_{i2})$ with parameters $\mu_2$ and $\sigma_2$, $\Phi_2$ is the distribution function of a bivariate standard normal distribution with correlation parameter $\rho \in (-1, 1)$, and $\Phi^{-1}$ is the quantile function of the standard normal distribution. In terms of $\rho$, Kendall's tau is expressed as $\frac{2}{\pi}\arcsin(\rho)$.

2020-12-25 20:41:27
With simple algebra, the copula density with normal marginal distributions simplifies to
$$
c(u, v, \rho) = \frac{1}{\sqrt{1 - \rho^2}}\,\exp\!\bigg(\frac{1}{2(1 - \rho^2)}\bigg(\frac{2\rho(x - \mu_1)(y - \mu_2)}{\sigma_1 \sigma_2} - \rho^2\bigg(\frac{(x - \mu_1)^2}{\sigma_1^2} + \frac{(y - \mu_2)^2}{\sigma_2^2}\bigg)\bigg)\bigg).
$$

The product of this copula density and the normal marginals of $\mathrm{logit}(\pi_{i1})$ and $\mathrm{logit}(\pi_{i2})$ forms a bivariate normal distribution, which characterizes the model of [@Reitsma], [@Arends], [@Chu], and [@Rileyb], the so-called bivariate random-effects meta-analysis (BRMA) model, recommended as the appropriate method for meta-analysis of diagnostic accuracy studies.
Study-level covariate information explaining heterogeneity is introduced through the parameters of the marginal and the copula as follows:
$$\boldsymbol{\mu}_j = \textbf{X}_j\textbf{B}_j^{\top}.$$
$\boldsymbol{X}_j$ is an $n \times p$ matrix containing the covariate values for the mean sensitivity ($j = 1$) and specificity ($j = 2$). For simplicity, assume that $\boldsymbol{X}_1 = \boldsymbol{X}_2 = \boldsymbol{X}$. $\boldsymbol{B}_j^\top$ is a $p \times 1$ vector of regression parameters, and $p$ is the number of parameters.
By inverting the logit functions, we obtain
$$
\pi_{ij} = \mathrm{logit}^{-1}(\mu_j + \varepsilon_{ij}).
$$

2020-12-26 12:00:33
$$
f'(x) = \lim\limits_{h \rightarrow 0} \dfrac{f(x + h) - f(x)}{h}
$$

2020-12-26 12:01:11
$$
\begin{aligned} \frac{\partial Y}{\partial X_3} = & \beta_3 + \beta_5 X_2 + \\ & \beta_6 X_1 + \beta_7 X_1 X_2 \end{aligned}
$$

2020-12-26 12:05:16
$$
f'(x) = \lim\limits_{h \rightarrow 0} \dfrac{f(x+h) - f(x-h)}{2h}
$$
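The two difference quotients can be compared numerically: the central difference is second-order accurate in $h$, the forward difference only first-order. A sketch (the test function and step size are illustrative):

```python
import math

def forward_diff(f, x, h):
    """One-sided difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """Symmetric difference quotient (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Differentiate sin at x = 1 (true derivative: cos(1)) with h = 1e-4.
x, h = 1.0, 1e-4
err_fwd = abs(forward_diff(math.sin, x, h) - math.cos(x))
err_cen = abs(central_diff(math.sin, x, h) - math.cos(x))
print(err_cen < err_fwd)  # True: O(h^2) beats O(h)
```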

2020-12-26 12:12:30
$$
\begin{align}
ME(X_1) = \dfrac{\partial Y}{\partial X_1} = f_1'(X) = g_1(f(X)) \\
ME(X_2) = \dfrac{\partial Y}{\partial X_2} = f_2'(X) = g_2(f(X))
\end{align}
$$

2020-12-26 12:13:18
$$
J = \begin{bmatrix}
        \frac{\partial g_1}{\partial \beta_0} &
        \frac{\partial g_1}{\partial \beta_1} &
        \frac{\partial g_1}{\partial \beta_2} &
        \dots &
        \frac{\partial g_1}{\partial \beta_K} \\
        \frac{\partial g_2}{\partial \beta_0} &
        \frac{\partial g_2}{\partial \beta_1} &
        \frac{\partial g_2}{\partial \beta_2} &
        \dots &
        \frac{\partial g_2}{\partial \beta_K} \\
        \vdots & \vdots & \vdots & \ddots & \vdots \\
        \frac{\partial g_M}{\partial \beta_0} &
        \frac{\partial g_M}{\partial \beta_1} &
        \frac{\partial g_M}{\partial \beta_2} &
        \dots &
        \frac{\partial g_M}{\partial \beta_K}
\end{bmatrix}
$$
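A Jacobian of this shape can be approximated entry by entry with central differences. A generic sketch (the function name and the example marginal-effect functions are illustrative assumptions):

```python
def numerical_jacobian(g, beta, h=1e-6):
    """Central-difference Jacobian of a vector function g at the point beta:
    J[m][k] approximates d g_m / d beta_k."""
    g0 = g(beta)
    J = [[0.0] * len(beta) for _ in range(len(g0))]
    for k in range(len(beta)):
        up = list(beta); up[k] += h
        dn = list(beta); dn[k] -= h
        gu, gd = g(up), g(dn)
        for m in range(len(g0)):
            J[m][k] = (gu[m] - gd[m]) / (2 * h)
    return J

# Illustrative marginal-effect functions of two parameters:
g = lambda b: [b[0] ** 2, b[0] * b[1]]
J = numerical_jacobian(g, [2.0, 3.0])
print(J)  # approximately [[4, 0], [3, 2]]
```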

2020-12-26 14:36:32
In first-year calculus, we define intervals such
as $(u, v)$ and $(u, \infty)$. Such an interval
is a \emph{neighborhood} of $a$
if $a$ is in the interval. Students should
realize that $\infty$ is only a
symbol, not a number. This is important since
we soon introduce concepts
such as $\lim_{x \to \infty} f(x)$.

When we introduce the derivative
$$
\lim_{x \to a} \frac{f(x) - f(a)}{x - a},
$$
we assume that the function is defined and
continuous in a neighborhood of $a$.

2020-12-26 14:37:17
$$
\frac{1 + 2x}{x + y + xy}
$$

2020-12-26 14:37:54
$$
\left( \frac{1 + x}{2 + y^{2}} \right)^{2}
$$

2020-12-26 14:38:41
$$
\lim_{x \to 0} f(x) = 0
$$
