2020-12-28 14:19:21
Consider a universe of eight assets for which we would like to design a risk parity portfolio $\mathbf{w}$ satisfying the following constraints:
$$ w_5 + w_6 + w_7 + w_8 \geq 30\%, $$ $$ w_2 + w_6 \geq w_1 + w_5 + 5\%, $$ and $$ \sum_i w_i = 100\% . $$
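For concreteness, these linear constraints can be written in matrix form; a minimal base-R sketch (variable names are illustrative):

```r
# Encode the constraints as C w <= c together with the budget 1' w = 1.
# Row 1: -(w5 + w6 + w7 + w8) <= -0.30
# Row 2:  w1 - w2 + w5 - w6   <= -0.05   (i.e., w2 + w6 >= w1 + w5 + 5%)
N <- 8
C <- rbind(c(0, 0, 0, 0, -1, -1, -1, -1),
           c(1, -1, 0, 0, 1, -1, 0, 0))
c_vec <- c(-0.30, -0.05)
A_eq <- matrix(1, nrow = 1, ncol = N)  # sum(w) == 1
b_eq <- 1
```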

2020-12-28 14:19:41
**Formulation "rc-double-index"**:
$$R(\mathbf{w}) = \sum_{i,j=1}^{N}\left(w_{i}\left(\boldsymbol{\Sigma}\mathbf{w}\right)_{i}-w_{j}\left(\boldsymbol{\Sigma}\mathbf{w}\right)_{j}\right)^{2}$$
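A minimal base-R sketch evaluating this risk concentration for a given portfolio:

```r
# Risk concentration R(w) = sum_{i,j} (w_i (Sigma w)_i - w_j (Sigma w)_j)^2
risk_concentration <- function(w, Sigma) {
  rc <- as.vector(w * (Sigma %*% w))  # risk contributions w_i (Sigma w)_i
  sum(outer(rc, rc, "-")^2)           # sum over all pairs (i, j)
}
```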

2020-12-28 14:20:37
$$
R(\mathbf{w}) = \sum_{i=1}^{N}\left(\frac{w_{i}\left(\boldsymbol{\Sigma}\mathbf{w}\right)_i}{\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}}-b_i\right)^{2}
$$

$$
R(\mathbf{w}) = \sum_{i=1}^{N}\left(w_{i}\left(\boldsymbol{\Sigma}\mathbf{w}\right)_i - b_i\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}\right)^{2}
$$

$$
R(\mathbf{w}) = \sum_{i=1}^{N}\left(\frac{w_{i}\left(\boldsymbol{\Sigma}\mathbf{w}\right)_i}{\sqrt{\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}}}-b_i\sqrt{\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}}\right)^{2}
$$

$$
R(\mathbf{w},\theta) = \sum_{i=1}^{N}\left(\frac{w_{i}\left(\boldsymbol{\Sigma}\mathbf{w}\right)_i}{b_i} - \theta \right)^{2}
$$

$$
R(\mathbf{w}) = \sum_{i=1}^{N}\left(\frac{w_{i}\left(\boldsymbol{\Sigma}\mathbf{w}\right)_i}{\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}}\right)^{2}
$$

2020-12-28 14:21:01
Consider the risk budgeting equations
$$w_i\left(\boldsymbol{\Sigma}\mathbf{w}\right)_i = b_i \;\mathbf{w}^{T}\boldsymbol{\Sigma}\mathbf{w}, \qquad i=1,\ldots,N$$

with $\mathbf{1}^T\mathbf{w}=1$ and $\mathbf{w} \ge \mathbf{0}$.

If we define $\mathbf{x}=\mathbf{w}/\sqrt{\mathbf{w}^{T}\boldsymbol{\Sigma}\mathbf{w}}$, then we can rewrite the risk budgeting equations compactly as
$$\boldsymbol{\Sigma}\mathbf{x} = \mathbf{b}/\mathbf{x}$$ with $\mathbf{x} \ge \mathbf{0}$ and we can always recover the portfolio by normalizing: $\mathbf{w} = \mathbf{x}/(\mathbf{1}^T\mathbf{x})$.

2020-12-28 14:21:21
So we can finally formulate the risk budgeting problem as the following convex optimization problem:
$$\underset{\mathbf{x}\ge\mathbf{0}}{\textsf{minimize}} \quad \frac{1}{2}\mathbf{x}^{T}\boldsymbol{\Sigma}\mathbf{x} - \mathbf{b}^T\log(\mathbf{x}).$$

2020-12-28 14:21:50
Roncalli et al. [@GriveauRichardRoncalli2013] proposed a slightly different formulation (also convex):
$$\underset{\mathbf{x}\ge\mathbf{0}}{\textsf{minimize}} \quad \sqrt{\mathbf{x}^{T}\boldsymbol{\Sigma}\mathbf{x}} - \mathbf{b}^T\log(\mathbf{x}).$$

Unfortunately, even though these two problems are convex, they do not conform to the typical problem classes that most solvers support (e.g., LP, QP, QCQP, SOCP, SDP, GP).

Nevertheless, there are several simple iterative algorithms that can be used, like the Newton method and the cyclical coordinate descent algorithm.

2020-12-28 14:22:08
**Newton method**
The Newton method obtains the iterates based on the gradient $\nabla f$ and the Hessian ${\sf H}$ of the objective function $f(\mathbf{x})$ as follows:
$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - {\sf H}^{-1}(\mathbf{x}^{(k)})\nabla f(\mathbf{x}^{(k)})$$

2020-12-28 14:22:31
* For the function $f(\mathbf{x}) = \frac{1}{2}\mathbf{x}^{T}\boldsymbol{\Sigma}\mathbf{x} -
  \mathbf{b}^T\log(\mathbf{x})$, the gradient and Hessian are given by
    $$\begin{array}{ll}
    \nabla f(\mathbf{x}) &= \boldsymbol{\Sigma}\mathbf{x} - \mathbf{b}/\mathbf{x}\\
    {\sf H}(\mathbf{x}) &= \boldsymbol{\Sigma} + {\sf Diag}(\mathbf{b}/\mathbf{x}^2).
    \end{array}$$

* For the function $f(\mathbf{x}) = \sqrt{\mathbf{x}^{T}\boldsymbol{\Sigma}\mathbf{x}} -
  \mathbf{b}^T\log(\mathbf{x})$, the gradient and Hessian are given by
    $$\begin{array}{ll}
    \nabla f(\mathbf{x}) &= \boldsymbol{\Sigma}\mathbf{x}/\sqrt{\mathbf{x}^{T}\boldsymbol{\Sigma}\mathbf{x}} - \mathbf{b}/\mathbf{x}\\
    {\sf H}(\mathbf{x}) &= \left(\boldsymbol{\Sigma} - \boldsymbol{\Sigma}\mathbf{x}\mathbf{x}^T\boldsymbol{\Sigma}/\mathbf{x}^{T}\boldsymbol{\Sigma}\mathbf{x}\right) / \sqrt{\mathbf{x}^{T}\boldsymbol{\Sigma}\mathbf{x}} + {\sf Diag}(\mathbf{b}/\mathbf{x}^2).
    \end{array}$$
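A minimal base-R sketch of the Newton iteration for the first objective (the second case only changes `grad` and `H` accordingly); the starting point and tolerance are illustrative, and no positivity safeguard is included:

```r
# Newton iteration for f(x) = 0.5 x' Sigma x - b' log(x); minimal sketch with
# no safeguard for x > 0 (a damped step would be advisable in practice).
risk_parity_newton <- function(Sigma, b, tol = 1e-10, maxit = 100) {
  N <- nrow(Sigma)
  x <- rep(1 / sqrt(N), N)                    # illustrative starting point
  for (k in seq_len(maxit)) {
    grad <- as.vector(Sigma %*% x) - b / x    # gradient of f
    H <- Sigma + diag(b / x^2)                # Hessian of f
    x_new <- x - as.vector(solve(H, grad))    # Newton step
    if (max(abs(x_new - x)) < tol) { x <- x_new; break }
    x <- x_new
  }
  x / sum(x)                                  # recover w = x / (1' x)
}
```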

2020-12-28 14:23:02
**Cyclical coordinate descent algorithm**
This method simply minimizes with respect to each element of the variable
$\mathbf{x}$ in a cyclical manner (denote $\mathbf{x}_{-i}=[x_1,\ldots,x_{i-1},0,x_{i+1},\ldots,x_N]^T$),
while holding the other elements fixed.

* For the function $f(\mathbf{x}) = \frac{1}{2}\mathbf{x}^{T}\boldsymbol{\Sigma}\mathbf{x} -
  \mathbf{b}^T\log(\mathbf{x})$, the minimization w.r.t. $x_i$ is
    $$\underset{x_i>0}{\textsf{minimize}} \quad \frac{1}{2}x_i^2\boldsymbol{\Sigma}_{ii} + x_i(\mathbf{x}_{-i}^T\boldsymbol{\Sigma}_{\cdot,i}) - b_i\log{x_i}$$
with gradient $\nabla_i f = x_i\boldsymbol{\Sigma}_{ii} + (\mathbf{x}_{-i}^T\boldsymbol{\Sigma}_{\cdot,i}) - b_i/x_i$.
Setting the gradient to zero gives the second-order equation
$$x_i^2\boldsymbol{\Sigma}_{ii} + x_i(\mathbf{x}_{-i}^T\boldsymbol{\Sigma}_{\cdot,i}) - b_i = 0$$
with positive solution given by
$$x_i^\star = \frac{-(\mathbf{x}_{-i}^T\boldsymbol{\Sigma}_{\cdot,i})+\sqrt{(\mathbf{x}_{-i}^T\boldsymbol{\Sigma}_{\cdot,i})^2+
4\boldsymbol{\Sigma}_{ii} b_i}}{2\boldsymbol{\Sigma}_{ii}}.$$

* The derivation for the function
$f(\mathbf{x}) = \sqrt{\mathbf{x}^{T}\boldsymbol{\Sigma}\mathbf{x}} - \mathbf{b}^T\log(\mathbf{x})$
follows similarly. The update for $x_i$ is given by
$$x_i^\star = \frac{-(\mathbf{x}_{-i}^T\boldsymbol{\Sigma}_{\cdot,i})+\sqrt{(\mathbf{x}_{-i}^T\boldsymbol{\Sigma}_{\cdot,i})^2+
4\boldsymbol{\Sigma}_{ii} b_i \sqrt{\mathbf{x}^{T}\boldsymbol{\Sigma}\mathbf{x}}}}{2\boldsymbol{\Sigma}_{ii}}.$$
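Both updates are closed-form; a minimal base-R sketch of the cyclical updates for the first objective (starting point and tolerance are illustrative):

```r
# Cyclical coordinate descent for f(x) = 0.5 x' Sigma x - b' log(x) (sketch).
risk_parity_ccd <- function(Sigma, b, tol = 1e-10, maxit = 1000) {
  N <- nrow(Sigma)
  x <- rep(1 / sqrt(N), N)                    # illustrative starting point
  for (k in seq_len(maxit)) {
    x_old <- x
    for (i in seq_len(N)) {
      cross <- sum(x[-i] * Sigma[-i, i])      # x_{-i}' Sigma_{., i}
      x[i] <- (-cross + sqrt(cross^2 + 4 * Sigma[i, i] * b[i])) /
        (2 * Sigma[i, i])                     # positive root of the quadratic
    }
    if (max(abs(x - x_old)) < tol) break
  }
  x / sum(x)                                  # w = x / (1' x)
}
```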

2020-12-28 14:23:31
Many practical formulations deployed to design risk parity portfolios lead to nonconvex problems,
especially when additional objective terms, such as mean return or variance, or additional
constraints, such as short selling, are taken into account. To circumvent the complications
that arise in such formulations, Feng & Palomar [@FengPal2015riskparity] proposed a method called successive convex
approximation (SCA). The SCA method works by convexifying the risk concentration term at some
pre-defined point, casting the nonconvex problem into a much simpler strongly convex
optimization problem. This procedure is then iterated until convergence is achieved. It is important
to highlight that the SCA method always converges to a stationary point.

At the $k$-th iteration, the SCA method aims to solve
\begin{align}\begin{array}{ll}
    \underset{\mathbf{w}}{\textsf{minimize}} & \sum_{i=1}^{n}\left(g_i(\mathbf{w}^k) +
    (\nabla g_i(\mathbf{w}^{k}))^{T}(\mathbf{w} - \mathbf{w}^{k})\right)^2 +
    \tau ||\mathbf{w} - \mathbf{w}^{k}||^{2}_{2} + \lambda F(\mathbf{w})\\
\textsf{subject to} & \mathbf{w}^{T}\mathbf{1} = 1, \mathbf{w} \in \mathcal{W},
\end{array}\end{align}
where the first-order Taylor expansion of $g_i(\mathbf{w})$ has been used.

After some mathematical manipulations described in detail in [@FengPal2015riskparity], the optimization
problem above can be rewritten as
\begin{align}\begin{array}{ll}
    \underset{\mathbf{w}}{\textsf{minimize}} & \dfrac{1}{2}\mathbf{w}^{T}\mathbf{Q}^{k}\mathbf{w} +
    \mathbf{w}^{T}\mathbf{q}^{k} + \lambda F(\mathbf{w})\\
\textsf{subject to} & \mathbf{w}^{T}\mathbf{1} = 1, \mathbf{w} \in \mathcal{W},
\end{array}\end{align}
where
\begin{align}
    \mathbf{Q}^{k} & \triangleq 2(\mathbf{A}^{k})^{T}\mathbf{A}^{k} + \tau \mathbf{I},\\
    \mathbf{q}^{k} & \triangleq 2(\mathbf{A}^{k})^{T}\mathbf{g}(\mathbf{w}^{k}) - \mathbf{Q}^{k}\mathbf{w}^{k},
\end{align}
and
\begin{align}
    \mathbf{A}^{k} & \triangleq \left[\nabla_{\mathbf{w}} g_{1}\left(\mathbf{w}^{k}\right), ...,
                              \nabla_{\mathbf{w}} g_{n}\left(\mathbf{w}^{k}\right)\right]^{T} \\
    \mathbf{g}\left(\mathbf{w}^{k}\right) & \triangleq \left[g_{1}\left(\mathbf{w}^{k}\right), ...,
                                                   g_{n}\left(\mathbf{w}^{k}\right)\right]^{T}.
\end{align}

The above problem is a quadratic program (QP), which can be efficiently solved by
standard R libraries. Furthermore, adding mean return or variance terms keeps the
structure of the problem intact.
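As an illustration only, one SCA iteration can be cast with the quadprog package for the choice $g_i(\mathbf{w}) = w_i(\boldsymbol{\Sigma}\mathbf{w})_i - b_i\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}$ (one of the formulations listed earlier), taking $\mathcal{W} = \{\mathbf{w}\ge\mathbf{0}\}$ and dropping $\lambda F(\mathbf{w})$; in practice the iterate is also smoothed with a step size $\gamma^k$:

```r
# One SCA iteration (sketch) for g_i(w) = w_i (Sigma w)_i - b_i w' Sigma w.
# quadprog::solve.QP minimizes 0.5 x' D x - d' x  s.t.  A' x >= b0,
# with the first `meq` constraints treated as equalities.
library(quadprog)

sca_step <- function(w, Sigma, b, tau = 1e-6) {
  N <- length(w)
  Sw <- as.vector(Sigma %*% w)
  g <- w * Sw - b * sum(w * Sw)                       # g(w^k)
  A_k <- diag(Sw) + diag(w) %*% Sigma - 2 * b %o% Sw  # rows are grad g_i(w^k)
  Q <- 2 * crossprod(A_k) + tau * diag(N)             # Q^k
  q <- 2 * as.vector(t(A_k) %*% g) - as.vector(Q %*% w)  # q^k
  Amat <- cbind(rep(1, N), diag(N))                   # 1'w = 1 and w >= 0
  bvec <- c(1, rep(0, N))
  solve.QP(Dmat = Q, dvec = -q, Amat = Amat, bvec = bvec, meq = 1)$solution
}
```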

2021-1-2 08:18:02
$$
\Phi(L){(1 - L)^d}({y_t} - {\mu _t}) = \Theta (L){\varepsilon _t},
$$

2021-1-2 08:18:42
$$
{\mu _t} = \mu  + \sum\limits_{i = 1}^{m - n} {{\delta _i}} {x_{i,t}} + \sum\limits_{i = m - n + 1}^m {\delta _i}{x_{i,t}}{\sigma _t} + \xi \sigma _t^k,
$$

2021-1-2 08:19:13
$$
{z_t}(\theta ) = \frac{{{y_t} - \mu (\theta ,{x_t})}}{{\sigma (\theta ,{x_t})}},
$$

2021-1-2 08:19:58
$$
\sigma _t^2 = \left( {\omega  + \sum\limits_{j = 1}^m {{\zeta _j}{v_{jt}}} } \right) + \sum\limits_{j = 1}^q {{\alpha _j}\varepsilon _{t - j}^2 + } \sum\limits_{j = 1}^p {{\beta _j}\sigma _{t - j}^2},
$$
with $\sigma_t^2$ denoting the conditional variance, $\omega$ the intercept
and $\varepsilon_t^2$ the residuals from the mean filtration process discussed
previously. The GARCH order is defined by $(q, p)$ (ARCH, GARCH), with possibly
$m$ external regressors $v_j$ which are passed *pre-lagged*.
If variance targeting is used, then $\omega$ is replaced by,

2021-1-2 08:20:49
$$
{{\bar \sigma }^2}\left( {1 - \hat P} \right) - \sum\limits_{j = 1}^m {{\zeta _j}{{\bar v}_j}}
$$

where ${\bar \sigma}^2$ is the unconditional variance of $\varepsilon^2$,
consistently estimated by its sample counterpart at every iteration of the
solver following the mean equation filtration; ${\bar v}_j$ is the sample mean
of the $j$th external regressor in the variance equation (assuming
stationarity); and $\hat P$ is the persistence, defined below. If a numeric
value was provided to the `variance.targeting` option in the specification
(instead of a logical), it is used instead of ${\bar \sigma }^2$ in the
calculation. (Note that this value should relate to the variance in the plain
vanilla GARCH model. In more general models such as the APARCH, it relates to
$\sigma^{\delta}$, which may not be obvious since $\delta$ is not known prior
to estimation, so care should be taken in those cases. Finally, if scaling is
used in the estimation (via the `fit.control` option), this value is also
automatically scale-adjusted by the routine.)

2021-1-2 08:21:09
$$
\hat P = \sum\limits_{j = 1}^q {{\alpha _j}}  + \sum\limits_{j = 1}^p {{\beta _j}}.
$$
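For reference, a minimal rugarch specification sketch with variance targeting enabled (the data series, orders and distribution are illustrative choices):

```r
library(rugarch)

# Illustrative sGARCH(1,1) specification with variance targeting enabled:
spec <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1),
                        variance.targeting = TRUE),
  mean.model = list(armaOrder = c(1, 1), include.mean = TRUE),
  distribution.model = "norm"
)
# fit <- ugarchfit(spec, data = my_returns)  # my_returns: hypothetical return series
```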

2021-1-2 08:22:05
$$
{\log _e}\left( {\sigma _t^2} \right) = \left( {\omega  + \sum\limits_{j = 1}^m {{\zeta _j}{v_{jt}}} } \right) + \sum\limits_{j = 1}^q {\left( {{\alpha _j}{z_{t - j}} + {\gamma _j}\left( {\left| {{z_{t - j}}} \right| - E\left| {{z_{t - j}}} \right|} \right)} \right) + } \sum\limits_{j = 1}^p {{\beta _j}{{\log }_e}\left( {\sigma _{t - j}^2} \right)}
$$

2021-1-2 08:23:01
$$ \hat P = \sum\limits_{j = 1}^q {{\alpha _j}}  + \sum\limits_{j = 1}^p {{\beta _j} + } \sum\limits_{j = 1}^q {{\gamma _j}\kappa }, $$

2021-1-2 08:24:06
$$
\sigma _t^\delta  = \left( {\omega  + \sum\limits_{j = 1}^m {{\zeta _j}{v_{jt}}} } \right) + \sum\limits_{j = 1}^q {{\alpha _j}{{\left( {\left| {{\varepsilon _{t - j}}} \right| - {\gamma _j}{\varepsilon _{t - j}}} \right)}^\delta } + } \sum\limits_{j = 1}^p {{\beta _j}\sigma _{t - j}^\delta }
$$

2021-1-2 08:24:59
$$
\sigma _t^\lambda  = \left( {\omega  + \sum\limits_{j = 1}^m {{\zeta _j}{v_{jt}}} } \right) + \sum\limits_{j = 1}^q {{\alpha _j}\sigma _{t - j}^\lambda {{\left( {\left| {{z_{t - j}} - {\eta _{2j}}} \right| - {\eta _{1j}}\left( {{z_{t - j}} - {\eta _{2j}}} \right)} \right)}^\delta } + } \sum\limits_{j = 1}^p {{\beta _j}\sigma _{t - j}^\lambda }
$$

which is a Box-Cox transformation of the conditional standard deviation whose
shape is determined by $\lambda$, while the parameter $\delta$ transforms the
absolute-value function, which is subject to rotations and shifts through the
$\eta_{1j}$ and $\eta_{2j}$ parameters respectively. Various submodels arise
from this model and are passed to the `ugarchspec` 'variance.model' list
via the submodel option.
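For instance, a hedged sketch selecting the TGARCH submodel of the family model (orders and distribution are illustrative):

```r
library(rugarch)

# Illustrative specification selecting the TGARCH submodel of the fGARCH family:
spec_fam <- ugarchspec(
  variance.model = list(model = "fGARCH", garchOrder = c(1, 1),
                        submodel = "TGARCH"),
  mean.model = list(armaOrder = c(0, 0)),
  distribution.model = "std"
)
```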

2021-1-2 08:25:22
$$
{\kappa _j} = E{\left( {\left| {{z_{t - j}} - {\eta _{2j}}} \right| - {\eta _{1j}}\left( {{z_{t - j}} - {\eta _{2j}}} \right)} \right)^\delta } = \int\limits_{ - \infty }^\infty  {{{\left( {\left| {z - {\eta _{2j}}} \right| - {\eta _{1j}}\left( {z - {\eta _{2j}}} \right)} \right)}^\delta }f\left( {z,0,1,...} \right)dz}
$$

2021-1-2 08:25:50
$$
\begin{gathered}
  \sigma _t^2 = {q_t} + \sum\limits_{j = 1}^q {{\alpha _j}\left( {\varepsilon _{t - j}^2 - {q_{t - j}}} \right) + } \sum\limits_{j = 1}^p {{\beta _j}\left( {\sigma _{t - j}^2 - {q_{t - j}}} \right)}  \hfill \\
  {q_t} = \omega  + \rho {q_{t - 1}} + \phi \left( {\varepsilon _{t - 1}^2 - \sigma _{t - 1}^2} \right) \hfill \\
\end{gathered}
$$

2021-1-2 08:26:19
$$
\begin{gathered}
  {E_{t - 1}}\left( {\sigma _{t + n}^2} \right) = {E_{t - 1}}\left( {{q_{t + n}}} \right) + \sum\limits_{j = 1}^q {{\alpha _j}\left( {\varepsilon _{t + n - j}^2 - {q_{t + n - j}}} \right) + } \sum\limits_{j = 1}^p {{\beta _j}\left( {\sigma _{t + n - j}^2 - {q_{t + n - j}}} \right)}  \hfill \\
  {E_{t - 1}}\left( {\sigma _{t + n}^2} \right) = {E_{t - 1}}\left( {{q_{t + n}}} \right) + \sum\limits_{j = 1}^q {{\alpha _j}{E_{t - 1}}\left[ {\varepsilon _{t + n - j}^2 - {q_{t + n - j}}} \right] + } \sum\limits_{j = 1}^p {{\beta _j}{E_{t - 1}}\left[ {\sigma _{t + n - j}^2 - {q_{t + n - j}}} \right]}  \hfill \\
\end{gathered}
$$

2021-1-2 08:26:40
$$
\begin{gathered}
  {E_{t - 1}}\left( {\sigma _{t + n}^2} \right) = {E_{t - 1}}\left( {{q_{t + n}}} \right) + \sum\limits_{j = 1}^q {{\alpha _j}{E_{t - 1}}\left[ {\sigma _{t + n - j}^2 - {q_{t + n - j}}} \right] + } \sum\limits_{j = 1}^p {{\beta _j}{E_{t - 1}}\left[ {\sigma _{t + n - j}^2 - {q_{t + n - j}}} \right]}  \hfill \\
  {E_{t - 1}}\left( {\sigma _{t + n}^2} \right) = {E_{t - 1}}\left( {{q_{t + n}}} \right) + {\left[ {\sum\limits_{j = 1}^{\max (p,q)} {\left( {{\alpha _j} + {\beta _j}} \right)} } \right]^n}\left( {\sigma _t^2 - {q_t}} \right) \hfill \\
\end{gathered}
$$

2021-1-2 08:28:06
$$
\begin{gathered}
Y_t = \begin{bmatrix}
\log \sigma_t^2 \\
\vdots \\
\log \sigma_{t-p+1}^2 \\
\log r_t \\
\vdots \\
\log r_{t-q+1}
\end{bmatrix},\quad
A = \begin{pmatrix}
\left(\beta_1,\ldots,\beta_p\right) & \left(\alpha_1,\ldots,\alpha_q\right) \\
\left(I_{p-1 \times p-1},\, 0_{p-1 \times 1}\right) & 0_{p-1 \times q} \\
\delta\left(\beta_1,\ldots,\beta_p\right) & \delta\left(\alpha_1,\ldots,\alpha_q\right) \\
0_{q-1 \times p} & \left(I_{q-1 \times q-1},\, 0_{q-1 \times 1}\right)
\end{pmatrix},\quad
b = \begin{pmatrix}
\omega \\
0_{p-1 \times 1} \\
\xi + \delta\omega \\
0_{q-1 \times 1}
\end{pmatrix} \hfill \\
\varepsilon_t = \begin{pmatrix}
0_{p \times 1} \\
\tau\left(z_t\right) + u_t \\
0_{q \times 1}
\end{pmatrix} \hfill \\
\end{gathered}
$$

2021-1-2 08:28:48
$$
\begin{array}{l}
\sigma _t^2 = \omega {\left[ {1 - \beta \left( L \right)} \right]^{ - 1}} + \left\{ {1 - {{\left[ {1 - \beta \left( L \right)} \right]}^{ - 1}}\phi \left( L \right){{\left( {1 - L} \right)}^d}} \right\}\varepsilon _t^2\\
= {\omega ^*} + \lambda \left( L \right)\varepsilon _t^2\\
= {\omega ^*} + \sum\limits_{i = 1}^\infty  {{\lambda _i}{L^i}\varepsilon _t^2}
\end{array}
$$

where ${\lambda _1} = {\phi _1} - {\beta _1} + d$ and ${\lambda _k} = {\beta _1}{\lambda _{k - 1}} + \left( {\frac{{k - 1 - d}}{k} - {\phi _1}} \right){\pi _{d,k - 1}}$. For the FIGARCH(1,d,1) model, sufficient conditions to ensure positivity of the conditional variance are $\omega  > 0$, ${\beta _1} - d \le {\phi _1} \le \frac{{2 - d}}{2}$, and $d\left( {{\phi _1} - \frac{{1 - d}}{2}} \right) \le {\beta _1}\left( {{\phi _1} - {\beta _1} + d} \right)$.
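A trivial base-R sketch checking these positivity conditions exactly as stated (parameter names are illustrative):

```r
# Check the stated FIGARCH(1,d,1) positivity conditions.
figarch_positive <- function(omega, phi1, beta1, d) {
  omega > 0 &&
    (beta1 - d) <= phi1 && phi1 <= (2 - d) / 2 &&
    d * (phi1 - (1 - d) / 2) <= beta1 * (phi1 - beta1 + d)
}
```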

2021-1-2 08:31:29
$$
\begin{array}{l}
\phi \left( L \right){\left( {1 - L} \right)^d}\varepsilon _t^2 = \omega  + \left( {1 - \beta \left( L \right)} \right)\left( {\varepsilon _t^2 - \sigma _t^2} \right)\\
\phi \left( L \right){\left( {1 - L} \right)^d}\varepsilon _t^2 = \omega  - \sigma _t^2 + \varepsilon _t^2 + \beta \left( L \right)\sigma _t^2 - \beta \left( L \right)\varepsilon _t^2\\
\sigma _t^2 = \omega  + \varepsilon _t^2 + \beta \left( L \right)\sigma _t^2 - \beta \left( L \right)\varepsilon _t^2 - \phi \left( L \right){\left( {1 - L} \right)^d}\varepsilon _t^2\\
\sigma _t^2 = \omega  + \left\{ {1 - \beta \left( L \right) - \phi \left( L \right){{\left( {1 - L} \right)}^d}} \right\}\varepsilon _t^2 + \beta \left( L \right)\sigma _t^2\\
\sigma _t^2 = \omega  + \left\{ {1 - \beta \left( L \right) - \left( {1 - \alpha \left( L \right)} \right){{\left( {1 - L} \right)}^d}} \right\}\varepsilon _t^2 + \beta \left( L \right)\sigma _t^2
\end{array}
$$

Truncating the expansion to 1000 lags and setting ${\left( {1 - L} \right)^d}\varepsilon _t^2 = \varepsilon _t^2 + \left( {\sum\limits_{k = 1}^{1000} {{\pi _k}{L^k}} } \right)\varepsilon _t^2 = \varepsilon _t^2 + \bar \varepsilon _t^2$, we can rewrite the equation as:

2021-1-2 08:31:46
$$
\begin{array}{l}
\sigma _t^2 = \omega  + \left\{ {\varepsilon _t^2 - \beta \left( L \right)\varepsilon _t^2 - {{\left( {1 - L} \right)}^d}\varepsilon _t^2 + \alpha \left( L \right){{\left( {1 - L} \right)}^d}\varepsilon _t^2} \right\} + \beta \left( L \right)\sigma _t^2\\
\sigma _t^2 = \omega  + \left\{ {\varepsilon _t^2 - \beta \left( L \right)\varepsilon _t^2 - \left( {\varepsilon _t^2 + \bar \varepsilon _t^2} \right) + \alpha \left( L \right)\left( {\varepsilon _t^2 + \bar \varepsilon _t^2} \right)} \right\} + \beta \left( L \right)\sigma _t^2\\
\sigma _t^2 = \omega  + \varepsilon _t^2 - \beta \left( L \right)\varepsilon _t^2 - \left( {\varepsilon _t^2 + \bar \varepsilon _t^2} \right) + \alpha \left( L \right)\left( {\varepsilon _t^2 + \bar \varepsilon _t^2} \right) + \beta \left( L \right)\sigma _t^2\\
\sigma _t^2 = \omega  - \bar \varepsilon _t^2 - \beta \left( L \right)\varepsilon _t^2 + \alpha \left( L \right)\left( {\varepsilon _t^2 + \bar \varepsilon _t^2} \right) + \beta \left( L \right)\sigma _t^2\\
\sigma _t^2 = \left( {\omega  - \bar \varepsilon _t^2} \right) - \sum\limits_{j = 1}^p {{\beta _j}\varepsilon _{t - j}^2 + \sum\limits_{j = 1}^q {{\alpha _j}\varepsilon _{t - j}^2}  + } \sum\limits_{j = 1}^q {{\alpha _j}\bar \varepsilon _{t - j}^2}  + \sum\limits_{j = 1}^p {{\beta _j}\sigma _{t - j}^2} \\
\sigma _t^2 = \left( {\omega  - \bar \varepsilon _t^2} \right) + \sum\limits_{j = 1}^q {{\alpha _j}\left( {\varepsilon _{t - j}^2 + \bar \varepsilon _{t - j}^2} \right)}  + \sum\limits_{j = 1}^p {{\beta _j}\left( {\sigma _{t - j}^2 - \varepsilon _{t - j}^2} \right)}
\end{array}
$$
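The truncated weights $\pi_k$ entering $\bar\varepsilon_t^2$ above follow the standard binomial-expansion recursion $\pi_1 = -d$, $\pi_k = \pi_{k-1}\,(k-1-d)/k$; a minimal base-R sketch:

```r
# Truncated fractional-differencing weights: (1 - L)^d = 1 + sum_k pi_k L^k,
# with pi_1 = -d and pi_k = pi_{k-1} * (k - 1 - d) / k.
frac_diff_weights <- function(d, K = 1000) {
  pi_k <- numeric(K)
  pi_k[1] <- -d
  for (k in 2:K) pi_k[k] <- pi_k[k - 1] * (k - 1 - d) / k
  pi_k
}
```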

2021-1-17 15:42:14
$$
d_1 = \frac{1}{\sigma\sqrt{T}} \left[\log\left(\frac{S}{K}\right) +
\left(r+\frac{\sigma^2}{2}\right) T\right]
$$
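A one-line base-R sketch of this formula (argument values are illustrative):

```r
# Black-Scholes d1 as given above (rate and volatility annualized):
bs_d1 <- function(S, K, r, sigma, T) {
  (log(S / K) + (r + sigma^2 / 2) * T) / (sigma * sqrt(T))
}
bs_d1(S = 100, K = 95, r = 0.01, sigma = 0.2, T = 0.5)  # illustrative values
```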