Forum › Data Science & Artificial Intelligence › Data Analysis & Data Science › MATLAB and Mathematical Software Board
2020-12-13 19:49:04
$$
\begin{array}{l}
\frac{\sigma_1 \,\sigma_2 }{2}+\frac{\sigma_1 \,{\mathrm{e}}^{\frac{t\,\sqrt{\gamma^2 -4\,{\omega_0 }^2 }}{2}} }{2}-\frac{\gamma \,\sigma_1 \,\sigma_2 }{2\,\sqrt{\gamma^2 -4\,{\omega_0 }^2 }}+\frac{\gamma \,\sigma_1 \,{\mathrm{e}}^{\frac{t\,\sqrt{\gamma^2 -4\,{\omega_0 }^2 }}{2}} }{2\,\sqrt{\gamma^2 -4\,{\omega_0 }^2 }}\\
\mathrm{}\\
\textrm{where}\\
\mathrm{}\\
\;\;\sigma_1 ={\mathrm{e}}^{-\frac{\gamma \,t}{2}} \\
\mathrm{}\\
\;\;\sigma_2 ={\mathrm{e}}^{-\frac{t\,\sqrt{\gamma^2 -4\,{\omega_0 }^2 }}{2}}
\end{array}
$$

2020-12-13 20:02:24
$$
\mathrm{cosh}\left(\omega_0 \,t\,\sqrt{\zeta^2 -1}\right)\,{\mathrm{e}}^{-\omega_0 \,t\,\zeta }
$$

2020-12-13 20:03:32
$$
\left(\begin{array}{c}
m\,\frac{\partial }{\partial t}\;\mathrm{Dxt}\left(t\right)-\frac{F\left(t\right)\,x\left(t\right)}{r}\\
g\,m+m\,\frac{\partial }{\partial t}\;\mathrm{Dyt}\left(t\right)-\frac{F\left(t\right)\,y\left(t\right)}{r}\\
-r^2 +{x\left(t\right)}^2 +{y\left(t\right)}^2 \\
\mathrm{Dxt}\left(t\right)-\frac{\partial }{\partial t}\;x\left(t\right)\\
\mathrm{Dyt}\left(t\right)-\frac{\partial }{\partial t}\;y\left(t\right)
\end{array}\right)
$$

2020-12-13 20:04:04
$$
\left(\begin{array}{c}
m\,\mathrm{Dxtt}\left(t\right)-\frac{F\left(t\right)\,x\left(t\right)}{r}\\
g\,m+m\,\mathrm{Dytt}\left(t\right)-\frac{F\left(t\right)\,y\left(t\right)}{r}\\
-r^2 +{x\left(t\right)}^2 +{y\left(t\right)}^2 \\
\mathrm{Dxt}\left(t\right)-{\textrm{Dxt}}_1 \left(t\right)\\
\mathrm{Dyt}\left(t\right)-{\textrm{Dyt}}_1 \left(t\right)\\
2\,{\textrm{Dxt}}_1 \left(t\right)\,x\left(t\right)+2\,{\textrm{Dyt}}_1 \left(t\right)\,y\left(t\right)\\
2\,y\left(t\right)\,\frac{\partial }{\partial t}\;{\textrm{Dyt}}_1 \left(t\right)+2\,{{\textrm{Dxt}}_1 \left(t\right)}^2 +2\,{{\textrm{Dyt}}_1 \left(t\right)}^2 +2\,\mathrm{Dxt1t}\left(t\right)\,x\left(t\right)\\
\mathrm{Dxtt}\left(t\right)-\mathrm{Dxt1t}\left(t\right)\\
\mathrm{Dytt}\left(t\right)-\frac{\partial }{\partial t}\;{\textrm{Dyt}}_1 \left(t\right)\\
{\textrm{Dyt}}_1 \left(t\right)-\frac{\partial }{\partial t}\;y\left(t\right)
\end{array}\right)
$$

2020-12-13 20:04:40
$$
\left(\begin{array}{cc}
\mathrm{Dytt}\left(t\right) & \frac{\partial }{\partial t}\;\mathrm{Dyt}\left(t\right)\\
\mathrm{Dxtt}\left(t\right) & \frac{\partial }{\partial t}\;\mathrm{Dxt}\left(t\right)\\
{\textrm{Dxt}}_1 \left(t\right) & \frac{\partial }{\partial t}\;x\left(t\right)\\
{\textrm{Dyt}}_1 \left(t\right) & \frac{\partial }{\partial t}\;y\left(t\right)\\
\mathrm{Dxt1t}\left(t\right) & \frac{\partial^2 }{\partial t^2 }\;x\left(t\right)
\end{array}\right)
$$

2020-12-13 20:05:13
$$
\frac{d^2y}{dt^2} = (1-y^2)\frac{dy}{dt} - y
$$
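This is the Van der Pol oscillator with unit damping parameter. A minimal numerical sketch with SciPy, rewriting it as a first-order system (solver choice, time span, and initial state are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' = (1 - y^2) y' - y, rewritten as a first-order system in (y, y').
def vdp(t, state):
    y, dy = state
    return [dy, (1.0 - y**2) * dy - y]

# Integrate from an arbitrary initial condition; the solution settles
# onto the well-known limit cycle of amplitude roughly 2.
sol = solve_ivp(vdp, (0.0, 20.0), [2.0, 0.0], dense_output=True)
```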

2020-12-13 20:06:20
1. Define Parameters of the Model Using Stochastic Differential Equations
$$
dX = \mu(t, X) dt + \sigma(t, X) dB(t)
$$
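A hedged sketch of how such an SDE is typically simulated, using the Euler–Maruyama scheme; the linear drift and diffusion and the constants `mu0`, `sigma0` are illustrative assumptions, not part of the original:

```python
import numpy as np

# Euler–Maruyama discretization of dX = mu(t,X) dt + sigma(t,X) dB(t).
# Here mu(t,X) = mu0*X and sigma(t,X) = sigma0*X (geometric Brownian motion)
# are assumed purely for illustration.
rng = np.random.default_rng(0)
mu0, sigma0, dt, n = 0.05, 0.2, 1e-3, 5000
X = np.empty(n + 1)
X[0] = 1.0
for k in range(n):
    dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment, variance dt
    X[k + 1] = X[k] + mu0 * X[k] * dt + sigma0 * X[k] * dB
```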

2020-12-13 20:06:59
Manually integrate the right side by parts. The result is
$$\frac{\partial}{\partial X} \left(k(2)\frac{\partial v}{\partial X}\right) + k(3) -
\frac{\partial v}{\partial X} \frac{\partial k(2)}{\partial X}
$$

2020-12-13 20:07:29
2. Apply Ito's Rule
Asset prices follow a multiplicative process. That is, the logarithm of the price can be described in terms of an SDE, but the expected value of the price itself is of interest because it describes the profit, and thus we need an SDE for the latter.
In general, if a stochastic process X is given in terms of an SDE, then Ito's rule says that the transformed process G(t, X) satisfies

$$
dG = \left(\mu \frac{dG}{dX} + \frac{\sigma^2}{2} \frac{d^2G}{dX^2} +
\frac{dG}{dt}\right) dt + \frac{dG}{dX} \sigma dB(t)
$$

2020-12-13 20:09:07
3. Solve Feynman-Kac Equation
Before you can convert symbolic expressions to MATLAB function handles, you must replace function calls, such as diff(v(t, X), X) and v(t, X), with variables. You can use any valid MATLAB variable names.

$$
\frac{{\sigma \left(t,X\right)}^2 \,\frac{\partial^2 }{\partial X^2 }\;y\left(X\right)}{2}+\mu \left(t,X\right)\,\frac{\partial }{\partial X}\;y\left(X\right)=-1
$$
$$
{\left(\frac{X\,{\sigma_0 }^2 }{2}+X\,\mu_0 \right)}\,\frac{\partial }{\partial X}\;y\left(X\right)+\frac{X^2 \,{\sigma_0 }^2 \,\frac{\partial^2 }{\partial X^2 }\;y\left(X\right)}{2}=-1
$$
$$
\begin{array}{l}
\frac{2\,\mu_0 \,\sigma_5 \,\mathrm{log}\left(b\right)-2\,\mu_0 \,\sigma_4 \,\mathrm{log}\left(a\right)+a^{\sigma_1 } \,{\sigma_0 }^2 \,\sigma_5 \,\sigma_4 -b^{\sigma_1 } \,{\sigma_0 }^2 \,\sigma_5 \,\sigma_4 }{\sigma_3 }-\frac{\mathrm{log}\left(X\right)}{\mu_0 }+\frac{\sigma_2 \,{\left(\sigma_7 -\sigma_6 -a^{\sigma_1 } \,{\sigma_0 }^2 \,\sigma_5 +b^{\sigma_1 } \,{\sigma_0 }^2 \,\sigma_4 \right)}}{\sigma_3 }+\frac{X^{\sigma_1 } \,{\sigma_0 }^2 \,\sigma_2 }{2\,{\mu_0 }^2 }\\
\mathrm{}\\
\textrm{where}\\
\mathrm{}\\
\;\;\sigma_1 =\frac{2\,\mu_0 }{{\sigma_0 }^2 }\\
\mathrm{}\\
\;\;\sigma_2 ={\mathrm{e}}^{-\frac{2\,\mu_0 \,\mathrm{log}\left(X\right)}{{\sigma_0 }^2 }} \\
\mathrm{}\\
\;\;\sigma_3 =2\,{\mu_0 }^2 \,{\left(\sigma_5 -\sigma_4 \right)}\\
\mathrm{}\\
\;\;\sigma_4 ={\mathrm{e}}^{-\frac{\sigma_6 }{{\sigma_0 }^2 }} \\
\mathrm{}\\
\;\;\sigma_5 ={\mathrm{e}}^{-\frac{\sigma_7 }{{\sigma_0 }^2 }} \\
\mathrm{}\\
\;\;\sigma_6 =2\,\mu_0 \,\mathrm{log}\left(b\right)\\
\mathrm{}\\
\;\;\sigma_7 =2\,\mu_0 \,\mathrm{log}\left(a\right)
\end{array}
$$

2020-12-13 22:35:11
$$
\mathrm{sz} = \frac{\log\left(\mathrm{omegabars}_{t}\right)+\frac{\mathrm{ssigmat}_{t-1}^{2}}{2}}{\mathrm{ssigmat}_{t-1}}
$$

2020-12-13 22:35:32
$$
\mathrm{szplus} = \frac{\log\left(\mathrm{omegabars}_{t+1}\right)+\frac{\mathrm{ssigmat}_{t}^{2}}{2}}{\mathrm{ssigmat}_{t}}
$$

2020-12-14 08:25:26
$$
\min_{\beta_0,\beta} \frac{1}{N} \sum_{i=1}^{N} w_i l(y_i,\beta_0+\beta^T x_i) + \lambda\left[(1-\alpha)||\beta||_2^2/2 + \alpha ||\beta||_1\right],
$$

2020-12-14 08:26:14
$$
\tilde{\beta}_j \leftarrow \frac{S(\frac{1}{N}\sum_{i=1}^N x_{ij}(y_i-\tilde{y}_i^{(j)}),\lambda \alpha)}{1+\lambda(1-\alpha)},
$$
where $\tilde{y}_i^{(j)} = \tilde{\beta}_0 + \sum_{\ell \neq j} x_{i\ell} \tilde{\beta}_\ell$, and $S(z, \gamma)$ is the soft-thresholding operator with value $\text{sign}(z)(|z|-\gamma)_+$.
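The soft-thresholding operator $S(z, \gamma)$ in the update above can be sketched directly; the function name is ours:

```python
import numpy as np

# S(z, gamma) = sign(z) * (|z| - gamma)_+ ,
# the building block of the coordinate-descent update shown above.
def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)
```

For example, `soft_threshold(3.0, 1.0)` shrinks 3.0 toward zero by 1.0, while any argument with magnitude below the threshold is set exactly to zero, which is what produces sparse solutions.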

2020-12-14 08:26:53
Here we solve the following problem:
$$
\min_{(\beta_0, \beta) \in \mathbb{R}^{(p+1)\times K}}\frac{1}{2N} \sum_{i=1}^N ||y_i -\beta_0-\beta^T x_i||^2_F+\lambda \left[ (1-\alpha)||\beta||_F^2/2 + \alpha\sum_{j=1}^p||\beta_j||_2\right].
$$
Here $\beta_j$ is the jth row of the $p\times K$ coefficient matrix $\beta$, and we replace the absolute penalty on each single coefficient by a group-lasso penalty on each coefficient K-vector $\beta_j$ for a single predictor $x_j$.

We use a set of data generated beforehand for illustration.

2020-12-14 08:27:37
### Binomial Models

For the binomial model, suppose the response variable takes values in $\mathcal{G}=\{1,2\}$. Denote $y_i = I(g_i=1)$. We model
$$\mbox{Pr}(G=2|X=x)=\frac{e^{\beta_0+\beta^Tx}}{1+e^{\beta_0+\beta^Tx}},$$
which can be written in the following form
$$\log\frac{\mbox{Pr}(G=2|X=x)}{\mbox{Pr}(G=1|X=x)}=\beta_0+\beta^Tx,$$
the so-called "logistic" or log-odds transformation.
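Inverting the log-odds transformation recovers the probability from the linear predictor $\eta = \beta_0 + \beta^T x$; a minimal sketch (the function name is ours):

```python
import numpy as np

# Inverse of the log-odds transformation: Pr(G=2|X=x) = e^eta / (1 + e^eta).
def sigmoid(eta):
    return 1.0 / (1.0 + np.exp(-eta))
```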

The objective function for the penalized logistic regression uses the negative binomial log-likelihood, and is
$$
\min_{(\beta_0, \beta) \in \mathbb{R}^{p+1}} -\left[\frac{1}{N} \sum_{i=1}^N y_i \cdot (\beta_0 + x_i^T \beta) - \log (1+e^{(\beta_0+x_i^T \beta)})\right] + \lambda \big[ (1-\alpha)||\beta||_2^2/2 + \alpha||\beta||_1\big].
$$
Logistic regression is often plagued with degeneracies when $p > N$ and exhibits wild behavior even when $N$ is close to $p$;

2020-12-14 08:28:13
### Multinomial Models

For the multinomial model, suppose the response variable has $K$ levels ${\cal G}=\{1,2,\ldots,K\}$. Here we model
$$\mbox{Pr}(G=k|X=x)=\frac{e^{\beta_{0k}+\beta_k^Tx}}{\sum_{\ell=1}^Ke^{\beta_{0\ell}+\beta_\ell^Tx}}.$$
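The class probabilities above are a softmax of the $K$ linear predictors $\eta_k = \beta_{0k} + \beta_k^T x$; a small sketch (the max-subtraction is a standard numerical-stability trick, not part of the formula):

```python
import numpy as np

# Pr(G=k|X=x) = exp(eta_k) / sum_l exp(eta_l); subtracting the max leaves
# the ratio unchanged but avoids overflow in exp.
def softmax(eta):
    e = np.exp(eta - np.max(eta))
    return e / e.sum()
```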

Let ${Y}$ be the $N \times K$ indicator response matrix, with elements $y_{i\ell} = I(g_i=\ell)$. Then the elastic-net penalized negative log-likelihood function becomes
$$
\ell(\{\beta_{0k},\beta_{k}\}_1^K) = -\left[\frac{1}{N} \sum_{i=1}^N \Big(\sum_{k=1}^Ky_{ik} (\beta_{0k} + x_i^T \beta_k)- \log \big(\sum_{k=1}^K e^{\beta_{0k}+x_i^T \beta_k}\big)\Big)\right] +\lambda \left[ (1-\alpha)||\beta||_F^2/2 + \alpha\sum_{j=1}^p||\beta_j||_q\right].
$$
Here we really abuse notation! $\beta$ is a $p\times K$ matrix of coefficients. $\beta_k$ refers to the kth column (for outcome category k), and $\beta_j$ the jth row (vector of K coefficients for variable j).
The last penalty term is $||\beta_j||_q$; we have two options for $q$: $q\in \{1,2\}$.
When $q=1$, this is a lasso penalty on each of the parameters. When $q=2$, this is a grouped-lasso penalty on all the $K$ coefficients for a particular variable, which makes them all be zero or nonzero together.

2020-12-14 08:28:52
## Poisson Models

Poisson regression is used to model count data under the assumption of Poisson error, or otherwise non-negative data where the mean and variance are proportional. Like the Gaussian and binomial models, the Poisson is a member of the exponential family of distributions. We usually model its positive mean on the log scale: $\log \mu(x) = \beta_0+\beta' x$.
The log-likelihood for observations $\{x_i,y_i\}_1^N$ is given by
$$
l(\beta|X, Y) = \sum_{i=1}^N \left(y_i (\beta_0+\beta' x_i) - e^{\beta_0+\beta^Tx_i}\right).
$$
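A direct transcription of this log-likelihood (names and array shapes are our assumptions; the additive $\log y_i!$ constants are dropped, as in the formula):

```python
import numpy as np

# Poisson log-likelihood l(beta | X, Y) = sum_i [ y_i * eta_i - exp(eta_i) ],
# where eta_i = beta0 + beta^T x_i.
def poisson_loglik(beta0, beta, X, y):
    eta = beta0 + X @ beta
    return np.sum(y * eta - np.exp(eta))
```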
As before, we optimize the penalized log-likelihood:
$$
\min_{\beta_0,\beta} -\frac{1}{N} l(\beta|X, Y)  + \lambda \left((1-\alpha) \sum_{j=1}^p \beta_j^2/2 +\alpha \sum_{j=1}^p |\beta_j|\right).
$$

2020-12-14 08:29:22
## Cox Models

The Cox proportional hazards model is commonly used for the study of the relationship between predictor variables and survival time. In the usual survival analysis framework, we have data of the form $(y_1, x_1, \delta_1), \ldots, (y_n, x_n, \delta_n)$ where $y_i$, the observed time, is a time of failure if $\delta_i$ is 1 or right-censoring if $\delta_i$ is 0. We also let $t_1 < t_2 < \ldots < t_m$ be the increasing list of unique failure times, and $j(i)$ denote the index of the observation failing at time $t_i$.

The Cox model assumes a semi-parametric form for the hazard
$$
h_i(t) = h_0(t) e^{x_i^T \beta},
$$
where $h_i(t)$ is the hazard for patient $i$ at time $t$, $h_0(t)$ is a shared baseline hazard, and $\beta$ is a fixed, length $p$ vector. In the classic setting $n \geq p$, inference is made via the partial likelihood
$$
L(\beta) = \prod_{i=1}^m \frac{e^{x_{j(i)}^T \beta}}{\sum_{j \in R_i} e^{x_j^T \beta}},
$$
where $R_i$ is the set of indices $j$ with $y_j \geq t_i$ (those at risk at time $t_i$).
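The log of this partial likelihood can be sketched as follows, assuming no tied failure times (the function and variable names are ours):

```python
import numpy as np

# Log partial likelihood for the Cox model above, assuming distinct failure
# times. `times` holds the observed y_i, `status` the delta_i, and
# eta = X @ beta the linear predictors.
def cox_log_partial_lik(times, status, eta):
    ll = 0.0
    for i in np.where(status == 1)[0]:
        risk = times >= times[i]  # risk set R_i: still at risk at time y_i
        ll += eta[i] - np.log(np.sum(np.exp(eta[risk])))
    return ll
```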

2020-12-14 08:30:50
## Appendix 0: Convergence Criteria

Glmnet uses a convergence criterion that focuses not on coefficient
change but rather the impact of the change on the fitted values, and
hence the  loss part of the objective. The net result is a
weighted norm of the coefficient change vector.

For Gaussian models it uses the following. Suppose observation $i$
has weight $w_i$. Let $v_j$ be the (weighted)
sum-of-squares for variable $x_j$:
$$v_j=\sum_{i=1}^Nw_ix_{ij}^2.$$
If there is an intercept in the model, these $x_j$ will be centered by
the weighted mean, and hence this would be a weighted variance.
After $\hat\beta_j^o$ has been updated to $\hat\beta_j^n$,    we compute
$\Delta_j=v_j(\hat\beta_j^o-\hat\beta_j^n)^2$. After a complete cycle of coordinate descent, we look at
$\Delta_{max}=\max_j\Delta_j$.  Why this measure?
We can write
$$\Delta_j=\frac1N\sum_{i=1}^N w_i(x_{ij}\hat\beta_j^o-x_{ij}\hat\beta_j^n)^2,$$
which measures the weighted sum of squares of changes in fitted values
for this term. This measures the impact of the change in this
coefficient on the fit. If the largest such change is negligible, we stop.
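A sketch of this stopping measure, taking the observation weights $w_i$ as given (the function name is ours):

```python
import numpy as np

# Delta_j = v_j * (beta_old_j - beta_new_j)^2 with v_j = sum_i w_i * x_ij^2;
# stop the cycle when the largest Delta_j is negligible.
def max_delta(X, w, beta_old, beta_new):
    v = (w[:, None] * X**2).sum(axis=0)     # weighted sum of squares per column
    return np.max(v * (beta_old - beta_new)**2)
```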


For logistic regression, and other non-Gaussian models it is similar
for the inner loop. Only now the weights for each observation are more
complex. For example, for logistic regression the weights are those
that arise from the current Newton step, namely $w_i^*=w_i\hat p_i(1-\hat p_i)$. Here $\hat p_i$ are the fitted probabilities as we
entered the current inner loop.  The intuition is the same --- it
measures the impact of the coefficient change on the current weighted
least squares loss, or quadratic approximation to the log-likelihood
loss.

What about outer-loop convergence? We use the same measure, except now
$\hat\beta^o$ is the coefficient vector before we entered this inner
loop, and $\hat\beta^n$ the converged solution for this inner
loop. Hence if this Newton step had no impact, we declare outer-loop convergence.

2020-12-14 08:31:59
Nevertheless, for the purpose of comparison, we still illustrate with a typical linear-model example in the following. Given $X, Y$ and $\lambda_0 > 0$, we want to find $\beta$ such that
$$
\min_{\beta} ||Y - X\beta||_2^2 + \lambda_0 ||\beta||_1,
$$
where, say, $\lambda_0 = 8$.

2020-12-14 08:32:58
Suppose the `glmnet` fitted linear predictor at $\lambda$ is
$\hat\eta_\lambda(x)$ and the relaxed version is $\tilde
\eta_\lambda(x)$. We also allow for shrinkage between the two:
$$\tilde \eta_{\lambda,\gamma}(x)=(1-\gamma)\tilde
\eta_\lambda(x)+\gamma\hat\eta_\lambda(x).$$
$\gamma\in[0,1]$ is an additional tuning parameter which can be
selected by cross validation.

2020-12-15 17:41:19
$$\textbf{y}_i \sim \mathcal{N}(\mu_i, \sigma^2_1)$$
$$\mu_i = \alpha + \beta[\textbf{X}_{i,1}] + \gamma[\textbf{X}_{i,2}] + \delta \textbf{X}_{i,2}, \quad i=1,\dots,N$$
$$\epsilon_i = \textbf{y}_i - \mu_i$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\beta_j \sim \mathcal{N}(0, \sigma^2_2), \quad j=1,\dots,J$$
$$\beta_J = - \sum^{J-1}_{j=1} \beta_j$$
$$\gamma_k \sim \mathcal{N}(0, \sigma^2_3), \quad k=1,\dots,K$$
$$\gamma_K = - \sum^{K-1}_{k=1} \gamma_k$$
$$\delta \sim \mathcal{N}(0, 1000)$$
$$\sigma_m \sim \mathcal{HC}(25), \quad m=1,\dots,3$$

2020-12-15 17:41:46
$$\textbf{y} \sim \mathcal{N}(\mu, \sigma^2_1)$$
$$\mu_i = \alpha + \beta[\textbf{x}_i], \quad i=1,\dots,N$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\beta_j \sim \mathcal{N}(0, \sigma^2_2), \quad j=1,\dots,J$$
$$\beta_J = - \displaystyle\sum^{J-1}_{j=1} \beta_j$$
$$\sigma_{1:2} \sim \mathcal{HC}(25)$$

2020-12-15 17:42:35
$$\textbf{y}_i \sim \mathcal{N}(\mu_i, \sigma^2_1)$$
$$\mu_i = \alpha + \beta[\textbf{X}_{i,1}] + \gamma[\textbf{X}_{i,2}], \quad i=1,\dots,N$$
$$\epsilon_i = \textbf{y}_i - \mu_i$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\beta_j \sim \mathcal{N}(0, \sigma^2_2), \quad j=1,\dots,J$$
$$\beta_J = - \sum^{J-1}_{j=1} \beta_j$$
$$\gamma_k \sim \mathcal{N}(0, \sigma^2_3), \quad k=1,\dots,K$$
$$\gamma_K = - \sum^{K-1}_{k=1} \gamma_k$$
$$\sigma_m \sim \mathcal{HC}(25), \quad m=1,\dots,3$$

2020-12-15 17:43:17
$$\textbf{y} = \mu + \epsilon$$
$$\mu = \textbf{X} \beta$$
$$\beta_j \sim \mathcal{N}(0, 1000), \quad j=1,\dots,J$$

2020-12-15 17:44:09
\subsection{Form}
$$\textbf{y}_t \sim \mathcal{N}(\mu_t, \sigma^2_t), \quad t=1,\dots,T$$
$$\mu_t = \alpha + \sum^P_{p=1} \phi_p \textbf{y}_{t-p}, \quad t=1,\dots,T$$
$$\epsilon_t = \textbf{y}_t - \mu_t$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\phi_p \sim \mathcal{N}(0, 1000), \quad p=1,\dots,P$$
$$\sigma^2_t = \omega + \sum^Q_{q=1} \theta_q \epsilon^2_{t-q}, \quad t=2,\dots,T$$
$$\omega \sim \mathcal{HC}(25)$$
$$\theta_q \sim \mathcal{U}(0, 1), \quad q=1,\dots,Q$$

2020-12-15 17:44:38
$$\Pr(A | B) = \frac{\Pr(B | A)\Pr(A)}{\Pr(B)}$$

2020-12-15 17:45:09
For example, suppose one asks the question: what is the probability of going to Hell, conditional on consorting (or given that a person consorts) with Laplace's Demon?\footnote{This example is, of course, intended with humor.} By replacing $A$ with $Hell$ and $B$ with $Consort$, the question becomes

$$\Pr(\mathrm{Hell} | \mathrm{Consort}) = \frac{\Pr(\mathrm{Consort} | \mathrm{Hell}) \Pr(\mathrm{Hell})}{\Pr(\mathrm{Consort})}$$

Note that a common fallacy is to assume that $\Pr(A | B) = \Pr(B | A)$, which is called the conditional probability fallacy.

\subsection{Bayes' Theorem, Example 2} \label{bayestheorem2}

Another way to state Bayes' theorem is

$$\Pr(A_i | B) = \frac{\Pr(B | A_i)\Pr(A_i)}{\Pr(B | A_i)\Pr(A_i) +...+ \Pr(B | A_n)\Pr(A_n)}$$
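This is Bayes' theorem with the denominator expanded by the law of total probability over mutually exclusive, exhaustive events $A_1, \ldots, A_n$; a minimal numeric sketch (names are ours):

```python
# Posterior Pr(A_i | B) from priors Pr(A_1..A_n) and likelihoods Pr(B | A_1..A_n),
# with the evidence Pr(B) expanded as sum_k Pr(B | A_k) Pr(A_k).
def posterior(prior, likelihood, i):
    evidence = sum(p * l for p, l in zip(prior, likelihood))
    return likelihood[i] * prior[i] / evidence
```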

2020-12-15 17:45:46

\item $\Pr(A_1) = \Pr(\mathrm{Hell})$
\item $\Pr(A_2) = \Pr(\mathrm{Heaven})$
\item $\Pr(B) = \Pr(\mathrm{Consort})$
\item $\Pr(A_1 | B) = \Pr(\mathrm{Hell} | \mathrm{Consort})$
\item $\Pr(A_2 | B) = \Pr(\mathrm{Heaven} | \mathrm{Consort})$
\item $\Pr(B | A_1) = \Pr(\mathrm{Consort} | \mathrm{Hell})$
\item $\Pr(B | A_2) = \Pr(\mathrm{Consort} | \mathrm{Heaven})$


Laplace's Demon was conjured and asked for some data. He was glad to oblige.
