全部版块 我的主页
论坛 数据科学与人工智能 人工智能 机器学习
2646 1
2016-02-25
<br /><br /><strong style="color: rgb(51, 51, 51); font-family: 'Microsoft Yahei', Tahoma, Simsun; ; line-height: 27px;"><font size="4"><font color="#ff8c00">人大经济论坛经管爱问微信好号“jgasker”好文共享与推荐,实时答疑服务,欢迎关注!</font></font></strong><p></p><div><strong style="color: rgb(51, 51, 51); font-family: 'Microsoft Yahei', Tahoma, Simsun; ; line-height: 27px;"><font size="4"><font color="#ff8c00"><br></font></font></strong></div><div><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; letter-spacing: 0.5px;">我们都知道,在进行数据挖掘或者机器学习模型建立的时候,因为在统计学习中,假设数据满足独立同分布(i.i.d,independently and identically distributed),即当前已产生的数据可以对未来的数据进行推测与模拟,因此都是使用历史数据建立模型,即使用已经产生的数据去训练,然后使用该 模型去拟合未来的数据。但是一般独立同分布的假设往往不成立,即数据的分布可能会发生变化(distribution drift),并且可能当前的数据量过少,不足以对整个数据集进行分布估计,因此往往需要防止模型过拟合,提高模型泛化能力。而为了达到该目的的最常见方 法便是:正则化,即在对模型的目标函数(objective function)或代价函数(cost function)加上正则项。<br>在对模型进行训练时,有可能遇到训练数据不够,即训练数据无法对整个数据的分布进行估计的时候,或者在对模型进行过度训练(overtraining)时,常常会导致模型的过拟合(overfitting)。如下图所示:<br><img title="" src="http://img.blog.csdn.net/20151026205032330" alt="这里写图片描述" style="margin-right: auto; margin-left: auto; border: 0px; height: auto; width: auto; max-; display: block; background: url(<img src=" http:="" dataunion.org="" wp-content="" themes="" yzipi="" images="" imgbg.gif"="" border="0">) no-repeat;"&gt;<br>通过上图可以看出,随着模型训练的进行,模型的复杂度会增加,此时模型在训练数据集上的训练误差会逐渐减小,但是在模型的复杂度达到一定程度时,模型 在验证集上的误差反而随着模型的复杂度增加而增大。此时便发生了过拟合,即模型的复杂度升高,但是该模型在除训练集之外的数据集上却不work。<br>为了防止过拟合,我们需要用到一些方法,如:early stopping、数据集扩增(Data augmentation)、正则化(Regularization)、Dropout等。</p><h3 id="early-stopping" style="margin-top: 10px; margin-bottom: 10px; margin-left: -36px; padding-right: 40px; padding-left: 63px; height: 40px; line-height: 40px; border-left-; border-left-style: solid; border-left-color: rgb(51, 134, 164); ; color: rgb(51, 134, 164); display: inline-block; border-top-left-radius: 8px; border-top-right-radius: 8px; border-bottom-right-radius: 8px; border-bottom-left-radius: 8px; font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; letter-spacing: 0.5px; background: rgb(247, 247, 247);"><a name="t2" style="color: rgb(102, 102, 102); ; -webkit-transition: color 0.3s; transition: color 0.3s;"></a>Early stopping</h3><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; letter-spacing: 0.5px;">对模型进行训练的过程即是对模型的参数进行学习更新的过程,这个参数学习的过程往往会用到一些迭代方法,如梯度下降(Gradient descent)学习算法。Early stopping便是一种迭代次数截断的方法来防止过拟合的方法,即在模型对训练数据集迭代收敛之前停止迭代来防止过拟合。<br>Early stopping方法的具体做法是,在每一个Epoch结束时(一个Epoch集为对所有的训练数据的一轮遍历)计算validation data的accuracy,当accuracy不再提高时,就停止训练。这种做法很符合直观感受,因为accurary都不再提高了,在继续训练也是无 益的,只会提高训练的时间。那么该做法的一个重点便是怎样才认为validation accurary不再提高了呢?并不是说validation accuracy一降下来便认为不再提高了,因为可能经过这个Epoch后,accuracy降低了,但是随后的Epoch又让accuracy又上去 了,所以不能根据一两次的连续降低就判断不再提高。一般的做法是,在训练的过程中,记录到目前为止最好的validation accuracy,当连续10次Epoch(或者更多次)没达到最佳accuracy时,则可以认为accuracy不再提高了。此时便可以停止迭代了 (Early Stopping)。这种策略也称为“No-improvement-in-n”,n即Epoch的次数,可以根据实际情况取,如10、20、30……</p><h3 id="数据集扩增" style="margin-top: 10px; margin-bottom: 10px; margin-left: -36px; padding-right: 40px; padding-left: 63px; height: 40px; line-height: 40px; border-left-; border-left-style: solid; border-left-color: rgb(51, 134, 164); ; color: rgb(51, 134, 164); display: inline-block; border-top-left-radius: 8px; border-top-right-radius: 8px; border-bottom-right-radius: 8px; border-bottom-left-radius: 8px; font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; letter-spacing: 0.5px; background: rgb(247, 247, 247);"><a name="t3" style="color: rgb(102, 102, 102); ; -webkit-transition: color 0.3s; transition: color 0.3s;"></a>数据集扩增</h3><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; letter-spacing: 0.5px;">在数据挖掘领域流行着这样的一句话,“有时候往往拥有更多的数据胜过一个好的模型”。因为我们在使用训练数据训练模型,通过这个模型对将来的数 据进行拟合,而在这之间又一个假设便是,训练数据与将来的数据是独立同分布的。即使用当前的训练数据来对将来的数据进行估计与模拟,而更多的数据往往估计 与模拟地更准确。因此,更多的数据有时候更优秀。但是往往条件有限,如人力物力财力的不足,而不能收集到更多的数据,如在进行分类的任务中,需要对数据进 行打标,并且很多情况下都是人工得进行打标,因此一旦需要打标的数据量过多,就会导致效率低下以及可能出错的情况。所以,往往在这时候,需要采取一些计算 的方式与策略在已有的数据集上进行手脚,以得到更多的数据。<br>通俗得讲,数据机扩增即需要得到更多的符合要求的数据,即和已有的数据是独立同分布的,或者近似独立同分布的。一般有以下方法:</p><ul style="list-style: none; color: rgb(101, 101, 101); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; ; letter-spacing: 0.5px; line-height: normal;"><li style="margin-left: 0px; float: left; list-style: none;">从数据源头采集更多数据</li><li style="margin-left: 0px; float: left; list-style: none;">复制原有数据并加上随机噪声</li><li style="margin-left: 0px; float: left; list-style: none;">重采样</li><li style="margin-left: 0px; float: left; list-style: none;">根据当前数据集估计数据分布参数,使用该分布产生更多数据等</li></ul><h3 id="正则化方法" style="margin-top: 10px; margin-bottom: 10px; margin-left: -36px; padding-right: 40px; padding-left: 63px; height: 40px; line-height: 40px; border-left-; border-left-style: solid; border-left-color: rgb(51, 134, 164); ; color: rgb(51, 134, 164); display: inline-block; border-top-left-radius: 8px; border-top-right-radius: 8px; border-bottom-right-radius: 8px; border-bottom-left-radius: 8px; font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; letter-spacing: 0.5px; background: rgb(247, 247, 247);"><a name="t4" style="color: rgb(102, 102, 102); ; -webkit-transition: color 0.3s; transition: color 0.3s;"></a>正则化方法</h3><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; letter-spacing: 0.5px;">正则化方法是指在进行目标函数或代价函数优化时,在目标函数或代价函数后面加上一个正则项,一般有L1正则与L2正则等。</p><ul style="list-style: none; color: rgb(101, 101, 101); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; ; letter-spacing: 0.5px; line-height: normal;"><li style="margin-left: 0px; float: left; list-style: none;">L1正则<br>L1正则是基于L1范数,即在目标函数后面加上参数的L1范数和项,即参数绝对值和与参数的积项,即:<p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85);"></p><div class="MathJax_Display"><span id="MathJax-Element-1-Frame" class="MathJax"><span id="MathJax-Span-1" class="math"><span id="MathJax-Span-2" class="mrow"><span id="MathJax-Span-3" class="mi">C</span><span id="MathJax-Span-4" class="mo">=</span><span id="MathJax-Span-5" class="msubsup"><span id="MathJax-Span-6" class="mi">C</span><span id="MathJax-Span-7" class="mn">0</span></span><span id="MathJax-Span-8" class="mo">+</span><span id="MathJax-Span-9" class="mfrac"><span id="MathJax-Span-10" class="mi">λ</span><span id="MathJax-Span-11" class="mi">n</span></span><span id="MathJax-Span-12" class="munderover"><span id="MathJax-Span-13" class="mo">∑</span><span id="MathJax-Span-14" class="texatom"><span id="MathJax-Span-15" class="mrow"><span id="MathJax-Span-16" class="mi">w</span></span></span></span><span id="MathJax-Span-17" class="texatom"><span id="MathJax-Span-18" class="mrow"><span id="MathJax-Span-19" class="mo">|</span></span></span><span id="MathJax-Span-20" class="mi">w</span><span id="MathJax-Span-21" class="texatom"><span id="MathJax-Span-22" class="mrow"><span id="MathJax-Span-23" class="mo">|</span></span></span></span></span></span></div><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85);">其中<span id="MathJax-Element-2-Frame" class="MathJax"><span id="MathJax-Span-24" class="math"><span id="MathJax-Span-25" class="mrow"><span id="MathJax-Span-26" class="msubsup"><span id="MathJax-Span-27" class="mi">C</span><span id="MathJax-Span-28" class="mn">0</span></span></span></span></span>代表原始的代价函数,<span id="MathJax-Element-3-Frame" class="MathJax">n</span>是样本的个数,<span id="MathJax-Element-4-Frame" class="MathJax">λ</span>就是正则项系数,权衡正则项与<span id="MathJax-Element-5-Frame" class="MathJax"><span id="MathJax-Span-35" class="math"><span id="MathJax-Span-36" class="mrow"><span id="MathJax-Span-37" class="msubsup"><span id="MathJax-Span-38" class="mi">C</span><span id="MathJax-Span-39" class="mn">0</span></span></span></span></span>项的比重。后面那一项即为L1正则项。<br>在计算梯度时,<span id="MathJax-Element-6-Frame" class="MathJax">w</span>的梯度变为:</p><div class="MathJax_Display"><span id="MathJax-Element-7-Frame" class="MathJax"><span id="MathJax-Span-43" class="math"><span id="MathJax-Span-44" class="mrow"><span id="MathJax-Span-45" class="mfrac"><span id="MathJax-Span-46" class="mrow"><span id="MathJax-Span-47" class="mi">∂</span><span id="MathJax-Span-48" class="texatom"><span id="MathJax-Span-49" class="mrow"><span id="MathJax-Span-50" class="mi">C</span></span></span></span><span id="MathJax-Span-51" class="mrow"><span id="MathJax-Span-52" class="mi">∂</span><span id="MathJax-Span-53" class="mi">w</span></span></span><span id="MathJax-Span-54" class="mo">=</span><span id="MathJax-Span-55" class="mfrac"><span id="MathJax-Span-56" class="mrow"><span id="MathJax-Span-57" class="mi">∂</span><span id="MathJax-Span-58" class="texatom"><span id="MathJax-Span-59" class="mrow"><span id="MathJax-Span-60" class="msubsup"><span id="MathJax-Span-61" class="mi">C</span><span id="MathJax-Span-62" class="mn">0</span></span></span></span></span><span id="MathJax-Span-63" class="mrow"><span id="MathJax-Span-64" class="mi">∂</span><span id="MathJax-Span-65" class="mi">w</span></span></span><span id="MathJax-Span-66" class="mo">+</span><span id="MathJax-Span-67" class="mfrac"><span id="MathJax-Span-68" class="mi">λ</span><span id="MathJax-Span-69" class="mi">n</span></span><span id="MathJax-Span-70" class="mi">s</span><span id="MathJax-Span-71" class="mi">g</span><span id="MathJax-Span-72" class="mi">n</span><span id="MathJax-Span-73" class="mo">(</span><span id="MathJax-Span-74" class="mi">w</span><span id="MathJax-Span-75" class="mo">)</span></span></span></span></div><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85);">其中,<span id="MathJax-Element-8-Frame" class="MathJax"><span id="MathJax-Span-76" class="math"><span id="MathJax-Span-77" class="mrow"><span id="MathJax-Span-78" class="mi">s</span><span id="MathJax-Span-79" class="mi">g</span><span id="MathJax-Span-80" class="mi">n</span></span></span></span>是符号函数,那么便使用下式对参数进行更新:</p><div class="MathJax_Display"><span id="MathJax-Element-9-Frame" class="MathJax"><span id="MathJax-Span-81" class="math"><span id="MathJax-Span-82" class="mrow"><span id="MathJax-Span-83" class="mi">w</span><span id="MathJax-Span-84" class="mo">:=</span><span id="MathJax-Span-85" class="mi">w</span><span id="MathJax-Span-86" class="mo">+</span><span id="MathJax-Span-87" class="mi">α</span><span id="MathJax-Span-88" class="mfrac"><span id="MathJax-Span-89" class="mrow"><span id="MathJax-Span-90" class="mi">∂</span><span id="MathJax-Span-91" class="msubsup"><span id="MathJax-Span-92" class="mi">C</span><span id="MathJax-Span-93" class="mn">0</span></span></span><span id="MathJax-Span-94" class="mrow"><span id="MathJax-Span-95" class="mi">∂</span><span id="MathJax-Span-96" class="mi">w</span></span></span><span id="MathJax-Span-97" class="mo">+</span><span id="MathJax-Span-98" class="mi">β</span><span id="MathJax-Span-99" class="mfrac"><span id="MathJax-Span-100" class="mi">λ</span><span id="MathJax-Span-101" class="mi">n</span></span><span id="MathJax-Span-102" class="mi">s</span><span id="MathJax-Span-103" class="mi">g</span><span id="MathJax-Span-104" class="mi">n</span><span id="MathJax-Span-105" class="mo">(</span><span id="MathJax-Span-106" class="mi">w</span><span id="MathJax-Span-107" class="mo">)</span></span></span></span></div><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85);">对于有些模型,如线性回归中(L1正则线性回归即为Lasso回归),常数项<span id="MathJax-Element-10-Frame" class="MathJax">b</span>的更新方程不包括正则项,即:</p><div class="MathJax_Display"><span id="MathJax-Element-11-Frame" class="MathJax"><span id="MathJax-Span-111" class="math"><span id="MathJax-Span-112" class="mrow"><span id="MathJax-Span-113" class="mi">b</span><span id="MathJax-Span-114" class="mo">:=</span><span id="MathJax-Span-115" class="mi">b</span><span id="MathJax-Span-116" class="mo">+</span><span id="MathJax-Span-117" class="mi">α</span><span id="MathJax-Span-118" class="mfrac"><span id="MathJax-Span-119" class="mrow"><span id="MathJax-Span-120" class="mi">∂</span><span id="MathJax-Span-121" class="msubsup"><span id="MathJax-Span-122" class="mi">C</span><span id="MathJax-Span-123" class="mn">0</span></span></span><span id="MathJax-Span-124" class="mrow"><span id="MathJax-Span-125" class="mi">∂</span><span id="MathJax-Span-126" class="mi">b</span></span></span></span></span></span></div><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85);">其中,梯度下降算法中,<span id="MathJax-Element-12-Frame" class="MathJax"><span id="MathJax-Span-127" class="math"><span id="MathJax-Span-128" class="mrow"><span id="MathJax-Span-129" class="mi">α</span><span id="MathJax-Span-130" class="mo">&lt;</span><span id="MathJax-Span-131" class="mn">0</span><span id="MathJax-Span-132" class="mo">,</span><span id="MathJax-Span-133" class="mi">β</span><span id="MathJax-Span-134" class="mo">&lt;</span><span id="MathJax-Span-135" class="mn">0</span></span></span></span>,而在梯度上升算法中则相反。<br>从上式可以看出,当<span id="MathJax-Element-13-Frame" class="MathJax">w</span>为正时,更新后<span id="MathJax-Element-14-Frame" class="MathJax">w</span>会变小;当<span id="MathJax-Element-15-Frame" class="MathJax">w</span>为负时,更新后<span id="MathJax-Element-16-Frame" class="MathJax">w</span>会变大;因此L1正则项是为了使得那些原先处于零(即<span id="MathJax-Element-17-Frame" class="MathJax"><span id="MathJax-Span-148" class="math"><span id="MathJax-Span-149" class="mrow"><span id="MathJax-Span-150" class="texatom"><span id="MathJax-Span-151" class="mrow"><span id="MathJax-Span-152" class="mo">|</span></span></span><span id="MathJax-Span-153" class="mi">w</span><span id="MathJax-Span-154" class="texatom"><span id="MathJax-Span-155" class="mrow"><span id="MathJax-Span-156" class="mo">|</span></span></span><span id="MathJax-Span-157" class="mo">≈</span><span id="MathJax-Span-158" class="mn">0</span></span></span></span>)附近的参数<span id="MathJax-Element-18-Frame" class="MathJax">w</span>往零移动,使得部分参数为零,从而降低模型的复杂度(模型的复杂度由参数决定),从而防止过拟合,提高模型的泛化能力。<br>其中,L1正则中有个问题,便是L1范数在0处不可导,即<span id="MathJax-Element-19-Frame" class="MathJax"><span id="MathJax-Span-162" class="math"><span id="MathJax-Span-163" class="mrow"><span id="MathJax-Span-164" class="texatom"><span id="MathJax-Span-165" class="mrow"><span id="MathJax-Span-166" class="mo">|</span></span></span><span id="MathJax-Span-167" class="mi">w</span><span id="MathJax-Span-168" class="texatom"><span id="MathJax-Span-169" class="mrow"><span id="MathJax-Span-170" class="mo">|</span></span></span></span></span></span>在0处不可导,因此在<span id="MathJax-Element-20-Frame" class="MathJax">w</span>为0时,使用原来的未经正则化的更新方程来对<span id="MathJax-Element-21-Frame" class="MathJax">w</span>进行更新,即令<span id="MathJax-Element-22-Frame" class="MathJax"><span id="MathJax-Span-177" class="math"><span id="MathJax-Span-178" class="mrow"><span id="MathJax-Span-179" class="mi">s</span><span id="MathJax-Span-180" class="mi">g</span><span id="MathJax-Span-181" class="mi">n</span><span id="MathJax-Span-182" class="mo">(</span><span id="MathJax-Span-183" class="mn">0</span><span id="MathJax-Span-184" class="mo">)</span><span id="MathJax-Span-185" class="mo">=</span><span id="MathJax-Span-186" class="mn">0</span></span></span></span>,这样即:</p><div class="MathJax_Display"><span id="MathJax-Element-23-Frame" class="MathJax"><span id="MathJax-Span-187" class="math"><span id="MathJax-Span-188" class="mrow"><span id="MathJax-Span-189" class="mi">s</span><span id="MathJax-Span-190" class="mi">g</span><span id="MathJax-Span-191" class="mi">n</span><span id="MathJax-Span-192" class="mo">(</span><span id="MathJax-Span-193" class="mi">w</span><span id="MathJax-Span-194" class="mo">)</span><span id="MathJax-Span-195" class="msubsup"><span id="MathJax-Span-196" class="texatom"><span id="MathJax-Span-197" class="mrow"><span id="MathJax-Span-198" class="mo">|</span></span></span><span id="MathJax-Span-199" class="texatom"><span id="MathJax-Span-200" class="mrow"><span id="MathJax-Span-201" class="mi">w</span><span id="MathJax-Span-202" class="mo">&gt;</span><span id="MathJax-Span-203" class="mn">0</span></span></span></span><span id="MathJax-Span-204" class="mo">=</span><span id="MathJax-Span-205" class="mn">1</span><span id="MathJax-Span-206" class="mo">,</span><span id="MathJax-Span-207" class="mi">s</span><span id="MathJax-Span-208" class="mi">g</span><span id="MathJax-Span-209" class="mi">n</span><span id="MathJax-Span-210" class="mo">(</span><span id="MathJax-Span-211" class="mi">w</span><span id="MathJax-Span-212" class="mo">)</span><span id="MathJax-Span-213" class="msubsup"><span id="MathJax-Span-214" class="texatom"><span id="MathJax-Span-215" class="mrow"><span id="MathJax-Span-216" class="mo">|</span></span></span><span id="MathJax-Span-217" class="texatom"><span id="MathJax-Span-218" class="mrow"><span id="MathJax-Span-219" class="mi">w</span><span id="MathJax-Span-220" class="mo">&lt;</span><span id="MathJax-Span-221" class="mn">0</span></span></span></span><span id="MathJax-Span-222" class="mo">=</span><span id="MathJax-Span-223" class="mo">−</span><span id="MathJax-Span-224" class="mn">1</span><span id="MathJax-Span-225" class="mo">,</span><span id="MathJax-Span-226" class="mi">s</span><span id="MathJax-Span-227" class="mi">g</span><span id="MathJax-Span-228" class="mi">n</span><span id="MathJax-Span-229" class="mo">(</span><span id="MathJax-Span-230" class="mi">w</span><span id="MathJax-Span-231" class="mo">)</span><span id="MathJax-Span-232" class="msubsup"><span id="MathJax-Span-233" class="texatom"><span id="MathJax-Span-234" class="mrow"><span id="MathJax-Span-235" class="mo">|</span></span></span><span id="MathJax-Span-236" class="texatom"><span id="MathJax-Span-237" class="mrow"><span id="MathJax-Span-238" class="mi">w</span><span id="MathJax-Span-239" class="mo">=</span><span id="MathJax-Span-240" class="mn">0</span></span></span></span><span id="MathJax-Span-241" class="mo">=</span><span id="MathJax-Span-242" class="mn">0</span></span></span></span></div></li><li style="margin-left: 0px; float: left; list-style: none;">L2正则<br>L2正则是基于L2范数,即在目标函数后面加上参数的L2范数和项,即参数的平方和与参数的积项,即:<p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85);"></p><div class="MathJax_Display"><span id="MathJax-Element-24-Frame" class="MathJax"><span id="MathJax-Span-243" class="math"><span id="MathJax-Span-244" class="mrow"><span id="MathJax-Span-245" class="mi">C</span><span id="MathJax-Span-246" class="mo">=</span><span id="MathJax-Span-247" class="msubsup"><span id="MathJax-Span-248" class="mi">C</span><span id="MathJax-Span-249" class="mn">0</span></span><span id="MathJax-Span-250" class="mo">+</span><span id="MathJax-Span-251" class="mfrac"><span id="MathJax-Span-252" class="mi">λ</span><span id="MathJax-Span-253" class="mrow"><span id="MathJax-Span-254" class="mn">2</span><span id="MathJax-Span-255" class="mi">n</span></span></span><span id="MathJax-Span-256" class="munderover"><span id="MathJax-Span-257" class="mo">∑</span><span id="MathJax-Span-258" class="texatom"><span id="MathJax-Span-259" class="mrow"><span id="MathJax-Span-260" class="mi">w</span></span></span></span><span id="MathJax-Span-261" class="msubsup"><span id="MathJax-Span-262" class="mi">w</span><span id="MathJax-Span-263" class="texatom"><span id="MathJax-Span-264" class="mrow"><span id="MathJax-Span-265" class="mn">2</span></span></span></span></span></span></span></div><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85);">其中<span id="MathJax-Element-25-Frame" class="MathJax"><span id="MathJax-Span-266" class="math"><span id="MathJax-Span-267" class="mrow"><span id="MathJax-Span-268" class="msubsup"><span id="MathJax-Span-269" class="mi">C</span><span id="MathJax-Span-270" class="mn">0</span></span></span></span></span>代表原始的代价函数,<span id="MathJax-Element-26-Frame" class="MathJax">n</span>是样本的个数,与L1正则化项前面的参数不同的是,L2项的参数称了<span id="MathJax-Element-27-Frame" class="MathJax"><span id="MathJax-Span-274" class="math"><span id="MathJax-Span-275" class="mrow"><span id="MathJax-Span-276" class="mfrac"><span id="MathJax-Span-277" class="mn">1</span><span id="MathJax-Span-278" class="mn">2</span></span></span></span></span>,是为了便于计算以及公式的美感性,因为平方项求导有个2,<span id="MathJax-Element-28-Frame" class="MathJax">λ</span>就是正则项系数,权衡正则项与<span id="MathJax-Element-29-Frame" class="MathJax"><span id="MathJax-Span-282" class="math"><span id="MathJax-Span-283" class="mrow"><span id="MathJax-Span-284" class="msubsup"><span id="MathJax-Span-285" class="mi">C</span><span id="MathJax-Span-286" class="mn">0</span></span></span></span></span>项的比重。后面那一项即为L2正则项。<br>L2正则化中则使用下式对模型参数进行更新:</p><div class="MathJax_Display"><span id="MathJax-Element-30-Frame" class="MathJax"><span id="MathJax-Span-287" class="math"><span id="MathJax-Span-288" class="mrow"><span id="MathJax-Span-289" class="mi">w</span><span id="MathJax-Span-290" class="mo">:=</span><span id="MathJax-Span-291" class="mi">w</span><span id="MathJax-Span-292" class="mo">+</span><span id="MathJax-Span-293" class="mi">α</span><span id="MathJax-Span-294" class="mfrac"><span id="MathJax-Span-295" class="mrow"><span id="MathJax-Span-296" class="mi">∂</span><span id="MathJax-Span-297" class="msubsup"><span id="MathJax-Span-298" class="mi">C</span><span id="MathJax-Span-299" class="mn">0</span></span></span><span id="MathJax-Span-300" class="mrow"><span id="MathJax-Span-301" class="mi">∂</span><span id="MathJax-Span-302" class="mi">w</span></span></span><span id="MathJax-Span-303" class="mo">+</span><span id="MathJax-Span-304" class="mi">β</span><span id="MathJax-Span-305" class="mfrac"><span id="MathJax-Span-306" class="mi">λ</span><span id="MathJax-Span-307" class="mi">n</span></span><span id="MathJax-Span-308" class="mi">w</span></span></span></span></div><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85);">对于有些模型,如线性回归中(L2正则线性回归即为Ridge回归,岭回归),常数项<span id="MathJax-Element-31-Frame" class="MathJax">b</span>的更新方程不包括正则项,即:</p><div class="MathJax_Display"><span id="MathJax-Element-32-Frame" class="MathJax"><span id="MathJax-Span-312" class="math"><span id="MathJax-Span-313" class="mrow"><span id="MathJax-Span-314" class="mi">b</span><span id="MathJax-Span-315" class="mo">:=</span><span id="MathJax-Span-316" class="mi">b</span><span id="MathJax-Span-317" class="mo">+</span><span id="MathJax-Span-318" class="mi">α</span><span id="MathJax-Span-319" class="mfrac"><span id="MathJax-Span-320" class="mrow"><span id="MathJax-Span-321" class="mi">∂</span><span id="MathJax-Span-322" class="msubsup"><span id="MathJax-Span-323" class="mi">C</span><span id="MathJax-Span-324" class="mn">0</span></span></span><span id="MathJax-Span-325" class="mrow"><span id="MathJax-Span-326" class="mi">∂</span><span id="MathJax-Span-327" class="mi">b</span></span></span></span></span></span></div><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85);">其中,梯度下降算法中,<span id="MathJax-Element-33-Frame" class="MathJax"><span id="MathJax-Span-328" class="math"><span id="MathJax-Span-329" class="mrow"><span id="MathJax-Span-330" class="mi">α</span><span id="MathJax-Span-331" class="mo">&lt;</span><span id="MathJax-Span-332" class="mn">0</span><span id="MathJax-Span-333" class="mo">,</span><span id="MathJax-Span-334" class="mi">β</span><span id="MathJax-Span-335" class="mo">&lt;</span><span id="MathJax-Span-336" class="mn">0</span></span></span></span>,而在梯度上升算法中则相反。<br>从上式可以看出,L2正则项起到使得参数<span id="MathJax-Element-34-Frame" class="MathJax">w</span>变小加剧的效果,但是为什么可以防止过拟合呢?一个通俗的理解便是:更小的参数值<span id="MathJax-Element-35-Frame" class="MathJax">w</span>意味着模型的复杂度更低,对训练数据的拟合刚刚好(奥卡姆剃刀),不会过分拟合训练数据,从而使得不会过拟合,以提高模型的泛化能力。<br>在这里需要提到的是,在对模型参数进行更新学习的时候,有两种更新方式,mini-batch (部分增量更新)与 full-batch(全增量更新),即在每一次更新学习的过程中(一次迭代,即一次epoch),在mini-batch中进行分批处理,先使用一部分 样本进行更新,然后再使用一部分样本进行更新。直到所有样本都使用了,这次epoch的损失函数值则为所有mini batch的平均损失值。设每次mini batch中样本个数为<span id="MathJax-Element-36-Frame" class="MathJax">m</span>,那么参数的更新方程中的正则项要改成:</p><div class="MathJax_Display"><span id="MathJax-Element-37-Frame" class="MathJax"><span id="MathJax-Span-346" class="math"><span id="MathJax-Span-347" class="mrow"><span id="MathJax-Span-348" class="mfrac"><span id="MathJax-Span-349" class="mi">λ</span><span id="MathJax-Span-350" class="mi">m</span></span><span id="MathJax-Span-351" class="munderover"><span id="MathJax-Span-352" class="mo">∑</span><span id="MathJax-Span-353" class="texatom"><span id="MathJax-Span-354" class="mrow"><span id="MathJax-Span-355" class="mi">w</span></span></span></span><span id="MathJax-Span-356" class="texatom"><span id="MathJax-Span-357" class="mrow"><span id="MathJax-Span-358" class="mo">|</span></span></span><span id="MathJax-Span-359" class="mi">w</span><span id="MathJax-Span-360" class="texatom"><span id="MathJax-Span-361" class="mrow"><span id="MathJax-Span-362" class="mo">|</span></span></span></span></span></span></div><div class="MathJax_Display"><span id="MathJax-Element-38-Frame" class="MathJax"><span id="MathJax-Span-363" class="math"><span id="MathJax-Span-364" class="mrow"><span id="MathJax-Span-365" class="mfrac"><span id="MathJax-Span-366" class="mi">λ</span><span id="MathJax-Span-367" class="mrow"><span id="MathJax-Span-368" class="mn">2</span><span id="MathJax-Span-369" class="mi">m</span></span></span><span id="MathJax-Span-370" class="munderover"><span id="MathJax-Span-371" class="mo">∑</span><span id="MathJax-Span-372" class="texatom"><span id="MathJax-Span-373" class="mrow"><span id="MathJax-Span-374" class="mi">w</span></span></span></span><span id="MathJax-Span-375" class="msubsup"><span id="MathJax-Span-376" class="mi">w</span><span id="MathJax-Span-377" class="texatom"><span id="MathJax-Span-378" class="mrow"><span id="MathJax-Span-379" class="mn">2</span></span></span></span></span></span></span></div><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85);">而full-batch即每一次epoch中,使用全部的训练样本进行更新,那么每次的损失函数值即为全部样本的误差之和。更新方程不变。</p></li><li style="margin-left: 0px; float: left; list-style: none;">总结<br>正则项是为了降低模型的复杂度,从而避免模型区过分拟合训练数据,包括噪声与异常点(outliers)。从另一个角度上来讲,正则化即是假设模型参 数服从先验概率,即为模型参数添加先验,只是不同的正则化方式的先验分布是不一样的。这样就规定了参数的分布,使得模型的复杂度降低(试想一下,限定条件 多了,是不是模型的复杂度降低了呢),这样模型对于噪声与异常点的抗干扰性的能力增强,从而提高模型的泛化能力。还有个解释便是,从贝叶斯学派来看:加了 先验,在数据少的时候,先验知识可以防止过拟合;从频率学派来看:正则项限定了参数的取值,从而提高了模型的稳定性,而稳定性强的模型不会过拟合,即控制 模型空间。<br>另外一个角度,过拟合从直观上理解便是,在对训练数据进行拟合时,需要照顾到每个点,从而使得拟合函数波动性非常大,即方差大。在某些小区间里,函数 值的变化性很剧烈,意味着函数在某些小区间里的导数值的绝对值非常大,由于自变量的值在给定的训练数据集中的一定的,因此只有系数足够大,才能保证导数的 绝对值足够大。如下图(引用知乎):<br><img title="" src="http://img.blog.csdn.net/20151026205322532" alt="这里写图片描述" style="border: 0px; height: auto; width: auto;"><br>另外一个解释,规则化项的引入,在训练(最小化cost)的过程中,当某一维的特征所对应的权重过大时,而此时模型的预测和真实数据之间距离很小,通 过规则化项就可以使整体的cost取较大的值,从而,在训练的过程中避免了去选择那些某一维(或几维)特征的权重过大的情况,即过分依赖某一维(或几维) 的特征(引用知乎)。<br>L2与L1的区别在于,L1正则是拉普拉斯先验,而L2正则则是高斯先验。它们都是服从均值为0,协方差为<span id="MathJax-Element-39-Frame" class="MathJax"><span id="MathJax-Span-380" class="math"><span id="MathJax-Span-381" class="mrow"><span id="MathJax-Span-382" class="mfrac"><span id="MathJax-Span-383" class="mn">1</span><span id="MathJax-Span-384" class="mi">λ</span></span></span></span></span>。当<span id="MathJax-Element-40-Frame" class="MathJax"><span id="MathJax-Span-385" class="math"><span id="MathJax-Span-386" class="mrow"><span id="MathJax-Span-387" class="mi">λ</span><span id="MathJax-Span-388" class="mo">=</span><span id="MathJax-Span-389" class="mn">0</span></span></span></span>时,即没有先验)没有正则项,则相当于先验分布具有无穷大的协方差,那么这个先验约束则会非常弱,模型为了拟合所有的训练集数据, 参数<span id="MathJax-Element-41-Frame" class="MathJax">w</span>可以变得任意大从而使得模型不稳定,即方差大而偏差小。<span id="MathJax-Element-42-Frame" class="MathJax">λ</span>越大,标明先验分布协方差越小,偏差越大,模型越稳定。即,加入正则项是在偏差bias与方差variance之间做平衡tradeoff(来自知乎)。下图即为L2与L1正则的区别:<br><img title="" src="http://img.blog.csdn.net/20151026204740369" alt="这里写图片描述" style="border: 0px; height: auto; width: auto;"><br>上图中的模型是线性回归,有两个特征,要优化的参数分别是w1和w2,左图的正则化是L2,右图是L1。蓝色线就是优化过程中遇到的等高线,一圈代表一个 目标函数值,圆心就是样本观测值(假设一个样本),半径就是误差值,受限条件就是红色边界(就是正则化那部分),二者相交处,才是最优参数。可见右边的最 优参数只可能在坐标轴上,所以就会出现0权重参数,使得模型稀疏。<br>其实拉普拉斯分布与高斯分布是数学家从实验中误差服从什么分布研究中得来的。一般直观上的认识是服从应该服从均值为0的对称分布,并且误差大的频率低,误差小的频率高,因此拉普拉斯使用拉普拉斯分布对误差的分布进行拟合,如下图:<br><img title="" src="http://img.blog.csdn.net/20151026204837282" alt="这里写图片描述" style="border: 0px; height: auto; width: auto;"><br>而拉普拉斯在最高点,即自变量为0处不可导,因为不便于计算,于是高斯在这基础上使用高斯分布对其进行拟合,如下图:<br><img title="" src="http://img.blog.csdn.net/20151026204901473" alt="这里写图片描述" style="border: 0px; height: auto; width: auto;"><br>具体参见:<a href="http://emma.memect.com/t/05400bfdb821bac2abba403afc4ad2e9d2e7d8ea845aa7227f2433ffbe7a9684/intro-normal-distribution.pdf" style="padding: 20px; color: rgb(102, 102, 102); ; text-decoration: none; -webkit-transition: color 0.3s; transition: color 0.3s; display: block;">正态分布的前世今生</a></li></ul><h3 id="dropout" style="margin-top: 10px; margin-bottom: 10px; margin-left: -36px; padding-right: 40px; padding-left: 63px; height: 40px; line-height: 40px; border-left-; border-left-style: solid; border-left-color: rgb(51, 134, 164); ; color: rgb(51, 134, 164); display: inline-block; border-top-left-radius: 8px; border-top-right-radius: 8px; border-bottom-right-radius: 8px; border-bottom-left-radius: 8px; font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; letter-spacing: 0.5px; background: rgb(247, 247, 247);"><a name="t5" style="color: rgb(102, 102, 102); ; -webkit-transition: color 0.3s; transition: color 0.3s;"></a>Dropout</h3><p style="max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; color: rgb(62, 62, 62); font-family: 'Helvetica Neue', Helvetica, 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; ; line-height: 25.6000003814697px; box-sizing: border-box !important;"><span style="color: rgb(101, 101, 101); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; ; letter-spacing: 0.5px; line-height: normal; white-space: normal;"></span><span style="color: rgb(101, 101, 101); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; ; letter-spacing: 0.5px; line-height: normal; white-space: normal;"></span><span style="color: rgb(101, 101, 101); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; ; letter-spacing: 0.5px; line-height: normal; white-space: normal;"></span><span style="color: rgb(101, 101, 101); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; ; letter-spacing: 0.5px; line-height: normal; white-space: normal;"></span></p><p style="padding-top: 10px; padding-bottom: 10px; clear: both; width: auto; ; line-height: 33px; text-indent: 35px; color: rgb(85, 85, 85); font-family: 'Microsoft Yahei', '冬青黑体简体中文 w3'; letter-spacing: 0.5px;">正则是通过在代价函数后面加上正则项来防止模型过拟合的。而在神经网络中,有一种方法是通过修改神经网络本身结构来实现的,其名为Dropout。该方法是在对网络进行训练时用一种技巧(trike),对于如下所示的三层人工神经网络:<br><img title="" src="http://img.blog.csdn.net/20151026204937425" alt="这里写图片描述" style="margin-right: auto; margin-left: auto; border: 0px; height: auto; width: auto; max-; display: block; background: url(<img src=" http:="" dataunion.org="" wp-content="" themes="" yzipi="" images="" imgbg.gif"="" border="0">) no-repeat;"&gt;<br>对于上图所示的网络,在训练开始时,随机得删除一些(可以设定为一半,也可以为1/3,1/4等)隐藏层神经元,即认为这些神经元不存在,同时保持输入层与输出层神经元的个数不变,这样便得到如下的ANN:<br><img title="" src="http://img.blog.csdn.net/20151026204952319" alt="这里写图片描述" style="margin-right: auto; margin-left: auto; border: 0px; height: auto; width: auto; max-; display: block; background: url(<img src=" http:="" dataunion.org="" wp-content="" themes="" yzipi="" images="" imgbg.gif"="" border="0">) no-repeat;"&gt;<br>然后按照BP学习算法对ANN中的参数进行学习更新(虚线连接的单元不更新,因为认为这些神经元被临时删除了)。这样一次迭代更新便完成了。下一次迭代中,同样随机删除一些神经元,与上次不一样,做随机选择。这样一直进行瑕疵,直至训练结束。<br>Dropout方法是通过修改ANN中隐藏层的神经元个数来防止ANN的过拟合。具体可参见<a href="http://papers.nips.cc/paper/4824-imagenet-classification-w" style="color: rgb(102, 102, 102); ; text-decoration: none; -webkit-transition: color 0.3s; transition: color 0.3s;">这里</a>。</p><p style="max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; color: rgb(62, 62, 62); font-family: 'Helvetica Neue', Helvetica, 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; ; line-height: 25.6000003814697px; box-sizing: border-box !important;">本文来自网络一只鸟的天空博客。</p><p style="max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; color: rgb(62, 62, 62); font-family: 'Helvetica Neue', Helvetica, 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; ; line-height: 25.6000003814697px; box-sizing: border-box !important;"><br></p><p style="max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; color: rgb(62, 62, 62); font-family: 'Helvetica Neue', Helvetica, 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; ; line-height: 25.6000003814697px; box-sizing: border-box !important;">微信原文:http://mp.weixin.qq.com/s?__biz=MzA3NDkyNTc4Ng==&amp;mid=402674815&amp;idx=2&amp;sn=e83cf2e04b4f031cc84fadaac14a28e9&amp;scene=4</p><p style="max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; color: rgb(62, 62, 62); font-family: 'Helvetica Neue', Helvetica, 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; ; line-height: 25.6000003814697px; box-sizing: border-box !important;"><br></p><p style="text-align: center; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; color: rgb(62, 62, 62); font-family: 'Helvetica Neue', Helvetica, 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; ; line-height: 25.6000003814697px; box-sizing: border-box !important;"><img src="https://dl2024-edu.jg.com.cn/forum/201602/24/1017208e7ijrjavjzris7o.jpg" width="640" height="600" border="0"></p></div><div><strong style="color: rgb(51, 51, 51); font-family: 'Microsoft Yahei', Tahoma, Simsun; ; line-height: 27px;"><font size="4"><font color="#ff8c00"><br></font></font></strong></div>
附件列表
_PTAC{1R{B3NY6U$M%{WSQD.png

原图尺寸 9.36 KB

_PTAC{1R{B3NY6U$M%{WSQD.png

二维码

扫码加我 拉你入群

请注明:姓名-公司-职位

以便审核进群资格,未注明则拒绝

全部回复
2016-2-25 18:06:18
二维码

扫码加我 拉你入群

请注明:姓名-公司-职位

以便审核进群资格,未注明则拒绝

相关推荐
栏目导航
热门文章
推荐文章

说点什么

分享

扫码加好友,拉您进群
各岗位、行业、专业交流群