2016-08-16
The excellent post by Zack Lipton, Deep Learning's Deep Flaws, has examined the "flaws" found in deep learning algorithms, especially how one can generate adversarial examples that fool them. Zack argued that all machine learning algorithms are susceptible to adversarially chosen examples, and that we should not be surprised that deep learning shares this weakness with logistic regression.
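To make the attack concrete, here is a minimal sketch, assuming only numpy and a synthetic two-class dataset (all names and numbers are illustrative, not from Zack's post): a logistic regression is trained to near-perfect accuracy on weak-but-many features, then fooled by a small fast-gradient-sign (FGSM-style) perturbation of the input.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200  # input dimension, far larger than the number of classes

# Two Gaussian classes whose means differ by only 0.4 per coordinate:
# each feature is individually weak, but jointly they support a
# near-perfect linear classifier.
X = np.vstack([rng.normal(-0.2, 1.0, (200, d)),
               rng.normal(+0.2, 1.0, (200, d))])
y = np.repeat([0.0, 1.0], 200)

# Plain batch gradient descent on the logistic log-loss.
w, b = np.zeros(d), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # P(y = 1 | x)
    g = p - y                               # gradient of the loss wrt the logit
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

def prob_class1(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# FGSM-style attack: for logistic regression the input gradient of the
# loss on a class-0 point is proportional to w, so stepping along
# sign(w) is the worst-case small L-infinity perturbation.
x = np.full(d, -0.2)               # a prototypical class-0 input
x_adv = x + 0.25 * np.sign(w)      # nudge every coordinate by only 0.25

print(f"clean     P(class 1) = {prob_class1(x):.3f}")      # well below 0.5
print(f"perturbed P(class 1) = {prob_class1(x_adv):.3f}")  # above 0.5: fooled
```

The point of the construction: each coordinate carries a signal of size only 0.2, so a per-coordinate nudge of 0.25, negligible next to the noise standard deviation of 1, overwhelms the signal in every feature at once. This is exactly why a simple linear model is as easy to fool as a deep network.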

This post generated many comments, including an interesting observation from Yoshua Bengio, one of the leading experts on Machine Learning and Deep Learning.

He wrote: "I agree with (Zack Lipton's) analysis, and I am glad that you have put this discussion online."

Yoshua continued:
My conjecture is that *good* unsupervised learning should generally be much more robust to adversarial distortions because it tries to discriminate the data manifold from its surroundings, in ALL non-manifold directions (at every point on the manifold). This is in contrast with supervised learning, which only needs to worry about the directions that discriminate between the observed classes. Because the number of classes is much less than the dimensionality of the space (for image data), supervised learning is highly underconstrained, leaving many directions of change "unchecked" (i.e., directions to which the model is either insensitive when it should not be, or too sensitive in the wrong way).
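Bengio's dimension-counting argument can be checked directly in a toy setting. The sketch below uses a hypothetical linear softmax classifier with random stand-in weights (not a trained model, and not from the post): with K classes in d dimensions, the logits, and hence all input gradients, live in the K-dimensional row space of the weight matrix, so any perturbation in the remaining d - K directions is completely "unchecked" and leaves the prediction unchanged up to floating-point noise.

```python
import numpy as np

rng = np.random.default_rng(1)
d, K = 1000, 10                 # image-like input dimension, 10 classes
W = rng.normal(size=(K, d))     # stand-in for trained softmax weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=d)
p = softmax(W @ x)

# Build a perturbation orthogonal to all K weight vectors: it lies in
# the (d - K)-dimensional subspace the classifier never looks at.
q, _ = np.linalg.qr(W.T)        # orthonormal basis of W's row space
v = rng.normal(size=d)
v -= q @ (q.T @ v)              # project out the K "checked" directions
v *= 10.0 / np.linalg.norm(v)   # even a huge step in this direction...

print(np.abs(softmax(W @ (x + v)) - p).max())  # ...changes nothing (~1e-16)
```

A deep network's gradients are not globally confined to a fixed subspace like this, but the same local counting applies: the supervised objective only constrains roughly as many directions as there are classes to separate, which is Bengio's sense of "highly underconstrained."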
