2018-10-28

English | July 31, 2017 | ISBN: 1974082040 | 160 pages | AZW3 (pic)
Neural networks are inherently parallel algorithms. Multicore CPUs, graphical processing units (GPUs), and clusters of computers with multiple CPUs and GPUs can take advantage of this parallelism. Parallel Computing Toolbox, when used in conjunction with Neural Network Toolbox, enables neural network training and simulation to exploit each mode of parallelism: training and simulation can run across multiple CPU cores on a single PC, or across multiple CPUs on multiple computers on a network using MATLAB Distributed Computing Server. Using multiple cores can speed up calculations, and using multiple computers lets you solve problems whose data sets are too big to fit in the RAM of a single computer; the only limit on problem size is the total quantity of RAM available across all computers. Distributed and GPU computing can also be combined, running calculations across multiple CPUs and/or GPUs on a single computer or on a cluster with MATLAB Distributed Computing Server.

It is desirable to determine the optimal regularization parameters in an automated fashion. One approach is the Bayesian framework, in which the weights and biases of the network are assumed to be random variables with specified distributions. The regularization parameters are related to the unknown variances associated with these distributions, so you can estimate these parameters using statistical techniques.

It is very difficult to know in advance which training algorithm will be the fastest for a given problem. It depends on many factors, including the complexity of the problem, the number of data points in the training set, the number of weights and biases in the network, the error goal, and whether the network is being used for pattern recognition (discriminant analysis) or function approximation (regression). This book compares the various training algorithms.

One problem that occurs during neural network training is overfitting: the error on the training set is driven to a very small value, but when new data is presented to the network the error is large. The network has memorized the training examples but has not learned to generalize to new situations.
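
As a concrete illustration of the parallel and GPU modes described above, here is a minimal sketch using Neural Network Toolbox's train function with its documented 'useParallel' and 'useGPU' options; the bodyfat_dataset sample data just stands in for a real problem:

% Train a feedforward network across local CPU cores, and on a GPU
% when one is present. Requires Neural Network Toolbox and
% Parallel Computing Toolbox.
[x, t] = bodyfat_dataset;          % sample data shipped with the toolbox
net = feedforwardnet(10);          % one hidden layer of 10 neurons

% Spread training and simulation across workers in the local pool.
net = train(net, x, t, 'useParallel', 'yes');

% Push calculations to a supported GPU if one is available.
if gpuDeviceCount > 0
    net = train(net, x, t, 'useGPU', 'yes');
end
y = net(x);                        % simulate the trained network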
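
Because no single training algorithm is fastest for every problem, a practical approach is simply to time a few candidates on your own data. A rough sketch (the algorithm list and dataset here are illustrative choices, not the book's):

% Compare training algorithms on the same problem and report the
% wall-clock time and final training performance of each.
[x, t] = simplefit_dataset;
algs = {'trainlm', 'trainbfg', 'trainrp', 'trainscg'};
for k = 1:numel(algs)
    net = feedforwardnet(10, algs{k});
    net.trainParam.showWindow = false;   % suppress the training GUI
    tic;
    [net, tr] = train(net, x, t);
    fprintf('%-9s  time %5.2f s  final perf %.3g\n', ...
            algs{k}, toc, tr.best_perf);
end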
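
On the regularization and overfitting points, the toolbox offers two standard defenses: trainbr, which implements the Bayesian framework sketched above and estimates the regularization parameters automatically, and early stopping on a held-out validation set. A minimal sketch, assuming the stock trainbr and dividerand functions:

% Bayesian regularization: weights and biases are treated as random
% variables, and the regularization parameters are estimated during
% training.
[x, t] = simplefit_dataset;
net = feedforwardnet(10, 'trainbr');
net = train(net, x, t);

% Early stopping: hold out validation data and stop training when
% the validation error starts to rise.
net2 = feedforwardnet(10);
net2.divideFcn = 'dividerand';           % random train/val/test split
net2.divideParam.trainRatio = 0.70;
net2.divideParam.valRatio   = 0.15;
net2.divideParam.testRatio  = 0.15;
[net2, tr] = train(net2, x, t);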

This book covers the following topics:
• “Neural Networks with Parallel and GPU Computing”
• “Deep Learning”
• “Optimize Neural Network Training Speed and Memory”
• “Improve Neural Network Generalization and Avoid Overfitting”
• “Create and Train Custom Neural Network Architectures”
• “Deploy Training of Neural Networks”
• “Perceptron Neural Networks”
• “Linear Neural Networks”
• “Hopfield Neural Network”
• “Neural Network Object Reference”
• “Neural Network Simulink Block Library”
• “Deploy Neural Network Simulink Diagrams”

Hidden content of this post

Advanced Topics in Neural Networks With Matlab - Parallel Computing, Optimize An.rar
Size: 6.29 MB

Costs 5 forum coins. Download now.

This attachment includes:

  • Advanced Topics in Neural Networks With Matlab - Parallel Computing, Optimize And Training.azw3





All replies
2018-10-28 12:13:38
Taking a look.


2018-10-28 12:23:40
Good book, showing my support.

2018-10-28 12:32:02
Thanks for sharing.

2018-10-28 12:36:05
igs816 posted at 2018-10-28 12:03:
English | July 31, 2017 | ISBN: 1974082040 | 160 pages | AZW3 (pic)
Neural networks are inheren ...
Thanks for sharing.
