
Free PDF e-book download: "Deep Learning Tutorial" (LISA lab, University of Montreal)


Download option 1:

Baidu Netdisk link: https://pan.baidu.com/s/1T-lLKzywPNwUMQSfyfPZig
Baidu Netdisk extraction code: 1111

Download option 2:

http://ziliaoshare.cn/Download/ab_123504_pd_DeepLearningTutorial_LISAlab,UniversityofMontreal.zip

 


Deep Learning Tutorial_LISA lab, University of Montreal

Author: LISA lab, University of Montreal

Pages: 171

Publisher: University of Montreal

About "Deep Learning Tutorial" (LISA lab, University of Montreal)


Deep Learning is a new area of Machine Learning research, which has been introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence. See these course notes for a brief introduction to Machine Learning for AI and an introduction to Deep Learning algorithms. Deep Learning is about learning multiple levels of representation and abstraction that help to make sense of data such as images, sound, and text. For more about deep learning algorithms, see for example:

• The monograph or review paper Learning Deep Architectures for AI (Foundations & Trends in Machine Learning, 2009).
• The ICML 2009 Workshop on Learning Feature Hierarchies webpage has a list of references.
• The LISA public wiki has a reading list and a bibliography.
• Geoff Hinton has readings from the 2009 NIPS tutorial.

The tutorials presented here will introduce you to some of the most important deep learning algorithms and will also show you how to run them using Theano. Theano is a Python library that makes writing deep learning models easy, and gives the option of training them on a GPU.

The algorithm tutorials have some prerequisites. You should know some Python, and be familiar with NumPy. Since this tutorial is about using Theano, you should read over the Theano basic tutorial first. Once you've done that, read through the Getting Started chapter: it introduces the notation and downloadable datasets used in the algorithm tutorials, and the way optimization is done by stochastic gradient descent.

The purely supervised learning algorithms are meant to be read in order:

1. Logistic Regression - using Theano for something simple
2. Multilayer Perceptron - introduction to layers
3. Deep Convolutional Network - a simplified version of LeNet5

The unsupervised and semi-supervised learning algorithms can be read in any order (the auto-encoders can be read independently of the RBM/DBN thread):

• Auto-Encoders, Denoising Autoencoders - description of autoencoders
• Stacked Denoising Auto-Encoders - easy steps into unsupervised pre-training for deep nets
• Restricted Boltzmann Machines - single-layer generative RBM model
• Deep Belief Networks - unsupervised generative pre-training of stacked RBMs followed by supervised fine-tuning
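The first supervised model in the tutorial is logistic regression trained by stochastic gradient descent. As a rough, library-free illustration of those two ideas (a NumPy sketch on synthetic two-class data, not the tutorial's Theano code; the toy dataset and hyperparameters here are arbitrary choices for demonstration):

```python
# Minimal logistic regression trained with stochastic gradient descent.
# Illustrative sketch only -- the tutorial itself implements this in Theano.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
               rng.normal(+1.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

W = np.zeros(2)   # weight vector
b = 0.0           # bias
lr = 0.1          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stochastic gradient descent: update on one shuffled example at a time,
# using the gradient of the negative log-likelihood of the logistic model.
for epoch in range(20):
    for i in rng.permutation(len(X)):
        p = sigmoid(X[i] @ W + b)   # predicted P(y = 1 | x)
        grad = p - y[i]             # d(loss)/d(logit)
        W -= lr * grad * X[i]
        b -= lr * grad

preds = (sigmoid(X @ W + b) > 0.5).astype(int)
accuracy = (preds == y).mean()
```

The per-example update is the defining feature of SGD as used throughout the tutorial; the Theano versions differ mainly in that the gradient is derived symbolically and the loop can run on a GPU.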


"Deep Learning Tutorial" (LISA lab, University of Montreal): Table of Contents

1 LICENSE
2 Deep Learning Tutorials
3 Getting Started
3.1 Download
3.2 Datasets
3.3 Notation
3.4 A Primer on Supervised Optimization for Deep Learning
3.5 Theano/Python Tips
4 Classifying MNIST digits using Logistic Regression
4.1 The Model
4.2 Defining a Loss Function
4.3 Creating a LogisticRegression class
4.4 Learning the Model
4.5 Testing the Model
4.6 Putting it All Together
5 Multilayer Perceptron
5.1 The Model
5.2 Going from logistic regression to MLP
5.3 Putting it All Together
5.4 Tips and Tricks for training MLPs
6 Convolutional Neural Networks (LeNet)
6.1 Motivation
6.2 Sparse Connectivity
6.3 Shared Weights
6.4 Details and Notation
6.5 The Convolution Operator
6.6 MaxPooling
6.7 The Full Model: LeNet
6.8 Putting it All Together
6.9 Running the Code
6.10 Tips and Tricks
7 Denoising Autoencoders (dA)
7.1 Autoencoders
7.2 Denoising Autoencoders
7.3 Putting it All Together
7.4 Running the Code
8 Stacked Denoising Autoencoders (SdA)
8.1 Stacked Autoencoders
8.2 Putting it All Together
8.3 Running the Code
8.4 Tips and Tricks
9 Restricted Boltzmann Machines (RBM)
9.1 Energy-Based Models (EBM)
9.2 Restricted Boltzmann Machines (RBM)
9.3 Sampling in an RBM
9.4 Implementation
9.5 Results
10 Deep Belief Networks
10.1 Deep Belief Networks
10.2 Justifying Greedy-Layer Wise Pre-Training
10.3 Implementation
10.4 Putting it All Together
10.5 Running the Code
10.6 Tips and Tricks
11 Hybrid Monte-Carlo Sampling
11.1 Theory
11.2 Implementing HMC Using Theano
11.3 Testing our Sampler
11.4 References
12 Recurrent Neural Networks with Word Embeddings
12.1 Summary
12.2 Code - Citations - Contact
12.3 Task
12.4 Dataset
12.5 Recurrent Neural Network Model
12.6 Evaluation
12.7 Training
12.8 Running the Code
13 LSTM Networks for Sentiment Analysis
13.1 Summary
13.2 Data
13.3 Model
13.4 Code - Citations - Contact
13.5 References
14 Modeling and generating sequences of polyphonic music with the RNN-RBM
14.1 The RNN-RBM
14.2 Implementation
14.3 Results
14.4 How to improve this code
15 Miscellaneous
15.1 Plotting Samples and Filters
16 References
Bibliography
Index

