Deep Learning with PyTorch: A Practical Approach to Building Neural Network Models Using PyTorch

Build neural network models in text, vision, and advanced analytics using PyTorch.

About This Book:
  • Learn PyTorch for implementing cutting-edge deep learning algorithms.
  • Train your neural networks for higher speed and flexibility, and learn how to implement them in various scenarios.
  • Cover various advan...


Bibliographic Details
Other Authors: Subramanian, Vishnu (author); Agarwal, Manas (writer of foreword)
Format: Electronic book
Language: English
Published: Birmingham, England; Mumbai, India: Packt Publishing, 2018
Edition: First edition
Subjects:
View in the Universitat Ramon Llull Library: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009631632506719
Table of Contents:
  • Cover
  • Copyright and Credits
  • Dedication
  • Packt Upsell
  • Foreword
  • Contributors
  • Table of Contents
  • Preface
  • Chapter 1: Getting Started with Deep Learning Using PyTorch
  • Artificial intelligence
  • The history of AI
  • Machine learning
  • Examples of machine learning in real life
  • Deep learning
  • Applications of deep learning
  • Hype associated with deep learning
  • The history of deep learning
  • Why now?
  • Hardware availability
  • Data and algorithms
  • Deep learning frameworks
  • PyTorch
  • Summary
  • Chapter 2: Building Blocks of Neural Networks
  • Installing PyTorch
  • Our first neural network
  • Data preparation
  • Scalars (0-D tensors)
  • Vectors (1-D tensors)
  • Matrices (2-D tensors)
  • 3-D tensors
  • Slicing tensors
  • 4-D tensors
  • 5-D tensors
  • Tensors on GPU
  • Variables
  • Creating data for our neural network
  • Creating learnable parameters
  • Neural network model
  • Network implementation
  • Loss function
  • Optimizing the neural network
  • Loading data
  • Dataset class
  • DataLoader class
  • Summary
  • Chapter 3: Diving Deep into Neural Networks
  • Deep dive into the building blocks of neural networks
  • Layers - fundamental blocks of neural networks
  • Non-linear activations
  • Sigmoid
  • Tanh
  • ReLU
  • Leaky ReLU
  • PyTorch non-linear activations
  • The PyTorch way of building deep learning algorithms
  • Model architecture for different machine learning problems
  • Loss functions
  • Optimizing network architecture
  • Image classification using deep learning
  • Loading data into PyTorch tensors
  • Loading PyTorch tensors as batches
  • Building the network architecture
  • Training the model
  • Summary
  • Chapter 4: Fundamentals of Machine Learning
  • Three kinds of machine learning problems
  • Supervised learning
  • Unsupervised learning
  • Reinforcement learning
  • Machine learning glossary
  • Evaluating machine learning models
  • Training, validation, and test split
  • Simple holdout validation
  • K-fold validation
  • K-fold validation with shuffling
  • Data representativeness
  • Time sensitivity
  • Data redundancy
  • Data preprocessing and feature engineering
  • Vectorization
  • Value normalization
  • Handling missing values
  • Feature engineering
  • Overfitting and underfitting
  • Getting more data
  • Reducing the size of the network
  • Applying weight regularization
  • Dropout
  • Underfitting
  • Workflow of a machine learning project
  • Problem definition and dataset creation
  • Measure of success
  • Evaluation protocol
  • Prepare your data
  • Baseline model
  • A model large enough to overfit
  • Applying regularization
  • Learning rate picking strategies
  • Summary
  • Chapter 5: Deep Learning for Computer Vision
  • Introduction to neural networks
  • MNIST - getting data
  • Building a CNN model from scratch
  • Conv2d
  • Pooling
  • Nonlinear activation - ReLU
  • View
  • Linear layer
  • Training the model
  • Classifying dogs and cats - CNN from scratch
  • Classifying dogs and cats using transfer learning
  • Creating and exploring a VGG16 model
  • Freezing the layers
  • Fine-tuning VGG16
  • Training the VGG16 model
  • Calculating pre-convoluted features
  • Understanding what a CNN model learns
  • Visualizing outputs from intermediate layers
  • Visualizing weights of the CNN layer
  • Summary
  • Chapter 6: Deep Learning with Sequence Data and Text
  • Working with text data
  • Tokenization
  • Converting text into characters
  • Converting text into words
  • N-gram representation
  • Vectorization
  • One-hot encoding
  • Word embedding
  • Training word embedding by building a sentiment classifier
  • Downloading IMDB data and performing text tokenization
  • torchtext.data
  • torchtext.datasets
  • Building vocabulary
  • Generating batches of vectors
  • Creating a network model with embedding
  • Training the model
  • Using pretrained word embeddings
  • Downloading the embeddings
  • Loading the embeddings in the model
  • Freezing the embedding layer weights
  • Recurrent neural networks (RNNs)
  • Understanding how RNN works with an example
  • LSTM
  • Long-term dependency
  • LSTM networks
  • Preparing the data
  • Creating batches
  • Creating the network
  • Training the model
  • Convolutional network on sequence data
  • Understanding one-dimensional convolution for sequence data
  • Creating the network
  • Training the model
  • Summary
  • Chapter 7: Generative Networks
  • Neural style transfer
  • Loading the data
  • Creating the VGG model
  • Content loss
  • Style loss
  • Extracting the losses
  • Creating a loss function for each layer
  • Creating the optimizer
  • Training
  • Generative adversarial networks
  • Deep convolutional GAN
  • Defining the generator network
  • Transposed convolutions
  • Batch normalization
  • Generator
  • Defining the discriminator network
  • Defining loss and optimizer
  • Training the discriminator
  • Training the discriminator with real images
  • Training the discriminator with fake images
  • Training the generator network
  • Training the complete network
  • Inspecting the generated images
  • Language modeling
  • Preparing the data
  • Generating the batches
  • Batches
  • Backpropagation through time
  • Defining a model based on LSTM
  • Defining the train and evaluate functions
  • Training the model
  • Summary
  • Chapter 8: Modern Network Architectures
  • Modern network architectures
  • ResNet
  • Creating PyTorch datasets
  • Creating loaders for training and validation
  • Creating a ResNet model
  • Extracting convolutional features
  • Creating a custom PyTorch dataset class for the pre-convoluted features and loader
  • Creating a simple linear model
  • Training and validating the model
  • Inception
  • Creating an Inception model
  • Extracting convolutional features using register_forward_hook
  • Creating a new dataset for the convoluted features
  • Creating a fully connected model
  • Training and validating the model
  • Densely connected convolutional networks - DenseNet
  • DenseBlock
  • DenseLayer
  • Creating a DenseNet model
  • Extracting DenseNet features
  • Creating a dataset and loaders
  • Creating and training a fully connected model
  • Model ensembling
  • Creating models
  • Extracting the image features
  • Creating a custom dataset along with data loaders
  • Creating an ensembling model
  • Training and validating the model
  • Encoder-decoder architecture
  • Encoder
  • Decoder
  • Summary
  • Chapter 9: What Next?
  • What next?
  • Overview
  • Interesting ideas to explore
  • Object detection
  • Image segmentation
  • OpenNMT in PyTorch
  • AllenNLP
  • fast.ai - making neural nets uncool again
  • Open Neural Network Exchange
  • How to keep yourself updated
  • Summary
  • Other Books You May Enjoy
  • Index