Deep Learning Quick Reference: Useful hacks for training and optimizing deep neural networks with TensorFlow and Keras
Dive deeper into neural networks and get your models trained and optimized with this quick reference guide. About This Book: A quick reference to all important deep learning concepts and their implementations. Essential tips, tricks, and hacks to train a variety of deep learning models such as CNNs, RNNs,...
Other Authors:
Format: Electronic book
Language: English
Published: Birmingham, England ; Mumbai, [India] : Packt, 2018
Edition: First edition
Subjects:
View at the Universitat Ramon Llull Library: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009631599206719
Table of Contents:
- Cover
- Copyright and Credits
- Dedication
- Packt Upsell
- Foreword
- Contributors
- Table of Contents
- Preface
- Chapter 1: The Building Blocks of Deep Learning
- The deep neural network architectures
- Neurons
- The neuron linear function
- Neuron activation functions
- The loss and cost functions in deep learning
- The forward propagation process
- The back propagation function
- Stochastic and minibatch gradient descents
- Optimization algorithms for deep learning
- Using momentum with gradient descent
- The RMSProp algorithm
- The Adam optimizer
- Deep learning frameworks
- What is TensorFlow?
- What is Keras?
- Popular alternatives to TensorFlow
- GPU requirements for TensorFlow and Keras
- Installing Nvidia CUDA Toolkit and cuDNN
- Installing Python
- Installing TensorFlow and Keras
- Building datasets for deep learning
- Bias and variance errors in deep learning
- The train, val, and test datasets
- Managing bias and variance in deep neural networks
- K-Fold cross-validation
- Summary
- Chapter 2: Using Deep Learning to Solve Regression Problems
- Regression analysis and deep neural networks
- Benefits of using a neural network for regression
- Drawbacks to consider when using a neural network for regression
- Using deep neural networks for regression
- How to plan a machine learning problem
- Defining our example problem
- Loading the dataset
- Defining our cost function
- Building an MLP in Keras
- Input layer shape
- Hidden layer shape
- Output layer shape
- Neural network architecture
- Training the Keras model
- Measuring the performance of our model
- Building a deep neural network in Keras
- Measuring the deep neural network performance
- Tuning the model hyperparameters
- Saving and loading a trained Keras model
- Summary
- Chapter 3: Monitoring Network Training Using TensorBoard
- A brief overview of TensorBoard
- Setting up TensorBoard
- Installing TensorBoard
- How TensorBoard talks to Keras/TensorFlow
- Running TensorBoard
- Connecting Keras to TensorBoard
- Introducing Keras callbacks
- Creating a TensorBoard callback
- Using TensorBoard
- Visualizing training
- Visualizing network graphs
- Visualizing a broken network
- Summary
- Chapter 4: Using Deep Learning to Solve Binary Classification Problems
- Binary classification and deep neural networks
- Benefits of deep neural networks
- Drawbacks of deep neural networks
- Case study - epileptic seizure recognition
- Defining our dataset
- Loading data
- Model inputs and outputs
- The cost function
- Using metrics to assess the performance
- Building a binary classifier in Keras
- The input layer
- The hidden layers
- What happens if we use too many neurons?
- What happens if we use too few neurons?
- Choosing a hidden layer architecture
- Coding the hidden layers for our example
- The output layer
- Putting it all together
- Training our model
- Using the checkpoint callback in Keras
- Measuring ROC AUC in a custom callback
- Measuring precision, recall, and f1-score
- Summary
- Chapter 5: Using Keras to Solve Multiclass Classification Problems
- Multiclass classification and deep neural networks
- Benefits
- Drawbacks
- Case study - handwritten digit classification
- Problem definition
- Model inputs and outputs
- Flattening inputs
- Categorical outputs
- Cost function
- Metrics
- Building a multiclass classifier in Keras
- Loading MNIST
- Input layer
- Hidden layers
- Output layer
- Softmax activation
- Putting it all together
- Training
- Using scikit-learn metrics with multiclass models
- Controlling variance with dropout
- Controlling variance with regularization
- Summary
- Chapter 6: Hyperparameter Optimization
- Should network architecture be considered a hyperparameter?
- Finding a giant and then standing on his shoulders
- Adding until you overfit, then regularizing
- Practical advice
- Which hyperparameters should we optimize?
- Hyperparameter optimization strategies
- Common strategies
- Using random search with scikit-learn
- Hyperband
- Summary
- Chapter 7: Training a CNN from Scratch
- Introducing convolutions
- How do convolutional layers work?
- Convolutions in three dimensions
- A layer of convolutions
- Benefits of convolutional layers
- Parameter sharing
- Local connectivity
- Pooling layers
- Batch normalization
- Training a convolutional neural network in Keras
- Input
- Output
- Cost function and metrics
- Convolutional layers
- Fully connected layers
- Multi-GPU models in Keras
- Training
- Using data augmentation
- The Keras ImageDataGenerator
- Training with a generator
- Summary
- Chapter 8: Transfer Learning with Pretrained CNNs
- Overview of transfer learning
- When transfer learning should be used
- Limited data
- Common problem domains
- The impact of source/target volume and similarity
- More data is always beneficial
- Source/target domain similarity
- Transfer learning in Keras
- Target domain overview
- Source domain overview
- Source network architecture
- Transfer network architecture
- Data preparation
- Data input
- Training (feature extraction)
- Training (fine-tuning)
- Summary
- Chapter 9: Training an RNN from scratch
- Introducing recurrent neural networks
- What makes a neuron recurrent?
- Long Short-Term Memory Networks
- Backpropagation through time
- A refresher on time series problems
- Stock and flow
- ARIMA and ARIMAX forecasting
- Using an LSTM for time series prediction
- Data preparation
- Loading the dataset
- Slicing train and test by date
- Differencing a time series
- Scaling a time series
- Creating a lagged training set
- Input shape
- Data preparation glue
- Network output
- Network architecture
- Stateful versus stateless LSTMs
- Training
- Measuring performance
- Summary
- Chapter 10: Training LSTMs with Word Embeddings from Scratch
- An introduction to natural language processing
- Semantic analysis
- Document classification
- Vectorizing text
- NLP terminology
- Bag of Words models
- Stemming, lemmatization, and stopwords
- Count and TF-IDF vectorization
- Word embedding
- A quick example
- Learning word embeddings with prediction
- Learning word embeddings with counting
- Getting from words to documents
- Keras embedding layer
- 1D CNNs for natural language processing
- Case studies for document classifications
- Sentiment analysis with Keras embedding layers and LSTMs
- Preparing the data
- Input and embedding layer architecture
- LSTM layer
- Output layer
- Putting it all together
- Training the network
- Performance
- Document classification with and without GloVe
- Preparing the data
- Loading pretrained word vectors
- Input and embedding layer architecture
- Without GloVe vectors
- With GloVe vectors
- Convolution layers
- Output layer
- Putting it all together
- Training
- Performance
- Summary
- Chapter 11: Training Seq2Seq Models
- Sequence-to-sequence models
- Sequence-to-sequence model applications
- Sequence-to-sequence model architecture
- Encoders and decoders
- Characters versus words
- Teacher forcing
- Attention
- Translation metrics
- Machine translation
- Understanding the data
- Loading data
- One hot encoding
- Training network architecture
- Network architecture (for inference)
- Putting it all together
- Training
- Inference
- Loading data
- Creating reverse indices
- Loading models
- Translating a sequence
- Decoding a sequence
- Example translations
- Summary
- Chapter 12: Using Deep Reinforcement Learning
- Reinforcement learning overview
- Markov Decision Processes
- Q Learning
- Infinite state space
- Deep Q networks
- Online learning
- Memory and experience replay
- Exploitation versus exploration
- DeepMind
- The Keras reinforcement learning framework
- Installing Keras-RL
- Installing OpenAI gym
- Using OpenAI gym
- Building a reinforcement learning agent in Keras
- CartPole
- CartPole neural network architecture
- Memory
- Policy
- Agent
- Training
- Results
- Lunar Lander
- Lunar Lander network architecture
- Memory and policy
- Agent
- Training
- Results
- Summary
- Chapter 13: Generative Adversarial Networks
- An overview of the GAN
- Deep Convolutional GAN architecture
- Adversarial training architecture
- Generator architecture
- Discriminator architecture
- Stacked training
- Step 1 - train the discriminator
- Step 2 - train the stack
- How GANs can fail
- Stability
- Mode collapse
- Safe choices for GAN
- Generating MNIST images using a Keras GAN
- Loading the dataset
- Building the generator
- Building the discriminator
- Building the stacked model
- The training loop
- Model evaluation
- Generating CIFAR-10 images using a Keras GAN
- Loading CIFAR-10
- Building the generator
- Building the discriminator
- The training loop
- Model evaluation
- Summary
- Other Books You May Enjoy
- Index