Hands-On Convolutional Neural Networks with TensorFlow: Solve computer vision problems with modeling in TensorFlow and Python

Learn how to apply TensorFlow to a wide range of deep learning and machine learning problems with this practical guide on training CNNs for image classification, image recognition, object detection, and other computer vision challenges. Key Features: Learn the fundamentals of Convolutional Neural Networks...

Full description

Bibliographic Details
Other Authors: Zafar, Iffat (author); Tzanidou, Giounona (author); Burton, Richard (author); Patel, Nimesh (author); Araujo, Leonardo (author)
Format: eBook
Language: English
Published: Birmingham : Packt Publishing Ltd, [2018]
Edition: 1st edition
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009630729306719
Table of Contents:
  • Cover
  • Title Page
  • Copyright and Credits
  • Packt Upsell
  • Contributors
  • Table of Contents
  • Preface
  • Chapter 1: Setup and Introduction to TensorFlow
  • The TensorFlow way of thinking
  • Setting up and installing TensorFlow
  • Conda environments
  • Checking whether your installation works
  • TensorFlow API levels
  • Eager execution
  • Building your first TensorFlow model
  • One-hot vectors
  • Splitting into training and test sets
  • Creating TensorFlow graphs
  • Variables
  • Operations
  • Feeding data with placeholders
  • Initializing variables
  • Training our model
  • Loss functions
  • Optimization
  • Evaluating a trained model
  • The session
  • Summary
  • Chapter 2: Deep Learning and Convolutional Neural Networks
  • AI and ML
  • Types of ML
  • Old versus new ML
  • Artificial neural networks
  • Activation functions
  • The XOR problem
  • Training neural networks
  • Backpropagation and the chain rule
  • Batches
  • Loss functions
  • The optimizer and its hyperparameters
  • Underfitting versus overfitting
  • Feature scaling
  • Fully connected layers
  • A TensorFlow example for the XOR problem
  • Convolutional neural networks
  • Convolution
  • Input padding
  • Calculating the number of parameters (weights)
  • Calculating the number of operations
  • Converting convolution layers into fully connected layers
  • The pooling layer
  • 1x1 Convolution
  • Calculating the receptive field
  • Building a CNN model in TensorFlow
  • TensorBoard
  • Other types of convolutions
  • Summary
  • Chapter 3: Image Classification in TensorFlow
  • CNN model architecture
  • Cross-entropy loss (log loss)
  • Multi-class cross entropy loss
  • The train/test dataset split
  • Datasets
  • ImageNet
  • CIFAR
  • Loading CIFAR
  • Image classification with TensorFlow
  • Building the CNN graph
  • Learning rate scheduling
  • Introduction to the tf.data API
  • The main training loop
  • Model Initialization
  • Do not initialize all weights with zeros
  • Initializing with a mean zero distribution
  • Xavier-Bengio and the Initializer
  • Improving generalization by regularizing
  • L2 and L1 regularization
  • Dropout
  • The batch norm layer
  • Summary
  • Chapter 4: Object Detection and Segmentation
  • Image classification with localization
  • Localization as regression
  • TensorFlow implementation
  • Other applications of localization
  • Object detection as classification - Sliding window
  • Using heuristics to guide us (R-CNN)
  • Problems
  • Fast R-CNN
  • Faster R-CNN
  • Region Proposal Network
  • RoI Pooling layer
  • Conversion from traditional CNN to Fully Convnets
  • Single Shot Detectors - You Only Look Once
  • Creating training set for Yolo object detection
  • Evaluating detection (Intersection Over Union)
  • Filtering output
  • Anchor Box
  • Testing/Predicting in Yolo
  • Detector Loss function (YOLO loss)
  • Loss Part 1
  • Loss Part 2
  • Loss Part 3
  • Semantic segmentation
  • Max Unpooling
  • Deconvolution layer (Transposed convolution)
  • The loss function
  • Labels
  • Improving results
  • Instance segmentation
  • Mask R-CNN
  • Summary
  • Chapter 5: VGG, Inception Modules, Residuals, and MobileNets
  • Substituting big convolutions
  • Substituting the 3x3 convolution
  • VGGNet
  • Architecture
  • Parameters and memory calculation
  • Code
  • More about VGG
  • GoogLeNet
  • Inception module
  • More about GoogLeNet
  • Residual Networks
  • MobileNets
  • Depthwise separable convolution
  • Control parameters
  • More about MobileNets
  • Summary
  • Chapter 6: Autoencoders, Variational Autoencoders, and Generative Adversarial Networks
  • Why generative models
  • Autoencoders
  • Convolutional autoencoder example
  • Uses and limitations of autoencoders
  • Variational autoencoders
  • Parameters to define a normal distribution
  • VAE loss function
  • Kullback-Leibler divergence
  • Training the VAE
  • The reparameterization trick
  • Convolutional Variational Autoencoder code
  • Generating new data
  • Generative adversarial networks
  • The discriminator
  • The generator
  • GAN loss function
  • Generator loss
  • Discriminator loss
  • Putting the losses together
  • Training the GAN
  • Deep convolutional GAN
  • WGAN
  • BEGAN
  • Conditional GANs
  • Problems with GANs
  • Loss interpretability
  • Mode collapse
  • Techniques to improve GANs' trainability
  • Minibatch discriminator
  • Summary
  • Chapter 7: Transfer Learning
  • When?
  • How? An overview
  • How? Code example
  • TensorFlow useful elements
  • An autoencoder without the decoder
  • Selecting layers
  • Training only some layers
  • Complete source
  • Summary
  • Chapter 8: Machine Learning Best Practices and Troubleshooting
  • Building Machine Learning Systems
  • Data Preparation
  • Split of Train/Development/Test set
  • Mismatch of the Dev and Test set
  • When to Change Dev/Test Set
  • Bias and Variance
  • Data Imbalance
  • Collecting more data
  • Look at your performance metric
  • Data synthesis/Augmentation
  • Resample Data
  • Loss function Weighting
  • Evaluation Metrics
  • Code Structure best Practice
  • Singleton Pattern
  • Recipe for CNN creation
  • Summary
  • Chapter 9: Training at Scale
  • Storing data in TFRecords
  • Making a TFRecord
  • Storing encoded images
  • Sharding
  • Making efficient pipelines
  • Parallel calls for map transformations
  • Getting a batch
  • Prefetching
  • Tracing your graph
  • Distributed computing in TensorFlow
  • Model/data parallelism
  • Synchronous/asynchronous SGD
  • When data does not fit on one computer
  • The advantages of NoSQL systems
  • Installing Cassandra (Ubuntu 16.04)
  • The CQLSH tool
  • Creating databases, tables, and indexes
  • Doing queries in Python
  • Populating tables in Python
  • Doing backups
  • Scaling computation in the cloud
  • EC2
  • AMI
  • Storage (S3)
  • SageMaker
  • Summary
  • References
  • Chapter 1
  • Chapter 2
  • Chapter 3
  • Chapter 4
  • Chapter 5
  • Chapter 7
  • Chapter 9
  • Other Books You May Enjoy
  • Index