Mastering Machine Learning Algorithms: Expert techniques to implement popular machine learning algorithms and fine-tune your models

Explore and master the most important algorithms for solving complex machine learning problems.

About This Book: Discover high-performing machine learning algorithms and understand how they work in depth. One-stop solution to mastering supervised, unsupervised, and semi-supervised machine learning al...

Bibliographic Details
Other Authors: Bonaccorso, Giuseppe (author)
Format: eBook
Language: English
Published: Birmingham ; Mumbai : Packt [2018]
Edition: 1st edition
See on Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009630690806719
Table of Contents:
  • Cover
  • Copyright and Credits
  • Dedication
  • Packt Upsell
  • Contributors
  • Table of Contents
  • Preface
  • Chapter 1: Machine Learning Model Fundamentals
  • Models and data
  • Zero-centering and whitening
  • Training and validation sets
  • Cross-validation
  • Features of a machine learning model
  • Capacity of a model
  • Vapnik-Chervonenkis capacity
  • Bias of an estimator
  • Underfitting
  • Variance of an estimator
  • Overfitting
  • The Cramér-Rao bound
  • Loss and cost functions
  • Examples of cost functions
  • Mean squared error
  • Huber cost function
  • Hinge cost function
  • Categorical cross-entropy
  • Regularization
  • Ridge
  • Lasso
  • ElasticNet
  • Early stopping
  • Summary
  • Chapter 2: Introduction to Semi-Supervised Learning
  • Semi-supervised scenario
  • Transductive learning
  • Inductive learning
  • Semi-supervised assumptions
  • Smoothness assumption
  • Cluster assumption
  • Manifold assumption
  • Generative Gaussian mixtures
  • Example of a generative Gaussian mixture
  • Weighted log-likelihood
  • Contrastive pessimistic likelihood estimation
  • Example of contrastive pessimistic likelihood estimation
  • Semi-supervised Support Vector Machines (S3VM)
  • Example of S3VM
  • Transductive Support Vector Machines (TSVM)
  • Example of TSVM
  • Summary
  • Chapter 3: Graph-Based Semi-Supervised Learning
  • Label propagation
  • Example of label propagation
  • Label propagation in Scikit-Learn
  • Label spreading
  • Example of label spreading
  • Label propagation based on Markov random walks
  • Example of label propagation based on Markov random walks
  • Manifold learning
  • Isomap
  • Example of Isomap
  • Locally linear embedding
  • Example of locally linear embedding
  • Laplacian Spectral Embedding
  • Example of Laplacian Spectral Embedding
  • t-SNE
  • Example of t-distributed stochastic neighbor embedding
  • Summary
  • Chapter 4: Bayesian Networks and Hidden Markov Models
  • Conditional probabilities and Bayes' theorem
  • Bayesian networks
  • Sampling from a Bayesian network
  • Direct sampling
  • Example of direct sampling
  • A gentle introduction to Markov chains
  • Gibbs sampling
  • Metropolis-Hastings sampling
  • Example of Metropolis-Hastings sampling
  • Sampling example using PyMC3
  • Hidden Markov Models (HMMs)
  • Forward-backward algorithm
  • Forward phase
  • Backward phase
  • HMM parameter estimation
  • Example of HMM training with hmmlearn
  • Viterbi algorithm
  • Finding the most likely hidden state sequence with hmmlearn
  • Summary
  • Chapter 5: EM Algorithm and Applications
  • MLE and MAP learning
  • EM algorithm
  • An example of parameter estimation
  • Gaussian mixture
  • An example of Gaussian Mixtures using Scikit-Learn
  • Factor analysis
  • An example of factor analysis with Scikit-Learn
  • Principal Component Analysis
  • An example of PCA with Scikit-Learn
  • Independent component analysis
  • An example of FastICA with Scikit-Learn
  • Addendum to HMMs
  • Summary
  • Chapter 6: Hebbian Learning and Self-Organizing Maps
  • Hebb's rule
  • Analysis of the covariance rule
  • Example of covariance rule application
  • Weight vector stabilization and Oja's rule
  • Sanger's network
  • Example of Sanger's network
  • Rubner-Tavan's network
  • Example of Rubner-Tavan's network
  • Self-organizing maps
  • Example of SOM
  • Summary
  • Chapter 7: Clustering Algorithms
  • k-Nearest Neighbors
  • KD Trees
  • Ball Trees
  • Example of KNN with Scikit-Learn
  • K-means
  • K-means++
  • Example of K-means with Scikit-Learn
  • Evaluation metrics
  • Homogeneity score
  • Completeness score
  • Adjusted Rand Index
  • Silhouette score
  • Fuzzy C-means
  • Example of fuzzy C-means with Scikit-Fuzzy
  • Spectral clustering
  • Example of spectral clustering with Scikit-Learn
  • Summary
  • Chapter 8: Ensemble Learning
  • Ensemble learning fundamentals
  • Random forests
  • Example of random forest with Scikit-Learn
  • AdaBoost
  • AdaBoost.SAMME
  • AdaBoost.SAMME.R
  • AdaBoost.R2
  • Example of AdaBoost with Scikit-Learn
  • Gradient boosting
  • Example of gradient tree boosting with Scikit-Learn
  • Ensembles of voting classifiers
  • Example of voting classifiers with Scikit-Learn
  • Ensemble learning as model selection
  • Summary
  • Chapter 9: Neural Networks for Machine Learning
  • The basic artificial neuron
  • Perceptron
  • Example of a perceptron with Scikit-Learn
  • Multilayer perceptrons
  • Activation functions
  • Sigmoid and hyperbolic tangent
  • Rectifier activation functions
  • Softmax
  • Back-propagation algorithm
  • Stochastic gradient descent
  • Weight initialization
  • Example of MLP with Keras
  • Optimization algorithms
  • Gradient perturbation
  • Momentum and Nesterov momentum
  • SGD with momentum in Keras
  • RMSProp
  • RMSProp with Keras
  • Adam
  • Adam with Keras
  • AdaGrad
  • AdaGrad with Keras
  • AdaDelta
  • AdaDelta with Keras
  • Regularization and dropout
  • Dropout
  • Example of dropout with Keras
  • Batch normalization
  • Example of batch normalization with Keras
  • Summary
  • Chapter 10: Advanced Neural Models
  • Deep convolutional networks
  • Convolutions
  • Bidimensional discrete convolutions
  • Strides and padding
  • Atrous convolution
  • Separable convolution
  • Transpose convolution
  • Pooling layers
  • Other useful layers
  • Examples of deep convolutional networks with Keras
  • Example of a deep convolutional network with Keras and data augmentation
  • Recurrent networks
  • Backpropagation through time (BPTT)
  • LSTM
  • GRU
  • Example of an LSTM network with Keras
  • Transfer learning
  • Summary
  • Chapter 11: Autoencoders
  • Autoencoders
  • An example of a deep convolutional autoencoder with TensorFlow
  • Denoising autoencoders
  • An example of a denoising autoencoder with TensorFlow
  • Sparse autoencoders
  • Adding sparseness to the Fashion MNIST deep convolutional autoencoder
  • Variational autoencoders
  • An example of a variational autoencoder with TensorFlow
  • Summary
  • Chapter 12: Generative Adversarial Networks
  • Adversarial training
  • Example of DCGAN with TensorFlow
  • Wasserstein GAN (WGAN)
  • Example of WGAN with TensorFlow
  • Summary
  • Chapter 13: Deep Belief Networks
  • MRF
  • RBMs
  • DBNs
  • Example of unsupervised DBN in Python
  • Example of supervised DBN with Python
  • Summary
  • Chapter 14: Introduction to Reinforcement Learning
  • Reinforcement Learning fundamentals
  • Environment
  • Rewards
  • Checkerboard environment in Python
  • Policy
  • Policy iteration
  • Policy iteration in the checkerboard environment
  • Value iteration
  • Value iteration in the checkerboard environment
  • TD(0) algorithm
  • TD(0) in the checkerboard environment
  • Summary
  • Chapter 15: Advanced Policy Estimation Algorithms
  • TD(λ) algorithm
  • TD(λ) in a more complex checkerboard environment
  • Actor-Critic TD(0) in the checkerboard environment
  • SARSA algorithm
  • SARSA in the checkerboard environment
  • Q-learning
  • Q-learning in the checkerboard environment
  • Q-learning using a neural network
  • Summary
  • Other Books You May Enjoy
  • Index