Machine Learning Quick Reference: quick and essential machine learning hacks for training smart data models

Your hands-on reference guide to developing, training, and optimizing your machine learning models.

Key Features:
  • Your guide to learning efficient machine learning processes from scratch
  • Explore expert techniques and hacks for a variety of machine learning concepts
  • Write effective code in R, Python, S...


Bibliographic Details
Other Authors: Rahul Kumar (author)
Format: Electronic book
Language: English
Published: Birmingham : Packt, 2019.
Edition: 1st edition
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009631956106719
Table of Contents:
  • Cover
  • Title Page
  • Copyright and Credits
  • About Packt
  • Contributors
  • Table of Contents
  • Preface
  • Chapter 1: Quantifying Learning Algorithms
  • Statistical models
  • Learning curve
  • Machine learning
  • Wright's model
  • Curve fitting
  • Residual
  • Statistical modeling - the two cultures of Leo Breiman
  • Training data - development data - test data
  • Size of the training, development, and test set
  • Bias-variance trade-off
  • Regularization
  • Ridge regression (L2)
  • Least absolute shrinkage and selection operator
  • Cross-validation and model selection
  • K-fold cross-validation
  • Model selection using cross-validation
  • 0.632 rule in bootstrapping
  • Model evaluation
  • Confusion matrix
  • Receiver operating characteristic curve
  • Area under ROC
  • H-measure
  • Dimensionality reduction
  • Summary
  • Chapter 2: Evaluating Kernel Learning
  • Introduction to vectors
  • Magnitude of the vector
  • Dot product
  • Linear separability
  • Hyperplanes
  • SVM
  • Support vector
  • Kernel trick
  • Kernel
  • Back to the kernel trick
  • Kernel types
  • Linear kernel
  • Polynomial kernel
  • Gaussian kernel
  • SVM example and parameter optimization through grid search
  • Summary
  • Chapter 3: Performance in Ensemble Learning
  • What is ensemble learning?
  • Ensemble methods
  • Bootstrapping
  • Bagging
  • Decision tree
  • Tree splitting
  • Parameters of tree splitting
  • Random forest algorithm
  • Case study
  • Boosting
  • Gradient boosting
  • Parameters of gradient boosting
  • Summary
  • Chapter 4: Training Neural Networks
  • Neural networks
  • How a neural network works
  • Model initialization
  • Loss function
  • Optimization
  • Computation in neural networks
  • Calculation of activation for H1
  • Backward propagation
  • Activation function
  • Types of activation functions
  • Network initialization
  • Backpropagation
  • Overfitting
  • Prevention of overfitting in NNs
  • Vanishing gradient
  • Overcoming vanishing gradient
  • Recurrent neural networks
  • Limitations of RNNs
  • Use case
  • Summary
  • Chapter 5: Time Series Analysis
  • Introduction to time series analysis
  • White noise
  • Detection of white noise in a series
  • Random walk
  • Autoregression
  • Autocorrelation
  • Stationarity
  • Detection of stationarity
  • AR model
  • Moving average model
  • Autoregressive integrated moving average
  • Optimization of parameters
  • AR model
  • ARIMA model
  • Anomaly detection
  • Summary
  • Chapter 6: Natural Language Processing
  • Text corpus
  • Sentences
  • Words
  • Bags of words
  • TF-IDF
  • Executing the count vectorizer
  • Executing TF-IDF in Python
  • Sentiment analysis
  • Sentiment classification
  • TF-IDF feature extraction
  • Count vectorizer bag of words feature extraction
  • Model building count vectorization
  • Topic modeling
  • LDA architecture
  • Evaluating the model
  • Visualizing the LDA
  • The Naive Bayes technique in text classification
  • The Bayes theorem
  • How the Naive Bayes classifier works
  • Summary
  • Chapter 7: Temporal and Sequential Pattern Discovery
  • Association rules
  • Apriori algorithm
  • Finding association rules
  • Frequent pattern growth
  • Frequent pattern tree growth
  • Validation
  • Importing the library
  • Summary
  • Chapter 8: Probabilistic Graphical Models
  • Key concepts
  • Bayes rule
  • Bayes network
  • Probabilities of nodes
  • CPT
  • Example of the training and test set
  • Summary
  • Chapter 9: Selected Topics in Deep Learning
  • Deep neural networks
  • Why do we need a deep learning model?
  • Deep neural network notation
  • Forward propagation in a deep network
  • Parameters W and b
  • Forward and backward propagation
  • Error computation
  • Backward propagation
  • Forward propagation equation
  • Backward propagation equation
  • Parameters and hyperparameters
  • Bias initialization
  • Hyperparameters
  • Use case - digit recognizer
  • Generative adversarial networks
  • Hinton's Capsule network
  • The Capsule Network and convolutional neural networks
  • Summary
  • Chapter 10: Causal Inference
  • Granger causality
  • F-test
  • Limitations
  • Use case
  • Graphical causal models
  • Summary
  • Chapter 11: Advanced Methods
  • Introduction
  • Kernel PCA
  • Independent component analysis
  • Preprocessing for ICA
  • Approach
  • Compressed sensing
  • Our goal
  • Self-organizing maps
  • SOM
  • Bayesian multiple imputation
  • Summary
  • Other Books You May Enjoy
  • Index