Machine Learning for Industrial Applications
The main goal of the book is to provide a comprehensive and accessible guide that empowers readers to understand, apply, and leverage machine learning algorithms and techniques effectively in real-world scenarios. Welcome to the exciting world of machine learning! In recent years, machine learning h...
Main author: | |
---|---|
Format: | Electronic book |
Language: | English |
Published: | Newark : John Wiley & Sons, Incorporated, 2024 |
Edition: | 1st ed. |
Series: | Next-generation computing and communication engineering |
Subjects: | |
View at Biblioteca Universitat Ramon Llull: | https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009847336806719 |
Table of Contents:
- Cover
- Series Page
- Title Page
- Copyright Page
- Dedication Page
- Contents
- Preface
- Chapter 1 Overview of Machine Learning
- 1.1 Introduction
- 1.2 Types of Machine Learning
- 1.3 Supervised Learning: Dog and Human
- 1.4 Unsupervised Learning
- 1.5 Reinforcement Learning
- 1.6 Applications of Machine Learning
- 1.6.1 Image Recognition
- 1.6.2 Speech Recognition
- 1.6.3 Traffic Prediction
- 1.6.4 Product Recommendations
- 1.6.5 Self-Driving Vehicles
- 1.6.6 Email Spam and Malware Filtering
- 1.6.7 Virtual Personal Assistant
- 1.6.8 Online Fraud Detection
- 1.6.9 Stock Market Trading
- 1.6.10 Medical Diagnosis
- 1.6.11 Automatic Language Translation
- 1.6.12 Social Media Features
- 1.6.13 Sentiment Analysis
- 1.6.14 Automating Employee Access Control
- 1.6.15 Marine Wildlife Preservation
- 1.6.16 Predicting Potential Heart Failure
- 1.6.17 Regulating Healthcare Efficiency and Medical Services
- 1.6.18 Transportation and Commuting (Uber)
- 1.6.19 Dynamic Pricing
- 1.6.19.1 How Does Uber Determine the Cost of Your Ride?
- 1.6.20 Online Video Streaming (Netflix)
- 1.7 Challenges in Machine Learning
- 1.8 Limitations of Machine Learning
- 1.9 Projects in Machine Learning
- References
- Chapter 2 Machine Learning Building Blocks
- 2.1 Data Collection
- 2.1.1 Importing the Data from CSV Files
- 2.2 Data Preparation
- 2.2.1 Data Exploration
- 2.2.2 Data Pre-Processing
- 2.3 Data Wrangling
- 2.4 Data Analysis
- 2.5 Model Selection
- 2.6 Model Building
- 2.7 Model Evaluation
- 2.7.1 Classification Metrics
- 2.7.1.1 Accuracy
- 2.7.1.2 Precision
- 2.7.1.3 Recall
- 2.7.2 Regression Metrics
- 2.7.2.1 Mean Squared Error
- 2.7.2.2 Root Mean Squared Error
- 2.7.2.3 Mean Absolute Error
- 2.8 Deployment
- 2.8.1 Machine Learning Projects
- 2.8.2 Spam Detection Using Machine Learning
- 2.8.3 Spam Detection for YouTube Comments Using Naïve Bayes Classifier
- 2.8.4 Fake News Detection
- 2.8.5 House Price Prediction
- 2.8.6 Gold Price Prediction
- Bibliography
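
The evaluation-metric entries above (2.7.1–2.7.2) map directly onto library calls. A minimal sketch using scikit-learn; the toy labels and predictions here are invented for illustration and are not the book's own code:

```python
# Illustrative sketch of the metrics in sections 2.7.1-2.7.2 using scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             mean_squared_error, mean_absolute_error)

# Classification metrics (section 2.7.1) on made-up binary labels.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))

# Regression metrics (section 2.7.2) on made-up targets.
t = np.array([3.0, 2.5, 4.0])
p = np.array([2.8, 2.7, 3.6])
mse = mean_squared_error(t, p)
print("MSE: ", mse)
print("RMSE:", np.sqrt(mse))  # root mean squared error (section 2.7.2.2)
print("MAE: ", mean_absolute_error(t, p))
```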
- Chapter 3 Multilayer Perceptron (in Neural Networks)
- 3.1 Multilayer Perceptron for Digit Classification
- 3.1.1 Implementation of MLP using TensorFlow for Classifying Image Data
- 3.2 Training Multilayer Perceptron
- 3.3 Backpropagation
- References
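
Section 3.1.1 names an MLP implementation in TensorFlow for classifying image data. A minimal sketch of that idea on MNIST, assuming the Keras API; the layer sizes and epoch count are illustrative choices, not the book's exact configuration:

```python
# Minimal multilayer perceptron for digit classification (section 3.1.1 spirit).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer (assumed size)
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer labels
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
print(model.evaluate(x_test, y_test))  # [test loss, test accuracy]
```

Training uses backpropagation (section 3.3) under the hood; Keras handles the gradient computation automatically.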
- Chapter 4 Kernel Machines
- 4.1 Different Kernels and Their Applications
- 4.2 Some Other Kernel Functions
- 4.2.1 Gaussian Radial Basis Function (RBF)
- 4.2.2 Laplace RBF Kernel
- 4.2.3 Hyperbolic Tangent Kernel
- 4.2.4 Bessel Function of the First-Kind Kernel
- 4.2.5 ANOVA Radial Basis Kernel
- 4.2.6 Linear Splines Kernel in One Dimension
- 4.2.7 Exponential Kernel
- 4.2.8 Kernels in Support Vector Machine
- References
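
The kernels of section 4.2 are plain formulas and can be written directly in NumPy. A sketch of two of them; the input vectors and the gamma value are arbitrary illustration parameters:

```python
# Two kernel functions from section 4.2, written directly in NumPy.
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel (4.2.1): k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def laplace_rbf_kernel(x, y, gamma=0.5):
    """Laplace RBF kernel (4.2.2): k(x, y) = exp(-gamma * ||x - y||)."""
    return np.exp(-gamma * np.linalg.norm(x - y))

x = np.array([1.0, 2.0])
y = np.array([2.0, 0.5])
print(rbf_kernel(x, y), laplace_rbf_kernel(x, y))
```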
- Chapter 5 Linear and Rule-Based Models
- 5.1 Least Squares Methods
- 5.2 The Perceptron
- 5.2.1 Bias
- 5.2.2 Perceptron Weighted Sum
- 5.2.3 Activation Function
- 5.2.3.1 Types of Activation Functions
- 5.2.4 Perceptron Training
- 5.2.5 Online Learning
- 5.2.6 Perceptron Training Error
- 5.3 Support Vector Machines
- 5.4 Linearity with Kernel Methods
- References
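
Sections 5.2.1–5.2.6 cover the perceptron's bias, weighted sum, activation, and training rule. A bare-bones training loop consistent with that outline; the AND-gate data and learning rate are assumptions for illustration:

```python
# A bare-bones perceptron trainer in the spirit of section 5.2.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs (AND gate, assumed example)
y = np.array([0, 0, 0, 1])                      # target labels
w = np.zeros(2)   # weights
b = 0.0           # bias (section 5.2.1)
lr = 0.1          # learning rate (illustrative value)

for epoch in range(10):
    for xi, target in zip(X, y):
        # Weighted sum (5.2.2) passed through a step activation (5.2.3).
        pred = 1 if np.dot(w, xi) + b > 0 else 0
        # Perceptron update rule (5.2.4): move weights toward the target.
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print("weights:", w, "bias:", b)
```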
- Chapter 6 Distance-Based Models
- 6.1 Introduction
- 6.1.1 Distance-Based Clustering
- 6.2 K-Means Algorithm
- 6.2.1 K-Means Algorithm Working Process
- 6.3 Elbow Method
- 6.4 K-Median
- 6.4.1 Algorithm
- 6.5 K-Medoids, PAM (Partitioning Around Medoids)
- 6.5.1 Advantages
- 6.5.2 Drawbacks
- 6.5.3 Algorithm
- 6.6 CLARA (Clustering Large Applications)
- 6.6.1 Advantages
- 6.6.2 Disadvantages
- 6.7 CLARANS (Clustering Large Applications Based on Randomized Search)
- 6.7.1 Advantages
- 6.7.2 Disadvantages
- 6.7.3 Algorithm
- 6.8 Hierarchical Clustering
- 6.9 Agglomerative Nesting Hierarchical Clustering (AGNES)
- 6.10 DIANA
- References
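
Sections 6.2–6.3 pair the K-Means algorithm with the elbow method. A minimal sketch using scikit-learn, assuming a synthetic blobs dataset in place of whatever data the book uses:

```python
# K-Means plus the elbow method (sections 6.2-6.3) on synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

wcss = []
for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    wcss.append(km.inertia_)  # within-cluster sum of squares for this k

# The "elbow" is the k after which WCSS stops dropping sharply;
# with four generated centers it should appear near k = 4.
for k, w in enumerate(wcss, start=1):
    print(f"k={k}: WCSS={w:.1f}")
```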
- Chapter 7 Model Ensembles
- 7.1 Bagging
- 7.1.1 Advantages
- 7.1.2 Disadvantages
- 7.1.3 Bagging Working Process
- 7.1.4 Algorithm
- 7.2 Boosting
- 7.2.1 Types of Boosting
- 7.2.2 Advantages
- 7.2.3 Disadvantages
- 7.2.4 Algorithm
- 7.3 Stacking
- 7.3.1 Architecture of Stacking
- 7.3.2 Stacking Ensemble Family
- References
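
Chapter 7's three ensemble styles (bagging, boosting, stacking) each have a scikit-learn counterpart. A sketch comparing them by cross-validated accuracy; the base estimators and dataset are illustrative stand-ins, not the book's own choices:

```python
# The three ensemble styles of Chapter 7 in scikit-learn form.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "bagging (7.1)":  BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
    "boosting (7.2)": AdaBoostClassifier(n_estimators=50),
    "stacking (7.3)": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier()),
                    ("lr", LogisticRegression(max_iter=5000))],
        final_estimator=LogisticRegression(max_iter=5000)),  # meta-learner
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```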
- Chapter 8 Binary and Beyond Binary Classification
- 8.1 Binary Classification
- 8.2 Logistic Regression
- 8.3 Support Vector Machine
- 8.4 Estimating Class Probabilities
- 8.5 Confusion Matrix
- 8.6 Beyond Binary Classification
- 8.7 Multi-Class Classification
- 8.8 Multi-Label Classification
- Reference
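
Sections 8.2, 8.4, and 8.5 fit together naturally in code: fit a logistic regression, read off class probabilities, and summarize errors in a confusion matrix. A minimal sketch with an assumed scikit-learn dataset:

```python
# Binary classification with logistic regression (8.2), class probabilities (8.4),
# and the confusion matrix (8.5).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)  # estimated class probabilities (section 8.4)
preds = clf.predict(X_te)
print(proba[:3])                         # first few probability pairs
print(confusion_matrix(y_te, preds))     # rows: true class, cols: predicted class
```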
- Chapter 9 Model Selection
- 9.1 Model Selection Considerations
- 9.1.1 What Do We Care About When Choosing the Final Model?
- 9.2 Model Selection Strategies
- 9.3 Types of Model Selection
- 9.3.1 Methods of Re-Sampling
- 9.3.2 Random Split
- 9.3.3 Time-Based Split
- 9.3.4 K-Fold Cross-Validation
- 9.3.5 Stratified K-Fold
- 9.3.6 Bootstrap
- 9.3.7 Possible Steps
- 9.3.8 Akaike Information Criterion (AIC)
- 9.3.9 Bayesian Information Criterion (BIC)
- 9.3.10 Minimum Description Length (MDL)
- 9.3.11 Structural Risk Minimization (SRM)
- 9.3.12 Overfitting
- 9.4 The Principle of Parsimony
- 9.5 Examples of Model Selection Criteria
- 9.6 Other Popular Properties
- 9.7 Key Considerations
- 9.8 Model Validation
- 9.8.1 Why is Model Validation Important?
- 9.8.2 How to Validate the Model
- 9.8.3 What is a Model Validation Test?
- 9.8.4 Benefits of Modeling Validation
- 9.8.5 Model Validation Traps
- 9.8.6 Data Verification
- 9.8.7 Model Performance and Validation
- 9.9 Self-Driving Cars
- 9.10 K-Fold Cross-Validation
- 9.11 No One-Size-Fits-All Model Validation
- 9.12 Validation Strategies
- 9.13 K-Fold Cross-Validation
- 9.14 Model Confirmation Using Hold-Out Validation
- 9.15 Comparison of Validation Strategies
- References
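
Sections 9.3.4–9.3.5 distinguish plain K-fold from stratified K-fold cross-validation. A short sketch of both using scikit-learn; the model and dataset are assumptions for illustration:

```python
# K-fold vs. stratified K-fold cross-validation (sections 9.3.4-9.3.5).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
# Stratified K-fold preserves the class ratio inside every fold.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

print("k-fold:           ", cross_val_score(model, X, y, cv=kf).mean())
print("stratified k-fold:", cross_val_score(model, X, y, cv=skf).mean())
```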
- Chapter 10 Support Vector Machines
- 10.1 History
- 10.2 Model
- 10.3 Types of Support Vector Machine
- 10.3.1 Linear SVM
- 10.3.2 Non-Linear SVM
- 10.3.3 Benefits of Support Vector Machines
- 10.3.4 Drawbacks of Support Vector Machines
- 10.3.5 Applications
- 10.4 Hyperplane and Support Vectors in the SVM Algorithm
- 10.4.1 Hyperplane
- 10.5 Support Vectors
- 10.6 SVM Kernel
- 10.7 How Does It Work?
- 10.7.1 Identify the Right Hyperplane (Scenario 1)
- 10.7.2 Identify the Right Hyperplane (Scenario 2)
- 10.7.3 Identify the Right Hyperplane (Scenario 3)
- 10.7.4 Can We Classify Two Classes (Scenario 4)?
- 10.7.5 Find the Hyperplane to Segregate the Classes (Scenario 5)
- 10.8 SVM for Classification
- 10.9 SVM for Regression
- 10.10 Python Implementation of Support Vector Machine
- 10.10.1 Data Pre-Processing Step
- 10.10.2 Fitting the SVM Classifier to the Training Set
- 10.10.2.1 Output
- 10.10.3 Predicting the Test Set Result
- 10.10.3.1 Output
- 10.10.4 Creating the Confusion Matrix
- 10.10.5 Visualizing the Training Set Result
- 10.10.5.1 Output
- 10.10.6 Visualizing the Test Set Result
- 10.10.6.1 Output
- 10.10.7 Kernel
- 10.10.8 Support Vector Machine (SVM) Code in Python
- 10.10.9 Complexity of SVM
- References
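
Section 10.10 walks through an SVM pipeline step by step: pre-processing, fitting, prediction, and the confusion matrix. A compressed sketch of those steps in scikit-learn; the dataset and kernel choice are illustrative, not the book's exact code:

```python
# Compressed version of the SVM pipeline outlined in section 10.10.
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

scaler = StandardScaler().fit(X_tr)                        # pre-processing (10.10.1)
clf = SVC(kernel="rbf").fit(scaler.transform(X_tr), y_tr)  # fit classifier (10.10.2)
preds = clf.predict(scaler.transform(X_te))                # predict test set (10.10.3)
print(confusion_matrix(y_te, preds))                       # confusion matrix (10.10.4)
```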
- Chapter 11 Clustering
- 11.1 Example
- 11.2 Types of Clustering
- 11.2.1 Hard Clustering
- 11.2.2 Soft Clustering
- 11.2.2.1 Partitioning Clustering
- 11.2.2.2 Density-Based Clustering
- 11.2.2.3 Distribution Model-Based Clustering
- 11.2.2.4 Hierarchical Clustering
- 11.2.2.5 Fuzzy Clustering
- 11.3 What are the Uses of Clustering?
- 11.4 Examples
- 11.5 Uses of Clustering
- 11.5.1 In Identification of Cancerous Cells
- 11.5.2 In Search Engines Like Google
- 11.5.3 Customer Segmentation
- 11.5.4 In Biology
- 11.5.5 In Land Use
- 11.6 Clustering Algorithms
- 11.6.1 K-Means Clustering
- 11.6.2 Mean-Shift Clustering
- 11.6.3 Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
- 11.6.4 Expectation-Maximization Clustering Using Gaussian Mixture Models
- 11.6.5 Agglomerative Hierarchical Clustering
- 11.7 Examples of Clustering Algorithms
- 11.7.1 Library Setup
- 11.7.2 Clustering Dataset
- 11.7.3 Affinity Propagation
- 11.7.4 Agglomerative Clustering
- 11.7.5 BIRCH
- 11.7.6 DBSCAN
- 11.7.7 K-Means
- 11.7.8 Mini-Batch K-Means
- 11.7.9 Mean Shift
- 11.7.10 OPTICS
- 11.7.11 Spectral Clustering
- 11.7.12 Gaussian Mixture Model
- 11.8 Python Implementation of K-Means
- 11.8.1 Loading the Data
- 11.8.2 Plotting the Data
- 11.8.3 Selecting the Features
- 11.8.4 Clustering
- 11.8.5 Clustering Results
- 11.8.6 WCSS and the Elbow Method
- 11.8.7 Uses of K-Means Clustering
- 11.8.8 Benefits of K-Means
- 11.8.9 Drawbacks of K-Means
- References
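
Several of the algorithms listed in section 11.7 share scikit-learn's fit_predict interface, which makes a side-by-side run straightforward. A sketch on synthetic data; the parameters are untuned illustrative defaults:

```python
# A few of the clustering algorithms from section 11.7, run side by side.
from sklearn.cluster import DBSCAN, AgglomerativeClustering, Birch, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=7)

algorithms = {
    "agglomerative (11.7.4)": AgglomerativeClustering(n_clusters=3),
    "BIRCH (11.7.5)":         Birch(n_clusters=3),
    "DBSCAN (11.7.6)":        DBSCAN(eps=1.0),
    "k-means (11.7.7)":       KMeans(n_clusters=3, n_init=10, random_state=7),
}
for name, algo in algorithms.items():
    labels = algo.fit_predict(X)
    # DBSCAN labels noise points -1, so exclude that from the cluster count.
    print(name, "-> clusters found:", len(set(labels) - {-1}))
```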
- Chapter 12 Reinforcement Learning
- 12.1 Model
- 12.2 Terms Used in Reinforcement Learning
- 12.3 Key Elements of Reinforcement Learning
- 12.4 Examples of Reinforcement Learning
- 12.5 Advantages of Reinforcement Learning
- 12.6 Challenges with Reinforcement Learning
- 12.7 Types of Reinforcement
- 12.7.1 Positive
- 12.7.2 Negative
- 12.8 What are the Practical Applications of Reinforcement Learning?
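
The vocabulary of Chapter 12 (agent, state, action, reward, positive reinforcement) comes together in tabular Q-learning. A tiny sketch on an invented one-dimensional corridor environment; Q-learning is a standard reinforcement learning algorithm, not necessarily the book's own example:

```python
# Tiny tabular Q-learning loop illustrating Chapter 12's terminology.
import random

N_STATES, GOAL = 5, 4          # states 0..4; reaching state 4 ends the episode
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; 0=left, 1=right
alpha, gamma, epsilon = 0.1, 0.9, 0.2      # illustrative hyperparameters

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0  # positive reinforcement at the goal (12.7.1)
        # Q-learning update: nudge Q toward reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([[round(q, 2) for q in row] for row in Q])  # learned state-action values
```

After training, the "move right" column should dominate in every state, since moving right is the only path to the rewarded goal state.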