Interpretable Machine Learning with Python: Learn to build interpretable high-performance models with hands-on real-world examples
This hands-on book will help you make your machine learning models fairer, safer, and more reliable, and in turn improve business outcomes. Every chapter introduces a new mission where you learn how to apply interpretation methods to realistic use cases, with methods that work for any model type as we...
Other Authors:
Format: eBook
Language: English
Published: Birmingham, England ; Mumbai : Packt, [2021]
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009631723406719
Table of Contents:
- Cover
- Title Page
- Copyright and Credits
- Contributors
- Table of Contents
- Preface
- Section 1: Introduction to Machine Learning Interpretation
- Chapter 1: Interpretation, Interpretability, and Explainability; and Why Does It All Matter?
- Technical requirements
- What is machine learning interpretation?
- Understanding a simple weight prediction model
- Understanding the difference between interpretability and explainability
- What is interpretability?
- What is explainability?
- A business case for interpretability
- Better decisions
- More trusted brands
- More ethical
- More profitable
- Summary
- Image sources
- Further reading
- Chapter 2: Key Concepts of Interpretability
- Technical requirements
- The mission
- Details about CVD
- The approach
- Preparations
- Loading the libraries
- Understanding and preparing the data
- Learning about interpretation method types and scopes
- Model interpretability method types
- Model interpretability scopes
- Interpreting individual predictions with logistic regression
- Appreciating what hinders machine learning interpretability
- Non-linearity
- Interactivity
- Non-monotonicity
- Mission accomplished
- Summary
- Further reading
- Chapter 3: Interpretation Challenges
- Technical requirements
- The mission
- The approach
- The preparations
- Loading the libraries
- Understanding and preparing the data
- Reviewing traditional model interpretation methods
- Predicting minutes delayed with various regression methods
- Classifying flights as delayed or not delayed with various classification methods
- Visualizing delayed flights with dimensionality reduction methods
- Understanding limitations of traditional model interpretation methods
- Studying intrinsically interpretable (white-box) models
- Generalized Linear Models (GLMs)
- Decision trees
- RuleFit
- Nearest neighbors
- Naïve Bayes
- Recognizing the trade-off between performance and interpretability
- Special model properties
- Assessing performance
- Discovering newer interpretable (glass-box) models
- Explainable Boosting Machine (EBM)
- Skope-Rules
- Mission accomplished
- Summary
- Dataset sources
- Further reading
- Section 2: Mastering Interpretation Methods
- Chapter 4: Fundamentals of Feature Importance and Impact
- Technical requirements
- The mission
- Personality and birth order
- The approach
- The preparations
- Loading the libraries
- Understanding and preparing the data
- Measuring the impact of a feature on the outcome
- Feature importance for tree-based models
- Feature importance for logistic regression
- Feature importance for LDA
- Feature importance for the Multi-layer Perceptron
- Practicing PFI
- Disadvantages of PFI
- Interpreting PDPs
- Interaction PDPs
- Disadvantages of PDP
- Explaining ICE plots
- Disadvantages of ICE
- Mission accomplished
- Summary
- Dataset sources
- Further reading
- Chapter 5: Global Model-Agnostic Interpretation Methods
- Technical requirements
- The mission
- The approach
- The preparations
- Loading the libraries
- Understanding and preparing the data
- Learning about Shapley values
- Interpreting SHAP summary and dependence plots
- Generating SHAP summary plots
- Understanding interactions
- SHAP dependence plots
- SHAP force plots
- Accumulated Local Effects (ALE) plots
- Global surrogates
- Mission accomplished
- Summary
- Further reading
- Chapter 6: Local Model-Agnostic Interpretation Methods
- Technical requirements
- The mission
- The approach
- The preparations
- Loading the libraries
- Understanding and preparing the data
- Leveraging SHAP's KernelExplainer for local interpretations with SHAP values
- Employing LIME
- Using LIME for NLP
- Trying SHAP for NLP
- Comparing SHAP with LIME
- Mission accomplished
- Summary
- Dataset sources
- Further reading
- Chapter 7: Anchor and Counterfactual Explanations
- Technical requirements
- The mission
- Unfair bias in recidivism risk assessments
- The approach
- The preparations
- Loading the libraries
- Understanding and preparing the data
- Understanding anchor explanations
- Preparations for anchor and counterfactual explanations with alibi
- Local interpretations for anchor explanations
- Exploring counterfactual explanations
- Counterfactual explanations guided by prototypes
- Counterfactual instances and much more with the What-If Tool (WIT)
- Comparing with CEM
- Mission accomplished
- Summary
- Dataset sources
- Further reading
- Chapter 8: Visualizing Convolutional Neural Networks
- Technical requirements
- The mission
- The approach
- Preparations
- Loading the libraries
- Understanding and preparing the data
- Assessing the CNN classifier with traditional interpretation methods
- Visualizing the learning process with activation-based methods
- Intermediate activations
- Activation maximization
- Evaluating misclassifications with gradient-based attribution methods
- Saliency maps
- Grad-CAM
- Integrated gradients
- Tying it all together
- Understanding classifications with perturbation-based attribution methods
- Occlusion sensitivity
- LIME's ImageExplainer
- CEM
- Tying it all together
- Bonus method: SHAP's DeepExplainer
- Mission accomplished
- Summary
- Dataset and image sources
- Further reading
- Chapter 9: Interpretation Methods for Multivariate Forecasting and Sensitivity Analysis
- Technical requirements
- The mission
- The approach
- The preparations
- Loading the libraries
- Understanding and preparing the data
- Assessing time series models with traditional interpretation methods
- Generating LSTM attributions with integrated gradients
- Computing global and local attributions with SHAP's KernelExplainer
- Identifying influential features with factor prioritization
- Quantifying uncertainty and cost sensitivity with factor fixing
- Mission accomplished
- Summary
- Dataset and image sources
- References
- Section 3: Tuning for Interpretability
- Chapter 10: Feature Selection and Engineering for Interpretability
- Technical requirements
- The mission
- The approach
- The preparations
- Loading the libraries
- Understanding and preparing the data
- Understanding the effect of irrelevant features
- Reviewing filter-based feature selection methods
- Basic filter-based methods
- Correlation filter-based methods
- Ranking filter-based methods
- Comparing filter-based methods
- Exploring embedded feature selection methods
- Discovering wrapper, hybrid, and advanced feature selection methods
- Wrapper methods
- Hybrid methods
- Advanced methods
- Evaluating all feature-selected models
- Considering feature engineering
- Mission accomplished
- Summary
- Dataset sources
- Further reading
- Chapter 11: Bias Mitigation and Causal Inference Methods
- Technical requirements
- The mission
- The approach
- The preparations
- Loading the libraries
- Understanding and preparing the data
- Detecting bias
- Visualizing dataset bias
- Quantifying dataset bias
- Quantifying model bias
- Mitigating bias
- Pre-processing bias mitigation methods
- In-processing bias mitigation methods
- Post-processing bias mitigation methods
- Tying it all together!
- Creating a causal model
- Understanding the results of the experiment
- Understanding causal models
- Initializing the linear doubly robust learner
- Fitting the causal model
- Understanding heterogeneous treatment effects
- Choosing policies
- Testing estimate robustness
- Adding random common cause
- Replacing treatment with a random variable
- Mission accomplished
- Summary
- Dataset sources
- Further reading
- Chapter 12: Monotonic Constraints and Model Tuning for Interpretability
- Technical requirements
- The mission
- The approach
- The preparations
- Loading the libraries
- Understanding and preparing the data
- Placing guardrails with feature engineering
- Ordinalization
- Discretization
- Interaction terms and non-linear transformations
- Categorical encoding
- Other preparations
- Tuning models for interpretability
- Tuning a Keras neural network
- Tuning other popular model classes
- Optimizing for fairness with Bayesian hyperparameter tuning and custom metrics
- Implementing model constraints
- Mission accomplished
- Summary
- Dataset sources
- Further reading
- Chapter 13: Adversarial Robustness
- Technical requirements
- The mission
- The approach
- The preparations
- Loading the libraries
- Understanding and preparing the data
- Loading the CNN base model
- Assessing the CNN base classifier
- Learning about evasion attacks
- Defending against targeted attacks with preprocessing
- Shielding against any evasion attack via adversarial training of a robust classifier
- Evaluating and certifying adversarial robustness
- Comparing model robustness with attack strength
- Certifying robustness with randomized smoothing
- Mission accomplished
- Summary
- Dataset sources
- Further reading
- Chapter 14: What's Next for Machine Learning Interpretability?
- Understanding the current landscape of ML interpretability
- Tying everything together!
- Current trends
- Speculating on the future of ML interpretability