The Definitive Guide to Google Vertex AI: Accelerate Your Machine Learning Journey with Google Cloud Vertex AI and MLOps Best Practices
Implement machine learning pipelines with Google Cloud Vertex AI.
Key Features:
- Understand the role of an AI platform and MLOps practices in machine learning projects
- Get acquainted with Google Vertex AI tools and offerings that help accelerate the creation of end-to-end ML solutions
- Implement Vision,...
Other Authors:
Format: E-book
Language: English
Published: Birmingham, UK: Packt Publishing Ltd, [2023]
Edition: First edition
Subjects:
View at Universitat Ramon Llull Library: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009790332706719
Table of Contents:
- Cover
- Title Page
- Copyright and Credits
- Dedication
- Contributors
- Table of Contents
- Preface
- Part 1: The Importance of MLOps in a Real-World ML Deployment
- Chapter 1: Machine Learning Project Life Cycle and Challenges
- ML project life cycle
- Common challenges in developing real-world ML solutions
- Data collection and security
- Non-representative training data
- Poor quality of data
- Underfitting the training dataset
- Overfitting the training dataset
- Infrastructure requirements
- Limitations of ML
- Data-related concerns
- Deterministic nature of problems
- Lack of interpretability and reproducibility
- Concerns related to cost and customizations
- Ethical concerns and bias
- Summary
- Chapter 2: What Is MLOps, and Why Is It So Important for Every ML Team?
- Why is MLOps important?
- Implementing different MLOps maturity levels
- MLOps maturity level 0
- MLOps maturity level 1 - automating basic ML steps
- MLOps maturity level 2 - automated model deployments
- How can Vertex AI help with implementing MLOps?
- Summary
- Part 2: Machine Learning Tools for Custom Models on Google Cloud
- Chapter 3: It's All About Data - Options to Store and Transform ML Datasets
- Moving data to Google Cloud
- Google Cloud Storage Transfer tools
- BigQuery Data Transfer Service
- Storage Transfer Service
- Transfer Appliance
- Where to store data
- GCS - object storage
- BQ - data warehouse
- Transforming data
- Ad hoc transformations within Jupyter Notebook
- Cloud Data Fusion
- Dataflow pipelines for scalable data transformations
- Summary
- Chapter 4: Vertex AI Workbench - a One-Stop Tool for AI/ML Development Needs
- What is Jupyter Notebook?
- Getting started with Jupyter Notebook
- Vertex AI Workbench
- Getting started with Vertex AI Workbench
- Custom containers for Vertex AI Workbench
- Scheduling notebooks in Vertex AI
- Configuring notebook executions
- Summary
- Chapter 5: No-Code Options for Building ML Models
- ML modeling options in Google Cloud
- What is AutoML?
- Vertex AI AutoML
- How to create a Vertex AI AutoML model using tabular data
- Importing data to use with Vertex AI AutoML
- Training the AutoML model for tabular/structured data
- Generating predictions using the recently trained model
- Deploying a model in Vertex AI
- Generating predictions
- Generating predictions programmatically
- Summary
- Chapter 6: Low-Code Options for Building ML Models
- What is BQML?
- Getting started with BigQuery
- Using BQML for feature transformations
- Manual preprocessing
- Building ML models with BQML
- Creating BQML models
- Hyperparameter tuning with BQML
- Evaluating trained models
- Doing inference with BQML
- User exercise
- Summary
- Chapter 7: Training Fully Custom ML Models with Vertex AI
- Technical requirements
- Building a basic deep learning model with TensorFlow
- Experiment - converting black-and-white images into color images
- Packaging a model to submit it to Vertex AI as a training job
- Monitoring model training progress
- Evaluating trained models
- Summary
- Chapter 8: ML Model Explainability
- What is Explainable AI and why is it important for MLOps practitioners?
- Building trust and confidence
- Explainable AI techniques
- Global versus local explainability
- Techniques for image data
- Techniques for tabular data
- Techniques for text data
- Explainable AI features available in Google Cloud Vertex AI
- Feature-based explanation techniques available on Vertex AI
- Using the model feature importance (SHAP-based) capability with AutoML for tabular data
- Exercise 1
- Exercise 2
- Example-based explanations
- Key steps to use example-based explanations
- Exercise 3
- Summary
- References
- Chapter 9: Model Optimizations - Hyperparameter Tuning and NAS
- Technical requirements
- What is HPT and why is it important?
- What are hyperparameters?
- Why HPT?
- Search algorithms
- Setting up HPT jobs on Vertex AI
- What is NAS and how is it different from HPT?
- Search space
- Optimization method
- Evaluation method
- NAS on Vertex AI overview
- NAS best practices
- Summary
- Chapter 10: Vertex AI Deployment and Automation Tools - Orchestration through Managed Kubeflow Pipelines
- Technical requirements
- Orchestrating ML workflows using Vertex AI Pipelines (managed Kubeflow pipelines)
- Developing Vertex AI Pipeline using Python
- Pipeline components
- Orchestrating ML workflows using Cloud Composer (managed Airflow)
- Creating a Cloud Composer environment
- Vertex AI Pipelines versus Cloud Composer
- Getting predictions on Vertex AI
- Getting online predictions
- Getting batch predictions
- Managing deployed models on Vertex AI
- Multiple models - single endpoint
- Single model - multiple endpoints
- Compute resources and scaling
- Summary
- Chapter 11: MLOps Governance with Vertex AI
- What is MLOps governance and what are its key components?
- Data governance
- Model governance
- Enterprise scenarios that highlight the importance of MLOps governance
- Scenario 1 - limiting bias in AI solutions
- Scenario 2 - the need to constantly monitor shifts in feature distributions
- Scenario 3 - the need to monitor costs
- Scenario 4 - monitoring how the training data is sourced
- Tools in Vertex AI that can help with governance
- Model Registry
- Metadata Store
- Feature Store
- Vertex AI Pipelines
- Model Monitoring
- Billing monitoring
- Summary
- References
- Part 3: Prebuilt/Turnkey ML Solutions Available in GCP
- Chapter 12: Vertex AI - Generative AI Tools
- GenAI fundamentals
- GenAI versus traditional AI
- Types of GenAI models
- Challenges of GenAI
- LLM evaluation
- GenAI with Vertex AI
- Understanding foundation models
- What is a prompt?
- Using Vertex AI GenAI models through GenAI Studio
- Example 1 - using GenAI Studio language models to generate text
- Example 2 - submitting examples along with the text prompt in structured format to get generated output in a specific format
- Example 3 - generating images using GenAI Studio (Vision)
- Example 4 - generating code samples
- Building and deploying GenAI applications with Vertex AI
- Enhancing GenAI performance with model tuning in Vertex AI
- Using Vertex AI supervised tuning
- Safety filters for generated content
- Summary
- References
- Chapter 13: Document AI - An End-to-End Solution for Processing Documents
- Technical requirements
- What is Document AI?
- Document AI processors
- Overview of existing Document AI processors
- Using Document AI processors
- Creating custom Document AI processors
- Summary
- Chapter 14: ML APIs for Vision, NLP, and Speech
- Vision AI on Google Cloud
- Vision AI
- Video AI
- Translation AI on Google Cloud
- Cloud Translation API
- AutoML Translation
- Translation Hub
- Natural Language AI on Google Cloud
- AutoML for Text Analysis
- Natural Language API
- Healthcare Natural Language API
- Speech AI on Google Cloud
- Speech-to-Text
- Text-to-Speech
- Summary
- Part 4: Building Real-World ML Solutions with Google Cloud
- Chapter 15: Recommender Systems - Predict What Movies a User Would Like to Watch
- Different types of recommender systems
- Real-world evaluation of recommender systems
- Deploying a movie recommender system on Vertex AI
- Data preparation
- Model building
- Local model testing
- Deploying the model on Google Cloud
- Using the model for inference
- Summary
- References
- Chapter 16: Vision-Based Defect Detection System - Machines Can See Now!
- Technical requirements
- Vision-based defect detection
- Dataset
- Importing useful libraries
- Loading and verifying data
- Checking a few samples
- Data preparation
- Splitting data into train and test
- Final preparation of training and testing data
- TF model architecture
- Compiling the model
- Training the model
- Plotting the training progress
- Results
- Deploying a vision model to a Vertex AI endpoint
- Saving model to Google Cloud Storage (GCS)
- Uploading the TF model to the Vertex Model Registry
- Creating a Vertex AI endpoint
- Deploying a model to the Vertex AI endpoint
- Getting online predictions from a vision model
- Summary
- Chapter 17: Natural Language Models - Detecting Fake News Articles!
- Technical requirements
- Detecting fake news using NLP
- Fake news classification with random forest
- About the dataset
- Importing useful libraries
- Reading and verifying the data
- NULL value check
- Combining title and text into a single column
- Cleaning and pre-processing data
- Separating the data and labels
- Converting text into numeric data
- Splitting the data
- Defining the random forest classifier
- Training the model
- Predicting the test data
- Checking the results/metrics on the test dataset
- Confusion matrix
- Launching model training on Vertex AI
- Setting configurations
- Initializing the Vertex AI SDK
- Defining the Vertex AI training job
- Running the Vertex AI job
- BERT-based fake news classification
- BERT for fake news classification
- Importing useful libraries
- The dataset
- Data preparation
- Splitting the data
- Creating data loader objects for batching
- Loading the pre-trained BERT model
- Scheduler
- Training BERT
- Loading model weights for evaluation