Python Deep Learning: Understand How Deep Neural Networks Work and Apply Them to Real-World Tasks
Master effective navigation of neural networks, including convolutions and transformers, to tackle computer vision and NLP tasks using Python. Key Features: Understand the theory, mathematical foundations, and the structure of deep neural networks. Become familiar with transformers, large language model...
Format: | eBook |
---|---|
Language: | English |
Published: | Birmingham, England : Packt Publishing Ltd, [2023] |
Edition: | Third edition |
View at Universitat Ramon Llull Library: | https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009785406806719 |
Table of Contents:
- Cover
- Title Page
- Copyright and Credits
- Contributors
- Table of Contents
- Preface
- Part 1: Introduction to Neural Networks
- Chapter 1: Machine Learning - an Introduction
- Technical requirements
- Introduction to ML
- Different ML approaches
- Supervised learning
- Unsupervised learning
- Reinforcement learning
- Components of an ML solution
- Neural networks
- Introducing PyTorch
- Summary
- Chapter 2: Neural Networks
- Technical requirements
- The need for NNs
- The math of NNs
- Linear algebra
- An introduction to probability
- Differential calculus
- An introduction to NNs
- Units - the smallest NN building block
- Layers as operations
- Multi-layer NNs
- Activation functions
- The universal approximation theorem
- Training NNs
- Gradient descent (GD)
- Backpropagation
- A code example of an NN for the XOR function
- Summary
- Chapter 3: Deep Learning Fundamentals
- Technical requirements
- Introduction to DL
- Fundamental DL concepts
- Feature learning
- The reasons for DL's popularity
- Deep neural networks
- Training deep neural networks
- Improved activation functions
- DNN regularization
- Applications of DL
- Introducing popular DL libraries
- Classifying digits with Keras
- Classifying digits with PyTorch
- Summary
- Part 2: Deep Neural Networks for Computer Vision
- Chapter 4: Computer Vision with Convolutional Networks
- Technical requirements
- Intuition and justification for CNNs
- Convolutional layers
- A coding example of the convolution operation
- Cross-channel and depthwise convolutions
- Stride and padding in convolutional layers
- Pooling layers
- The structure of a convolutional network
- Classifying images with PyTorch and Keras
- Convolutional layers in deep learning libraries
- Data augmentation
- Classifying images with PyTorch
- Classifying images with Keras
- Advanced types of convolutions
- 1D, 2D, and 3D convolutions
- 1×1 convolutions
- Depthwise separable convolutions
- Dilated convolutions
- Transposed convolutions
- Advanced CNN models
- Introducing residual networks
- Inception networks
- Introducing Xception
- Squeeze-and-Excitation Networks
- Introducing MobileNet
- EfficientNet
- Using pre-trained models with PyTorch and Keras
- Summary
- Chapter 5: Advanced Computer Vision Applications
- Technical requirements
- Transfer learning (TL)
- Transfer learning with PyTorch
- Transfer learning with Keras
- Object detection
- Approaches to object detection
- Object detection with YOLO
- Object detection with Faster R-CNN
- Introducing image segmentation
- Semantic segmentation with U-Net
- Instance segmentation with Mask R-CNN
- Image generation with diffusion models
- Introducing generative models
- Denoising Diffusion Probabilistic Models
- Summary
- Part 3: Natural Language Processing and Transformers
- Chapter 6: Natural Language Processing and Recurrent Neural Networks
- Technical requirements
- Natural language processing
- Tokenization
- Introducing word embeddings
- Word2Vec
- Visualizing embedding vectors
- Language modeling
- Introducing RNNs
- RNN implementation and training
- Backpropagation through time
- Vanishing and exploding gradients
- Long short-term memory
- Gated recurrent units
- Implementing text classification
- Summary
- Chapter 7: The Attention Mechanism and Transformers
- Technical requirements
- Introducing seq2seq models
- Understanding the attention mechanism
- Bahdanau attention
- Luong attention
- General attention
- Transformer attention
- Implementing TA
- Building transformers with attention
- Transformer encoder
- Transformer decoder
- Putting it all together
- Decoder-only and encoder-only models
- Bidirectional Encoder Representations from Transformers
- Generative Pre-trained Transformer
- Summary
- Chapter 8: Exploring Large Language Models in Depth
- Technical requirements
- Introducing LLMs
- LLM architecture
- LLM attention variants
- Prefix decoder
- Transformer nuts and bolts
- Models
- Training LLMs
- Training datasets
- Pre-training properties
- Fine-tuning (FT) with RLHF
- Emergent abilities of LLMs
- Introducing Hugging Face Transformers
- Summary
- Chapter 9: Advanced Applications of Large Language Models
- Technical requirements
- Classifying images with Vision Transformer
- Using ViT with Hugging Face Transformers
- Understanding the DEtection TRansformer
- Using DETR with Hugging Face Transformers
- Generating images with Stable Diffusion
- Autoencoder
- Conditioning transformer
- Diffusion model
- Using Stable Diffusion with Hugging Face Transformers
- Exploring fine-tuning transformers
- Harnessing the power of LLMs with LangChain
- Using LangChain in practice
- Summary
- Part 4: Developing and Deploying Deep Neural Networks
- Chapter 10: Machine Learning Operations (MLOps)
- Technical requirements
- Understanding model development
- Choosing an NN framework
- PyTorch versus TensorFlow versus JAX
- Open Neural Network Exchange
- Introducing TensorBoard
- Developing NN models for edge devices with TF Lite
- Mixed-precision training with PyTorch
- Exploring model deployment
- Deploying NN models with Flask
- Building ML web apps with Gradio
- Summary
- Index
- Other Books You May Enjoy