Generative AI Foundations in Python: Discover Key Techniques and Navigate Modern Challenges in LLMs
Begin your generative AI journey with Python as you explore large language models, understand responsible generative AI practices, and apply your knowledge to real-world applications through guided tutorials. Key Features: Gain expertise in prompt engineering, LLM fine-tuning, and domain adaptation; Us...
Other Authors:
Format: Electronic book
Language: English
Published: Birmingham, England : Packt Publishing, [2024]
Edition: First edition
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009841739806719
Table of Contents:
- Intro
- Title Page
- Copyright and Credits
- Dedications
- Foreword
- Contributors
- Table of Contents
- Preface
- Part 1: Foundations of Generative AI and the Evolution of Large Language Models
- Chapter 1: Understanding Generative AI: An Introduction
- Generative AI
- Distinguishing generative AI from other AI models
- Briefly surveying generative approaches
- Clarifying misconceptions about the discriminative and generative paradigms
- Choosing the right paradigm
- Looking back at the evolution of generative AI
- Overview of traditional methods in NLP
- Arrival and evolution of transformer-based models
- Development and impact of GPT-4
- Looking ahead at risks and implications
- Introducing use cases of generative AI
- The future of generative AI applications
- Summary
- References
- Chapter 2: Surveying GenAI Types and Modes: An Overview of GANs, Diffusers, and Transformers
- Understanding Generative Artificial Intelligence (GAI) types - distinguishing features of GANs, diffusers, and transformers
- Deconstructing GAI methods - exploring GANs, diffusers, and transformers
- A closer look at GANs
- A closer look at diffusion models
- A closer look at generative transformers
- Applying GAI models - image generation using GANs, diffusers, and transformers
- Working with Jupyter Notebook and Google Colab
- Stable Diffusion transformer
- Scoring with the CLIP model
- Summary
- References
- Chapter 3: Tracing the Foundations of Natural Language Processing and the Impact of the Transformer
- Early approaches in NLP
- Advent of neural language models
- Distributed representations
- Transfer learning
- Advent of NNs in NLP
- The emergence of the Transformer in advanced language models
- Components of the transformer architecture
- Sequence-to-sequence learning
- Evolving language models - the autoregressive (AR) Transformer and its role in GenAI
- Implementing the original Transformer
- Data loading and preparation
- Tokenization
- Data tensorization
- Dataset creation
- Embeddings layer
- Positional encoding
- Multi-head self-attention
- FFN
- Encoder layer
- Encoder
- Decoder layer
- Decoder
- Complete transformer
- Training function
- Translation function
- Main execution
- Summary
- References
- Chapter 4: Applying Pretrained Generative Models: From Prototype to Production
- Prototyping environments
- Transitioning to production
- Mapping features to production setup
- Setting up a production-ready environment
- Local development setup
- Visual Studio Code
- Project initialization
- Docker setup
- Requirements file
- Application code
- Creating a code repository
- CI/CD setup
- Model selection - choosing the right pretrained generative model
- Meeting project objectives
- Model size and computational complexity
- Benchmarking
- Updating the prototyping environment
- GPU configuration
- Loading pretrained models with LangChain
- Setting up testing data
- Quantitative metrics evaluation
- Alignment with CLIP
- Interpreting outcomes
- Responsible AI considerations
- Addressing and mitigating biases
- Transparency and explainability
- Final deployment
- Testing and monitoring
- Maintenance and reliability
- Summary
- Part 2: Practical Applications of Generative AI
- Chapter 5: Fine-Tuning Generative Models for Specific Tasks
- Foundation and relevance - an introduction to fine-tuning
- PEFT
- LoRA
- AdaLoRA
- In-context learning
- Fine-tuning versus in-context learning
- Practice project: Fine-tuning for Q&A using PEFT
- Background regarding question-answering fine-tuning
- Implementation in Python
- Evaluation of results
- Summary
- References
- Chapter 6: Understanding Domain Adaptation for Large Language Models
- Demystifying domain adaptation - understanding its history and importance
- Practice project: Transfer learning for the finance domain
- Training methodologies for financial domain adaptation
- Evaluation and outcome analysis - the ROUGE metric
- Summary
- References
- Chapter 7: Mastering the Fundamentals of Prompt Engineering
- The shift to prompt-based approaches
- Basic prompting - guiding principles, types, and structures
- Guiding principles for model interaction
- Prompt elements and structure
- Elevating prompts - iteration and influencing model behaviors
- LLMs respond to emotional cues
- Effect of personas
- Situational prompting or role-play
- Advanced prompting in action - few-shot learning and prompt chaining
- Practice project: Implementing RAG with LlamaIndex using Python
- Summary
- References
- Chapter 8: Addressing Ethical Considerations and Charting a Path Toward Trustworthy Generative AI
- Ethical norms and values in the context of generative AI
- Investigating and minimizing bias in generative LLMs and generative image models
- Constrained generation and eliciting trustworthy outcomes
- Constrained generation with fine-tuning
- Constrained generation through prompt engineering
- Understanding jailbreaking and harmful behaviors
- Practice project: Minimizing harmful behaviors with filtering
- Summary
- References
- Index
- About Packt
- Other Books You May Enjoy