Generative AI with LangChain: Build Large Language Model (LLM) Apps with Python, ChatGPT, and Other LLMs

Get to grips with the LangChain framework to develop production-ready LLM applications, including agents and personal assistants that integrate with web search and code execution. Purchase of the print or Kindle book includes a free PDF eBook. Key Features: Learn how to leverage LLMs' capabilities...


Bibliographic Details
Other Authors: Auffarth, Ben (author)
Format: Electronic book
Language: English
Published: Birmingham, England: Packt Publishing Ltd, [2023]
Edition: First edition
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009790334406719
Table of Contents:
  • Cover
  • Copyright
  • Contributors
  • Table of Contents
  • Preface
  • Chapter 1: What Is Generative AI?
  • Introducing generative AI
  • What are generative models?
  • Why now?
  • Understanding LLMs
  • What is a GPT?
  • Other LLMs
  • Major players
  • How do GPT models work?
  • Pre-training
  • Tokenization
  • Scaling
  • Conditioning
  • How to try out these models
  • What are text-to-image models?
  • What can AI do in other domains?
  • Summary
  • Questions
  • Chapter 2: LangChain for LLM Apps
  • Going beyond stochastic parrots
  • What are the limitations of LLMs?
  • How can we mitigate LLM limitations?
  • What is an LLM app?
  • What is LangChain?
  • Exploring key components of LangChain
  • What are chains?
  • What are agents?
  • What is memory?
  • What are tools?
  • How does LangChain work?
  • Comparing LangChain with other frameworks
  • Summary
  • Questions
  • Chapter 3: Getting Started with LangChain
  • How to set up the dependencies for this book
  • pip
  • Poetry
  • Conda
  • Docker
  • Exploring API model integrations
  • Fake LLM
  • OpenAI
  • Hugging Face
  • Google Cloud Platform
  • Jina AI
  • Replicate
  • Others
  • Azure
  • Anthropic
  • Exploring local models
  • Hugging Face Transformers
  • llama.cpp
  • GPT4All
  • Building an application for customer service
  • Summary
  • Questions
  • Chapter 4: Building Capable Assistants
  • Mitigating hallucinations through fact-checking
  • Summarizing information
  • Basic prompting
  • Prompt templates
  • Chain of density
  • Map-Reduce pipelines
  • Monitoring token usage
  • Extracting information from documents
  • Answering questions with tools
  • Information retrieval with tools
  • Building a visual interface
  • Exploring reasoning strategies
  • Summary
  • Questions
  • Chapter 5: Building a Chatbot like ChatGPT
  • What is a chatbot?
  • Understanding retrieval and vectors
  • Embeddings
  • Vector storage
  • Vector indexing
  • Vector libraries
  • Vector databases
  • Loading and retrieving in LangChain
  • Document loaders
  • Retrievers in LangChain
  • kNN retriever
  • PubMed retriever
  • Custom retrievers
  • Implementing a chatbot
  • Document loader
  • Vector storage
  • Memory
  • Conversation buffers
  • Remembering conversation summaries
  • Storing knowledge graphs
  • Combining several memory mechanisms
  • Long-term persistence
  • Moderating responses
  • Summary
  • Questions
  • Chapter 6: Developing Software with Generative AI
  • Software development and AI
  • Code LLMs
  • Writing code with LLMs
  • StarCoder
  • StarChat
  • Llama 2
  • Small local model
  • Automating software development
  • Summary
  • Questions
  • Chapter 7: LLMs for Data Science
  • The impact of generative models on data science
  • Automated data science
  • Data collection
  • Visualization and EDA
  • Preprocessing and feature extraction
  • AutoML
  • Using agents to answer data science questions
  • Data exploration with LLMs
  • Summary
  • Questions
  • Chapter 8: Customizing LLMs and Their Output
  • Conditioning LLMs
  • Methods for conditioning
  • Reinforcement learning with human feedback
  • Low-rank adaptation
  • Inference-time conditioning
  • Fine-tuning
  • Setup for fine-tuning
  • Open-source models
  • Commercial models
  • Prompt engineering
  • Prompt techniques
  • Zero-shot prompting
  • Few-shot learning
  • Chain-of-thought prompting
  • Self-consistency
  • Tree-of-thought
  • Summary
  • Questions
  • Chapter 9: Generative AI in Production
  • How to get LLM apps ready for production
  • Terminology
  • How to evaluate LLM apps
  • Comparing two outputs
  • Comparing against criteria
  • String and semantic comparisons
  • Running evaluations against datasets
  • How to deploy LLM apps
  • FastAPI web server
  • Ray
  • How to observe LLM apps
  • Tracking responses
  • Observability tools
  • LangSmith
  • PromptWatch
  • Summary
  • Questions
  • Chapter 10: The Future of Generative Models
  • The current state of generative AI
  • Challenges
  • Trends in model development
  • Big Tech vs. small enterprises
  • Artificial General Intelligence
  • Economic consequences
  • Creative industries and advertising
  • Education
  • Law
  • Manufacturing
  • Medicine
  • Military
  • Societal implications
  • Misinformation and cybersecurity
  • Regulations and implementation challenges
  • The road ahead
  • Other Books You May Enjoy
  • Index