Building LLM-Powered Applications: Create Intelligent Apps and Agents with Large Language Models
Get hands-on with GPT-3.5, GPT-4, LangChain, Llama 2, Falcon LLM, and more to build sophisticated LLM-powered AI applications.

Key Features:
- Embed LLMs into real-world applications
- Use LangChain to orchestrate LLMs and their components within applications
- Grasp basic and advanced techniques of prompt...
Other Authors:
Format: Electronic book
Language: English
Published: Birmingham, England : Packt Publishing Ltd, [2024]
Edition: First edition
Series: Expert insight
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009820529806719
Table of Contents:
- Cover
- Copyright
- Contributors
- Table of Contents
- Preface
- Chapter 1: Introduction to Large Language Models
- What are large foundation models and LLMs?
- AI paradigm shift - an introduction to foundation models
- Under the hood of an LLM
- Most popular LLM transformers-based architectures
- Early experiments
- Introducing the transformer architecture
- Training and evaluating LLMs
- Training an LLM
- Model evaluation
- Base models versus customized models
- How to customize your model
- Summary
- References
- Chapter 2: LLMs for AI-Powered Applications
- How LLMs are changing software development
- The copilot system
- Introducing AI orchestrators to embed LLMs into applications
- The main components of AI orchestrators
- LangChain
- Haystack
- Semantic Kernel
- How to choose a framework
- Summary
- References
- Chapter 3: Choosing an LLM for Your Application
- The most promising LLMs in the market
- Proprietary models
- GPT-4
- Gemini 1.5
- Claude 2
- Open-source models
- LLaMA-2
- Falcon LLM
- Mistral
- Beyond language models
- A decision framework to pick the right LLM
- Considerations
- Case study
- Summary
- References
- Chapter 4: Prompt Engineering
- Technical requirements
- What is prompt engineering?
- Principles of prompt engineering
- Clear instructions
- Split complex tasks into subtasks
- Ask for justification
- Generate many outputs, then use the model to pick the best one
- Repeat instructions at the end
- Use delimiters
- Advanced techniques
- Few-shot approach
- Chain of thought
- ReAct
- Summary
- References
- Chapter 5: Embedding LLMs within Your Applications
- Technical requirements
- A brief note about LangChain
- Getting started with LangChain
- Models and prompts
- Data connections
- Memory
- Chains
- Agents
- Working with LLMs via the Hugging Face Hub
- Create a Hugging Face user access token
- Storing your secrets in an .env file
- Start using open-source LLMs
- Summary
- References
- Chapter 6: Building Conversational Applications
- Technical requirements
- Getting started with conversational applications
- Creating a plain vanilla bot
- Adding memory
- Adding non-parametric knowledge
- Adding external tools
- Developing the front-end with Streamlit
- Summary
- References
- Chapter 7: Search and Recommendation Engines with LLMs
- Technical requirements
- Introduction to recommendation systems
- Existing recommendation systems
- K-nearest neighbors
- Matrix factorization
- Neural networks
- How LLMs are changing recommendation systems
- Implementing an LLM-powered recommendation system
- Data preprocessing
- Building a QA recommendation chatbot in a cold-start scenario
- Building a content-based system
- Developing the front-end with Streamlit
- Summary
- References
- Chapter 8: Using LLMs with Structured Data
- Technical requirements
- What is structured data?
- Getting started with relational databases
- Introduction to relational databases
- Overview of the Chinook database
- How to work with relational databases in Python
- Implementing the DBCopilot with LangChain
- LangChain agents and SQL Agent
- Prompt engineering
- Adding further tools
- Developing the front-end with Streamlit
- Summary
- References
- Chapter 9: Working with Code
- Technical requirements
- Choosing the right LLM for code
- Code understanding and generation
- Falcon LLM
- CodeLlama
- StarCoder
- Act as an algorithm
- Leveraging Code Interpreter
- Summary
- References
- Chapter 10: Building Multimodal Applications with LLMs
- Technical requirements
- Why multimodality?
- Building a multimodal agent with LangChain
- Option 1: Using an out-of-the-box toolkit for Azure AI Services
- Getting started with AzureCognitiveServicesToolkit
- Setting up the toolkit
- Leveraging a single tool
- Leveraging multiple tools
- Building an end-to-end application for invoice analysis
- Option 2: Combining single tools into one agent
- YouTube tools and Whisper
- DALL·E and text generation
- Putting it all together
- Option 3: Hard-coded approach with a sequential chain
- Comparing the three options
- Developing the front-end with Streamlit
- Summary
- References
- Chapter 11: Fine-Tuning Large Language Models
- Technical requirements
- What is fine-tuning?
- When is fine-tuning necessary?
- Getting started with fine-tuning
- Obtaining the dataset
- Tokenizing the data
- Fine-tuning the model
- Using evaluation metrics
- Training and saving
- Summary
- References
- Chapter 12: Responsible AI
- What is Responsible AI and why do we need it?
- Responsible AI architecture
- Model level
- Metaprompt level
- User interface level
- Regulations surrounding Responsible AI
- Summary
- References
- Chapter 13: Emerging Trends and Innovations
- The latest trends in language models and generative AI
- GPT-4V(ision)
- DALL-E 3
- AutoGen
- Small language models
- Companies embracing generative AI
- Coca-Cola
- Notion
- Malbek
- Microsoft
- Summary
- References
- Packt Page
- Other Books You May Enjoy
- Index