AI for Games

Artificial Intelligence is an integral part of every video game. This book helps professionals keep up with the constantly evolving technological advances in the fast-growing game industry and equips students with the up-to-date information they need to jumpstart their careers.

Bibliographic Details
Other Authors: Millington, Ian (author)
Format: Electronic book
Language: English
Published: Boca Raton : CRC Press, 2019.
Edition: Third edition
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009634667706719
Table of Contents:
  • Cover
  • Half Title
  • Title Page
  • Copyright Page
  • Dedication
  • Table of Contents
  • PART I: AI and Games
  • CHAPTER 1: INTRODUCTION
  • 1.1 What Is AI?
  • 1.1.1 Academic AI
  • 1.1.2 Game AI
  • 1.2 Model of Game AI
  • 1.2.1 Movement
  • 1.2.2 Decision Making
  • 1.2.3 Strategy
  • 1.2.4 Infrastructure
  • 1.2.5 Agent-Based AI
  • 1.2.6 In the Book
  • 1.3 Algorithms and Data Structures
  • 1.3.1 Algorithms
  • 1.3.2 Representations
  • 1.3.3 Implementation
  • 1.4 Layout of the Book
  • CHAPTER 2: GAME AI
  • 2.1 The Complexity Fallacy
  • 2.1.1 When Simple Things Look Good
  • 2.1.2 When Complex Things Look Bad
  • 2.1.3 The Perception Window
  • 2.1.4 Changes of Behavior
  • 2.2 The Kind of AI in Games
  • 2.2.1 Hacks
  • 2.2.2 Heuristics
  • 2.2.3 Algorithms
  • 2.3 Speed and Memory Constraints
  • 2.3.1 Processor Issues
  • 2.3.2 Memory Concerns
  • 2.3.3 Platforms
  • 2.4 The AI Engine
  • 2.4.1 Structure of an AI Engine
  • 2.4.2 Tool Concerns
  • 2.4.3 Putting It All Together
  • PART II: Techniques
  • CHAPTER 3: MOVEMENT
  • 3.1 The Basics of Movement Algorithms
  • 3.1.1 Two-Dimensional Movement
  • 3.1.2 Statics
  • 3.1.3 Kinematics
  • 3.2 Kinematic Movement Algorithms
  • 3.2.1 Seek
  • 3.2.2 Wandering
  • 3.3 Steering Behaviors
  • 3.3.1 Steering Basics
  • 3.3.2 Variable Matching
  • 3.3.3 Seek and Flee
  • 3.3.4 Arrive
  • 3.3.5 Align
  • 3.3.6 Velocity Matching
  • 3.3.7 Delegated Behaviors
  • 3.3.8 Pursue and Evade
  • 3.3.9 Face
  • 3.3.10 Looking Where You're Going
  • 3.3.11 Wander
  • 3.3.12 Path Following
  • 3.3.13 Separation
  • 3.3.14 Collision Avoidance
  • 3.3.15 Obstacle and Wall Avoidance
  • 3.3.16 Summary
  • 3.4 Combining Steering Behaviors
  • 3.4.1 Blending and Arbitration
  • 3.4.2 Weighted Blending
  • 3.4.3 Priorities
  • 3.4.4 Cooperative Arbitration
  • 3.4.5 Steering Pipeline
  • 3.5 Predicting Physics
  • 3.5.1 Aiming and Shooting
  • 3.5.2 Projectile Trajectory
  • 3.5.3 The Firing Solution
  • 3.5.4 Projectiles with Drag
  • 3.5.5 Iterative Targeting
  • 3.6 Jumping
  • 3.6.1 Jump Points
  • 3.6.2 Landing Pads
  • 3.6.3 Hole Fillers
  • 3.7 Coordinated Movement
  • 3.7.1 Fixed Formations
  • 3.7.2 Scalable Formations
  • 3.7.3 Emergent Formations
  • 3.7.4 Two-Level Formation Steering
  • 3.7.5 Implementation
  • 3.7.6 Extending to More Than Two Levels
  • 3.7.7 Slot Roles and Better Assignment
  • 3.7.8 Slot Assignment
  • 3.7.9 Dynamic Slots and Plays
  • 3.7.10 Tactical Movement
  • 3.8 Motor Control
  • 3.8.1 Output Filtering
  • 3.8.2 Capability-Sensitive Steering
  • 3.8.3 Common Actuation Properties
  • 3.9 Movement in the Third Dimension
  • 3.9.1 Rotation in Three Dimensions
  • 3.9.2 Converting Steering Behaviors to Three Dimensions
  • 3.9.3 Align
  • 3.9.4 Align to Vector
  • 3.9.5 Face
  • 3.9.6 Look Where You're Going
  • 3.9.7 Wander
  • 3.9.8 Faking Rotation Axes
  • CHAPTER 4: PATHFINDING
  • 4.1 The Pathfinding Graph
  • 4.1.1 Graphs
  • 4.1.2 Weighted Graphs
  • 4.1.3 Directed Weighted Graphs
  • 4.1.4 Terminology
  • 4.1.5 Representation
  • 4.2 Dijkstra
  • 4.2.1 The Problem
  • 4.2.2 The Algorithm
  • 4.2.3 Pseudo-Code
  • 4.2.4 Data Structures and Interfaces
  • 4.2.5 Performance of Dijkstra
  • 4.2.6 Weaknesses
  • 4.3 A*
  • 4.3.1 The Problem
  • 4.3.2 The Algorithm
  • 4.3.3 Pseudo-Code
  • 4.3.4 Data Structures and Interfaces
  • 4.3.5 Implementation Notes
  • 4.3.6 Algorithm Performance
  • 4.3.7 Node Array A*
  • 4.3.8 Choosing a Heuristic
  • 4.4 World Representations
  • 4.4.1 Tile Graphs
  • 4.4.2 Dirichlet Domains
  • 4.4.3 Points of Visibility
  • 4.4.4 Navigation Meshes
  • 4.4.5 Non-Translational Problems
  • 4.4.6 Cost Functions
  • 4.4.7 Path Smoothing
  • 4.5 Improving on A*
  • 4.6 Hierarchical Pathfinding
  • 4.6.1 The Hierarchical Pathfinding Graph
  • 4.6.2 Pathfinding on the Hierarchical Graph
  • 4.6.3 Hierarchical Pathfinding on Exclusions
  • 4.6.4 Strange Effects of Hierarchies on Pathfinding
  • 4.6.5 Instanced Geometry
  • 4.7 Other Ideas in Pathfinding
  • 4.7.1 Open Goal Pathfinding
  • 4.7.2 Dynamic Pathfinding
  • 4.7.3 Other Kinds of Information Reuse
  • 4.7.4 Low Memory Algorithms
  • 4.7.5 Interruptible Pathfinding
  • 4.7.6 Pooling Planners
  • 4.8 Continuous Time Pathfinding
  • 4.8.1 The Problem
  • 4.8.2 The Algorithm
  • 4.8.3 Implementation Notes
  • 4.8.4 Performance
  • 4.8.5 Weaknesses
  • 4.9 Movement Planning
  • 4.9.1 Animations
  • 4.9.2 Movement Planning
  • 4.9.3 Example
  • 4.9.4 Footfalls
  • CHAPTER 5: DECISION MAKING
  • 5.1 Overview of Decision Making
  • 5.2 Decision Trees
  • 5.2.1 The Problem
  • 5.2.2 The Algorithm
  • 5.2.3 Pseudo-Code
  • 5.2.4 Knowledge Representation
  • 5.2.5 Implementation Notes
  • 5.2.6 Performance of Decision Trees
  • 5.2.7 Balancing the Tree
  • 5.2.8 Beyond the Tree
  • 5.2.9 Random Decision Trees
  • 5.3 State Machines
  • 5.3.1 The Problem
  • 5.3.2 The Algorithm
  • 5.3.3 Pseudo-Code
  • 5.3.4 Data Structures and Interfaces
  • 5.3.5 Performance
  • 5.3.6 Implementation Notes
  • 5.3.7 Hard-Coded FSM
  • 5.3.8 Hierarchical State Machines
  • 5.3.9 Combining Decision Trees and State Machines
  • 5.4 Behavior Trees
  • 5.4.1 Implementing Behavior Trees
  • 5.4.2 Pseudo-Code
  • 5.4.3 Decorators
  • 5.4.4 Concurrency and Timing
  • 5.4.5 Adding Data to Behavior Trees
  • 5.4.6 Reusing Trees
  • 5.4.7 Limitations of Behavior Trees
  • 5.5 Fuzzy Logic
  • 5.5.1 A Warning
  • 5.5.2 Introduction to Fuzzy Logic
  • 5.5.3 Fuzzy Logic Decision Making
  • 5.5.4 Fuzzy State Machines
  • 5.6 Markov Systems
  • 5.6.1 Markov Processes
  • 5.6.2 Markov State Machine
  • 5.7 Goal-Oriented Behavior
  • 5.7.1 Goal-Oriented Behavior
  • 5.7.2 Simple Selection
  • 5.7.3 Overall Utility
  • 5.7.4 Timing
  • 5.7.5 Overall Utility GOAP
  • 5.7.6 GOAP with IDA*
  • 5.7.7 Smelly GOB
  • 5.8 Rule-Based Systems
  • 5.8.1 The Problem
  • 5.8.2 The Algorithm
  • 5.8.3 Pseudo-Code
  • 5.8.4 Data Structures and Interfaces
  • 5.8.5 Rule Arbitration
  • 5.8.6 Unification
  • 5.8.7 Rete
  • 5.8.8 Extensions
  • 5.8.9 Where Next
  • 5.9 Blackboard Architectures
  • 5.9.1 The Problem
  • 5.9.2 The Algorithm
  • 5.9.3 Pseudo-Code
  • 5.9.4 Data Structures and Interfaces
  • 5.9.5 Performance
  • 5.9.6 Other Things Are Blackboard Systems
  • 5.10 Action Execution
  • 5.10.1 Types of Action
  • 5.10.2 The Algorithm
  • 5.10.3 Pseudo-Code
  • 5.10.4 Data Structures and Interfaces
  • 5.10.5 Implementation Notes
  • 5.10.6 Performance
  • 5.10.7 Putting It All Together
  • CHAPTER 6: TACTICAL AND STRATEGIC AI
  • 6.1 Waypoint Tactics
  • 6.1.1 Tactical Locations
  • 6.1.2 Using Tactical Locations
  • 6.1.3 Generating the Tactical Properties of a Waypoint
  • 6.1.4 Automatically Generating the Waypoints
  • 6.1.5 The Condensation Algorithm
  • 6.2 Tactical Analyses
  • 6.2.1 Representing the Game Level
  • 6.2.2 Simple Influence Maps
  • 6.2.3 Terrain Analysis
  • 6.2.4 Learning with Tactical Analyses
  • 6.2.5 A Structure for Tactical Analyses
  • 6.2.6 Map Flooding
  • 6.2.7 Convolution Filters
  • 6.2.8 Cellular Automata
  • 6.3 Tactical Pathfinding
  • 6.3.1 The Cost Function
  • 6.3.2 Tactic Weights and Concern Blending
  • 6.3.3 Modifying the Pathfinding Heuristic
  • 6.3.4 Tactical Graphs for Pathfinding
  • 6.3.5 Using Tactical Waypoints
  • 6.4 Coordinated Action
  • 6.4.1 Multi-Tier AI
  • 6.4.2 Emergent Cooperation
  • 6.4.3 Scripting Group Actions
  • 6.4.4 Military Tactics
  • CHAPTER 7: LEARNING
  • 7.1 Learning Basics
  • 7.1.1 Online or Offline Learning
  • 7.1.2 Intra-Behavior Learning
  • 7.1.3 Inter-Behavior Learning
  • 7.1.4 A Warning
  • 7.1.5 Over-Learning
  • 7.1.6 The Zoo of Learning Algorithms
  • 7.1.7 The Balance of Effort
  • 7.2 Parameter Modification
  • 7.2.1 The Parameter Landscape
  • 7.2.2 Hill Climbing
  • 7.2.3 Extensions to Basic Hill Climbing
  • 7.2.4 Annealing
  • 7.3 Action Prediction
  • 7.3.1 Left or Right
  • 7.3.2 Raw Probability
  • 7.3.3 String Matching
  • 7.3.4 N-Grams
  • 7.3.5 Window Size
  • 7.3.6 Hierarchical N-Grams
  • 7.3.7 Application in Combat
  • 7.4 Decision Learning
  • 7.4.1 The Structure of Decision Learning
  • 7.4.2 What Should You Learn?
  • 7.4.3 Four Techniques
  • 7.5 Naive Bayes Classifiers
  • 7.5.1 Pseudo-Code
  • 7.5.2 Implementation Notes
  • 7.6 Decision Tree Learning
  • 7.6.1 ID3
  • 7.6.2 ID3 with Continuous Attributes
  • 7.6.3 Incremental Decision Tree Learning
  • 7.7 Reinforcement Learning
  • 7.7.1 The Problem
  • 7.7.2 The Algorithm
  • 7.7.3 Pseudo-Code
  • 7.7.4 Data Structures and Interfaces
  • 7.7.5 Implementation Notes
  • 7.7.6 Performance
  • 7.7.7 Tailoring Parameters
  • 7.7.8 Weaknesses and Realistic Applications
  • 7.7.9 Other Ideas in Reinforcement Learning
  • 7.8 Artificial Neural Networks
  • 7.8.1 Overview
  • 7.8.2 The Problem
  • 7.8.3 The Algorithm
  • 7.8.4 Pseudo-Code
  • 7.8.5 Data Structures and Interfaces
  • 7.8.6 Implementation Caveats
  • 7.8.7 Performance
  • 7.8.8 Other Approaches
  • 7.9 Deep Learning
  • 7.9.1 What Is Deep Learning?
  • 7.9.2 Data
  • CHAPTER 8: PROCEDURAL CONTENT GENERATION
  • 8.1 Pseudorandom Numbers
  • 8.1.1 Numeric Mixing and Game Seeds
  • 8.1.2 Halton Sequence
  • 8.1.3 Phyllotaxic Angles
  • 8.1.4 Poisson Disk
  • 8.2 Lindenmayer Systems
  • 8.2.1 Simple L-systems
  • 8.2.2 Adding Randomness to L-systems
  • 8.2.3 Stage-Specific Rules
  • 8.3 Landscape Generation
  • 8.3.1 Modifiers and Height-Maps
  • 8.3.2 Noise
  • 8.3.3 Perlin Noise
  • 8.3.4 Faults
  • 8.3.5 Thermal Erosion
  • 8.3.6 Hydraulic Erosion
  • 8.3.7 Altitude Filtering
  • 8.4 Dungeons and Maze Generation
  • 8.4.1 Mazes by Depth First Backtracking
  • 8.4.2 Minimum Spanning Tree Algorithms