Modern Distributed Tracing in .NET: A Practical Guide to Observability and Performance Analysis for Microservices

As distributed systems become more complex and dynamic, their observability needs grow to require holistic solutions for performance and usage analysis, as well as debugging. Distributed tracing brings structure, correlation, causation, and consistency to your telemetry, allowing you to answer arbitrary questions...
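To make the correlation idea concrete: in .NET, spans are represented by activities from System.Diagnostics, which Part 2 of the book covers in depth. The sketch below is a minimal illustration assembled for this record, not code from the book; the source name "Demo.Sample", the span name "ProcessOrder", and the "order.id" tag are placeholders.

```csharp
// Minimal sketch of the .NET tracing primitives the book discusses:
// an ActivitySource creates spans ("activities") that carry trace context
// so telemetry from different operations can be correlated.
using System;
using System.Diagnostics;

class Demo
{
    // "Demo.Sample" is an arbitrary placeholder source name.
    private static readonly ActivitySource Source = new("Demo.Sample");

    static void Main()
    {
        // Without a registered listener (or the OpenTelemetry SDK),
        // StartActivity returns null and nothing is recorded.
        using var listener = new ActivityListener
        {
            ShouldListenTo = _ => true,
            Sample = (ref ActivityCreationOptions<ActivityContext> options) =>
                ActivitySamplingResult.AllDataAndRecorded
        };
        ActivitySource.AddActivityListener(listener);

        // Start a span; it becomes Activity.Current (ambient context), and any
        // child activities automatically share its trace ID.
        using var activity = Source.StartActivity("ProcessOrder");
        activity?.SetTag("order.id", 42);

        Console.WriteLine($"traceId: {activity?.TraceId}, spanId: {activity?.SpanId}");
    }
}
```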

Bibliographic Details
Other Authors: Molkova, Liudmila (author); Kanzhelev, Sergey (foreword)
Format: eBook
Language: English
Published: Birmingham, England: Packt Publishing Ltd, [2023]
Edition: 1st ed.
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009755147606719
Table of Contents:
  • Cover
  • Title Page
  • Copyright and Credits
  • Dedication
  • Foreword
  • Contributors
  • Table of Contents
  • Preface
  • Part 1: Introducing Distributed Tracing
  • Chapter 1: Observability Needs of Modern Applications
  • Understanding why logs and counters are not enough
  • Logs
  • Events
  • Metrics and counters
  • What's missing?
  • Introducing distributed tracing
  • Span
  • Tracing - building blocks
  • Reviewing context propagation
  • In-process propagation
  • Out-of-process propagation
  • Ensuring consistency and structure
  • Building application topology
  • Resource attributes
  • Performance analysis overview
  • The baseline
  • Investigating performance issues
  • Summary
  • Questions
  • Further reading
  • Chapter 2: Native Monitoring in .NET
  • Technical requirements
  • Building a sample application
  • Log correlation
  • On-demand logging with dotnet-monitor
  • Monitoring with runtime counters
  • Enabling auto-collection with OpenTelemetry
  • Installing and configuring OpenTelemetry
  • Exploring auto-generated telemetry
  • Debugging
  • Performance
  • Summary
  • Questions
  • Chapter 3: The .NET Observability Ecosystem
  • Technical requirements
  • Configuring cloud storage
  • Using instrumentations for popular libraries
  • Instrumenting the application
  • Leveraging infrastructure
  • Configuring secrets
  • Configuring observability on Dapr
  • Tracing
  • Metrics
  • Instrumenting serverless environments
  • AWS Lambda
  • Azure Functions
  • Summary
  • Questions
  • Chapter 4: Low-Level Performance Analysis with Diagnostic Tools
  • Technical requirements
  • Investigating common performance problems
  • Memory leaks
  • Thread pool starvation
  • Profiling
  • Inefficient code
  • Debugging locks
  • Using diagnostics tools in production
  • Continuous profiling
  • The dotnet-monitor tool
  • Summary
  • Questions
  • Part 2: Instrumenting .NET Applications
  • Chapter 5: Configuration and Control Plane
  • Technical requirements
  • Controlling costs with sampling
  • Head-based sampling
  • Tail-based sampling
  • Enriching and filtering telemetry
  • Span processors
  • Customizing instrumentations
  • Resources
  • Metrics
  • Customizing context propagation
  • Processing a pipeline with the OpenTelemetry Collector
  • Summary
  • Questions
  • Chapter 6: Tracing Your Code
  • Technical requirements
  • Tracing with System.Diagnostics or the OpenTelemetry API shim
  • Tracing with System.Diagnostics
  • Tracing with the OpenTelemetry API shim
  • Using ambient context
  • Recording events
  • When to use events
  • The ActivityEvent API
  • Correlating spans with links
  • Using links
  • Testing your instrumentation
  • Intercepting activities
  • Filtering relevant activities
  • Summary
  • Questions
  • Chapter 7: Adding Custom Metrics
  • Technical requirements
  • Metrics in .NET - past and present
  • Cardinality
  • When to use metrics
  • Reporting metrics
  • Using counters
  • The Counter class
  • The UpDownCounter class
  • The ObservableCounter class
  • The ObservableUpDownCounter class
  • Using an asynchronous gauge
  • Using histograms
  • Summary
  • Questions
  • Chapter 8: Writing Structured and Correlated Logs
  • Technical requirements
  • Logging evolution in .NET
  • Console
  • Trace
  • EventSource
  • ILogger
  • Logging with ILogger
  • Optimizing logging
  • Capturing logs with OpenTelemetry
  • Managing logging costs
  • Pipelines
  • Backends
  • Summary
  • Questions
  • Part 3: Observability for Common Cloud Scenarios
  • Chapter 9: Best Practices
  • Technical requirements
  • Choosing the right signal
  • Getting more with less
  • Building a new application
  • Evolving applications
  • Performance-sensitive scenarios
  • Staying consistent with semantic conventions
  • Semantic conventions for HTTP requests
  • General considerations
  • Summary
  • Questions
  • Chapter 10: Tracing Network Calls
  • Technical requirements
  • Instrumenting client calls
  • Instrumenting unary calls
  • Configuring instrumentation
  • Instrumenting server calls
  • Instrumenting streaming calls
  • Basic instrumentation
  • Tracing individual messages
  • Observability in action
  • Summary
  • Questions
  • Chapter 11: Instrumenting Messaging Scenarios
  • Technical requirements
  • Observability in messaging scenarios
  • Messaging semantic conventions
  • Instrumenting the producer
  • Trace context propagation
  • Tracing a publish call
  • Producer metrics
  • Instrumenting the consumer
  • Tracing consumer operations
  • Consumer metrics
  • Instrumenting batching scenarios
  • Batching on a transport level
  • Processing batches
  • Performance analysis in messaging scenarios
  • Summary
  • Questions
  • Chapter 12: Instrumenting Database Calls
  • Technical requirements
  • Instrumenting database calls
  • OpenTelemetry semantic conventions for databases
  • Tracing implementation
  • Tracing cache calls
  • Instrumenting composite calls
  • Adding metrics
  • Recording Redis metrics
  • Analyzing performance
  • Summary
  • Questions
  • Part 4: Implementing Distributed Tracing in Your Organization
  • Chapter 13: Driving Change
  • Understanding the importance of observability
  • The cost of insufficient observability
  • The cost of an observability solution
  • The onboarding process
  • The pilot phase
  • Avoiding pitfalls
  • Continuous observability
  • Incorporating observability into the design process
  • Housekeeping
  • Summary
  • Questions
  • Further reading
  • Chapter 14: Creating Your Own Conventions
  • Technical requirements
  • Defining custom conventions
  • Naming attributes
  • Sharing common schema and code
  • Sharing setup code
  • Codifying conventions
  • Using OpenTelemetry schemas and tools
  • Semantic conventions schema
  • Defining event conventions
  • Summary
  • Questions
  • Chapter 15: Instrumenting Brownfield Applications
  • Technical requirements
  • Instrumenting legacy services
  • Legacy service as a leaf node
  • A legacy service in the middle
  • Choosing a reasonable level of instrumentation
  • Propagating context
  • Leveraging existing correlation formats
  • Passing context through a legacy service
  • Consolidating telemetry from legacy monitoring tools
  • Summary
  • Questions
  • Assessments
  • Index
  • Other Books You May Enjoy