Azure Databricks Cookbook: Accelerate and Scale Real-Time Analytics Solutions Using the Apache Spark-Based Analytics Service

Get to grips with building and productionizing end-to-end big data solutions in Azure and learn best practices for working with large datasets. Key Features: Integrate with Azure Synapse Analytics, Cosmos DB, and Azure HDInsight Kafka Cluster to scale and analyze your projects and build pipelines. Use Dat...


Bibliographic Details
Other Authors: Raj, Phani (author); Jaiswal, Vinod (author)
Format: eBook
Language: English
Published: Birmingham; Mumbai: Packt Publishing, 2021
See on Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009644255506719
Table of Contents:
  • Cover
  • Title Page
  • Copyright and Credits
  • Contributors
  • Table of Contents
  • Preface
  • Chapter 1: Creating an Azure Databricks Service
  • Technical requirements
  • Creating a Databricks workspace in the Azure portal
  • Getting ready
  • How to do it…
  • How it works…
  • Creating a Databricks service using the Azure CLI (command-line interface)
  • Getting ready
  • How to do it…
  • How it works…
  • There's more…
  • Creating a Databricks service using Azure Resource Manager (ARM) templates
  • Getting ready
  • How to do it…
  • How it works…
  • Adding users and groups to the workspace
  • Getting ready
  • How to do it…
  • How it works…
  • There's more…
  • Creating a cluster from the user interface (UI)
  • Getting ready
  • How to do it…
  • How it works…
  • There's more…
  • Getting started with notebooks and jobs in Azure Databricks
  • Getting ready
  • How to do it…
  • How it works…
  • Authenticating to Databricks using a PAT
  • Getting ready
  • How to do it…
  • How it works…
  • There's more…
  • Chapter 2: Reading and Writing Data from and to Various Azure Services and File Formats
  • Technical requirements
  • Mounting ADLS Gen2 and Azure Blob storage to Azure DBFS
  • Getting ready
  • How to do it…
  • How it works…
  • There's more…
  • Reading and writing data from and to Azure Blob storage
  • Getting ready
  • How to do it…
  • How it works…
  • There's more…
  • Reading and writing data from and to ADLS Gen2
  • Getting ready
  • How to do it…
  • How it works…
  • Reading and writing data from and to an Azure SQL database using native connectors
  • Getting ready
  • How to do it…
  • How it works…
  • Reading and writing data from and to Azure Synapse SQL (dedicated SQL pool) using native connectors
  • Getting ready
  • How to do it…
  • How it works…
  • Reading and writing data from and to Azure Cosmos DB
  • Getting ready
  • How to do it…
  • How it works…
  • Reading and writing data from and to CSV and Parquet
  • Getting ready
  • How to do it…
  • How it works…
  • Reading and writing data from and to JSON, including nested JSON
  • Getting ready
  • How to do it…
  • How it works…
  • Chapter 3: Understanding Spark Query Execution
  • Technical requirements
  • Introduction to jobs, stages, and tasks
  • Getting ready
  • How to do it…
  • How it works…
  • Checking the execution details of all the executed Spark queries via the Spark UI
  • Getting ready
  • How to do it…
  • How it works…
  • Deep diving into schema inference
  • Getting ready
  • How to do it…
  • How it works…
  • There's more…
  • Looking into the query execution plan
  • Getting ready
  • How to do it…
  • How it works…
  • How joins work in Spark
  • Getting ready
  • How to do it…
  • How it works…
  • There's more…
  • Learning about input partitions
  • Getting ready
  • How to do it…
  • How it works…
  • Learning about output partitions
  • Getting ready
  • How to do it…
  • How it works…
  • Learning about shuffle partitions
  • Getting ready
  • How to do it…
  • How it works…
  • Storage benefits of different file types
  • Getting ready
  • How to do it…
  • How it works…
  • Chapter 4: Working with Streaming Data
  • Technical requirements
  • Reading streaming data from Apache Kafka
  • Getting ready
  • How to do it…
  • How it works…
  • Reading streaming data from Azure Event Hubs
  • Getting ready
  • How to do it…
  • How it works…
  • Reading data from Event Hubs for Kafka
  • Getting ready
  • How to do it…
  • How it works…
  • Streaming data from log files
  • Getting ready
  • How to do it…
  • How it works…
  • Understanding trigger options
  • Getting ready
  • How to do it…
  • How it works…
  • Understanding window aggregation on streaming data
  • Getting ready
  • How to do it…
  • How it works…
  • Understanding offsets and checkpoints
  • Getting ready
  • How to do it…
  • How it works…
  • Chapter 5: Integrating with Azure Key Vault, App Configuration, and Log Analytics
  • Technical requirements
  • Creating an Azure Key Vault to store secrets using the UI
  • Getting ready
  • How to do it…
  • How it works…
  • Creating an Azure Key Vault to store secrets using ARM templates
  • Getting ready
  • How to do it…
  • How it works…
  • Using Azure Key Vault secrets in Azure Databricks
  • Getting ready
  • How to do it…
  • How it works…
  • Creating an App Configuration resource
  • Getting ready
  • How to do it…
  • How it works…
  • Using App Configuration in an Azure Databricks notebook
  • Getting ready
  • How to do it…
  • How it works…
  • Creating a Log Analytics workspace
  • Getting ready
  • How to do it…
  • How it works…
  • Integrating a Log Analytics workspace with Azure Databricks
  • Getting ready
  • How to do it…
  • How it works…
  • Chapter 6: Exploring Delta Lake in Azure Databricks
  • Technical requirements
  • Delta table operations - create, read, and write
  • Getting ready
  • How to do it…
  • How it works…
  • There's more…
  • Streaming reads and writes to Delta tables
  • Getting ready
  • How to do it…
  • How it works…
  • Delta table data format
  • Getting ready
  • How to do it…
  • How it works…
  • There's more…
  • Handling concurrency
  • Getting ready
  • How to do it…
  • How it works…
  • Delta table performance optimization
  • Getting ready
  • How to do it…
  • How it works…
  • Constraints in Delta tables
  • Getting ready
  • How to do it…
  • How it works…
  • Versioning in Delta tables
  • Getting ready
  • How to do it…
  • How it works…
  • Chapter 7: Implementing Near-Real-Time Analytics and Building a Modern Data Warehouse
  • Technical requirements
  • Understanding the scenario for an end-to-end (E2E) solution
  • Getting ready
  • How to do it…
  • How it works…
  • Creating required Azure resources for the E2E demonstration
  • Getting ready
  • How to do it…
  • How it works…
  • Simulating a workload for streaming data
  • Getting ready
  • How to do it…
  • How it works…
  • Processing streaming and batch data using Structured Streaming
  • Getting ready
  • How to do it…
  • How it works…
  • Understanding the various stages of transforming data
  • Getting ready
  • How to do it…
  • How it works…
  • Loading the transformed data into Azure Cosmos DB and a Synapse dedicated pool
  • Getting ready
  • How to do it…
  • How it works…
  • Creating a visualization and dashboard in a notebook for near-real-time analytics
  • Getting ready
  • How to do it…
  • How it works…
  • Creating a visualization in Power BI for near-real-time analytics
  • Getting ready
  • How to do it…
  • How it works…
  • Using Azure Data Factory (ADF) to orchestrate the E2E pipeline
  • Getting ready
  • How to do it…
  • How it works…
  • Chapter 8: Databricks SQL
  • Technical requirements
  • How to create a user in Databricks SQL
  • Getting ready
  • How to do it…
  • How it works…
  • Creating SQL endpoints
  • Getting ready
  • How to do it…
  • How it works…
  • Granting access to objects to the user
  • Getting ready
  • How to do it…
  • How it works…
  • Running SQL queries in Databricks SQL
  • Getting ready
  • How to do it…
  • How it works…
  • Using query parameters and filters
  • Getting ready
  • How to do it…
  • How it works…
  • Introduction to visualizations in Databricks SQL
  • Getting ready
  • How to do it…
  • Creating dashboards in Databricks SQL
  • Getting ready
  • How to do it…
  • How it works…
  • Connecting Power BI to Databricks SQL
  • Getting ready
  • How to do it…
  • Chapter 9: DevOps Integrations and Implementing CI/CD for Azure Databricks
  • Technical requirements
  • How to integrate Azure DevOps with an Azure Databricks notebook
  • Getting ready
  • How to do it…
  • Using GitHub for Azure Databricks notebook version control
  • Getting ready
  • How to do it…
  • How it works…
  • Understanding the CI/CD process for Azure Databricks
  • Getting ready
  • How to do it…
  • How it works…
  • How to set up an Azure DevOps pipeline for deploying notebooks
  • Getting ready
  • How to do it…
  • How it works…
  • Deploying notebooks to multiple environments
  • Getting ready
  • How to do it…
  • How it works…
  • Enabling CI/CD in an Azure DevOps build and release pipeline
  • Getting ready
  • How to do it…
  • Deploying an Azure Databricks service using an Azure DevOps release pipeline
  • Getting ready
  • How to do it…
  • Chapter 10: Understanding Security and Monitoring in Azure Databricks
  • Technical requirements
  • Understanding and creating RBAC in Azure for ADLS Gen2
  • Getting ready
  • How to do it…
  • Creating ACLs using Storage Explorer and PowerShell
  • Getting ready
  • How to do it…
  • How it works…
  • How to configure credential passthrough
  • Getting ready
  • How to do it…
  • How to restrict data access to users using RBAC
  • Getting ready
  • How to do it…
  • How to restrict data access to users using ACLs
  • Getting ready
  • How to do it…
  • Deploying Azure Databricks in a VNet and accessing a secure storage account
  • Getting ready
  • How to do it…
  • There's more…
  • Using Ganglia reports for cluster health
  • Getting ready
  • How to do it…
  • Cluster access control
  • Getting ready
  • How to do it…
  • About Packt
  • Other Books You May Enjoy
  • Index