Mastering Large Datasets with Python: Parallelize and Distribute Your Python Code
Mastering Large Datasets with Python teaches you to write code that can handle datasets of any size. You’ll start with laptop-sized datasets that teach you to parallelize data analysis by breaking large tasks into smaller ones that can run simultaneously. You’ll then scale those same programs to in...
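The "break large tasks into smaller ones that run simultaneously" approach the description refers to can be sketched with Python's standard `multiprocessing` module. This is a minimal generic illustration, not code from the book; the function name `clean_record` and the worker count are placeholders:

```python
from multiprocessing import Pool

def clean_record(value):
    # Toy per-record transformation; stands in for real analysis work.
    return value * value

if __name__ == "__main__":
    records = range(10)
    # Parallel map: the dataset is split into chunks and each worker
    # process applies clean_record to its chunk simultaneously.
    with Pool(processes=4) as pool:
        results = pool.map(clean_record, records)
    print(results)
```

Because `pool.map` has the same contract as the built-in `map`, code written this way can be tested sequentially first and parallelized afterward without changing its logic.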
Other authors:
Format: eBook
Language: English
Published: Shelter Island, New York: Manning, [2019]
Edition: 1st edition
Subjects:
View at the Universitat Ramon Llull Library: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009631432806719
Table of Contents:
- Introduction
- Accelerating large dataset work: map and parallel computing
- Function pipelines for mapping complex transformations
- Processing large datasets with lazy workflows
- Accumulation operations with reduce
- Speeding up map and reduce with advanced parallelization
- Processing truly big datasets with Hadoop and Spark
- Best practices for large data with Apache Streaming and mrjob
- PageRank with map and reduce in PySpark
- Faster decision-making with machine learning and PySpark
- Large datasets in the cloud with Amazon Web Services and S3
- MapReduce in the cloud with Amazon's Elastic MapReduce.
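The map-and-reduce pattern that runs through the early chapters can be illustrated with the standard library's `functools.reduce`. This is a generic sketch of the pattern, not an excerpt from the book:

```python
from functools import reduce

def add(total, n):
    # Accumulation step: fold each mapped value into a running total.
    return total + n

# map: lazily transform each input (here, doubling 0..4)
values = map(lambda x: x * 2, range(5))
# reduce: accumulate the transformed values into one result
total = reduce(add, values, 0)
print(total)  # 0 + 2 + 4 + 6 + 8 = 20
```

The same two-step shape (a side-effect-free map followed by an accumulating reduce) is what lets the laptop-scale version of a program be moved onto frameworks such as Hadoop or Spark, as the later chapters describe.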