Subjects within your search.
- History 34,499
- Criticism and interpretation 11,076
- Bible 10,896
- Philosophy 8,995
- History (Catalan) 8,227
- Catholic Church 4,639
- Theology 4,556
- History and criticism 4,211
- Capuchins 3,380
- Dogmatic theology 3,286
- Christian ethics 3,270
- Politics 3,156
- Catholic Church (Catalan) 3,053
- Law 3,050
- Politics and government 2,926
- Bible (Catalan) 2,726
- history 2,663
- Sermons 2,557
- Canon law 2,491
- Economics 2,342
- Missions 2,321
- Education 2,220
- Jesus Christ 2,009
- Architecture 2,001
- Biographies 1,898
- Biography 1,848
- Art 1,838
- Spirituality 1,817
- Christianity 1,704
- Liturgy 1,693
- 705241
- 705242
- 705243. Published 2011. Electronic book.
- 705244. Published 2011. Electronic book.
- 705245
- 705246
- 705247. Published 2015. Electronic book.
- 705248. The essential guide to user interface design: an introduction to GUI design principles and techniques, by Galitz, Wilbert O. Published 2002. Electronic book.
- 705249
- 705250
- 705251
- 705252. Published 1903. “…Machine industry by P. Steller. Electrical engineering industry by J. …” Electronic book.
- 705253. Published 2022. Electronic book. Table of Contents: “…Communication bottlenecks in data parallel training -- Analyzing the communication workloads -- Parameter server architecture -- The All-Reduce architecture -- The inefficiency of state-of-the-art communication schemes -- Leveraging idle links and host resources -- Tree All-Reduce -- Hybrid data transfer over PCIe and NVLink -- On-device memory bottlenecks -- Recomputation and quantization -- Recomputation -- Quantization -- Summary -- Section 2 - Model Parallelism -- Chapter 5: Splitting the Model -- Technical requirements -- Single-node training error - out of memory -- Fine-tuning BERT on a single GPU -- Trying to pack a giant model inside one state-of-the-art GPU -- ELMo, BERT, and GPT -- Basic concepts -- RNN -- ELMo -- BERT -- GPT -- Pre-training and fine-tuning -- State-of-the-art hardware -- P100, V100, and DGX-1 -- NVLink -- A100 and DGX-2 -- NVSwitch -- Summary -- Chapter 6: Pipeline Input and Layer Split -- Vanilla model parallelism is inefficient -- Forward propagation -- Backward propagation -- GPU idle time between forward and backward propagation -- Pipeline input -- Pros and cons of pipeline parallelism -- Advantages of pipeline parallelism -- Disadvantages of pipeline parallelism -- Layer split -- Notes on intra-layer model parallelism -- Summary -- Chapter 7: Implementing Model Parallel Training and Serving Workflows -- Technical requirements -- Wrapping up the whole model parallelism pipeline -- A model parallel training overview -- Implementing a model parallel training pipeline -- Specifying communication protocol among GPUs -- Model parallel serving -- Fine-tuning transformers -- Hyperparameter tuning in model parallelism -- Balancing the workload among GPUs -- Enabling/disabling pipeline parallelism -- NLP model serving -- Summary -- Chapter 8: Achieving Higher Throughput and Lower Latency -- Technical requirements…”
- 705254
- 705255
- 705256
- 705257
- 705258
- 705259
- 705260