Subjects within your search:
- History 34,119
- Criticism and interpretation 10,713
- Bible 10,661
- Philosophy 8,884
- History (Catalan: Història) 8,255
- Catholic Church 4,610
- History and criticism 4,211
- Theology 4,120
- Capuchins 3,378
- Christian ethics 3,288
- Politics 3,145
- Dogmatic theology 3,129
- Catholic Church (Catalan: Església Catòlica) 3,056
- Law 2,929
- Politics and government 2,920
- Bible (Catalan: Bíblia) 2,731
- history 2,663
- Sermons 2,536
- Economics 2,340
- Missions 2,290
- Education 2,198
- Canon law 2,169
- Jesus Christ 1,999
- Architecture 1,996
- Biography 1,855
- Art 1,839
- Spirituality 1,806
- Christianity 1,701
- Religion 1,608
- Criticism and interpretation (Catalan: Crítica i interpretació) 1,582
- 694507 Published 2011. E-book
- 694508 Published 2011. E-book
- 694511 Published 2015. E-book
- 694512 The essential guide to user interface design: an introduction to GUI design principles and techniques, by Galitz, Wilbert O. Published 2002. E-book
- 694516 Published 1903. "…Machine industry by P. Steller. Electrotechnical industry by J. …" E-book
- 694517 Published 2022. Table of Contents: "…Communication bottlenecks in data parallel training -- Analyzing the communication workloads -- Parameter server architecture -- The All-Reduce architecture -- The inefficiency of state-of-the-art communication schemes -- Leveraging idle links and host resources -- Tree All-Reduce -- Hybrid data transfer over PCIe and NVLink -- On-device memory bottlenecks -- Recomputation and quantization -- Recomputation -- Quantization -- Summary -- Section 2 - Model Parallelism -- Chapter 5: Splitting the Model -- Technical requirements -- Single-node training error - out of memory -- Fine-tuning BERT on a single GPU -- Trying to pack a giant model inside one state-of-the-art GPU -- ELMo, BERT, and GPT -- Basic concepts -- RNN -- ELMo -- BERT -- GPT -- Pre-training and fine-tuning -- State-of-the-art hardware -- P100, V100, and DGX-1 -- NVLink -- A100 and DGX-2 -- NVSwitch -- Summary -- Chapter 6: Pipeline Input and Layer Split -- Vanilla model parallelism is inefficient -- Forward propagation -- Backward propagation -- GPU idle time between forward and backward propagation -- Pipeline input -- Pros and cons of pipeline parallelism -- Advantages of pipeline parallelism -- Disadvantages of pipeline parallelism -- Layer split -- Notes on intra-layer model parallelism -- Summary -- Chapter 7: Implementing Model Parallel Training and Serving Workflows -- Technical requirements -- Wrapping up the whole model parallelism pipeline -- A model parallel training overview -- Implementing a model parallel training pipeline -- Specifying communication protocol among GPUs -- Model parallel serving -- Fine-tuning transformers -- Hyperparameter tuning in model parallelism -- Balancing the workload among GPUs -- Enabling/disabling pipeline parallelism -- NLP model serving -- Summary -- Chapter 8: Achieving Higher Throughput and Lower Latency -- Technical requirements…" E-book