Strengthening deep neural networks: making AI less susceptible to adversarial trickery
As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately "fool" them with data that wouldn’t trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs—the algorithms intrinsic to much...
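
To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one widely used way to craft such adversarial input. The PyTorch code and every name in it (`fgsm_perturb`, `model`, `loss_fn`) are illustrative assumptions, not material taken from the book:

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    """Hypothetical FGSM sketch: craft an adversarial example.

    Nudges each input feature by +/- epsilon in the direction that
    increases the model's loss - small enough that a human still sees
    the original input, yet often enough to flip the prediction.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back to
    # the valid input range (inputs assumed scaled to [0, 1]).
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Here `epsilon` governs the trade-off: larger values fool the model more reliably but make the perturbation visible to humans.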
Other Authors: | |
---|---|
Format: | E-book |
Language: | English |
Published: | Beijing : O'Reilly, [2019] |
Edition: | 1st edition |
Subjects: | |
View at Universitat Ramon Llull Library: | https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009630820806719 |
Table of Contents:
- Part 1. An introduction to fooling AI
  - Introduction
  - Attack motivations
  - Deep neural network (DNN) fundamentals
  - DNN processing for image, audio, and video
- Part 2. Generating adversarial input
  - The principles of adversarial input
  - Methods for generating adversarial perturbation
- Part 3. Understanding the real-world threat
  - Attack patterns for real-world systems
  - Physical-world attacks
- Part 4. Defense
  - Evaluating model robustness to adversarial inputs
  - Defending against adversarial inputs
  - Future trends: toward robust AI
- Mathematics terminology reference