Deep Learning Through Sparse and Low-Rank Modeling
Deep Learning through Sparse Representation and Low-Rank Modeling bridges classical sparse and low-rank models, which emphasize problem-specific interpretability, with recent deep network models that have enabled a larger learning capacity and better utilization of Big Data. It shows how the tool...
| Other Authors: | |
| --- | --- |
| Format: | eBook |
| Language: | English |
| Published: | London : Academic Press, [2019] |
| Edition: | First edition |
| Subjects: | |
| View at Biblioteca Universitat Ramon Llull: | https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009630528206719 |
Table of Contents:
- Front Cover
- Deep Learning Through Sparse and Low-Rank Modeling
- Copyright
- Contents
- Contributors
- About the Editors
- Preface
- Acknowledgments
- 1 Introduction
- 1.1 Basics of Deep Learning
- 1.2 Basics of Sparsity and Low-Rankness
- 1.3 Connecting Deep Learning to Sparsity and Low-Rankness
- 1.4 Organization
- References
- 2 Bi-Level Sparse Coding: A Hyperspectral Image Classification Example
- 2.1 Introduction
- 2.2 Formulation and Algorithm
- 2.2.1 Notations
- 2.2.2 Joint Feature Extraction and Classification
- 2.2.2.1 Sparse Coding for Feature Extraction
- 2.2.2.2 Task-Driven Functions for Classification
- 2.2.2.3 Spatial Laplacian Regularization
- 2.2.3 Bi-level Optimization Formulation
- 2.2.4 Algorithm
- 2.2.4.1 Stochastic Gradient Descent
- 2.2.4.2 Sparse Reconstruction
- 2.3 Experiments
- 2.3.1 Classification Performance on AVIRIS Indian Pines Data
- 2.3.2 Classification Performance on AVIRIS Salinas Data
- 2.3.3 Classification Performance on University of Pavia Data
- 2.4 Conclusion
- 2.5 Appendix
- References
- 3 Deep l0 Encoders: A Model Unfolding Example
- 3.1 Introduction
- 3.2 Related Work
- 3.2.1 l0- and l1-Based Sparse Approximations
- 3.2.2 Network Implementation of l1-Approximation
- 3.3 Deep l0 Encoders
- 3.3.1 Deep l0-Regularized Encoder
- 3.3.2 Deep M-Sparse l0 Encoder
- 3.3.3 Theoretical Properties
- 3.4 Task-Driven Optimization
- 3.5 Experiment
- 3.5.1 Implementation
- 3.5.2 Simulation on l0 Sparse Approximation
- 3.5.3 Applications on Classification
- 3.5.4 Applications on Clustering
- 3.6 Conclusions and Discussions on Theoretical Properties
- References
- 4 Single Image Super-Resolution: From Sparse Coding to Deep Learning
- 4.1 Robust Single Image Super-Resolution via Deep Networks with Sparse Prior
- 4.1.1 Introduction
- 4.1.2 Related Work
- 4.1.3 Sparse Coding Based Network for Image SR
- 4.1.3.1 Image SR Using Sparse Coding
- 4.1.3.2 Network Implementation of Sparse Coding
- 4.1.3.3 Network Architecture of SCN
- 4.1.3.4 Advantages over Previous Models
- 4.1.4 Network Cascade for Scalable SR
- 4.1.4.1 Network Cascade for SR of a Fixed Scaling Factor
- 4.1.4.2 Network Cascade for Scalable SR
- 4.1.4.3 Training Cascade of Networks
- 4.1.5 Robust SR for Real Scenarios
- 4.1.5.1 Data-Driven SR by Fine-Tuning
- 4.1.5.2 Iterative SR with Regularization
- Blurry Image Upscaling
- Noisy Image Upscaling
- 4.1.6 Implementation Details
- 4.1.7 Experiments
- 4.1.7.1 Algorithm Analysis
- 4.1.7.2 Comparison with State-of-the-Art
- 4.1.7.3 Robustness to Real SR Scenarios
- Data-Driven SR by Fine-Tuning
- Regularized Iterative SR
- 4.1.8 Subjective Evaluation
- 4.1.9 Conclusion and Future Work
- 4.2 Learning a Mixture of Deep Networks for Single Image Super-Resolution
- 4.2.1 Introduction
- 4.2.2 The Proposed Method
- 4.2.3 Implementation Details
- 4.2.4 Experimental Results
- 4.2.4.1 Network Architecture Analysis
- 4.2.4.2 Comparison with State-of-the-Art
- 4.2.4.3 Runtime Analysis
- 4.2.5 Conclusion and Future Work
- References
- 5 From Bi-Level Sparse Clustering to Deep Clustering
- 5.1 A Joint Optimization Framework of Sparse Coding and Discriminative Clustering
- 5.1.1 Introduction
- 5.1.2 Model Formulation
- 5.1.2.1 Sparse Coding with Graph Regularization
- 5.1.2.2 Bi-level Optimization Formulation
- 5.1.3 Clustering-Oriented Cost Functions
- 5.1.3.1 Entropy-Minimization Loss
- 5.1.3.2 Maximum-Margin Loss
- 5.1.4 Experiments
- 5.1.4.1 Datasets
- 5.1.4.2 Evaluation Metrics
- 5.1.4.3 Comparison Experiments
- Comparison Methods
- Comparison Analysis
- Varying the Number of Clusters
- Initialization and Parameters
- 5.1.5 Conclusion
- 5.1.6 Appendix
- 5.2 Learning a Task-Specific Deep Architecture for Clustering
- 5.2.1 Introduction
- 5.2.2 Related Work
- 5.2.2.1 Sparse Coding for Clustering
- 5.2.2.2 Deep Learning for Clustering
- 5.2.3 Model Formulation
- 5.2.3.1 TAGnet: Task-specific And Graph-regularized Network
- 5.2.3.2 Clustering-Oriented Loss Functions
- 5.2.3.3 Connections to Existing Models
- 5.2.4 A Deeper Look: Hierarchical Clustering by DTAGnet
- 5.2.5 Experiment Results
- 5.2.5.1 Datasets and Measurements
- 5.2.5.2 Experiment Settings
- 5.2.5.3 Comparison Experiments and Analysis
- Benefits of the Task-specific Deep Architecture
- Effects of Graph Regularization
- Scalability and Robustness
- 5.2.5.4 Hierarchical Clustering on CMU MultiPIE
- 5.2.6 Conclusion
- References
- 6 Signal Processing
- 6.1 Deeply Optimized Compressive Sensing
- 6.1.1 Background
- 6.1.2 An End-to-End Optimization Model of CS
- 6.1.3 DOCS: Feed-Forward and Jointly Optimized CS
- Complexity
- Related Work
- 6.1.4 Experiments
- Settings
- Simulation
- Reconstruction Error
- Efficiency
- Experiments on Image Reconstruction
- 6.1.5 Conclusion
- 6.2 Deep Learning for Speech Denoising
- 6.2.1 Introduction
- 6.2.2 Neural Networks for Spectral Denoising
- 6.2.2.1 Network Architecture
- 6.2.2.2 Implementation Details
- Activation Function
- Cost Function
- Training Strategy
- 6.2.2.3 Extracting Denoised Signals
- 6.2.2.4 Dealing with Gain
- 6.2.3 Experimental Results
- 6.2.3.1 Experimental Setup
- 6.2.3.2 Network Structure Analysis
- 6.2.3.3 Analysis of Robustness to Variations
- 6.2.3.4 Comparison with NMF
- 6.2.4 Conclusion and Future Work
- References
- 7 Dimensionality Reduction
- 7.1 Marginalized Denoising Dictionary Learning with Locality Constraint
- 7.1.1 Introduction
- 7.1.2 Related Works
- 7.1.2.1 Dictionary Learning
- 7.1.2.2 Auto-encoder
- 7.1.3 Marginalized Denoising Dictionary Learning with Locality Constraint
- 7.1.3.1 Preliminaries and Motivations
- 7.1.3.2 LC-LRD Revisited
- 7.1.3.3 Marginalized Denoising Auto-encoder (mDA)
- 7.1.3.4 Proposed MDDL Model
- 7.1.3.5 Optimization
- 7.1.3.6 Classification Based on MDDL
- 7.1.4 Experiments
- 7.1.4.1 Experimental Settings
- 7.1.4.2 Face Recognition
- 7.1.4.3 Object Recognition
- 7.1.4.4 Digits Recognition
- 7.1.5 Conclusion
- 7.1.6 Future Works
- 7.2 Learning a Deep l∞ Encoder for Hashing
- 7.2.1 Introduction
- 7.2.1.1 Problem Definition and Background
- 7.2.1.2 Related Work
- 7.2.2 ADMM Algorithm
- 7.2.3 Deep l∞ Encoder
- 7.2.4 Deep l∞ Siamese Network for Hashing
- 7.2.5 Experiments in Image Hashing
- 7.2.6 Conclusion
- References
- 8 Action Recognition
- 8.1 Deeply Learned View-Invariant Features for Cross-View Action Recognition
- 8.1.1 Introduction
- 8.1.2 Related Work
- 8.1.3 Deeply Learned View-Invariant Features
- 8.1.3.1 Sample-Affinity Matrix (SAM)
- 8.1.3.2 Preliminaries on Autoencoders
- 8.1.3.3 Single-Layer Feature Learning
- 8.1.3.4 Learning
- 8.1.3.5 Deep Architecture
- 8.1.4 Experiments
- 8.1.4.1 IXMAS Dataset
- One-to-One Cross-view Action Recognition
- Many-to-One Cross-view Action Recognition
- 8.1.4.2 Daily and Sports Activities Data Set
- Many-to-One Cross-view Action Classification
- 8.2 Hybrid Neural Network for Action Recognition from Depth Cameras
- 8.2.1 Introduction
- 8.2.2 Related Work
- 8.2.3 Hybrid Convolutional-Recursive Neural Networks
- 8.2.3.1 Architecture Overview
- 8.2.3.2 3D Convolutional Neural Networks
- 8.2.3.3 3D Recursive Neural Networks
- 8.2.3.4 Multiple 3D-RNNs
- 8.2.3.5 Model Learning
- 8.2.3.6 Classification
- 8.2.4 Experiments
- 8.2.4.1 MSR-Gesture3D Dataset
- 8.2.4.2 MSR-Action3D Dataset
- 8.3 Summary
- References
- 9 Style Recognition and Kinship Understanding
- 9.1 Style Classification by Deep Learning
- 9.1.1 Background
- 9.1.2 Preliminary Knowledge of Stacked Autoencoder (SAE)
- 9.1.3 Style Centralizing Autoencoder
- 9.1.3.1 One Layer Basic SCAE
- 9.1.3.2 Stacked SCAE (SSCAE)
- 9.1.3.3 Visualization of Encoded Feature in SCAE
- 9.1.3.4 Geometric Interpretation of SCAE
- 9.1.4 Consensus Style Centralizing Autoencoder
- 9.1.4.1 Low-Rank Constraint on the Model
- 9.1.4.2 Group Sparsity Constraint on the Model
- 9.1.4.3 Rank-Constrained Group Sparsity Autoencoder
- 9.1.4.4 Efficient Solutions for RCGSAE
- 9.1.4.5 Progressive CSCAE
- 9.1.5 Experiments
- 9.1.5.1 Dataset
- 9.1.5.2 Compared Methods
- 9.1.5.3 Experimental Results
- 9.2 Visual Kinship Understanding
- 9.2.1 Background
- 9.2.2 Related Work
- 9.2.3 Family Faces
- 9.2.4 Regularized Parallel Autoencoders
- 9.2.4.1 Problem Formulation
- 9.2.4.2 Low-Rank Reframing
- 9.2.4.3 Solution
- 9.2.5 Experimental Results
- 9.2.5.1 Kinship Verification
- 9.2.5.2 Family Membership Recognition
- 9.3 Research Challenges and Future Works
- References
- 10 Image Dehazing: Improved Techniques
- 10.1 Introduction
- 10.2 Review and Task Description
- 10.2.1 Haze Modeling and Dehazing Approaches
- 10.2.2 RESIDE Dataset
- 10.3 Task 1: Dehazing as Restoration
- 10.4 Task 2: Dehazing for Detection
- 10.4.1 Solution Set 1: Enhancing Dehazing and/or Detection Modules in the Cascade
- 10.4.2 Solution Set 2: Domain-Adaptive Mask-RCNN
- Experiments
- 10.5 Conclusion
- References
- 11 Biomedical Image Analytics: Automated Lung Cancer Diagnosis
- 11.1 Introduction
- 11.2 Related Work
- 11.3 Methodology
- Metrics for Scoring Images
- 11.4 Experiments
- 11.5 Conclusion
- Acknowledgments
- References
- Index
- Back Cover