Statistical Signal Processing in Engineering

A problem-solving approach to statistical signal processing for practicing engineers, technicians, and graduate students. This book takes a pragmatic approach to solving a set of common problems that engineers and technicians encounter when processing signals. In writing it, the author drew on his vast t...


Bibliographic Details
Other Authors: Spagnolini, Umberto, author
Format: Electronic book
Language: English
Published: Hoboken, New Jersey: Wiley, 2018.
Edition: 1st edition
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009631502906719
Table of Contents:
  • Intro
  • Title Page
  • Copyright Page
  • Contents
  • List of Figures
  • List of Tables
  • Preface
  • List of Abbreviations
  • How to Use the Book
  • About the Companion Website
  • Prerequisites
  • Why are there so many matrixes in this book?
  • Chapter 1 Manipulations on Matrixes
  • 1.1 Matrix Properties
  • 1.1.1 Elementary Operations
  • 1.2 Eigen-Decompositions
  • 1.3 Eigenvectors in Everyday Life
  • 1.3.1 Conversations in a Noisy Restaurant
  • 1.3.2 Power Control in a Cellular Communication System
  • 1.3.3 Price Equilibrium in the Economy
  • 1.4 Derivative Rules
  • 1.4.1 Derivative with respect to x ∈ R^n
  • 1.4.2 Derivative with respect to x ∈ C^n
  • 1.4.3 Derivative with respect to the Matrix X ∈ R^(m×n)
  • 1.5 Quadratic Forms
  • 1.6 Diagonalization of a Quadratic Form
  • 1.7 Rayleigh Quotient
  • 1.8 Basics of Optimization
  • 1.8.1 Quadratic Function with Simple Linear Constraint (M=1)
  • 1.8.2 Quadratic Function with Multiple Linear Constraints
  • Appendix A: Arithmetic vs. Geometric Mean
  • Chapter 2 Linear Algebraic Systems
  • 2.1 Problem Definition and Vector Spaces
  • 2.1.1 Vector Spaces in Tomographic Radiometric Inversion
  • 2.2 Rotations
  • 2.3 Projection Matrixes and Data-Filtering
  • 2.3.1 Projections and Commercial FM Radio
  • 2.4 Singular Value Decomposition (SVD) and Subspaces
  • 2.4.1 How to Choose the Rank of A?
  • 2.5 QR and Cholesky Factorization
  • 2.6 Power Method for Leading Eigenvectors
  • 2.7 Least Squares Solution of Overdetermined Linear Equations
  • 2.8 Efficient Implementation of the LS Solution
  • 2.9 Iterative Methods
  • Chapter 3 Random Variables in Brief
  • 3.1 Probability Density Function (pdf), Moments, and Other Useful Properties
  • 3.2 Convexity and Jensen Inequality
  • 3.3 Uncorrelatedness and Statistical Independence
  • 3.4 Real-Valued Gaussian Random Variables
  • 3.5 Conditional pdf for Real-Valued Gaussian Random Variables
  • 3.6 Conditional pdf in Additive Noise Model
  • 3.7 Complex Gaussian Random Variables
  • 3.7.1 Single Complex Gaussian Random Variable
  • 3.7.2 Circular Complex Gaussian Random Variable
  • 3.7.3 Multivariate Complex Gaussian Random Variables
  • 3.8 Sum of Square of Gaussians: Chi-Square
  • 3.9 Order Statistics for N rvs
  • Chapter 4 Random Processes and Linear Systems
  • 4.1 Moment Characterizations and Stationarity
  • 4.2 Random Processes and Linear Systems
  • 4.3 Complex-Valued Random Processes
  • 4.4 Pole-Zero and Rational Spectra (Discrete-Time)
  • 4.4.1 Stability of LTI Systems
  • 4.4.2 Rational PSD
  • 4.4.3 Paley-Wiener Theorem
  • 4.5 Gaussian Random Process (Discrete-Time)
  • 4.6 Measuring Moments in Stochastic Processes
  • Appendix A: Transforms for Continuous-Time Signals
  • Appendix B: Transforms for Discrete-Time Signals
  • Chapter 5 Models and Applications
  • 5.1 Linear Regression Model
  • 5.2 Linear Filtering Model
  • 5.2.1 Block-Wise Circular Convolution
  • 5.2.2 Discrete Fourier Transform and Circular Convolution Matrixes
  • 5.2.3 Identification and Deconvolution
  • 5.3 MIMO Systems and Interference Models
  • 5.3.1 DSL System
  • 5.3.2 MIMO in Wireless Communication
  • 5.4 Sinusoidal Signal
  • 5.5 Irregular Sampling and Interpolation
  • 5.5.1 Sampling With Jitter
  • 5.6 Wavefield Sensing System
  • Chapter 6 Estimation Theory
  • 6.1 Historical Notes
  • 6.2 Non-Bayesian vs. Bayesian
  • 6.3 Performance Metrics and Bounds
  • 6.3.1 Bias
  • 6.3.2 Mean Square Error (MSE)
  • 6.3.3 Performance Bounds
  • 6.4 Statistics and Sufficient Statistics
  • 6.5 MVU and BLU Estimators
  • 6.6 BLUE for Linear Models
  • 6.7 Example: BLUE of the Mean Value of Gaussian rvs
  • Chapter 7 Parameter Estimation
  • 7.1 Maximum Likelihood Estimation (MLE)
  • 7.2 MLE for Gaussian Model x ∼ N(μ(θ), C(θ))
  • 7.2.1 Additive Noise Model x = s(θ) + w with w ∼ N(0, C_w)
  • 7.2.2 Additive Noise Model x = H(ω)·α + w with w ∼ N(0, C_w)
  • 7.2.3 Additive Noise Model with Multiple Observations x = s(θ) + w with w ∼ N(0, C_w), C_w Known
  • 7.2.3.1 Linear Model x = H·θ + w
  • 7.2.3.2 Model x = α·s + w
  • 7.2.3.3 Model x = H(ω)·α + w
  • 7.2.4 Model x ∼ N(0, C(θ))
  • 7.2.5 Additive Noise Model with Multiple Observations x = s(θ) + w with w ∼ N(0, C_w), C_w Unknown
  • 7.3 Other Noise Models
  • 7.4 MLE and Nuisance Parameters
  • 7.5 MLE for Continuous-Time Signals
  • 7.5.1 Example: Amplitude Estimation
  • 7.5.2 MLE for Correlated Noise S_w(f)
  • 7.6 MLE for Circular Complex Gaussian
  • 7.7 Estimation in Phase/Frequency Modulations
  • 7.7.1 MLE Phase Estimation
  • 7.7.2 Phase Locked Loops
  • 7.8 Least Squares (LS) Estimation
  • 7.8.1 Weighted LS with W = diag{c_1, c_2, ..., c_N}
  • 7.8.2 LS Estimation and Linear Models
  • 7.8.3 Under or Over-Parameterizing?
  • 7.8.4 Constrained LS Estimation
  • 7.9 Robust Estimation
  • Chapter 8 Cramér-Rao Bound
  • 8.1 Cramér-Rao Bound and Fisher Information Matrix
  • 8.1.1 CRB for Scalar Problem (P=1)
  • 8.1.2 CRB and Local Curvature of Log-Likelihood
  • 8.1.3 CRB for Multiple Parameters (p≥1)
  • 8.2 Interpretation of CRB and Remarks
  • 8.2.1 Variance of Each Parameter
  • 8.2.2 Compactness of the Estimates
  • 8.2.3 FIM for Known Parameters
  • 8.2.4 Approximation of the Inverse of FIM
  • 8.2.5 Estimation Decoupled From FIM
  • 8.2.6 CRB and Nuisance Parameters
  • 8.2.7 CRB for Non-Gaussian rv and Gaussian Bound
  • 8.3 CRB and Variable Transformations
  • 8.4 FIM for Gaussian Parametric Model x ∼ N(μ(θ), C(θ))
  • 8.4.1 FIM for x = s(θ) + w with w ∼ N(0, C_w)
  • 8.4.2 FIM for Continuous-Time Signals in Additive White Gaussian Noise
  • 8.4.3 FIM for Circular Complex Model
  • Appendix A: Proof of CRB
  • Appendix B: FIM for Gaussian Model
  • Appendix C: Some Derivatives for MLE and CRB Computations
  • Chapter 9 MLE and CRB for Some Selected Cases
  • 9.1 Linear Regressions
  • 9.2 Frequency Estimation x[n] = a_o cos(ω_0 n + ϕ_o) + w[n]
  • 9.3 Estimation of Complex Sinusoid
  • 9.3.1 Proper, Improper, and Non-Circular Signals
  • 9.4 Time of Delay Estimation
  • 9.5 Estimation of Max for Uniform pdf
  • 9.6 Estimation of Occurrence Probability for Binary pdf
  • 9.7 How to Optimize Histograms?
  • 9.8 Logistic Regression
  • Chapter 10 Numerical Analysis and Montecarlo Simulations
  • 10.1 System Identification and Channel Estimation
  • 10.1.1 Matlab Code and Results
  • 10.2 Frequency Estimation
  • 10.2.1 Variable (Coarse/Fine) Sampling
  • 10.2.2 Local Parabolic Regression
  • 10.2.3 Matlab Code and Results
  • 10.3 Time of Delay Estimation
  • 10.3.1 Granularity of Sampling in ToD Estimation
  • 10.3.2 Matlab Code and Results
  • 10.4 Doppler-Radar System by Frequency Estimation
  • 10.4.1 EM Method
  • 10.4.2 Matlab Code and Results
  • Chapter 11 Bayesian Estimation
  • 11.1 Additive Linear Model with Gaussian Noise
  • 11.1.1 Gaussian A-Priori: θ ∼ N(θ̄, σ_θ²)
  • 11.1.2 Non-Gaussian A-Priori
  • 11.1.3 Binary Signals: MMSE vs. MAP Estimators
  • 11.1.4 Example: Impulse Noise Mitigation
  • 11.2 Bayesian Estimation in Gaussian Settings
  • 11.2.1 MMSE Estimator
  • 11.2.2 MMSE Estimator for Linear Models
  • 11.3 LMMSE Estimation and Orthogonality
  • 11.4 Bayesian CRB
  • 11.5 Mixing Bayesian and Non-Bayesian
  • 11.5.1 Linear Model with Mixed Random/Deterministic Parameters
  • 11.5.2 Hybrid CRB
  • 11.6 Expectation-Maximization (EM)
  • 11.6.1 EM of the Sum of Signals in Gaussian Noise
  • 11.6.2 EM Method for the Time of Delay Estimation of Multiple Waveforms
  • 11.6.3 Remarks
  • Chapter 12 Optimal Filtering
  • 12.1 Wiener Filter
  • 12.2 MMSE Deconvolution (or Equalization)
  • 12.3 Linear Prediction
  • 12.3.1 Yule-Walker Equations
  • 12.4 LS Linear Prediction
  • 12.5 Linear Prediction and AR Processes
  • 12.6 Levinson Recursion and Lattice Predictors
  • Chapter 13 Bayesian Tracking and Kalman Filter
  • 13.1 Bayesian Tracking of State in Dynamic Systems
  • 13.1.1 Evolution of the A-Posteriori pdf
  • 13.2 Kalman Filter (KF)
  • 13.2.1 KF Equations
  • 13.2.2 Remarks
  • 13.3 Identification of Time-Varying Filters in Wireless Communication
  • 13.4 Extended Kalman Filter (EKF) for Non-Linear Dynamic Systems
  • 13.5 Position Tracking by Multi-Lateration
  • 13.5.1 Positioning and Noise
  • 13.5.2 Example of Position Tracking
  • 13.6 Non-Gaussian Pdf and Particle Filters
  • Chapter 14 Spectral Analysis
  • 14.1 Periodogram
  • 14.1.1 Bias of the Periodogram
  • 14.1.2 Variance of the Periodogram
  • 14.1.3 Filterbank Interpretation
  • 14.1.4 Pdf of the Periodogram (White Gaussian Process)
  • 14.1.5 Bias and Resolution
  • 14.1.6 Variance Reduction and WOSA
  • 14.1.7 Numerical Example: Bandlimited Process and (Small) Sinusoid
  • 14.2 Parametric Spectral Analysis
  • 14.2.1 MLE and CRB
  • 14.2.2 General Model for AR, MA, ARMA Spectral Analysis
  • 14.3 AR Spectral Analysis
  • 14.3.1 MLE and CRB
  • 14.3.2 A Good Reason to Avoid Over-Parametrization in AR
  • 14.3.3 Cramér-Rao Bound of Poles in AR Spectral Analysis
  • 14.3.4 Example: Frequency Estimation by AR Spectral Analysis
  • 14.4 MA Spectral Analysis
  • 14.5 ARMA Spectral Analysis
  • 14.5.1 Cramér-Rao Bound for ARMA Spectral Analysis
  • Appendix A: Which Sample Estimate of the Autocorrelation to Use?
  • Appendix B: Eigenvectors and Eigenvalues of Correlation Matrix
  • Appendix C: Property of Monic Polynomial
  • Appendix D: Variance of Pole in AR(1)
  • Chapter 15 Adaptive Filtering
  • 15.1 Adaptive Interference Cancellation
  • 15.2 Adaptive Equalization in Communication Systems
  • 15.2.1 Wireless Communication Systems in Brief
  • 15.2.2 Adaptive Equalization
  • 15.3 Steepest Descent MSE Minimization