Machine Learning: A Bayesian and Optimization Perspective
This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches, which are based on optimization techniques, together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models. The book...
| Other authors: | |
|---|---|
| Format: | Electronic book |
| Language: | English |
| Published: | Amsterdam, [Netherlands]: Academic Press, 2015 |
| Edition: | First edition |
| Series: | .NET Developers Series |
| Subjects: | |
| View at Biblioteca Universitat Ramon Llull: | https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009628937406719 |
Table of Contents:
- Front Cover; Machine Learning: A Bayesian and Optimization Perspective; Copyright; Contents; Preface; Acknowledgments; Notation; Dedication
- Chapter 1: Introduction; 1.1 What Machine Learning is About; 1.1.1 Classification; 1.1.2 Regression; 1.2 Structure and a Road Map of the Book; References
- Chapter 2: Probability and Stochastic Processes; 2.1 Introduction; 2.2 Probability and Random Variables; 2.2.1 Probability; Relative frequency definition; Axiomatic definition; 2.2.2 Discrete Random Variables; Joint and conditional probabilities; Bayes theorem; 2.2.3 Continuous Random Variables; 2.2.4 Mean and Variance; Complex random variables; 2.2.5 Transformation of Random Variables; 2.3 Examples of Distributions; 2.3.1 Discrete Variables; The Bernoulli distribution; The Binomial distribution; The Multinomial distribution; 2.3.2 Continuous Variables; The uniform distribution; The Gaussian distribution; The central limit theorem; The exponential distribution; The beta distribution; The gamma distribution; The Dirichlet distribution; 2.4 Stochastic Processes; 2.4.1 First and Second Order Statistics; 2.4.2 Stationarity and Ergodicity; 2.4.3 Power Spectral Density; Properties of the autocorrelation sequence; Power spectral density; Transmission through a linear system; Physical interpretation of the PSD; 2.4.4 Autoregressive Models; 2.5 Information Theory; 2.5.1 Discrete Random Variables; Information; Mutual and conditional information; Entropy and average mutual information; 2.5.2 Continuous Random Variables; Average mutual information and conditional information; Relative entropy or Kullback-Leibler divergence; 2.6 Stochastic Convergence; Convergence everywhere; Convergence almost everywhere; Convergence in the mean-square sense; Convergence in probability; Convergence in distribution; Problems; References
- Chapter 3: Learning in Parametric Modeling: Basic Concepts and Directions; 3.1 Introduction; 3.2 Parameter Estimation: The Deterministic Point of View; 3.3 Linear Regression; 3.4 Classification; Generative versus discriminative learning; Supervised, semisupervised, and unsupervised learning; 3.5 Biased Versus Unbiased Estimation; 3.5.1 Biased or Unbiased Estimation?; 3.6 The Cramér-Rao Lower Bound; 3.7 Sufficient Statistic; 3.8 Regularization; Inverse problems: Ill-conditioning and overfitting; 3.9 The Bias-Variance Dilemma; 3.9.1 Mean-Square Error Estimation; 3.9.2 Bias-Variance Tradeoff; 3.10 Maximum Likelihood Method; 3.10.1 Linear Regression: The Nonwhite Gaussian Noise Case; 3.11 Bayesian Inference; 3.11.1 The Maximum A Posteriori Probability Estimation Method; 3.12 Curse of Dimensionality; 3.13 Validation; Cross-validation; 3.14 Expected and Empirical Loss Functions; 3.15 Nonparametric Modeling and Estimation; Problems; References
- Chapter 4: Mean-Square Error Linear Estimation; 4.1 Introduction; 4.2 Mean-Square Error Linear Estimation: The Normal Equations; 4.2.1 The Cost Function Surface