Mixture Models: Parametric, Semiparametric, and New Directions

"Mixture models are a powerful tool for analyzing complex and heterogeneous datasets across many scientific fields, from finance to genomics. *Mixture Models: Parametric, Semiparametric, and New Directions* provides an up-to-date introduction to these models, their recent developments, and their..."


Bibliographic Details
Other Authors: Yao, Weixin (author); Xiang, Sijia (author)
Format: eBook
Language: English
Published: Boca Raton: CRC Press, 2024
Edition: First edition
Series: Monographs on Statistics and Applied Probability; 175
See on Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009825853506719
Table of Contents:
  • Cover
  • Half Title
  • Series Page
  • Title Page
  • Copyright Page
  • Dedication
  • Contents
  • Preface
  • Symbols
  • Authors
  • 1. Introduction to mixture models
  • 1.1. Introduction
  • 1.2. Formulations of mixture models
  • 1.3. Identifiability
  • 1.4. Maximum likelihood estimation
  • 1.5. EM algorithm
  • 1.5.1. Introduction of EM algorithm
  • 1.5.2. EM algorithm for mixture models
  • 1.5.3. Rate of convergence of the EM algorithm
  • 1.5.4. Classification EM algorithms
  • 1.5.5. Stochastic EM algorithms
  • 1.5.6. ECM algorithm and some other extensions
  • 1.5.7. Initial values
  • 1.5.8. Stopping rules
  • 1.6. Some applications of EM algorithm
  • 1.6.1. Mode estimation
  • 1.6.2. Maximize a mixture type objective function
  • 1.7. Multivariate normal mixtures
  • 1.7.1. Introduction
  • 1.7.2. Parsimonious multivariate normal mixture modeling
  • 1.8. The topography of finite normal mixture models
  • 1.8.1. Shapes of some univariate normal mixtures
  • 1.8.2. The topography of multivariate normal mixtures
  • 1.9. Unboundedness of normal mixture likelihood
  • 1.9.1. Restricted MLE
  • 1.9.2. Penalized likelihood estimator
  • 1.9.3. Profile likelihood method
  • 1.10. Consistent root selections for mixture models
  • 1.11. Mixture of skewed distributions
  • 1.11.1. Multivariate skew-symmetric distributions
  • 1.11.2. Finite mixtures of skew distributions
  • 1.12. Semi-supervised mixture models
  • 1.13. Nonparametric maximum likelihood estimate
  • 1.13.1. Introduction
  • 1.13.2. Computations of NPMLE
  • 1.13.3. Normal scale mixture models
  • 1.14. Mixture models for matrix data
  • 1.14.1. Finite mixtures of matrix normal distributions
  • 1.14.2. Parsimonious models for modeling matrix data
  • 1.15. Fitting mixture models using R
  • 2. Mixture models for discrete data
  • 2.1. Mixture models for categorical data
  • 2.1.1. Mixture of ranking data
  • 2.1.2. Mixture of multinomial distributions
  • 2.2. Mixture models for counting data
  • 2.2.1. Mixture models for univariate counting data
  • 2.2.2. Count data with excess zeros
  • 2.2.3. Mixture models for multivariate counting data
  • 2.3. Hidden Markov models
  • 2.3.1. The EM algorithm
  • 2.3.2. The forward-backward algorithm
  • 2.4. Latent class models
  • 2.4.1. Introduction
  • 2.4.2. Latent class models
  • 2.4.3. Latent class with covariates
  • 2.4.4. Latent class regression
  • 2.4.5. Latent class model with random effect
  • 2.4.6. Bayes latent class analysis
  • 2.4.7. Multi-level latent class model
  • 2.4.8. Latent transition analysis
  • 2.4.9. Case study
  • 2.4.10. Further reading
  • 2.5. Mixture models for mixed data
  • 2.5.1. Location mixture model
  • 2.5.2. Mixture of latent variable models
  • 2.5.3. The MFA-MD model
  • 2.5.4. The clustMD model
  • 2.6. Fitting mixture models for discrete data using R
  • 3. Mixture regression models
  • 3.1. Mixtures of linear regression models
  • 3.2. Unboundedness of mixture regression likelihood
  • 3.3. Mixture of experts model
  • 3.4. Mixture of generalized linear models
  • 3.4.1. Generalized linear models
  • 3.4.2. Mixtures of generalized linear models
  • 3.5. Hidden Markov model regression
  • 3.5.1. Model setting
  • 3.5.2. Estimation algorithm
  • 3.6. Mixtures of linear mixed-effects models
  • 3.7. Mixtures of multivariate regressions
  • 3.7.1. Multivariate regressions with normal mixture errors
  • 3.7.2. Parameter estimation
  • 3.7.3. Mixtures of multivariate regressions
  • 3.8. Seemingly unrelated clusterwise linear regression
  • 3.8.1. Mixtures of Gaussian seemingly unrelated linear regression models
  • 3.8.2. Maximum likelihood estimation
  • 3.9. Fitting mixture regression models using R
  • 4. Bayesian mixture models
  • 4.1. Introduction
  • 4.2. Markov chain Monte Carlo methods
  • 4.2.1. Hastings-Metropolis algorithm
  • 4.2.2. Gibbs sampler
  • 4.3. Bayesian approach to mixture analysis
  • 4.4. Conjugate priors for Bayesian mixture models
  • 4.5. Bayesian normal mixture models
  • 4.5.1. Bayesian univariate normal mixture models
  • 4.5.2. Bayesian multivariate normal mixture models
  • 4.6. Improper priors
  • 4.7. Bayesian mixture models with unknown numbers of components
  • 4.8. Fitting Bayesian mixture models using R
  • 5. Label switching for mixture models
  • 5.1. Introduction of label switching
  • 5.2. Loss functions-based relabeling methods
  • 5.2.1. KL algorithm
  • 5.2.2. The K-means method
  • 5.2.3. The determinant-based loss
  • 5.2.4. Asymptotic normal likelihood method
  • 5.3. Modal relabeling methods
  • 5.3.1. Ideal labeling based on the highest posterior density region
  • 5.3.2. Introduction of the HPD modal labeling method
  • 5.3.3. ECM algorithm
  • 5.3.4. HPD modal labeling credibility
  • 5.4. Soft probabilistic relabeling methods
  • 5.4.1. Model-based labeling
  • 5.4.2. Probabilistic relabeling strategies
  • 5.5. Label switching for frequentist mixture models
  • 5.6. Solving label switching for mixture models using R
  • 6. Hypothesis testing and model selection for mixture models
  • 6.1. Likelihood ratio tests for mixture models
  • 6.2. LRT based on bootstrap
  • 6.3. Information criteria in model selection
  • 6.4. Cross-validation for mixture models
  • 6.5. Penalized mixture models
  • 6.6. EM-test for finite mixture models
  • 6.6.1. EM-test in single parameter component density
  • 6.6.2. EM-test in normal mixture models with equal variance
  • 6.6.3. EM-test in normal mixture models with unequal variance
  • 6.7. Hypothesis testing based on goodness-of-fit tests
  • 6.8. Model selection for mixture models using R
  • 7. Robust mixture regression models
  • 7.1. Robust linear regression
  • 7.1.1. M-estimators
  • 7.1.2. Generalized M-estimators (GM-estimators)
  • 7.1.3. R-estimators
  • 7.1.4. LMS estimators
  • 7.1.5. LTS estimators
  • 7.1.6. S-estimators
  • 7.1.7. Generalized S-estimators (GS-estimators)
  • 7.1.8. MM-estimators
  • 7.1.9. Robust and efficient weighted least squares estimator
  • 7.1.10. Robust regression based on regularization of case-specific parameters
  • 7.1.11. Summary
  • 7.2. Robust estimator based on a modified EM algorithm
  • 7.3. Robust mixture modeling by heavy-tailed error densities
  • 7.3.1. Robust mixture regression using t-distribution
  • 7.3.2. Robust mixture regression using Laplace distribution
  • 7.4. Scale mixtures of skew-normal distributions
  • 7.5. Robust EM-type algorithm for log-concave mixture regression models
  • 7.6. Robust estimators based on trimming
  • 7.7. Robust mixture regression modeling by cluster-weighted modeling
  • 7.8. Robust mixture regression via mean-shift penalization
  • 7.9. Some numerical comparisons
  • 7.10. Fitting robust mixture regression models using R
  • 8. Mixture models for high-dimensional data
  • 8.1. Challenges of high-dimensional mixture models
  • 8.2. Mixtures of factor analyzers
  • 8.2.1. Factor analysis (FA)
  • 8.2.2. Mixtures of factor analyzers (MFA)
  • 8.2.3. Parsimonious mixtures of factor analyzers
  • 8.3. Model-based clustering based on reduced projections
  • 8.3.1. Clustering with envelope mixture models
  • 8.3.2. Envelope EM algorithm for CLEMM
  • 8.4. Regularized mixture modeling
  • 8.5. Subspace methods for mixture models
  • 8.5.1. Introduction
  • 8.5.2. High-dimensional GMM
  • 8.6. Variable selection for mixture models
  • 8.7. High-dimensional mixture modeling through random projections
  • 8.8. Multivariate generalized hyperbolic mixtures
  • 8.9. High-dimensional mixture regression models
  • 8.10. Fitting high-dimensional mixture models using R
  • 9. Semiparametric mixture models
  • 9.1. Why semiparametric mixture models?
  • 9.2. Semiparametric location shifted mixture models
  • 9.3. Two-component semiparametric mixture models with one known component
  • 9.4. Semiparametric mixture models with shape constraints
  • 9.5. Semiparametric multivariate mixtures
  • 9.6. Semiparametric hidden Markov models
  • 9.6.1. Estimation methods
  • 9.7. Bayesian nonparametric mixture models
  • 9.8. Fitting semiparametric mixture models using R
  • 9.9. Proofs
  • 10. Semiparametric mixture regression models
  • 10.1. Why semiparametric regression models?
  • 10.2. Mixtures of nonparametric regression models
  • 10.3. Mixtures of regression models with varying proportions
  • 10.4. Machine learning embedded semiparametric mixtures of regressions
  • 10.5. Mixture of regression models with nonparametric errors
  • 10.6. Semiparametric regression models for longitudinal/functional data
  • 10.7. Semiparametric hidden Markov models with covariates
  • 10.8. Some other semiparametric mixture regression models
  • 10.9. Fitting semiparametric mixture regression models using R
  • 10.10. Proofs
  • Bibliography
  • Index