Foundations of linear and generalized linear models
"This book presents an overview of the foundations and the key ideas and results of linear and generalized linear models under one cover. Written by a prolific academic, researcher, and textbook writer, Foundations of Linear and Generalized Linear Models is soon to become the gold standard by w...
Format: E-book
Language: English
Published: Hoboken, New Jersey : John Wiley & Sons Inc, [2015]
Edition: 1st ed.
Series: Wiley series in probability and statistics
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009849090906719
Table of Contents:
- Intro
- Foundations of Linear and Generalized Linear Models
- Contents
- Preface
- Purpose of this book
- Use as a textbook
- Acknowledgments
- 1 Introduction to Linear and Generalized Linear Models
- 1.1 Components of a Generalized Linear Model
- 1.1.1 Random Component of a GLM
- 1.1.2 Linear Predictor of a GLM
- 1.1.3 Link Function of a GLM
- 1.1.4 A GLM with Identity Link Function is a "Linear Model"
- 1.1.5 GLMs for Normal, Binomial, and Poisson Responses
- 1.1.6 Advantages of GLMs versus Transforming the Data
- 1.2 Quantitative/Qualitative Explanatory Variables and Interpreting Effects
- 1.2.1 Quantitative and Qualitative Variables in Linear Predictors
- 1.2.2 Interval, Nominal, and Ordinal Variables
- 1.2.3 Interpreting Effects in Linear Models
- 1.3 Model Matrices and Model Vector Spaces
- 1.3.1 Model Matrices Induce Model Vector Spaces
- 1.3.2 Dimension of Model Space Equals Rank of Model Matrix
- 1.3.3 Example: The One-Way Layout
- 1.4 Identifiability and Estimability
- 1.4.1 Identifiability of GLM Model Parameters
- 1.4.2 Estimability in Linear Models
- 1.5 Example: Using Software to Fit a GLM
- 1.5.1 Example: Male Satellites for Female Horseshoe Crabs
- 1.5.2 Linear Model Using Weight to Predict Satellite Counts
- 1.5.3 Comparing Mean Numbers of Satellites by Crab Color
- Chapter Notes
- Exercises
- 2 Linear Models: Least Squares Theory
- 2.1 Least Squares Model Fitting
- 2.1.1 The Normal Equations and Least Squares Solution
- 2.1.2 Hat Matrix and Moments of Estimators
- 2.1.3 Bivariate Linear Model and Regression Toward the Mean
- 2.1.4 Least Squares Solutions When X Does Not Have Full Rank
- 2.1.5 Orthogonal Subspaces and Residuals
- 2.1.6 Alternatives to Least Squares
- 2.2 Projections of Data Onto Model Spaces
- 2.2.1 Projection Matrices
- 2.2.2 Projection Matrices for Linear Model Spaces
- 2.2.3 Example: The Geometry of a Linear Model
- 2.2.4 Orthogonal Columns and Parameter Orthogonality
- 2.2.5 Pythagoras's Theorem Applications for Linear Models
- 2.3 Linear Model Examples: Projections and SS Decompositions
- 2.3.1 Example: Null Model
- 2.3.2 Example: Model for the One-Way Layout
- 2.3.3 Sums of Squares and ANOVA Table for One-Way Layout
- 2.3.4 Example: Model for Two-Way Layout with Randomized Block Design
- 2.4 Summarizing Variability in a Linear Model
- 2.4.1 Estimating the Error Variance for a Linear Model
- 2.4.2 Sums of Squares: Error (SSE) and Regression (SSR)
- 2.4.3 Effect on SSR and SSE of Adding Explanatory Variables
- 2.4.4 Sequential and Partial Sums of Squares
- 2.4.5 Uncorrelated Predictors: Sequential SS = Partial SS = SSR Component
- 2.4.6 R-Squared and the Multiple Correlation
- 2.5 Residuals, Leverage, and Influence
- 2.5.1 Residuals and Fitted Values Are Uncorrelated
- 2.5.2 Plots of Residuals
- 2.5.3 Standardized and Studentized Residuals
- 2.5.4 Leverages from Hat Matrix Measure Potential Influence
- 2.5.5 Influential Points for Least Squares Fits
- 2.5.6 Adjusting for Explanatory Variables by Regressing Residuals
- 2.5.7 Partial Correlation
- 2.6 Example: Summarizing the Fit of a Linear Model
- 2.7 Optimality of Least Squares and Generalized Least Squares
- 2.7.1 The Gauss-Markov Theorem
- 2.7.2 Generalized Least Squares
- 2.7.3 Adjustment Using Estimated Heteroscedasticity
- Chapter Notes
- Exercises
- 3 Normal Linear Models: Statistical Inference
- 3.1 Distribution Theory for Normal Variates
- 3.1.1 Multivariate Normal Distribution
- 3.1.2 Chi-Squared, F, and t Distributions
- 3.1.3 Noncentral Distributions
- 3.1.4 Normal Quadratic Forms with Projection Matrices Are Chi-Squared
- 3.1.5 Proof of Cochran's Theorem
- 3.2 Significance Tests for Normal Linear Models
- 3.2.1 Example: ANOVA for the One-Way Layout
- 3.2.2 Comparing Two Nested Normal Linear Models
- 3.2.3 Likelihood-Ratio Test Comparing Models
- 3.2.4 Example: Test That All Effects in a Normal Linear Model Equal Zero
- 3.2.5 Non-null Behavior of Statistic Comparing Nested Models
- 3.2.6 Expected Mean Squares and Power for One-Way ANOVA
- 3.2.7 Testing a General Linear Hypothesis
- 3.2.8 Example: Testing That a Single Model Parameter Equals Zero
- 3.2.9 Testing Terms in an Unbalanced Factorial ANOVA
- 3.3 Confidence Intervals and Prediction Intervals for Normal Linear Models
- 3.3.1 Confidence Interval for a Parameter of a Normal Linear Model
- 3.3.2 Confidence Interval for E(y) = x₀β
- 3.3.3 Prediction Interval for a Future y
- 3.3.4 Example: Confidence Interval and Prediction Interval for Simple Linear Regression
- 3.3.5 Interpretation and Limitations of Prediction Intervals
- 3.4 Example: Normal Linear Model Inference
- 3.4.1 Inference for Modeling House Selling Prices
- 3.4.2 Model Checking
- 3.4.3 Conditional versus Marginal Effects: Simpson's Paradox
- 3.4.4 Partial Correlation
- 3.4.5 Testing Contrasts as a General Linear Hypothesis
- 3.4.6 Selecting or Building a Model
- 3.5 Multiple Comparisons: Bonferroni, Tukey, and FDR Methods
- 3.5.1 Bonferroni Method for Multiple Inferences
- 3.5.2 Tukey Method of Multiple Comparisons
- 3.5.3 Controlling the False Discovery Rate
- Chapter Notes
- Exercises
- 4 Generalized Linear Models: Model Fitting and Inference
- 4.1 Exponential Dispersion Family Distributions for a GLM
- 4.1.1 Exponential Dispersion Family for a Random Component
- 4.1.2 Poisson, Binomial, and Normal in Exponential Dispersion Family
- 4.1.3 The Canonical Link Function of a Generalized Linear Model
- 4.2 Likelihood and Asymptotic Distributions for GLMs
- 4.2.1 Likelihood Equations for a GLM
- 4.2.2 Likelihood Equations for Poisson Loglinear Model
- 4.2.3 The Key Role of the Mean-Variance Relation
- 4.2.4 Large-Sample Normal Distribution of Model Parameter Estimators
- 4.2.5 Delta Method Yields Covariance Matrix for Fitted Values
- 4.2.6 Model Misspecification: Robustness of GLMs with Correct Mean
- 4.3 Likelihood-Ratio/Wald/Score Methods of Inference for GLM Parameters
- 4.3.1 Likelihood-Ratio Tests
- 4.3.2 Wald Tests
- 4.3.3 Score Tests
- 4.3.4 Illustrating the Likelihood-Ratio, Wald, and Score Tests
- 4.3.5 Constructing Confidence Intervals by Inverting Tests
- 4.3.6 Profile Likelihood Confidence Intervals
- 4.4 Deviance of a GLM, Model Comparison, and Model Checking
- 4.4.1 Deviance Compares Chosen Model with Saturated Model
- 4.4.2 The Deviance for Poisson GLMs and Normal GLMs
- 4.4.3 Likelihood-Ratio Model Comparison Uses Deviance Difference
- 4.4.4 Score Tests and Pearson Statistics for Model Comparison
- 4.4.5 Residuals and Fitted Values Asymptotically Uncorrelated
- 4.4.6 Pearson, Deviance, and Standardized Residuals for GLMs
- 4.5 Fitting Generalized Linear Models
- 4.5.1 Newton-Raphson Method
- 4.5.2 Fisher Scoring Method
- 4.5.3 Newton-Raphson and Fisher Scoring for a Binomial Parameter
- 4.5.4 ML as Iteratively Reweighted Least Squares
- 4.5.5 Simplifications for Canonical Link Functions
- 4.6 Selecting Explanatory Variables for a GLM
- 4.6.1 Stepwise Procedures: Forward Selection and Backward Elimination
- 4.6.2 Model Selection: The Bias-Variance Tradeoff
- 4.6.3 AIC: Minimizing Distance of the Fit from the Truth
- 4.6.4 Summarizing Predictive Power: R-Squared and Other Measures
- 4.6.5 Effects of Collinearity
- 4.7 Example: Building a GLM
- 4.7.1 Backward Elimination with House Selling Price Data
- 4.7.2 Gamma GLM Has Standard Deviation Proportional to Mean
- 4.7.3 Gamma GLMs for House Selling Price Data
- Appendix: GLM Analogs of Orthogonality Results for Linear Models
- Chapter Notes
- Exercises
- 5 Models for Binary Data
- 5.1 Link Functions for Binary Data
- 5.1.1 Ungrouped versus Grouped Binary Data
- 5.1.2 Latent Variable Threshold Model for Binary GLMs
- 5.1.3 Probit, Logistic, and Linear Probability Models
- 5.2 Logistic Regression: Properties and Interpretations
- 5.2.1 Interpreting β: Effects on Probabilities and on Odds
- 5.2.2 Logistic Regression with Case-Control Studies
- 5.2.3 Logistic Regression is Implied by Normal Explanatory Variables
- 5.2.4 Summarizing Predictive Power: Classification Tables and ROC Curves
- 5.2.5 Summarizing Predictive Power: Correlation Measures
- 5.3 Inference About Parameters of Logistic Regression Models
- 5.3.1 Logistic Regression Likelihood Equations
- 5.3.2 Covariance Matrix of Logistic Parameter Estimators
- 5.3.3 Statistical Inference: Wald Method is Suboptimal
- 5.3.4 Conditional Logistic Regression to Eliminate Nuisance Parameters
- 5.4 Logistic Regression Model Fitting
- 5.4.1 Iterative Fitting of Logistic Regression Models
- 5.4.2 Infinite Parameter Estimates in Logistic Regression
- 5.5 Deviance and Goodness of Fit for Binary GLMs
- 5.5.1 Deviance and Pearson Goodness-of-Fit Statistics
- 5.5.2 Chi-Squared Tests of Fit and Model Comparisons
- 5.5.3 Residuals: Pearson, Deviance, and Standardized
- 5.5.4 Influence Diagnostics for Logistic Regression
- 5.6 Probit and Complementary Log-Log Models
- 5.6.1 Probit Models: Interpreting Effects
- 5.6.2 Probit Model Fitting
- 5.6.3 Log-Log and Complementary Log-Log Link Models
- 5.7 Examples: Binary Data Modeling
- 5.7.1 Example: Risk Factors for Endometrial Cancer Grade
- 5.7.2 Example: Dose-Response Study
- Chapter Notes
- Exercises
- 6 Multinomial Response Models
- 6.1 Nominal Responses: Baseline-Category Logit Models