Total Survey Error in Practice
Other Authors: | |
---|---|
Format: | eBook |
Language: | English |
Published: | Hoboken, New Jersey : Wiley, 2017 |
Edition: | 1st ed. |
Series: | Wiley series in survey methodology. THEi Wiley ebooks. |
Subjects: | |
View at Biblioteca Universitat Ramon Llull: | https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009849091006719 |
Table of Contents:
- Intro
- Title Page
- Copyright Page
- Contents
- Notes on Contributors
- Preface
- Section 1 The Concept of TSE and the TSE Paradigm
- Chapter 1 The Roots and Evolution of the Total Survey Error Concept
- 1.1 Introduction and Historical Backdrop
- 1.2 Specific Error Sources and Their Control or Evaluation
- 1.3 Survey Models and Total Survey Design
- 1.4 The Advent of More Systematic Approaches Toward Survey Quality
- 1.5 What the Future Will Bring
- References
- Chapter 2 Total Twitter Error: Decomposing Public Opinion Measurement on Twitter from a Total Survey Error Perspective
- 2.1 Introduction
- 2.1.1 Social Media: A Potential Alternative to Surveys?
- 2.1.2 TSE as a Launching Point for Evaluating Social Media Error
- 2.2 Social Media: An Evolving Online Public Sphere
- 2.2.1 Nature, Norms, and Usage Behaviors of Twitter
- 2.2.2 Research on Public Opinion on Twitter
- 2.3 Components of Twitter Error
- 2.3.1 Coverage Error
- 2.3.2 Query Error
- 2.3.3 Interpretation Error
- 2.3.4 The Deviation of Unstructured Data Errors from TSE
- 2.4 Studying Public Opinion on the Twittersphere and the Potential Error Sources of Twitter Data: Two Case Studies
- 2.4.1 Research Questions and Methodology of Twitter Data Analysis
- 2.4.2 Potential Coverage Error in Twitter Examples
- 2.4.3 Potential Query Error in Twitter Examples
- 2.4.3.1 Implications of Including or Excluding RTs for Error
- 2.4.3.2 Implications of Query Iterations for Error
- 2.4.4 Potential Interpretation Error in Twitter Examples
- 2.5 Discussion
- 2.5.1 A Framework That Better Describes Twitter Data Errors
- 2.5.2 Other Subclasses of Errors to Be Investigated
- 2.6 Conclusion
- 2.6.1 What Advice We Offer for Researchers and Research Consumers
- 2.6.2 Directions for Future Research
- References
- Chapter 3 Big Data: A Survey Research Perspective
- 3.1 Introduction
- 3.2 Definitions
- 3.2.1 Sources
- 3.2.2 Attributes
- 3.2.2.1 Volume
- 3.2.2.2 Variety
- 3.2.2.3 Velocity
- 3.2.2.4 Veracity
- 3.2.2.5 Variability
- 3.2.2.6 Value
- 3.2.2.7 Visualization
- 3.2.3 The Making of Big Data
- 3.3 The Analytic Challenge: From Database Marketing to Big Data and Data Science
- 3.4 Assessing Data Quality
- 3.4.1 Validity
- 3.4.2 Missingness
- 3.4.3 Representation
- 3.5 Applications in Market, Opinion, and Social Research
- 3.5.1 Adding Value through Linkage
- 3.5.2 Combining Big Data and Surveys in Market Research
- 3.6 The Ethics of Research Using Big Data
- 3.7 The Future of Surveys in a Data-Rich Environment
- References
- Chapter 4 The Role of Statistical Disclosure Limitation in Total Survey Error
- 4.1 Introduction
- 4.2 Primer on SDL
- 4.3 TSE-Aware SDL
- 4.3.1 Additive Noise
- 4.3.2 Data Swapping
- 4.4 Edit-Respecting SDL
- 4.4.1 Simulation Experiment
- 4.4.2 A Deeper Issue
- 4.5 SDL-Aware TSE
- 4.6 Full Unification of Edit, Imputation, and SDL
- 4.7 "Big Data" Issues
- 4.8 Conclusion
- Acknowledgments
- References
- Section 2 Implications for Survey Design
- Chapter 5 The Undercoverage-Nonresponse Tradeoff
- 5.1 Introduction
- 5.2 Examples of the Tradeoff
- 5.3 Simple Demonstration of the Tradeoff
- 5.4 Coverage and Response Propensities and Bias
- 5.5 Simulation Study of Rates and Bias
- 5.5.1 Simulation Setup
- 5.5.2 Results for Coverage and Response Rates
- 5.5.3 Results for Undercoverage and Nonresponse Bias
- 5.5.3.1 Scenario 1
- 5.5.3.2 Scenario 2
- 5.5.3.3 Scenario 3
- 5.5.3.4 Scenario 4
- 5.5.3.5 Scenario 7
- 5.5.4 Summary of Simulation Results
- 5.6 Costs
- 5.7 Lessons for Survey Practice
- References
- Chapter 6 Mixing Modes: Tradeoffs Among Coverage, Nonresponse, and Measurement Error (Roger Tourangeau)
- 6.1 Introduction
- 6.2 The Effect of Offering a Choice of Modes
- 6.3 Getting People to Respond Online
- 6.4 Sequencing Different Modes of Data Collection
- 6.5 Separating the Effects of Mode on Selection and Reporting
- 6.5.1 Conceptualizing Mode Effects
- 6.5.2 Separating Observation from Nonobservation Error
- 6.5.2.1 Direct Assessment of Measurement Errors
- 6.5.2.2 Statistical Adjustments
- 6.5.2.3 Modeling Measurement Error
- 6.6 Maximizing Comparability Versus Minimizing Error
- 6.7 Conclusions
- References
- Chapter 7 Mobile Web Surveys: A Total Survey Error Perspective
- 7.1 Introduction
- 7.2 Coverage
- 7.3 Nonresponse
- 7.3.1 Unit Nonresponse
- 7.3.2 Breakoffs
- 7.3.3 Completion Times
- 7.3.4 Compliance with Special Requests
- 7.4 Measurement Error
- 7.4.1 Grouping of Questions
- 7.4.1.1 Question-Order Effects
- 7.4.1.2 Number of Items on a Page
- 7.4.1.3 Grids versus Item-By-Item
- 7.4.2 Effects of Question Type
- 7.4.2.1 Socially Undesirable Questions
- 7.4.2.2 Open-Ended Questions
- 7.4.3 Response and Scale Effects
- 7.4.3.1 Primacy Effects
- 7.4.3.2 Slider Bars and Drop-Down Questions
- 7.4.3.3 Scale Orientation
- 7.4.4 Item Missing Data
- 7.5 Links Between Different Error Sources
- 7.6 The Future of Mobile Web Surveys
- References
- Chapter 8 The Effects of a Mid-Data Collection Change in Financial Incentives on Total Survey Error in the National Survey of Family Growth
- 8.1 Introduction
- 8.2 Literature Review: Incentives in Face-to-Face Surveys
- 8.2.1 Nonresponse Rates
- 8.2.2 Nonresponse Bias
- 8.2.3 Measurement Error
- 8.2.4 Survey Costs
- 8.2.5 Summary
- 8.3 Data and Methods
- 8.3.1 NSFG Design: Overview
- 8.3.2 Design of Incentive Experiment
- 8.3.3 Variables
- 8.3.4 Statistical Analysis
- 8.4 Results
- 8.4.1 Nonresponse Error
- 8.4.2 Sampling Error and Costs
- 8.4.3 Measurement Error
- 8.5 Conclusion
- 8.5.1 Summary
- 8.5.2 Recommendations for Practice
- References
- Chapter 9 A Total Survey Error Perspective on Surveys in Multinational, Multiregional, and Multicultural Contexts
- 9.1 Introduction
- 9.2 TSE in Multinational, Multiregional, and Multicultural Surveys
- 9.3 Challenges Related to Representation and Measurement Error Components in Comparative Surveys
- 9.3.1 Representation Error
- 9.3.1.1 Coverage Error
- 9.3.1.2 Sampling Error
- 9.3.1.3 Unit Nonresponse Error
- 9.3.1.4 Adjustment Error
- 9.3.2 Measurement Error
- 9.3.2.1 Validity
- 9.3.2.2 Measurement Error - The Response Process
- 9.3.2.3 Processing Error
- 9.4 QA and QC in 3MC Surveys
- 9.4.1 The Importance of a Solid Infrastructure
- 9.4.2 Examples of QA and QC Approaches Practiced by Some 3MC Surveys
- 9.4.3 QA/QC Recommendations
- References
- Chapter 10 Smartphone Participation in Web Surveys: Choosing Between the Potential for Coverage, Nonresponse, and Measurement Error
- 10.1 Introduction
- 10.1.1 Focus on Smartphones
- 10.1.2 Smartphone Participation: Web-Survey Design Decision Tree
- 10.1.3 Chapter Outline
- 10.2 Prevalence of Smartphone Participation in Web Surveys
- 10.3 Smartphone Participation Choices
- 10.3.1 Disallowing Smartphone Participation
- 10.3.2 Discouraging Smartphone Participation
- 10.4 Instrument Design Choices
- 10.4.1 Doing Nothing
- 10.4.2 Optimizing for Smartphones
- 10.5 Device and Design Treatment Choices
- 10.5.1 PC/Legacy versus Smartphone Designs
- 10.5.2 PC/Legacy versus PC/New
- 10.5.3 Smartphone/Legacy versus Smartphone/New
- 10.5.4 Device and Design Treatment Options
- 10.6 Conclusion
- 10.7 Future Challenges and Research Needs
- Appendix 10.A: Data Sources
- A.1 Market Strategies (17 studies)
- A.2 Experimental Data from Market Strategies International
- A.3 Sustainability Cultural Indicators Program (SCIP)
- A.4 Army Study to Assess Risk and Resilience in Servicemembers (STARRS)
- A.5 Panel Study of Income Dynamics Childhood Retrospective Circumstances Study (PSID-CRCS)
- Appendix 10.B: Smartphone Prevalence in Web Surveys
- Appendix 10.C: Screen Captures from Peterson et al. (2013) Experiment
- Appendix 10.D: Survey Questions Used in the Analysis of the Peterson et al. (2013) Experiment
- References
- Chapter 11 Survey Research and the Quality of Survey Data Among Ethnic Minorities
- 11.1 Introduction
- 11.2 On the Use of the Terms Ethnicity and Ethnic Minorities
- 11.3 On the Representation of Ethnic Minorities in Surveys
- 11.3.1 Coverage of Ethnic Minorities
- 11.3.2 Factors Affecting Nonresponse Among Ethnic Minorities
- 11.3.3 Postsurvey Adjustment Issues Related to Surveys Among Ethnic Minorities
- 11.4 Measurement Issues
- 11.4.1 The Tradeoff When Using Response-Enhancing Measures
- 11.5 Comparability, Timeliness, and Cost Concerns
- 11.5.1 Comparability
- 11.5.2 Timeliness and Cost Considerations
- 11.6 Conclusion
- References
- Section 3 Data Collection and Data Processing Applications
- Chapter 12 Measurement Error in Survey Operations Management: Detection, Quantification, Visualization, and Reduction
- 12.1 TSE Background on Survey Operations
- 12.2 Better and Better: Using Behavior Coding (CARIcode) and Paradata to Evaluate and Improve Question (Specification) Error
- 12.2.1 CARI Coding at Westat
- 12.2.2 CARI Experiments
- 12.3 Field-Centered Design: Mobile App for Rapid Reporting and Management
- 12.3.1 Mobile App Case Study
- 12.3.2 Paradata Quality
- 12.4 Faster and Cheaper: Detecting Falsification With GIS Tools
- 12.5 Putting It All Together: Field Supervisor Dashboards
- 12.5.1 Dashboards in Operations
- 12.5.2 Survey Research Dashboards
- 12.5.2.1 Dashboards and Paradata