Assessing Interactional Competence: Principles, Test Development and Validation Through an L2 Chinese IC Test

With the growing recognition of the need to broaden the definition of Interactional Competence (IC) for communication and learning, this monograph offers the first book-length treatment of the conceptualization, development and validation of IC assessment instruments.

Bibliographic Details
Main author: Harsch, Claudia
Other authors: Dai, David Wei
Format: Electronic book
Language: English
Published: Frankfurt a.M.: Peter Lang GmbH, Internationaler Verlag der Wissenschaften, 2024
Edition: 1st ed.
Series: Language Testing and Evaluation Series
View at the Universitat Ramon Llull Library: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009839116306719
Table of Contents:
  • Cover
  • Series Information
  • Copyright Information
  • Table of Contents
  • Preface
  • Foreword
  • Summary of the Book
  • 中文概要 (Summary of the Book in Chinese)
  • Acknowledgements
  • List of Tables
  • List of Figures
  • List of Abbreviations
  • Chapter 1 Introduction
  • Chapter 2 Literature review
  • 2.1 A philosophical account of interaction
  • 2.1.1 Interaction and pragmatics
  • 2.1.2 An intentionalist perspective on interaction
  • 2.1.3 A rationalist-utilitarian perspective on interaction
  • 2.1.4 An empiricist-interactional perspective on interaction
  • 2.1.5 A unified account of interaction for assessment
  • 2.2 Interaction in computer-mediated communication
  • 2.2.1 CMC and L2-speaker interaction
  • 2.2.2 An empiricist-interactional approach to CMC
  • 2.2.3 Five CMC considerations for test design
  • 2.3 Defining an IC construct: A theoretical discussion
  • 2.3.1 A brief history of IC
  • 2.3.2 Assessing IC
  • 2.3.3 Differentiating speaking/LC and talking/IC
  • 2.3.4 Strong on speaking/LC but weak on talking/IC
  • 2.3.5 Strong on talking/IC but weak on speaking/LC
  • 2.4 Defining an IC construct: An operational discussion
  • 2.4.1 Are we measuring talking/IC or speaking/LC?
  • 2.4.2 Separating IC from LC
  • 2.4.3 Going beyond the mechanics of interaction: Hymes and Goffman revisited
  • 2.4.4 Emotional, logical and moral IC markers
  • 2.4.5 Aristotelian artistic proofs: Pathos, logos, and ethos
  • 2.4.6 Membership categorization analysis: Categorial IC markers
  • 2.5 Designing IC test tasks
  • 2.5.1 From the target language domain to a test
  • 2.5.2 Task-based needs analysis
  • 2.5.3 Triangulation in needs analysis
  • 2.5.4 Paucity of TBNA in L2 Chinese
  • 2.6 Designing IC rating materials
  • 2.6.1 IC rating materials development
  • 2.6.2 The rater perspective and indigenous criteria
  • 2.6.3 Test-taker exemplars in IC rating
  • Chapter 3 Interpretive argument and research design
  • 3.1 The inferences and assumptions in the interpretive argument
  • 3.1.1 The domain description inference
  • 3.1.2 The evaluation inference
  • 3.1.3 The generalization inference
  • 3.1.4 The explanation inference
  • 3.1.5 The extrapolation inference
  • 3.2 The design of the three studies
  • 3.2.1 Study one, relevant assumptions and research questions
  • 3.2.2 Study two, relevant assumptions and research questions
  • 3.2.3 Study three, relevant assumptions and research questions
  • Chapter 4 Study one: Task-based needs analysis and test design
  • 4.1 Methodology of study one
  • 4.1.1 Participants
  • 4.1.1.1 TBNA participants
  • 4.1.1.2 Test design participants
  • Item review and moderation participants
  • Norming session participants
  • 4.1.2 Instruments
  • 4.1.2.1 TBNA instruments
  • Hermeneutic-Socratic interviews
  • Longitudinal reflective diaries
  • 4.1.2.2 Test design instruments
  • Norming questionnaires
  • 4.1.3 Procedures
  • 4.1.3.1 TBNA procedure
  • 4.1.3.2 Test design procedure
  • 4.1.4 Data analysis
  • 4.1.4.1 TBNA data analysis
  • 4.1.4.2 Test design data analysis
  • 4.2 Results and initial discussion of study one
  • 4.2.1 TBNA results
  • 4.2.1.1 Social actions
  • 4.2.1.2 Sociopragmatic and pragmalinguistic issues
  • 4.2.1.3 Interactional features and content knowledge
  • 4.2.1.4 Linguistic issues and multimodal cues
  • 4.2.2 The test specifications
  • 4.2.3 Generating draft items
  • 4.2.4 Revising the draft items
  • 4.2.5 Finalizing the IC test
  • Chapter 5 Study two: Pilot test, indigenous criteria, and rating materials
  • 5.1 Methodology of study two
  • 5.1.1 Participants
  • 5.1.1.1 Pilot test test-takers
  • 5.1.1.2 Pilot test raters
  • 5.1.1.3 Everyday-life domain experts
  • 5.1.2 Instruments
  • 5.1.3 Procedures and data analysis
  • 5.1.3.1 Pilot testing
  • 5.1.3.2 Eliciting DEs' indigenous IC criteria
  • 5.1.3.3 Developing a DEs' indigenous IC criteria rating scale
  • 5.1.3.4 Theoretically expanding the IC rating scale
  • 5.2 Results and initial discussion of study two
  • 5.2.1 Pilot test findings
  • 5.2.2 Domain experts' indigenous IC criteria
  • 5.2.2.1 Conflict management
  • 5.2.2.2 Solidarity promotion
  • 5.2.2.3 Reasoning skills
  • 5.2.2.4 Personal qualities
  • 5.2.2.5 Social relations
  • 5.2.2.6 Linguistic choices
  • 5.2.2.7 Prosodic features
  • 5.2.2.8 The structure of talk
  • 5.2.2.9 Strategies, cultural norms, and miscellaneous
  • 5.2.3 An indigenous IC rating scale
  • 5.2.3.1 Collapsing indigenous criteria into five rating categories
  • 5.2.3.2 Identifying steps in the rating categories
  • 5.2.3.3 Identifying sub rating categories and extracting descriptors
  • 5.2.3.4 Indigenous rating category: Conflict management
  • 5.2.3.5 Indigenous rating category: Solidarity promotion
  • 5.2.3.6 Indigenous rating category: Personal qualities
  • 5.2.3.7 Indigenous rating category: Reasoning skills
  • 5.2.3.8 Indigenous rating category: Social relations
  • 5.2.4 CA and MCA validation and the generation of exemplars
  • 5.2.4.1 The rationale behind the CA and MCA validation of the scale
  • 5.2.4.2 The sample test task and the pilot test test-takers selected
  • 5.2.4.3 Theorizing conflict management and social relations
  • 5.2.4.4 Theorizing solidarity promotion and reasoning skills
  • 5.2.4.5 Theorizing personal qualities
  • 5.2.4.6 Address terms in social role management
  • 5.2.4.7 Categories and predicates
  • 5.2.4.8 Beginner L2-speakers' category knowledge
  • 5.2.4.9 The power of categorization
  • 5.2.5 A theorized IC rating scale
  • 5.2.5.1 Theorized rating category: Disaffiliation control
  • 5.2.5.2 Theorized rating category: Affiliation promotion
  • 5.2.5.3 Theorized rating category: Morality
  • 5.2.5.4 Theorized rating category: Reasoning
  • 5.2.5.5 Theorized rating category: Social role management
  • 5.2.6 A unified model of IC
  • Chapter 6 Study three: The IC test and accompanying questionnaires
  • 6.1 Methodology
  • 6.1.1 Participants
  • 6.1.1.1 Main testing test-takers
  • 6.1.1.2 Main testing test-taker peers
  • 6.1.1.3 Main testing IC test raters
  • 6.1.2 Instruments
  • 6.1.2.1 The IC test
  • 6.1.2.2 Test-taker background questionnaires
  • 6.1.2.3 Self and peer-assessment questionnaires
  • 6.1.2.4 Rater training materials
  • 6.1.3 Procedures
  • 6.1.3.1 Administering the IC test and questionnaires
  • 6.1.3.2 Training raters
  • 6.1.3.3 Rater rating
  • 6.1.4 Data analysis
  • 6.2 Results and initial discussion
  • 6.2.1 Rasch analyses of IC test scores
  • 6.2.1.1 The Wright map
  • 6.2.1.2 The candidate measurement report
  • 6.2.1.3 The rater measurement report
  • 6.2.1.4 The criterion measurement report
  • 6.2.1.5 The item measurement report
  • 6.2.1.6 The rating scale category functioning
  • 6.2.1.7 The dimensionality of the data structure
  • 6.2.2 Correlation between IC and LC
  • 6.2.3 Rasch analyses of questionnaires
  • 6.2.3.1 The disaffiliation control sub-section
  • 6.2.3.2 The affiliation promotion sub-section
  • 6.2.3.3 The morality sub-section
  • 6.2.3.4 The reasoning sub-section
  • 6.2.3.5 The social role management sub-section
  • 6.2.3.6 Overall results of self and peer IC questionnaires
  • 6.2.4 Correlation between the IC test and questionnaires
  • 6.2.5 Rasch analyses of extrapolation and attitude items
  • 6.2.5.1 Explicit extrapolation questions
  • 6.2.5.2 Test-taker attitude questions
  • Chapter 7 Validity argument and overall discussions
  • 7.1 The domain description inference
  • 7.1.1 Domain description assumption 1
  • 7.1.2 Domain description assumption 2
  • 7.1.3 Domain description assumption 3
  • 7.1.4 Domain description assumption 4
  • 7.2 The evaluation inference
  • 7.2.1 Evaluation assumption 1
  • 7.2.2 Evaluation assumption 2
  • 7.2.3 Evaluation assumption 3
  • 7.2.4 Evaluation assumption 4
  • 7.3 The generalization inference
  • 7.3.1 Generalization assumption 1
  • 7.3.2 Generalization assumption 2
  • 7.3.3 Generalization assumption 3
  • 7.3.4 Generalization assumption 4
  • 7.4 The explanation inference
  • 7.4.1 Explanation assumption 1
  • 7.4.2 Explanation assumption 2
  • 7.4.3 Explanation assumption 3
  • 7.4.4 Explanation assumption 4
  • 7.4.5 Explanation assumption 5
  • 7.5 The extrapolation inference
  • 7.5.1 Extrapolation assumption 1
  • 7.5.2 Extrapolation assumption 2
  • 7.5.3 Extrapolation assumption 3
  • 7.6 Considerations outside the validity framework
  • 7.6.1 CMC and practicality
  • 7.6.2 Stakeholder take-up and assessment literacy
  • 7.6.3 Building a universal model of IC
  • 7.6.4 Application of the IC construct and rating scale
  • 7.6.5 The parameters of the IC tasks
  • Chapter 8 Conclusions
  • 8.1 Significance of this book
  • 8.2 Outstanding issues, limitations, and future research
  • References
  • Appendix I: S-H interview protocol
  • Appendix II: Norming questionnaire
  • English translation
  • Chinese version
  • Appendix III: The IC test
  • Appendix IV: The IC rating scale
  • English version
  • Chinese version
  • Appendix V: The self-assessment questionnaire
  • English version
  • Chinese version
  • Appendix VI: The peer-assessment questionnaire
  • Author Information
  • Series Index