
E-book: Task Equivalence in Speaking Tests: Investigating the Difficulty of Two Spoken Narrative Tasks

  • Format: PDF+DRM
  • Series: Linguistic Insights 174
  • Publication date: 15-Nov-2013
  • Publisher: Peter Lang AG, Internationaler Verlag der Wissenschaften
  • Language: English
  • ISBN-13: 9783035105643
  • Price: €69.03*
  • * This is the final price, i.e., no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned, and money paid for purchased e-books is not refunded.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you need to install free software in order to unlock and read it. To read this e-book, you will need to create an Adobe ID. More information is available here. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android).

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

This book addresses task equivalence, an issue of fundamental importance, and indeed a prerequisite, in language testing and task-based research. The main study examines two seemingly equivalent picture-based spoken narrative tasks, using a multi-method approach that combines quantitative and qualitative methodologies: many-facet Rasch measurement (MFRM) analysis of the ratings, analysis of the linguistic performances of Japanese candidates and of native speakers of English (NS), expert judgements of the task characteristics, and the perceptions of the candidates and the NS. The results reveal a complex picture, with a number of variables involved in ensuring task equivalence, and raise relevant issues regarding theories of task complexity and the linguistic variables commonly used to examine learner spoken language. The book has important implications for the measures that can be taken to avoid selecting non-equivalent tasks for research and teaching.
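The blurb above mentions MFRM analysis of ratings alongside statistical comparisons of linguistic performances on the two tasks. The MFRM analyses themselves are typically run in dedicated software (e.g. FACETS), but as a rough, purely illustrative sketch of the second kind of comparison (not the author's code, data, or variables), a paired t-test on one invented linguistic measure with a Bonferroni-adjusted threshold might look like this in Python:

```python
# Hypothetical illustration only: comparing one linguistic measure (speech rate)
# produced by the same candidates on two narrative tasks. All data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented speech-rate scores (words per minute) for 30 candidates on Task A and Task B.
task_a = rng.normal(loc=95, scale=12, size=30)
task_b = task_a + rng.normal(loc=3, scale=8, size=30)  # Task B assumed slightly faster

# Paired t-test, because the same candidates perform both tasks.
t_stat, p_value = stats.ttest_rel(task_a, task_b)

# With several linguistic variables tested at once, a Bonferroni-adjusted alpha
# guards against inflated Type I error (assuming 10 variables here).
n_variables = 10
alpha = 0.05 / n_variables

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, adjusted alpha = {alpha:.4f}")
print("Difference flagged" if p_value < alpha else "No difference at the adjusted alpha")
```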
Contents

Acknowledgements
1 Introduction
1.1 Rationale of the Book
1.2 Spoken Narrative Tasks
1.2.1 Definition of Spoken Narrative
1.2.2 Spoken Narrative Tasks in Language Testing
1.2.3 Spoken Narrative Tasks in Task-based Research
1.3 Terminology
1.4 Organisation of the Book
2 Review of the Literature
2.1 Introduction
2.2 Theoretical Framework
2.2.1 Models of Speaking Assessment
2.2.2 Validity
2.3 Equivalence of Test Forms and Tasks in Speaking Assessments
2.3.1 Reliability
2.3.2 Forms of a Test
2.3.3 Forms of a Test by Different Delivery Modes
2.3.4 Tasks of a Test
2.4 Summary
2.5 Operationalisation of the Evidence for Context and Cognitive Validity in Spoken Narrative Performance
2.5.1 Theoretical Frameworks of Speech Production
2.5.1.1 Speech Production in L1
2.5.1.2 Speech Production in L2
2.5.1.3 Task-related Factors Affecting L2 Performance
2.5.2 A Priori Evidence of Task Equivalence: Task Complexity Factors
2.5.3 A Posteriori Evidence of Context Validity: Linguistic Performance of Spoken Narrative Tasks
2.5.3.1 Fluency
2.5.3.2 Complexity
2.5.3.3 Accuracy
2.5.3.4 Idea Units
2.5.4 A Posteriori Evidence of Cognitive Validity: Candidate Perceptions
2.6 Evidence of Scoring Validity
2.7 Summary
2.8 Research Questions
3 Pilot Studies
3.1 Introduction
3.2 Pilot Study 1: A Feasibility Study of Linguistic Performance at Two Levels of a Standard Speaking Test (SST)
3.2.1 Purpose
3.2.2 Data
3.2.3 Linguistic Variables
3.2.4 Research Question and Analyses
3.2.5 Results and Discussion
3.2.6 Conclusions and Suggestions for Further Research
3.3 Pilot Study 2: Expert Judgements on the Two SST Tasks
3.3.1 Purpose
3.3.2 Participants and Procedures
3.3.3 Research Aims
3.3.4 Results and Discussion
3.3.5 Conclusions and Suggestions for Further Research
3.4 Pilot Study 3: Investigating the Sensitivity of Linguistic Variables in an SST Task
3.4.1 Purpose
3.4.2 Data
3.4.3 Linguistic Variables
3.4.4 Research Questions, Procedure and Analysis
3.4.5 Results and Discussion
3.4.5.1 Descriptive Statistics
3.4.5.2 'Sensitive' Variables: Fluency and Accuracy
3.4.5.3 Other Variables (1): Syntactic Complexity
3.4.5.4 Other Variables (2): Lexical Complexity
3.4.5.5 Other Variables (3): Idea Units
3.4.6 Conclusions and Suggestions for Future Research
3.5 Pilot Study 4: Native Speaker Performance and Perceptions of the Two SST Tasks
3.5.1 Purpose
3.5.2 Data
3.5.3 Research Questions, Procedures and Analysis
3.5.4 Results and Discussion
3.5.4.1 Syntactic Complexity and Reasoning
3.5.4.2 Idea Units
3.5.5 Conclusions and Suggestions for Further Research
3.6 Pilot Study 5: Selecting the Spoken Narrative Tasks for the Main Study
3.6.1 Purpose and Research Questions
3.6.2 Tasks
3.6.2.1 Tasks 1 and 2
3.6.2.2 Tasks 3 and 4
3.6.3 Participants
3.6.4 Procedures
3.6.5 Results and Discussion
3.6.5.1 Tasks 1 and 2
3.6.5.2 Tasks 3 and 4
3.6.6 Conclusions and Suggestions for the Main Study
3.7 Summary
4 Methodology
4.1 Introduction
4.2 Data from Japanese University Students
4.2.1 Candidates
4.2.2 Instruments
4.2.2.1 Oxford Quick Placement Test
4.2.2.2 Spoken Narrative Tasks
4.2.2.3 Robinson's Task Difficulty Questionnaire
4.2.2.4 Language Learning Background Questionnaire
4.2.3 Procedures
4.3 Data from Japanese Teachers of English
4.4 Baseline Data from English Native Speakers
4.5 Ratings Data for the Spoken Narrative Performances
4.5.1 Raters
4.5.2 Training with the CEFR Illustrative Samples
4.5.2.1 Selection of Samples
4.5.2.2 Rating Scales
4.5.2.3 Procedures
4.5.2.4 Results and Issues
4.5.3 Benchmarking with the Japanese Samples
4.5.3.1 Selection of Samples and Procedures
4.5.3.2 Results and Issues
4.5.4 Major Rating
4.6 Methods of Data Analysis
4.6.1 Research Design
4.6.2 MFRM Analysis of Task Difficulty, Candidate Ability and Fair Average Ratings (RQs 1, 3 & 4)
4.6.3 Perceptions by Candidates and NSs and Expert Judgements of the Tasks (RQ2)
4.6.4 Linguistic Performances on the Tasks (RQ3)
4.6.5 Validity of Linguistic Variables (RQ4)
5 Results
5.1 Introduction
5.2 Difficulty of the Two Spoken Narrative Tasks Calculated by MFRM Analysis
5.2.1 Data
5.2.2 Considered Judgement (CJ) Ratings
5.2.2.1 Examining the Rating Scale
5.2.2.2 Estimates of Candidate Ability, Task Difficulty and Rater Severity
5.2.2.3 Effects of Task Difficulty Difference between Tasks A and B
5.2.3 Ratings for Range, Accuracy, Fluency, Coherence and Sustained Monologue
5.2.3.1 Examining the Rating Scales
5.2.3.2 Estimates of Candidate Ability, Task Difficulty, Rater Severity and Rating Category Difficulty
5.2.3.3 Effects of Task Difficulty Difference between Tasks A and B
5.3 Candidate Perceptions of the Two Spoken Narrative Tasks
5.3.1 Data
5.3.2 Results of t-tests
5.4 Candidate Perceptions of the Two Spoken Narrative Tasks at Different Levels of Proficiency
5.5 Expert Judgements of the Two Spoken Narrative Tasks by Japanese Teachers Regarding Task Complexity Factors
5.6 Perceived Difficulty of the Two Spoken Narrative Tasks by English Native Speakers
5.7 Linguistic Performances on the Two Spoken Narrative Tasks
5.7.1 Data
5.7.1.1 Order Effect
5.7.1.2 Normality Checks
5.7.1.3 Bonferroni Correction
5.7.2 Results for RQ3-1
5.7.3 Results for RQ3-2
5.8 Validity of Linguistic Variables
5.8.1 Data
5.8.2 Results
5.8.2.1 Range
5.8.2.2 Accuracy
5.8.2.3 Fluency
5.8.2.4 Coherence
5.8.2.5 Sustained Monologue
5.9 Summary
6 Discussion
6.1 Task Difficulty according to MFRM Analysis
6.2 Perceived Difficulty and Cognitive Complexity of the Two Spoken Narrative Tasks
6.3 Linguistic Performances on the Two Spoken Narrative Tasks
6.3.1 Discussing Linguistic Performances in Light of Theories of Task Complexity
6.3.2 Validation of Linguistic Variables
6.3.2.1 Fluency
6.3.2.2 Accuracy
6.3.2.3 Range
6.3.2.4 Coherence
6.3.2.5 Sustained Monologue
6.3.2.6 Summary
6.3.3 Constructs of the Linguistic Variables
6.3.3.1 Accuracy
6.3.3.2 Syntactic Complexity
6.3.4 Task-essentialness
6.3.5 Task Equivalence in Terms of Linguistic Performance
6.4 Summary
7 Conclusion
7.1 Introduction
7.2 Synthesis and Summary of Findings
7.2.1 RQ1: Task Difficulty according to MFRM Analysis
7.2.2 RQs 2-1 & 2-2: Candidate Perceptions of the Tasks
7.2.3 RQs 2-3 & 2-4: Native Speaker Perceptions and Expert Judgements of the Tasks
7.2.4 RQs 3 & 4: Linguistic Performances and Linguistic Variables
7.3 Implications of the Findings
7.3.1 For Language Testing Research
7.3.2 For Task-Based Research
7.4 Limitations and Suggestions for Future Research
7.4.1 Limitations of the Main Study
7.4.2 Areas for Future Research
References
Appendices
Chihiro Inoue works at CRELLA at the University of Bedfordshire, UK. Originally from Japan, she has keen research interests in language testing, second language acquisition, and English education. She holds a PhD in Applied Linguistics from Lancaster University and has been involved in various research projects on the development and validation of language tests in different countries.