
E-book: Handbook of Automated Essay Evaluation: Current Applications and New Directions

Edited by Mark D. Shermis (University of Akron, USA) and Jill Burstein (Educational Testing Service, New Jersey, USA)
  • Format: 384 pages
  • Publication date: 18-Jul-2013
  • Publisher: Routledge
  • ISBN-13: 9781136334801
  • Format: PDF+DRM
  • Price: €162.80*
  • * This is the final price; no additional discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned, and no refunds are given for purchased e-books.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you need to install free software to unlock and read it. To read this e-book, you must create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (by a single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you will need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this e-book on a PC or Mac, you will need Adobe Digital Editions (a free app designed specifically for e-books; it is not the same as Adobe Reader, which you may already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

This comprehensive, interdisciplinary handbook reviews the latest methods and technologies used in automated essay evaluation (AEE). Highlights include the latest in the evaluation of performance-based writing assessments and recent advances in the teaching of writing, language testing, cognitive psychology, and computational linguistics. This greatly expanded follow-up to Automated Essay Scoring reflects the numerous advances that have taken place in the field since 2003, including automated essay scoring and diagnostic feedback. Each chapter features a common structure, including an introduction and a conclusion. Ideas for diagnostic and evaluative feedback are sprinkled throughout the book.

Highlights of the book's coverage include:

  • The latest research on automated essay evaluation.
  • Descriptions of the major scoring engines, including the E-rater®, the Intelligent Essay Assessor, the IntelliMetric engine, c-rater, and LightSIDE.
  • Applications of the technology, including a large-scale system used in West Virginia.
  • A systematic framework for evaluating research and technological results.
  • Descriptions of AEE methods that can be replicated for languages other than English, as seen in the example from China.
  • Chapters from key researchers in the field.

The book opens with an introduction to AEE and a review of the "best practices" of teaching writing, along with tips on the use of automated analysis in the classroom. Next, the book highlights the capabilities and applications of several scoring engines, including the E-rater®, the Intelligent Essay Assessor, the IntelliMetric engine, c-rater, and LightSIDE. Here readers will find an actual application of AEE in West Virginia; psychometric issues related to AEE such as validity, reliability, and scaling; and the use of automated scoring to detect reader drift, grammatical errors, and discourse coherence quality, as well as the impact of human rating on AEE. A review of the cognitive foundations underlying methods used in AEE is also provided. The book concludes with a comparison of the various AEE systems and speculation about the future of the field in light of current educational policy.

Ideal for educators, professionals, curriculum specialists, and administrators responsible for developing writing programs or distance learning curricula, those who teach using AEE technologies, policy makers, and researchers in education, writing, psychometrics, cognitive psychology, and computational linguistics, this book also serves as a reference for graduate courses on automated essay evaluation taught in education, computer science, language, linguistics, and cognitive psychology.

Reviews

"This Handbook does a wonderful job of filling in the blanks in our understanding of these important methods. The book provides detailed descriptions of approaches that are available for computerized scoring and empirical data on their functioning. This is an extremely valuable resource for anyone who is interested in the way these scoring procedures work. I highly recommend this book." Mark D. Reckase, Michigan State University, USA

"This excellent Handbook is edited by two of the leading researchers in Automated Essay Evaluation---Mark Shermis and Jill Burstein. The 20 chapters treat writing research, capabilities of automated scoring evaluation engines, and psychometric issues. This volume is appropriate for numerous audiences including graduate students, educators, researchers, administrators, and policy makers." - Robert L. Brennan, University of Iowa, USA

"At a time when the Common Core State Standards are reshaping writing instruction and writing evaluation, this volume provides a comprehensive review of AEE and its potential for meeting complex new demands for writing." Wayne Camara, The College Board, USA

"There is no other book like this. As we consider offering AEE, I would buy copies for myself, my staff and my Technical Advisory Board. Research departments in state and local education agencies contemplating purchasing such a service would be the prime market." - Lawrence M. Rudner, Graduate Management Admission Council, USA

"This book ... will become the standard reference work in the field. ... [ It] is intended as a primer for audiences from diverse backgrounds ... curriculum specialists, writing instructors, psychometricians, and computational linguists." - Su Baldwin, The National Board of Medical Examiners, USA

"No question, the book gives a thorough insight into the state of the art and it is a must have for all researchers working in related domains. The compilation is especially exciting for researchers and practitioners working in psychometrics, writing instruction, natural language processing and intelligent tutoring systems... AEE surely will play and increasing and prominent role in the future, which makes this book even more valuable." - Wolfgang Lenhard, Department of Psychology, University of Würburg, Germany

Foreword vii
Carl Whithaus
Preface x
About the Editors xiii
List of Contributors xiv
1 Introduction to Automated Essay Evaluation
1(15)
Mark D. Shermis
Jill Burstein
Sharon Apel Bursky
2 Automated Essay Evaluation and the Teaching of Writing
16(20)
Norbert Elliot
Andrew Klobucar
3 English as a Second Language Writing and Automated Essay Evaluation
36(19)
Sara C. Weigle
4 The E-rater® Automated Essay Scoring System
55(13)
Jill Burstein
Joel Tetreault
Nitin Madnani
5 Implementation and Applications of the Intelligent Essay Assessor
68(21)
Peter W. Foltz
Lynn A. Streeter
Karen E. Lochbaum
Thomas K. Landauer
6 The IntelliMetric™ Automated Essay Scoring Engine - A Review and an Application to Chinese Essay Scoring
89(10)
Matthew T. Schultz
7 Applications of Automated Essay Evaluation in West Virginia
99(25)
Changhua S. Rich
M. Christina Schneider
Juan M. D'Brot
8 LightSIDE: Open Source Machine Learning for Text
124(12)
Elijah Mayfield
Carolyn Penstein Rosé
9 Automated Short Answer Scoring: Principles and Prospects
136(17)
Chris Brew
Claudia Leacock
10 Probable Cause: Developing Warrants for Automated Scoring of Essays
153(28)
David M. Williamson
11 Validity and Reliability of Automated Essay Scoring
181(18)
Yigal Attali
12 Scaling and Norming for Automated Essay Scoring
199(22)
Kristin L. K. Koskey
Mark D. Shermis
13 Human Ratings and Automated Essay Evaluation
221(12)
Brent Bridgeman
14 Using Automated Scoring to Monitor Reader Performance and Detect Reader Drift in Essay Scoring
233(18)
Susan M. Lottridge
E. Matthew Schulz
Howard C. Mitzel
15 Grammatical Error Detection in Automatic Essay Scoring and Feedback
251(16)
Michael Gamon
Martin Chodorow
Claudia Leacock
Joel Tetreault
16 Automated Evaluation of Discourse Coherence Quality in Essay Writing
267(14)
Jill Burstein
Joel Tetreault
Martin Chodorow
Daniel Blanchard
Slava Andreyev
17 Automated Sentiment Analysis for Essay Evaluation
281(17)
Jill Burstein
Beata Beigman-Klebanov
Nitin Madnani
Adam Faulkner
18 Covering the Construct: An Approach to Automated Essay Scoring Motivated by a Socio-Cognitive Framework for Defining Literacy Skills
298(15)
Paul Deane
19 Contrasting State-of-the-Art Automated Scoring of Essays
313(34)
Mark D. Shermis
Ben Hamner
20 The Policy Turn in Current Education Reform: The Common Core State Standards and Its Linguistic Challenges and Opportunities
347(8)
Kenji Hakuta
Index 355
Mark D. Shermis, Ph.D. is a professor at the University of Akron and the principal investigator of the Hewlett Foundation-funded Automated Scoring Assessment Prize (ASAP) program. He has published extensively on machine scoring and recently co-authored the textbook Classroom Assessment in Action with Francis DiVesta. Shermis is a fellow of the American Psychological Association (Division 5) and the American Educational Research Association.





Jill Burstein, Ph.D. is a managing principal research scientist in Educational Testing Service's Research and Development Division. Her research interests include natural language processing, automated essay scoring and evaluation, educational technology, discourse and sentiment analysis, English language learning, and writing research. She holds 13 patents for natural language processing educational technology applications. Two of her inventions are e-rater®, an automated essay evaluation application, and Language MuseSM, an instructional authoring tool for teachers of English learners.