
Journal Article

C-rater: Automated Scoring of Short-Answer Questions

Claudia Leacock and Martin Chodorow
Computers and the Humanities
Vol. 37, No. 4 (Nov., 2003), pp. 389-405
Published by: Springer
Stable URL: http://www.jstor.org/stable/30204913
Page Count: 17

Abstract

C-rater is an automated scoring engine developed to score responses to content-based short-answer questions. It is not simply a string-matching program; instead, it uses predicate-argument structure, pronominal reference, morphological analysis, and synonyms to assign full or partial credit to a short-answer response. C-rater has been used in two studies: the National Assessment of Educational Progress (NAEP) and a statewide assessment in Indiana. In both studies, c-rater agreed with human graders about 84% of the time.
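The abstract's contrast with simple string matching can be illustrated with a toy scorer. This is a minimal sketch, not c-rater's actual method: the synonym table, the crude suffix stripping standing in for morphological analysis, and the partial-credit rule are all assumptions made for illustration.

```python
# Toy concept-matching scorer for short answers.
# NOTE: not c-rater's implementation; the synonym groups, suffix list,
# and scoring rule below are illustrative assumptions only.

SYNONYMS = {  # hypothetical synonym groups, keyed by a canonical form
    "quick": {"quick", "fast", "rapid"},
    "rise": {"rise", "increase", "grow"},
}

def canonical(word):
    """Map a word to a canonical form via crude suffix stripping and synonyms."""
    w = word.lower().strip(".,;:!?")
    for suffix in ("ing", "ed", "ly", "s"):  # very rough morphological analysis
        if w.endswith(suffix) and len(w) > len(suffix) + 2:
            w = w[: -len(suffix)]
            break
    for canon, group in SYNONYMS.items():
        if w in group:
            return canon
    return w

def score(response, required_concepts):
    """Award partial credit: fraction of required concepts found in the response."""
    found = {canonical(tok) for tok in response.split()}
    matched = sum(1 for c in required_concepts if canonical(c) in found)
    return matched / len(required_concepts)
```

Because both the response and the required concepts pass through the same canonicalization, "Temperatures rise rapidly" can earn full credit against the concepts "increase" and "quick" even though no string matches literally, which is the kind of behavior the abstract distinguishes from plain string matching.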
