

PUBMED FOR HANDHELDS

Search MEDLINE/PubMed


  • Title: Classification accuracy and efficiency of writing screening using automated essay scoring.
    Authors: Wilson J, Rodrigues J.
    Journal: J Sch Psychol. 2020 Oct;82:123-140. PubMed ID: 32988459.
    Abstract:
    The present study leveraged advances in automated essay scoring (AES) technology to explore a proof of concept for a writing screener using the Project Essay Grade (PEG) program. First, the study investigated the extent to which an AES-scored multi-prompt writing screener accurately classified students as at risk of failing a Common Core-aligned English language arts state test. Second, the study explored whether a similar level of classification accuracy could be achieved with a more efficient form of the AES screener that used fewer writing prompts. Third, the classification accuracy of the AES-scored screeners was compared with that of screeners scored for word count. Students in Grades 3-5 (n = 185, 167, and 187, respectively) composed six essays on six randomly assigned topics, two in each of three genres (narrative, informative, and persuasive). Receiver operating characteristic (ROC) curve analysis was used to assess classification accuracy and to identify multiple cut scores with their associated sensitivity and specificity values and positive and negative posttest probabilities. Results indicated that the AES-scored multi-prompt screener and the screeners with fewer prompts yielded acceptable classification accuracy, were efficient, and were more accurate than screeners scored for word count. Overall, the results illustrate the viability of writing screening using AES.
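    The cut-score procedure described in the abstract (ROC analysis yielding sensitivity, specificity, and positive and negative posttest probabilities at candidate cut scores) can be sketched in a few lines of Python. The snippet below is a minimal illustration using scikit-learn and synthetic data; the variable names (at_risk, screener_score), the simulated score distributions, and the use of Youden's J to pick a cut score are assumptions made for illustration, not details taken from the study.

      # A minimal sketch of ROC-based cut-score screening, under assumptions:
      # synthetic data, hypothetical names (at_risk, screener_score), and
      # Youden's J as the cut-score rule.
      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(0)

      # Hypothetical screener: at-risk students tend to score lower.
      at_risk = rng.integers(0, 2, size=200)          # 1 = failed the state test
      screener_score = rng.normal(np.where(at_risk, 40.0, 55.0), 10.0)

      # ROC analysis; negate scores so that LOW scores predict risk.
      fpr, tpr, thresholds = roc_curve(at_risk, -screener_score)
      auc = roc_auc_score(at_risk, -screener_score)
      sensitivity, specificity = tpr, 1 - fpr

      # Pick the cut score that maximizes Youden's J = sens + spec - 1.
      best = np.argmax(sensitivity + specificity - 1)
      cut_score = -thresholds[best]                   # undo the sign flip

      # Posttest probabilities from likelihood ratios and the base rate (Bayes).
      prevalence = at_risk.mean()
      pretest_odds = prevalence / (1 - prevalence)
      lr_pos = sensitivity[best] / (1 - specificity[best])
      lr_neg = (1 - sensitivity[best]) / specificity[best]
      post_pos = pretest_odds * lr_pos / (1 + pretest_odds * lr_pos)
      post_neg = pretest_odds * lr_neg / (1 + pretest_odds * lr_neg)

      print(f"AUC = {auc:.2f}; cut score = {cut_score:.1f}")
      print(f"sensitivity = {sensitivity[best]:.2f}, specificity = {specificity[best]:.2f}")
      print(f"posttest probability if flagged     = {post_pos:.2f}")
      print(f"posttest probability if not flagged = {post_neg:.2f}")

    Note that the study reported multiple cut scores rather than a single optimum, so a fuller replication would tabulate sensitivity, specificity, and posttest probabilities across the whole range of candidate cut scores rather than only at the Youden-optimal point.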