On the Application of Sentence Transformers to Automatic Short Answer Grading in Blended Assessment
Date
2022-06-09
Abstract
In Natural Language Processing, automatic short answer grading remains a necessary launch-pad for the analysis of human responses in a blended learning setting. This study presents pre-trained neural language models that use context-dependent Sentence-Transformers to automatically grade student responses under two different input settings. These models achieve promising results on various text similarity-based tasks when compared with conventional Bidirectional Encoder Representations from Transformers (BERT) approaches. This work presents experiments on the benchmark Mohler dataset to test these new models. In summary, a Pearson correlation of 0.82 and a root mean square error of 0.69 are achieved across a representative experiment sample size.
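The grading approach the abstract describes, scoring a student answer by its semantic similarity to a reference answer, can be sketched as follows. This is an illustrative toy, not the authors' implementation: `embed()` below is a hypothetical word-count stand-in for a real Sentence-Transformer encoder, and the linear mapping of cosine similarity onto the Mohler dataset's 0-5 scale is an assumption for demonstration.

```python
from collections import Counter
import math

def embed(text):
    """Stand-in encoder: a sparse word-count vector.
    A real system would return a dense Sentence-Transformer embedding,
    e.g. SentenceTransformer(...).encode(text)."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def grade(reference, answer, max_score=5.0):
    """Map similarity in [0, 1] linearly onto a 0-5 grading scale
    (the scale used by the Mohler dataset)."""
    return round(max_score * cosine(embed(reference), embed(answer)), 2)

reference = "a stack is a last in first out data structure"
print(grade(reference, "a stack is a last in first out data structure"))
print(grade(reference, "completely unrelated response"))
```

An identical answer scores the maximum and a fully disjoint answer scores zero; a trained Sentence-Transformer would additionally credit paraphrases that share no surface vocabulary, which is the motivation for using it over lexical overlap.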
Description
Camera-ready final version
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
2022 33rd Irish Signals and Systems Conference, ISSC 2022
License
Attribution-NonCommercial-ShareAlike 4.0 International
