In the context of an increased emphasis on quality assurance of teaching, it is crucial that student evaluation of teaching (SET) methods be both reliable and workable in practice. Online SETs, in particular, tend to attract criticism from those who react most strongly to mechanisms of teaching accountability. However, most studies of SET processes have been conducted with small, cross-sectional convenience samples. Longitudinal studies are rare, as comparisons of SET administration methods are generally pilot studies followed shortly afterwards by full implementation. The investigation presented here contributes substantially to the debate by examining the impact of online administration of SET in a very large longitudinal sample analysed at the course level rather than the individual student level, thereby accounting for the inter-dependency of students' responses to the same instructor. It explores the impact of the administration method (paper-based in-class vs. online out-of-class collection) on SET scores, using a longitudinal sample of over 63,000 student responses collected over a period of 10 years. After adjusting for the confounding effects of class size, faculty, year of evaluation, years of teaching experience and student performance, an effect of the administration method is observed, but it is insignificant.
Publication
Assessment & Evaluation in Higher Education, 40(1), 120-134
Publisher
Routledge: Taylor & Francis
Note
peer-reviewed
Rights
This is an Author's Original Manuscript of an article whose final and definitive form, the Version of Record, has been published in Assessment & Evaluation in Higher Education, © 2015 Taylor & Francis, available online at: http://dx.doi.org/10.1080/02602938.2014.890695