Posted on 2020-01-21, 19:46. Authored by Brian Fitzgerald, Alan R. Dennis, Juyoung An, Satoshi Tsutsui, Rishikesh Muchhala.
Information systems (IS) researchers have long discussed research impact and journal rankings. We believe that any
measure of impact should pass the same fundamental tests that we apply to our own research: validity and reliability.
In this paper, we examine the impact of journals in the AIS Senior Scholars’ basket of eight journals, three close
contenders (i.e., journals that researchers frequently suggest for inclusion in the basket), and six randomly selected IS
journals (from the Web of Science list) using a variety of traditional measures (e.g., journal impact factor) and newer
measures (e.g., PageRank). Based on the results, we draw three rather unpleasant and likely contentious conclusions.
First, the journal impact factor and other traditional mean-based measures are not valid, so we conclude
that one should not use them to measure journal quality. Second, the journal basket does not reliably measure quality,
so we conclude that one should not use it to measure journal quality. Third, the journal in which a paper appears does
not reliably measure the paper’s quality, so we conclude that one should not use the number of papers an author has
published in certain journals as a criterion for promotion and tenure assessments. We believe that the best way forward
involves focusing on paper-level and not journal-level measures. We offer some suggestions, but we fundamentally
conclude that we do not know enough to make good recommendations, so we need more research on paper-level
measures. We believe that these issues pertain to many disciplines and not just the IS discipline and that we need to
take the lead in doing research to identify valid and reliable measures for assessing research impact.
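To make the "mean-based" critique concrete, here is a minimal illustrative sketch (not from the paper; all citation counts are hypothetical) of how a two-year, impact-factor-style mean can be dominated by a single highly cited outlier:

```python
# Illustrative sketch only: hypothetical citation counts, not data from the paper.
from statistics import mean, median

# Citation counts for the items a journal published in the two preceding
# years (the window over which the two-year impact factor is averaged).
citations = [0, 0, 0, 1, 1, 2, 2, 3, 4, 120]  # one highly cited outlier

# The impact factor is a mean: total citations divided by citable items.
impact_factor = mean(citations)

print(f"mean (impact-factor-style): {impact_factor:.1f}")   # 13.3
print(f"median citations:           {median(citations):.1f}")  # 1.5
```

In this toy example, the mean (13.3) is driven almost entirely by the outlier, while the typical item received only one or two citations; this skew is the core validity concern with mean-based journal-level measures.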
Publication
Communications of the Association for Information Systems, 45, Article 7