University of Limerick

Experimental assessment of software metrics using automated refactoring

Conference contribution posted on 2012-11-09, authored by Mel Ó Cinnéide, Laurence Tratt, Mark Harman, Steven Counsell and Iman Hemati-Moghadam.
A large number of software metrics have been proposed in the literature, but there is little understanding of how these metrics relate to one another. We propose a novel experimental technique, based on search-based refactoring, to assess software metrics and to explore relationships between them. Our goal is not to improve the program being refactored, but to assess the software metrics that guide the automated refactoring through repeated refactoring experiments. We apply our approach to five popular cohesion metrics using eight real-world Java systems, involving 300,000 lines of code and over 3,000 refactorings. Our results demonstrate that cohesion metrics disagree with each other in 55% of cases, and show how our approach can be used to reveal novel and surprising insights into the software metrics under investigation.
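The abstract outlines the core experimental loop: a search-based refactoring process accepts only those refactorings that improve one guiding cohesion metric, and the remaining metrics are re-measured at each accepted step to record whether they agree or disagree with the change. The following is a minimal sketch of that loop, not the authors' actual tool; the Program, Refactoring, and Metric abstractions, the random candidate selection, and the convention that a higher metric value means better cohesion are all illustrative assumptions.

```java
import java.util.List;
import java.util.Random;

// Illustrative abstractions (assumptions, not the paper's implementation):
// a Program offers candidate refactorings, a Refactoring produces a new
// Program, and a Metric scores a Program (higher = more cohesive here).
interface Metric { double measure(Program p); }
interface Refactoring { Program applyTo(Program p); }
interface Program { List<Refactoring> candidateRefactorings(); }

class MetricAssessment {
    /**
     * Hill-climbs over refactorings guided by one metric and counts, for each
     * accepted refactoring, how often the other metrics agree (also improve
     * or stay equal) or disagree (worsen) with the guiding metric.
     * Returns {agreed, disagreed}.
     */
    static int[] run(Program program, Metric guide, List<Metric> others, int steps) {
        int agreed = 0, disagreed = 0;
        Random rng = new Random(42);
        for (int i = 0; i < steps; i++) {
            List<Refactoring> candidates = program.candidateRefactorings();
            if (candidates.isEmpty()) break;
            Refactoring r = candidates.get(rng.nextInt(candidates.size()));
            Program next = r.applyTo(program);
            // Accept only refactorings the guiding metric rates as improvements.
            if (guide.measure(next) <= guide.measure(program)) continue;
            for (Metric m : others) {
                if (m.measure(next) >= m.measure(program)) agreed++;
                else disagreed++; // the metrics pull in opposite directions
            }
            program = next;
        }
        return new int[] { agreed, disagreed };
    }
}
```

Dividing the disagreement count by the total number of judgments over many such runs gives the kind of pairwise disagreement rate the abstract reports (55% across the five cohesion metrics studied).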

History

Publication

Proceedings of the ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM’12), pp. 49–58

Publisher

Association for Computing Machinery

Note

peer-reviewed

Other funding information

Science Foundation Ireland (SFI)

Rights

"© ACM, 2012. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published inProceedings of the ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM’12), http://dx.doi.org/10.1145/2372251.2372260

Language

English
