Posted on 2017-04-04, 13:03. Authored by Mel Ó Cinnéide, Iman Hemati-Moghadam, Mark Harman, Steven Counsell, and Laurence Tratt.
In spite of several decades of software metrics research and practice, there is little
understanding of how software metrics relate to one another, nor is there any established
methodology for comparing them. We propose a novel experimental technique, based on
search-based refactoring, to ‘animate’ metrics and observe their behaviour in a practical setting.
Our aim is to promote metrics to the level of active, opinionated objects that can be
compared experimentally to uncover where they conflict, and to understand better the underlying
cause of the conflict. Our experimental approaches include semi-random refactoring,
refactoring for increased metric agreement/disagreement, refactoring to increase/decrease
the gap between a pair of metrics, and targeted hypothesis testing. We apply our approach to five popular cohesion metrics using ten real-world Java systems, involving 330,000 lines of
code and the application of over 78,000 refactorings. Our results demonstrate that cohesion
metrics disagree with each other in a remarkable 55% of cases, that Low-level Similarity-based
Class Cohesion (LSCC) is the best representative of the set of metrics we investigate
while Sensitive Class Cohesion (SCOM) is the least representative, and we discover several
hitherto unknown differences between the examined metrics. We also use our approach to
investigate the impact of including inheritance in a cohesion metric definition and find that
doing so dramatically changes the metric.
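
To illustrate the kind of agreement/disagreement comparison the abstract describes, here is a minimal Java sketch, not the authors' actual tool: after a single refactoring is applied, two cohesion metrics are re-measured and the pair of deltas is classified as agreement (both metrics move in the same direction) or disagreement (opposite directions). The CohesionMetric, Refactoring, and JavaClass types are hypothetical placeholders for whatever representation a real implementation would use.

```java
// Hypothetical program representation of a single class under refactoring.
interface JavaClass {}

// A cohesion metric maps a class to a numeric score (e.g. LSCC, SCOM).
interface CohesionMetric {
    double measure(JavaClass cls);
}

// A refactoring transforms one version of a class into another.
interface Refactoring {
    JavaClass apply(JavaClass cls);
}

enum Verdict { AGREE, DISAGREE, NEUTRAL }

final class MetricAgreement {
    // Classify how two metrics respond to the same refactoring.
    static Verdict classify(CohesionMetric a, CohesionMetric b,
                            JavaClass before, Refactoring r) {
        JavaClass after = r.apply(before);
        double deltaA = a.measure(after) - a.measure(before);
        double deltaB = b.measure(after) - b.measure(before);
        if (deltaA == 0.0 || deltaB == 0.0) {
            return Verdict.NEUTRAL; // at least one metric is indifferent
        }
        // Same sign: the two metrics judge the refactoring alike.
        return Math.signum(deltaA) == Math.signum(deltaB)
                ? Verdict.AGREE
                : Verdict.DISAGREE;
    }
}
```

Tallying such verdicts over many thousands of applied refactorings is one plausible way to arrive at an aggregate disagreement rate like the 55% reported above.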