Paper Title


The Mind Is a Powerful Place: How Showing Code Comprehensibility Metrics Influences Code Understanding

Authors

Marvin Wyrich, Andreas Preikschat, Daniel Graziotin, Stefan Wagner

Abstract

Static code analysis tools and integrated development environments present developers with quality-related software metrics, some of which describe the understandability of source code. Software metrics influence overarching strategic decisions that impact the future of companies and the prioritization of everyday software development tasks. Several software metrics, however, lack in validation: we just choose to trust that they reflect what they are supposed to measure. Some of them were even shown to not measure the quality aspects they intend to measure. Yet, they influence us through biases in our cognitive-driven actions. In particular, they might anchor us in our decisions. Whether the anchoring effect exists with software metrics has not been studied yet. We conducted a randomized and double-blind experiment to investigate the extent to which a displayed metric value for source code comprehensibility anchors developers in their subjective rating of source code comprehensibility, whether performance is affected by the anchoring effect when working on comprehension tasks, and which individual characteristics might play a role in the anchoring effect. We found that the displayed value of a comprehensibility metric has a significant and large anchoring effect on a developer's code comprehensibility rating. The effect does not seem to affect the time or correctness when working on comprehension questions related to the code snippets under study. Since the anchoring effect is one of the most robust cognitive biases, and we have limited understanding of the consequences of the demonstrated manipulation of developers by non-validated metrics, we call for an increased awareness of the responsibility in code quality reporting and for corresponding tools to be based on scientific evidence.
