Paper Title

On Calibrating Semantic Segmentation Models: Analyses and An Algorithm

Authors

Dongdong Wang, Boqing Gong, Liqiang Wang

Abstract

We study the problem of semantic segmentation calibration. Many solutions have been proposed to address model miscalibration of confidence in image classification. However, to date, research on confidence calibration for semantic segmentation remains limited. We provide a systematic study of the calibration of semantic segmentation models and propose a simple yet effective approach. First, we find that model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration. Among them, prediction correctness, especially misprediction, matters most to miscalibration due to over-confidence. Next, we propose a simple, unifying, and effective approach, namely selective scaling, which separates correct from incorrect predictions for scaling and focuses more on smoothing the logits of mispredictions. Then, we survey popular existing calibration methods and compare them with selective scaling on semantic segmentation calibration. We conduct extensive experiments on a variety of benchmarks covering both in-domain and domain-shift calibration and show that selective scaling consistently outperforms other methods.
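The core idea of selective scaling, as described above, is to apply different temperature scaling to pixels judged correct versus mispredicted, smoothing the latter more aggressively. Below is a minimal illustrative sketch of that idea, not the paper's implementation: the per-pixel misprediction mask, the function name `selective_scaling`, and the temperature values are all assumptions for illustration (in practice the correct/incorrect split must come from a learned misprediction detector, and temperatures would be tuned on held-out data).

```python
import numpy as np

def selective_scaling(logits, is_misprediction, t_correct=1.0, t_incorrect=2.0):
    """Illustrative per-pixel selective temperature scaling.

    logits: array of shape (..., num_classes)
    is_misprediction: boolean array of shape (...,), True where a pixel
        is judged to be mispredicted (e.g. by a misprediction detector)
    t_correct / t_incorrect: example temperatures (not from the paper);
        a larger temperature smooths the softmax distribution more.
    """
    # Pick a temperature per pixel: mispredictions get stronger smoothing.
    temps = np.where(is_misprediction, t_incorrect, t_correct)
    scaled = logits / temps[..., None]
    # Numerically stable softmax over the class axis.
    e = np.exp(scaled - scaled.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

For example, two pixels with identical logits but different correctness labels end up with different confidences: the pixel flagged as a misprediction receives a flatter (less over-confident) distribution.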
