Paper Title

Causal Explanations of Image Misclassifications

Paper Authors

Min, Yan, Bennett, Miles

Paper Abstract

The causal explanation of image misclassifications is an understudied niche that can provide valuable insights into model interpretability and improve prediction accuracy. This study trains six modern CNN architectures, VGG16, ResNet50, GoogLeNet, DenseNet161, MobileNet V2, and Inception V3, on CIFAR-10 and explores the misclassification patterns using conditional confusion matrices and misclassification networks. Two causes are identified and qualitatively distinguished: morphological similarity and non-essential information interference. The former is not model-dependent, whereas the latter is inconsistent across the six models. To reduce the misclassifications caused by non-essential information interference, this study erases the pixels within bounding boxes anchored at the top 5% of pixels in the saliency map. This method first verifies the cause; then, by directly modifying the cause, it reduces the misclassification. Future studies will focus on quantitatively differentiating the two causes of misclassifications, generalizing the anchor-box-based inference modification method to reduce misclassifications, and exploring the interactions of the two causes.
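
The saliency-anchored erasure described in the abstract can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' released code: it assumes a PyTorch model and a single normalized CIFAR-10 image tensor of shape (3, 32, 32), uses a plain gradient saliency map, and the box size (`box=5`) and fill value (0) are hypothetical, since the abstract does not specify them. Only the top-5% anchoring comes directly from the text.

```python
import torch

def saliency_map(model, image, label):
    """Vanilla gradient saliency: |d(logit of `label`) / d(pixel)|, maxed over channels."""
    model.eval()
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0))       # (1, num_classes)
    logits[0, label].backward()
    return image.grad.abs().amax(dim=0)      # (H, W)

def erase_top_saliency(image, sal, top_frac=0.05, box=5, fill=0.0):
    """Erase pixels inside box-shaped regions anchored at the top-`top_frac`
    saliency pixels (box size and fill value are assumptions)."""
    H, W = sal.shape
    k = max(1, int(top_frac * H * W))
    anchors = sal.flatten().topk(k).indices  # flat indices of the top-5% pixels
    erased = image.clone()
    half = box // 2
    for idx in anchors.tolist():
        r, c = divmod(idx, W)
        erased[:, max(0, r - half):r + half + 1, max(0, c - half):c + half + 1] = fill
    return erased

# Hypothetical usage: test whether erasing the interfering pixels changes the prediction.
# model = torchvision.models.resnet50(num_classes=10)   # trained on CIFAR-10
# sal = saliency_map(model, image, label=predicted_class)
# new_pred = model(erase_top_saliency(image, sal).unsqueeze(0)).argmax(dim=1)
```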
