Paper Title

UGformer for Robust Left Atrium and Scar Segmentation Across Scanners

Paper Authors

Tianyi Liu, Size Hou, Jiayuan Zhu, Zilong Zhao, Haochuan Jiang

Paper Abstract

Thanks to their capacity for long-range dependencies and robustness to irregular shapes, vision transformers and deformable convolutions are emerging as powerful vision techniques for segmentation. Meanwhile, Graph Convolution Networks (GCNs) optimize local features based on global topological relationship modeling. In particular, they have proven effective in addressing issues in medical image segmentation tasks, including multi-domain generalization for low-quality images. In this paper, we present a novel, effective, and robust framework for medical image segmentation, namely UGformer. It unifies novel transformer blocks, GCN bridges, and convolution decoders originating from U-Net to predict left atria (LAs) and LA scars. We have identified two appealing findings of the proposed UGformer: 1) an enhanced transformer module with deformable convolutions improves the blending of transformer information with convolutional information and helps predict irregular LA and scar shapes; 2) a bridge incorporating GCNs further overcomes the difficulty of capturing condition inconsistency across different magnetic resonance imaging (MRI) scanners with varying, inconsistent domain information. The proposed UGformer model exhibits an outstanding ability to segment the left atrium and scars on the LAScarQS 2022 dataset, outperforming several recent state-of-the-art methods.
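
The abstract describes three building blocks: a transformer module enhanced with deformable convolutions, a GCN bridge, and a U-Net-style convolutional decoder. The following is a minimal PyTorch sketch of how such components could be wired together. It is an illustrative assumption based only on the abstract, not the authors' released implementation; all class names, channel sizes, the single-stage layout, and the 4-neighbour grid adjacency used by the GCN bridge are hypothetical choices.

```python
# Minimal sketch (assumption, not the authors' code) of the three UGformer
# ingredients named in the abstract: a deformable-convolution-enhanced
# transformer block, a GCN bridge over feature-map positions, and a plain
# convolutional decoder head.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableTransformerBlock(nn.Module):
    """Self-attention over flattened patches, blended with a deformable-conv branch."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # Offsets for the 3x3 deformable kernel are predicted from the input itself.
        self.offset = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)             # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_out = self.norm(tokens + attn_out).transpose(1, 2).reshape(b, c, h, w)
        deform_out = self.deform(x, self.offset(x))       # irregular-shape-aware branch
        return attn_out + deform_out                      # blend the two information sources


class GCNBridge(nn.Module):
    """Graph convolution over spatial positions with a fixed 4-neighbour adjacency."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        b, c, h, w = x.shape
        nodes = x.flatten(2).transpose(1, 2)               # (B, N, C), N = H*W
        adj = self._grid_adjacency(h, w, x.device)         # (N, N), row-normalised
        out = torch.relu(self.proj(adj @ nodes))           # A * X * W
        return out.transpose(1, 2).reshape(b, c, h, w)

    @staticmethod
    def _grid_adjacency(h: int, w: int, device) -> torch.Tensor:
        idx = torch.arange(h * w, device=device).reshape(h, w)
        adj = torch.eye(h * w, device=device)
        adj[idx[:, :-1].flatten(), idx[:, 1:].flatten()] = 1   # right neighbours
        adj[idx[:, 1:].flatten(), idx[:, :-1].flatten()] = 1   # left neighbours
        adj[idx[:-1, :].flatten(), idx[1:, :].flatten()] = 1   # down neighbours
        adj[idx[1:, :].flatten(), idx[:-1, :].flatten()] = 1   # up neighbours
        return adj / adj.sum(dim=1, keepdim=True)              # row-normalise


class UGformerSketch(nn.Module):
    """Encoder (deformable transformer) -> GCN bridge -> convolutional decoder."""

    def __init__(self, in_channels: int = 1, channels: int = 32, num_classes: int = 3):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        self.encoder = DeformableTransformerBlock(channels)
        self.bridge = GCNBridge(channels)
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, num_classes, kernel_size=1),   # background / LA / scar
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.bridge(self.encoder(self.stem(x))))


if __name__ == "__main__":
    model = UGformerSketch()
    logits = model(torch.randn(1, 1, 32, 32))   # tiny dummy input standing in for an MRI slice
    print(logits.shape)                          # torch.Size([1, 3, 32, 32])
```

In this sketch the GCN bridge sits between encoder and decoder so that every spatial position can exchange information with its neighbours through a normalised adjacency matrix, which is one plausible reading of how a topology-aware bridge could help reconcile scanner-dependent feature statistics.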
