Paper Title

Cross-Domain Segmentation with Adversarial Loss and Covariate Shift for Biomedical Imaging

Paper Authors

Bora Baydar, Savas Ozkan, A. Emre Kavur, N. Sinem Gezer, M. Alper Selver, Gozde Bozdagi Akar

Paper Abstract

Despite the widespread use of deep learning methods for semantic segmentation of images acquired from a single source, clinicians often use multi-domain data for detailed analysis. For instance, CT and MRI have advantages over each other in terms of imaging quality, artifacts, and output characteristics, which leads to differential diagnosis. Due to these differences, current segmentation techniques are only able to work on an individual domain. However, models that are capable of working on all modalities are essential for a complete solution. Furthermore, robustness is drastically affected by the number of training samples, especially for deep learning models. Hence, for reliable methods, all available data should be used regardless of the data domain. To this end, this manuscript aims to implement a novel model that can learn robust representations from cross-domain data by encapsulating distinct and shared patterns of different modalities. Precisely, the covariate shift property is retained through a structural modification and an adversarial loss, with which sparse and rich representations are obtained. As a result, a single parameter set is used to perform the cross-domain segmentation task. A key advantage of the proposed method is that no modality-related information is provided in either the training or the inference phase. Tests on CT and MRI liver data acquired in routine clinical workflows show that the proposed model outperforms all other baselines by a large margin. Experiments are also conducted on a COVID-19 dataset, which consists of CT data where significant intra-class visual differences are observed. Here, too, the proposed method achieves the best performance.
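
To make the adversarial component described in the abstract concrete, below is a minimal PyTorch sketch of one common way to train a single shared-parameter segmenter with an adversarial loss that discourages modality-specific features. It is an illustrative approximation, not the authors' architecture: unlike the proposed method, which uses no modality information in training or inference, this toy discriminator is trained with explicit CT/MRI labels, and all names (`SegNet`, `DomainDiscriminator`, `train_step`, `lambda_adv`) are assumptions introduced here.

```python
# Minimal sketch (not the paper's implementation): a shared-parameter segmentation
# network plus an adversarial modality discriminator, so that one parameter set
# serves both CT and MRI slices. All module/variable names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegNet(nn.Module):
    """Tiny encoder-decoder used for both modalities (single parameter set)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        feat = self.enc(x)            # shared latent representation
        return self.dec(feat), feat

class DomainDiscriminator(nn.Module):
    """Predicts the source modality (CT vs. MRI) from encoder features."""
    def __init__(self, in_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, feat):
        return self.net(feat)

def train_step(seg, disc, opt_seg, opt_disc, x, y, domain, lambda_adv=0.1):
    """One adversarial update: the discriminator learns to tell modalities apart,
    while the segmenter is pushed toward modality-invariant features."""
    # 1) Update the discriminator on detached features.
    _, feat = seg(x)
    d_loss = F.binary_cross_entropy_with_logits(disc(feat.detach()), domain)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Update the segmenter: supervised loss plus an adversarial term that
    #    rewards features the discriminator misclassifies (flipped labels).
    logits, feat = seg(x)
    seg_loss = F.cross_entropy(logits, y)
    adv_loss = F.binary_cross_entropy_with_logits(disc(feat), 1.0 - domain)
    loss = seg_loss + lambda_adv * adv_loss
    opt_seg.zero_grad(); loss.backward(); opt_seg.step()
    return seg_loss.item(), d_loss.item()

if __name__ == "__main__":
    seg, disc = SegNet(), DomainDiscriminator()
    opt_seg = torch.optim.Adam(seg.parameters(), lr=1e-4)
    opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
    x = torch.randn(4, 1, 64, 64)                  # mixed CT/MRI batch (toy data)
    y = torch.randint(0, 2, (4, 64, 64))           # segmentation labels
    domain = torch.randint(0, 2, (4, 1)).float()   # 0 = CT, 1 = MRI (toy only)
    print(train_step(seg, disc, opt_seg, opt_disc, x, y, domain))
```

In this generic setup, `lambda_adv` balances the supervised segmentation loss against the domain-confusion term; the structural modification the paper uses to preserve the covariate shift property and obtain sparse and rich representations is not reproduced here.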
