Paper Title
LAMP: Large Deep Nets with Automated Model Parallelism for Image Segmentation
Paper Authors
Paper Abstract
Deep Learning (DL) models are becoming larger, because increasing model size can offer significant accuracy gains. To enable the training of large deep networks, data parallelism and model parallelism are two well-known approaches for parallel training. However, data parallelism does not help reduce the memory footprint per device. In this work, we introduce Large deep 3D ConvNets with Automated Model Parallelism (LAMP) and investigate the impact of both input size and deep 3D ConvNet size on segmentation accuracy. Through automated model parallelism, it is feasible to train large deep 3D ConvNets with a large input patch, or even the whole image. Extensive experiments demonstrate that, facilitated by automated model parallelism, segmentation accuracy can be improved by increasing both model size and input context size, and that large inputs yield significant inference speedup compared with sliding-window inference over small patches. Code is available\footnote{https://monai.io/research/lamp-automated-model-parallelism}.
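The core idea the abstract relies on can be sketched in a few lines. This is a minimal, illustrative toy (not the LAMP implementation): a deep network's layers are partitioned across devices, so each device holds only part of the weights and only activations cross the device boundary. The two "devices" here are simulated in plain NumPy; all names and the partition scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 4-layer network; in model parallelism the *weights* are what
# must be split, since they dominate the per-device memory footprint.
layer_weights = [rng.standard_normal((8, 8)) * 0.1 for _ in range(4)]

# Hypothetical partition: layers 0-1 live on device 0, layers 2-3 on
# device 1. An automated partitioner would choose this split to
# balance memory/compute; here it is fixed by hand.
partition = {0: layer_weights[:2], 1: layer_weights[2:]}

def forward_on_device(x, weights):
    """Run the layers assigned to one device. Only the activation
    tensor `x` is transferred between devices, never the weights."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # linear layer + ReLU
    return x

x = rng.standard_normal((2, 8))          # a batch of 2 inputs
h = forward_on_device(x, partition[0])   # stage on "device 0"
y = forward_on_device(h, partition[1])   # activation handed to "device 1"

print(y.shape)  # (2, 8); each device stored only half the weights
```

Because each device stores only its own stage, the per-device memory footprint shrinks as more devices are added, which is what makes training with very large input patches (or whole images) feasible.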