Paper Title

On the interplay between physical and content priors in deep learning for computational imaging

Paper Authors

Mo Deng, Shuai Li, Iksung Kang, Nicholas X. Fang, George Barbastathis

Paper Abstract

Deep learning (DL) has been applied extensively in many computational imaging problems, often leading to superior performance over traditional iterative approaches. However, two important questions remain largely unanswered: first, how well can a trained neural network generalize to objects very different from those seen in training? This is particularly important in practice, since large-scale annotated examples similar to the objects of interest are often unavailable during training. Second, has the trained neural network learnt the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the examples or point-wise pattern matching? This pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN), a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of the training examples. Moreover, we relate the strength of the regularization effect that a training set imposes on the training process to the Shannon entropy of the images in the dataset: the higher the entropy of the training images, the weaker the regularization effect that can be imposed. We also discover that a weaker regularization effect leads to better learning of the underlying propagation model, i.e. the weak object transfer function, applicable to weakly scattering objects under the weak object approximation. Finally, simulation and experimental results show that better cross-domain generalization performance is achieved if the DNN is trained on a higher-entropy database, e.g. ImageNet, than if the same DNN is trained on a lower-entropy database, e.g. MNIST, since the former allows the underlying physics model to be learned better than the latter.
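Two short illustrations of the quantities the abstract leans on follow; both are sketches under stated assumptions, not reproductions of the paper's exact procedures.

For a lensless (free-space Fresnel) geometry such as PhENN's, the weak object transfer function for a weak phase object takes the standard textbook form

$$\widetilde{I}(\mathbf{u}) \approx \delta(\mathbf{u}) + 2\sin\!\left(\pi \lambda z\, |\mathbf{u}|^{2}\right) \widetilde{\phi}(\mathbf{u}),$$

where $\widetilde{I}$ and $\widetilde{\phi}$ are the Fourier transforms of the measured intensity and the object phase, $\lambda$ is the wavelength, $z$ the propagation distance, and $\mathbf{u}$ the spatial frequency. The notation here is ours; the formula is quoted only to make concrete what "learning the propagation model" means.

The Shannon entropy of an image is likewise the standard histogram-based quantity: with $p_i$ the fraction of pixels at gray level $i$, $H = -\sum_i p_i \log_2 p_i$ bits per pixel. A minimal estimator, assuming 8-bit grayscale images (shannon_entropy and the toy images below are our illustrative stand-ins, not code or data from the paper):

```python
import numpy as np

def shannon_entropy(image: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy (bits/pixel) estimated from the gray-level
    histogram; assumes pixel values lie in [0, levels)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # empty bins contribute nothing: 0 * log2(0) -> 0
    return float(-np.sum(p * np.log2(p)))

# Hypothetical stand-ins: a sparse MNIST-like digit vs. a busy natural patch.
digit = np.zeros((28, 28), dtype=np.uint8)
digit[10:18, 6:22] = 255  # a few bright strokes on a black background
natural = np.random.default_rng(0).integers(0, 256, size=(28, 28), dtype=np.uint8)

print(f"MNIST-like digit:    {shannon_entropy(digit):.2f} bits/pixel")
print(f"ImageNet-like patch: {shannon_entropy(natural):.2f} bits/pixel")
```

On real data the same gap appears: mostly-blank, near-binary MNIST digits score far below richly textured ImageNet patches, which is the entropy difference the abstract invokes to explain the cross-domain generalization results.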
