Joint Image Reconstruction and Image Registration without Any Ground-Truth Supervision
1Department of Computer Science & Engineering, Washington University in St. Louis, St. Louis, MO, USA; 2Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, USA; 3Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA; 4Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA; 5Department of Neurology, Washington University in St. Louis, St. Louis, MO, USA
We are grateful to Vivian Chen for her contributions to this project website.
An illustration of the DeCoLearn training procedure.
Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to ground truth. However, existing N2N-based methods are unsuitable for learning from measurements of an object undergoing nonrigid deformation. This paper addresses this issue by proposing the deformation-compensated learning (DeCoLearn) method, which trains deep reconstruction networks while compensating for object deformations. A key component of DeCoLearn is a deep registration module that is jointly trained with the reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality.
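The joint training idea can be illustrated with a compact sketch. The following is a minimal, hypothetical PyTorch example of one DeCoLearn-style training step, assuming a single-coil Cartesian masked-FFT forward operator and toy CNNs standing in for the actual reconstruction and registration architectures; the names (ReconNet, RegNet, warp, forward_op) are illustrative, and this is not the authors' implementation.

```python
# Hypothetical sketch of one DeCoLearn-style training step (not the authors' code).
# Assumptions: single-coil Cartesian MRI with a masked-FFT forward operator and
# magnitude images; small CNNs replace the real reconstruction/registration networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReconNet(nn.Module):
    """Toy reconstruction CNN: refines a zero-filled image with a residual CNN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)

class RegNet(nn.Module):
    """Toy registration CNN: predicts a dense 2-channel displacement field (pixels)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1))
    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(img, flow):
    """Warp img with a displacement field via a spatial transformer (grid_sample)."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=img.device, dtype=img.dtype),
                            torch.arange(w, device=img.device, dtype=img.dtype),
                            indexing="ij")
    # Absolute sampling locations in pixels, normalized to [-1, 1] for grid_sample.
    new_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1
    new_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack([new_x, new_y], dim=-1)      # (N, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def forward_op(img, mask):
    """Undersampled Fourier measurement of a (magnitude) image."""
    return torch.fft.fft2(img.to(torch.complex64)) * mask

recon, reg = ReconNet(), RegNet()
opt = torch.optim.Adam(list(recon.parameters()) + list(reg.parameters()), lr=1e-4)

# Two undersampled measurements of the same object in different deformation states
# (random stand-ins here; in practice these come from the acquired k-space data).
h = w = 64
mask_a = (torch.rand(1, 1, h, w) < 0.4).float()
mask_b = (torch.rand(1, 1, h, w) < 0.4).float()
y_a = forward_op(torch.rand(1, 1, h, w), mask_a)
y_b = forward_op(torch.rand(1, 1, h, w), mask_b)

zf_a = torch.fft.ifft2(y_a).abs()                   # zero-filled network inputs
zf_b = torch.fft.ifft2(y_b).abs()

x_a, x_b = recon(zf_a), recon(zf_b)                 # reconstruct both states
flow = reg(x_a, x_b)                                # estimate deformation a -> b
x_a_warped = warp(x_a, flow)                        # deformation-compensated image

# Measurement-domain consistency against the *other* acquisition, plus a
# smoothness penalty on the displacement field; no ground-truth image is used.
data_loss = (forward_op(x_a_warped, mask_b) - y_b).abs().pow(2).mean()
smooth_loss = (flow[..., 1:, :] - flow[..., :-1, :]).pow(2).mean() + \
              (flow[..., 1:] - flow[..., :-1]).pow(2).mean()
loss = data_loss + 0.1 * smooth_loss

opt.zero_grad()
loss.backward()
opt.step()
```

Because the loss compares the warped reconstruction of one measurement against the other measurement in the k-space domain, both the reconstruction and registration networks receive gradients from undersampled data alone, which is the sense in which the training is free of ground-truth supervision.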
A video of DeCoLearn reconstructed images across different slices. LEFT: inverse multi-coil non-uniform fast Fourier transform (MCNUFFT). RIGHT: DeCoLearn.
A video of DeCoLearn reconstructed images across different respiratory phases. LEFT: inverse multi-coil non-uniform fast Fourier transform (MCNUFFT). RIGHT: DeCoLearn.
An illustration of DeCoLearn reconstructed images across different respiratory phases.
An illustration of DeCoLearn reconstructed images compared against several baseline methods.