Overview and details of the sessions of this conference.
Please note that all times are shown in the time zone of the conference (CEST).
Oral Session - Deep Learning Methods
2:50pm - 3:10pm
Fast Reconstruction of Non-Circular CBCT Orbits Using CNNs
1Computer Assisted Clinical Medicine, Heidelberg University, Heidelberg, Germany; 2Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA; 3ACRF Image X Institute, University of Sydney, Australia; 4Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria; 5Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
Non-circular acquisition orbits for CBCT have been investigated for a number of reasons, including an increased field of view and improved CBCT image quality.
3:10pm - 3:30pm
Noise Entangled GAN For Low-Dose CT Simulation - FULLY3D 2021 AWARD NOMINEE
1Department of Biomedical Engineering, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY USA; 2Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX USA; 3Computer Science Department, Stony Brook University, Stony Brook, NY USA; 4Division of Imaging, Diagnostics and Software Reliability, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, MD USA
We propose a Noise Entangled GAN (NE-GAN) for simulating low-dose computed tomography (CT) images from a higher-dose CT image. First, we present two schemes to generate a clean CT image and a noise image from the high-dose CT image. An NE-GAN is then proposed to simulate different levels of low-dose CT images, where the level of generated noise can be continuously controlled by a noise factor. NE-GAN consists of a generator and a set of discriminators, whose number is determined by the number of noise levels used during training. Compared with traditional methods based on projection data, which are usually unavailable in real applications, NE-GAN can learn directly from real and/or simulated CT images and can create low-dose CT images quickly without the need for raw data or other proprietary CT scanner information. The experimental results show that the proposed method has the potential to simulate realistic low-dose CT images.
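The continuously controllable noise level described in the abstract can be illustrated with a simple decomposition: a high-dose image is split into a clean component and a noise (residual) component, and the noise is rescaled by a scalar factor before recombination. The sketch below is a toy illustration of that idea, not the authors' GAN implementation; all array shapes and variable names are invented for the example.

```python
import numpy as np

def simulate_low_dose(high_dose, clean, noise_factor):
    """Illustrative recombination: the noise image is the residual between
    the high-dose image and its clean estimate; scaling that residual by a
    noise factor mimics NE-GAN's continuously controllable noise level."""
    noise = high_dose - clean            # noise image as a residual
    return clean + noise_factor * noise  # noise_factor = 1 reproduces the input

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(64, 64))            # stand-in "clean CT" image
high_dose = clean + 0.01 * rng.standard_normal((64, 64))

ld = simulate_low_dose(high_dose, clean, 3.0)           # 3x the original noise level
```

In NE-GAN the recombination is learned by the generator rather than being a fixed linear rule, which is what allows it to produce realistic, dose-dependent noise textures.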
3:30pm - 3:50pm
Using Uncertainty in Deep Learning Reconstruction for Cone-Beam CT of the Brain - FULLY3D 2021 AWARD WINNER
1Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD USA 21205; 2Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore MD USA 21287
We present a deep learning reconstruction (DL-Recon) method that integrates physically principled reconstruction models with DL-based image synthesis, guided by the statistical uncertainty of the synthesized image. A synthesis network was developed to generate a synthesized CBCT image (DL-Synthesis) from an uncorrected filtered back-projection (FBP) image. To improve generalizability (including accurate representation of lesions not seen in training), the voxel-wise epistemic uncertainty of DL-Synthesis was computed using a Bayesian inference technique. In regions of high uncertainty, the DL-Recon method incorporates information from a physics-based reconstruction model and corrected projection data (with DL-Synthesis as the object model). Compared to FBP and penalized weighted least-squares (PWLS) reconstruction, the DL-Recon methods showed ~50% reduction in noise (at matched spatial resolution) and ~40-70% improvement in image uniformity. Conventional DL-Synthesis alone exhibited ~70% under-estimation of brain lesion contrast, suggesting a lack of generalizability to structures unseen in the training data, which was avoided with the DL-Recon methods.
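The core idea of using uncertainty to arbitrate between the synthesized image and the physics-based reconstruction can be sketched voxel-wise: where epistemic uncertainty (e.g., the variance across stochastic forward passes) is low, the synthesized image dominates; where it is high, the physics-based reconstruction takes over. The linear blend and the normalization below are a simplified reading of that idea, not the paper's exact formulation.

```python
import numpy as np

def dl_recon_blend(dl_synthesis, physics_recon, uncertainty):
    """Blend a DL-synthesized volume with a physics-based reconstruction.

    `uncertainty` is a voxel-wise epistemic uncertainty map (here, variance
    across stochastic forward passes), normalized to [0, 1] so it can act
    as a spatially varying mixing weight."""
    u = uncertainty / (uncertainty.max() + 1e-12)   # normalize to [0, 1]
    return (1.0 - u) * dl_synthesis + u * physics_recon

rng = np.random.default_rng(1)
samples = rng.standard_normal((8, 32, 32))   # 8 stochastic passes (toy data)
dl_syn = samples.mean(axis=0)                # predictive mean
unc = samples.var(axis=0)                    # epistemic uncertainty proxy
physics = np.ones((32, 32))                  # stand-in physics-based recon

fused = dl_recon_blend(dl_syn, physics, unc)
```

The appeal of this construction is that an unseen lesion, which the synthesis network cannot represent, shows up as high uncertainty and is therefore filled in from the measured projection data rather than hallucinated.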
3:50pm - 4:10pm
Deep Positron Range Correction
1Grupo de Fisica Nuclear, EMFTEL & IPARCOS, Complutense University of Madrid, Madrid, Spain; 2Health Research Institute of the Hospital Clínico San Carlos (IdISSC)
Positron range is one of the main factors limiting the spatial resolution achievable with Positron Emission Tomography (PET). Several PET radionuclides, such as 68Ga and 82Rb, emit high-energy positrons, creating significant blurring in the reconstructed images. In this work, we trained a deep neural network (Deep-PRC) with a U-Net architecture to correct PET images for positron range effects. Deep-PRC was trained with 3D input patches from images reconstructed from realistic Monte Carlo simulations that take into account the positron energy distribution, the materials and tissues it propagates through, and acquisition effects. Quantification of the reconstructed PET images corrected with Deep-PRC shows that it may restore the images up to 95% without any significant noise increase. The proposed method can provide an accurate positron range correction in a few seconds for a typical PET acquisition.
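The patch-wise processing mentioned in the abstract (the network operates on 3D input patches rather than the whole volume) can be sketched as a generic tile-and-reassemble loop. The patch size, the use of non-overlapping tiles, and the placeholder correction function below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def apply_patchwise(volume, correct_fn, patch=(8, 8, 8)):
    """Apply a correction function to a 3D volume patch by patch, mirroring
    Deep-PRC's use of 3D input patches. Non-overlapping tiles are used for
    simplicity; in practice overlapping patches with blending are common."""
    out = np.empty_like(volume)
    pz, py, px = patch
    for z in range(0, volume.shape[0], pz):
        for y in range(0, volume.shape[1], py):
            for x in range(0, volume.shape[2], px):
                block = volume[z:z + pz, y:y + py, x:x + px]
                out[z:z + pz, y:y + py, x:x + px] = correct_fn(block)
    return out

rng = np.random.default_rng(2)
pet = rng.uniform(size=(16, 16, 16))                 # toy PET volume
corrected = apply_patchwise(pet, lambda b: 2.0 * b)  # stand-in "correction"
```

Working on patches keeps GPU memory bounded and multiplies the effective number of training examples, which matters when realistic Monte Carlo simulations are expensive to produce.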
4:10pm - 4:30pm
RIDL: Row Interpolation with Deep Learning
1German Cancer Research Center (DKFZ), Heidelberg, Germany; 2Siemens Healthcare GmbH, Forchheim, Germany; 3Institute of Medical Engineering, University of Lübeck, Lübeck, Germany
The limited resolution of spiral CT scans in the axial z-direction can lead to reconstruction artifacts, so-called windmill artifacts. Available methods to increase the resolution, such as the z-flying focal spot (zFFS), are technically intricate. This work aims to interpolate CT detector rows using a neural network trained with projection data from clinical CT scans; the approach is abbreviated RIDL (Row Interpolation with Deep Learning). In addition to analyzing the interpolation results on single projections, the method was validated in the image domain: a reconstruction algorithm was applied to the RIDL output and compared with data sets using zFFS and linear interpolation. Although zFFS cannot be entirely replaced by the presented method, it was shown that the resolution in the z-direction can be increased with the RIDL network while achieving significantly better reconstruction results than with linear interpolation.
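The linear-interpolation baseline that RIDL is compared against can be made concrete: doubling the number of detector rows by inserting, between each pair of measured rows, their average. The sketch below shows that baseline on a toy projection; the RIDL network replaces the fixed averaging with a learned prediction of the missing rows.

```python
import numpy as np

def upsample_rows_linear(projection):
    """Double the detector-row sampling of a 2D projection (rows x channels)
    by linear interpolation: each inserted row is the mean of its two
    neighbors. This is the baseline against which RIDL is evaluated."""
    n_rows, n_cols = projection.shape
    out = np.empty((2 * n_rows - 1, n_cols), dtype=projection.dtype)
    out[0::2] = projection                                # keep measured rows
    out[1::2] = 0.5 * (projection[:-1] + projection[1:])  # interpolate between
    return out

proj = np.arange(12, dtype=float).reshape(4, 3)   # toy projection, 4 rows x 3 channels
up = upsample_rows_linear(proj)
```

Linear interpolation preserves the measured rows exactly but smooths high-frequency detail across rows, which is precisely where a learned interpolator can reduce windmill artifacts in the reconstruction.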
Conference: Fully3D 2021