Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference (CEST).

Session Overview
Oral Session - Deep Learning Methods
Thursday, 22/July/2021:
2:50pm - 4:30pm

Session Chair: Jerome Liang
Session Chair: Kuang Gong
Location: virtual (CEST)

External Resource:
  • Link to the prerecordings
  • Link to the recording of the live session
    2:50pm - 3:10pm

    Fast Reconstruction of non-circular CBCT orbits using CNNs

    Tom Russ1, Wenying Wang2, Alena-Kathrin Golla1, Dominik F. Bauer1, Matthew Tivnan2, Christian Tönnes1, Yiqun Q. Ma2, Tess Reynolds3, Sepideh Hatamikia4,5, Lothar R. Schad1, Frank G. Zöllner1, Grace J. Gang2, J. Webster Stayman2

    1Computer Assisted Clinical Medicine, Heidelberg University, Heidelberg, Germany; 2Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA; 3ACRF Image X Institute, University of Sydney, Australia; 4Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria; 5Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria

    Non-circular acquisition orbits for CBCT have been investigated for a number of reasons, including an increased field-of-view and improved CBCT image quality.
    Fast reconstruction of the projection data is essential in an interventional imaging setting.
    We present a scheme for fast reconstruction of arbitrary orbits based on CNNs.
    Specifically, we propose a processing chain that includes a shift-invariant deconvolution of backprojected measurements, followed by CNN processing in a U-Net architecture to address artifacts and deficiencies in the deconvolution process.
    Synthetic training data is produced using orbital specifications and projections of a large number of procedurally generated objects.
    We investigated reconstruction performance for different sets of acquisition orbits, including circular, sinusoidal, and randomized parametric trajectories.
    Our reconstruction scheme yields similar image quality when compared to SART, at a small fraction of the computation time.
    Thus, the proposed work offers a potential way to utilize sophisticated non-circular orbits while maintaining the strict time requirements found in interventional imaging.
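The shift-invariant deconvolution stage of such a pipeline can be sketched as a regularized Fourier-domain (Wiener-style) inverse filter; in the authors' scheme a U-Net then removes residual artifacts. This numpy sketch is purely illustrative and not the authors' implementation; the point-spread function `psf` of the backprojection operator and the regularization strength `eps` are assumed inputs.

```python
import numpy as np

def wiener_deconvolve(backprojection, psf, eps=1e-2):
    """Shift-invariant (Wiener-style) deconvolution in the Fourier domain.

    backprojection: 2D backprojected measurements.
    psf: blur kernel of the backprojection operator, same shape as the
         image and centred in the array (assumed shift-invariant).
    eps: Tikhonov-style regularization that avoids division by ~0
         where the transfer function is small.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))  # transfer function of the blur
    B = np.fft.fft2(backprojection)
    # Regularized inverse filter: conj(H) / (|H|^2 + eps)
    X = B * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))
```

With a small `eps` this largely inverts the assumed blur; the residual, frequency-dependent error is what a trained CNN would be asked to clean up.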

    3:10pm - 3:30pm

    Noise Entangled GAN For Low-Dose CT Simulation - FULLY3D 2021 AWARD NOMINEE

    Chuang Niu1, Ge Wang1, Pingkun Yan1, Juergen Hahn1, Youfang Lai2, Xun Jia2, Arjun Krishna3, Klaus Mueller3, Andreu Badal4, Kyle J. Myers4, Rongping Zeng4

    1Department of Biomedical Engineering, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY USA; 2Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX USA; 3Computer Science Department, Stony Brook University, Stony Brook, NY USA; 4Division of Imaging, Diagnostics and Software Reliability, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, MD USA

    We propose a Noise Entangled GAN (NE-GAN) for simulating low-dose computed tomography (CT) images from a higher-dose CT image. First, we present two schemes to generate a clean CT image and a noise image from the high-dose CT image. Then an NE-GAN is proposed to simulate different levels of low-dose CT images, where the level of generated noise can be continuously controlled by a noise factor. NE-GAN consists of a generator and a set of discriminators, where the number of discriminators is determined by the number of noise levels used during training. Compared with traditional methods based on projection data, which are usually unavailable in real applications, NE-GAN can learn directly from real and/or simulated CT images and can create low-dose CT images quickly without the need for raw data or other proprietary CT scanner information. The experimental results show that the proposed method has the potential to simulate realistic low-dose CT images.
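The noise-entanglement idea, splitting a higher-dose image into a clean component and a noise component and then rescaling the noise with a continuous factor, can be mimicked without a GAN. In this sketch a simple box filter stands in for the learned clean/noise decomposition; the filter choice and the factor values are assumptions, not the paper's scheme.

```python
import numpy as np

def box_blur(img, k=5):
    """Box-filter smoothing, used here as a crude 'clean image' estimator
    (NE-GAN learns this decomposition instead)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def simulate_low_dose(hd_image, noise_factor, k=5):
    """Split a higher-dose image into clean + noise, then rescale the
    noise with a continuous factor (the entanglement idea)."""
    clean = box_blur(hd_image, k)
    noise = hd_image - clean
    return clean + noise_factor * noise
```

A factor of 1.0 reproduces the input image; larger factors emulate lower-dose (noisier) acquisitions, which is what the noise factor controls in NE-GAN.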

    3:30pm - 3:50pm

    Using Uncertainty in Deep Learning Reconstruction for Cone-Beam CT of the Brain - FULLY3D 2021 AWARD WINNER

    Pengwei Wu1, Alejandro Sisniega1, Ali Uneri1, Runze Han1, Craig Jones1, Prasad Vagdargi1, Xiaoxuan Zhang1, Mark Luciano2, William Anderson2, Jeffrey Siewerdsen1,2

    1Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD USA 21205; 2Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore MD USA 21287

    We present a deep learning reconstruction (DL-Recon) method that integrates physically principled reconstruction models with DL-based image synthesis, guided by the statistical uncertainty in the synthesized image. A synthesis network was developed to generate a synthesized CBCT image (DL-Synthesis) from an uncorrected filtered back-projection (FBP) image. To improve generalizability (including accurate representation of lesions not seen in training), the voxel-wise epistemic uncertainty of DL-Synthesis was computed using a Bayesian inference technique. In regions of high uncertainty, the DL-Recon method incorporates information from a physics-based reconstruction model and corrected projection data (with DL-Synthesis as the object model). Compared to FBP and penalized weighted least-squares (PWLS) reconstruction, the DL-Recon methods showed ~50% reduction in noise (at matched spatial resolution) and ~40-70% improvement in image uniformity. Conventional DL-Synthesis alone exhibited ~70% under-estimation of brain lesion contrast, suggesting a lack of generalizability to structures unseen in the training data, which was avoided with the DL-Recon methods.
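The core fusion step, trusting the synthesis where its epistemic uncertainty is low and the physics-based reconstruction where it is high, can be sketched as a voxel-wise convex combination. The ensemble-variance uncertainty proxy and the rational weighting function below are illustrative assumptions, not the paper's Bayesian formulation.

```python
import numpy as np

def epistemic_uncertainty(samples):
    """Voxel-wise variance over repeated stochastic forward passes
    (e.g. Monte Carlo dropout), used as an epistemic-uncertainty proxy."""
    return np.var(samples, axis=0)

def dl_recon_blend(synthesis, physics_recon, uncertainty, scale=1.0):
    """Convex combination of the two images: where uncertainty is high
    the weight w -> 1 and the physics-based reconstruction dominates;
    where it is low, the DL synthesis is kept. `scale` (assumed) sets
    the crossover point."""
    w = uncertainty / (uncertainty + scale)
    return (1.0 - w) * synthesis + w * physics_recon
```

This is how an unseen lesion survives: the synthesis network is uncertain there, so the blend falls back to the measurement-driven reconstruction at those voxels.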

    3:50pm - 4:10pm

    Deep Positron Range Correction

    Joaquin L. Herraiz1,2, Alejandro Lopez-Montes1, Adrian Bembibre1, Nerea Encina1

    1Grupo de Fisica Nuclear, EMFTEL & IPARCOS, Complutense University of Madrid, Madrid, Spain; 2Health Research Institute of the Hospital Clínico San Carlos (IdISSC)

    Positron range is one of the main limiting factors for the spatial resolution achievable with Positron Emission Tomography (PET). Several PET radionuclides, such as 68Ga and 82Rb, emit high-energy positrons, creating significant blurring in the reconstructed images. In this work, we have trained a deep neural network (Deep-PRC) with a U-Net architecture to correct PET images for positron range effects. Deep-PRC was trained with 3D input patches from images reconstructed from realistic Monte Carlo simulations that take into account the positron energy distribution, the materials and tissues the positron propagates through, and acquisition effects. The quantification of the reconstructed PET images corrected with Deep-PRC shows that it may restore the images up to 95% without any significant noise increase. The proposed method can provide an accurate positron range correction in a few seconds for a typical PET acquisition.
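Deep-PRC operates on 3D patches of the reconstructed volume, so a practical ingredient is tiling the volume into overlapping patches, running each through the network, and averaging overlaps back together. This is a generic patch-tiling sketch; the patch size, stride, and averaging strategy are assumptions, not details from the abstract.

```python
import numpy as np

def extract_patches(vol, size, stride):
    """Tile a 3D volume into overlapping cubic patches of side `size`."""
    patches, coords = [], []
    for z in range(0, vol.shape[0] - size + 1, stride):
        for y in range(0, vol.shape[1] - size + 1, stride):
            for x in range(0, vol.shape[2] - size + 1, stride):
                patches.append(vol[z:z + size, y:y + size, x:x + size])
                coords.append((z, y, x))
    return np.stack(patches), coords

def reassemble(patches, coords, shape, size):
    """Average the (network-processed) patches back into a volume,
    dividing each voxel by the number of patches covering it."""
    out = np.zeros(shape)
    cnt = np.zeros(shape)
    for patch, (z, y, x) in zip(patches, coords):
        out[z:z + size, y:y + size, x:x + size] += patch
        cnt[z:z + size, y:y + size, x:x + size] += 1
    return out / np.maximum(cnt, 1)
```

With an identity "network" the round trip reproduces the volume exactly, which is a useful sanity check before inserting the actual correction model between the two calls.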

    4:10pm - 4:30pm

    RIDL: Row Interpolation with Deep Learning

    Jan Magonov1,2,3, Marc Kachelrieß1, Eric Fournie2, Karl Stierstorfer2, Thorsten Buzug3, Maik Stille3

    1German Cancer Research Center (DKFZ), Heidelberg, Germany; 2Siemens Healthcare GmbH, Forchheim, Germany; 3Institute of Medical Engineering, University of Lübeck, Lübeck, Germany

    The limited resolution of spiral CT scans in the axial z-direction can lead to reconstruction artifacts - so-called windmill artifacts. Available methods to increase the resolution, such as the z-Flying-Focal-Spot (zFFS), are technically intricate. This work aims to interpolate CT detector rows using a neural network trained with projection data from clinical CT scans. The presented approach is abbreviated RIDL (Row Interpolation with Deep Learning). In addition to analyzing the interpolation results on individual projections, the method was validated in the image domain: a reconstruction algorithm was applied to the output generated by the RIDL network and compared with data sets using zFFS and linear interpolation. Although zFFS cannot be entirely replaced by the presented method, it was shown that the resolution in the z-direction can be increased with the RIDL network while achieving significantly better reconstruction results than with linear interpolation.
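The linear-interpolation baseline that RIDL is compared against can be written in a few lines: each inserted detector row is the mean of its two measured neighbours. The network replaces this fixed averaging with a learned mapping; the (rows x channels) projection layout here is an assumption.

```python
import numpy as np

def upsample_rows_linear(proj):
    """Linear-interpolation baseline for detector-row upsampling:
    between every pair of adjacent rows of a (rows x channels)
    projection, insert their mean."""
    rows, chans = proj.shape
    out = np.zeros((2 * rows - 1, chans), dtype=float)
    out[0::2] = proj                           # keep measured rows
    out[1::2] = 0.5 * (proj[:-1] + proj[1:])   # interpolated rows
    return out
```

Signals that vary linearly across rows are recovered exactly; the windmill artifacts arise precisely where the true row-direction signal is not linear, which is the regime the learned interpolation targets.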

    Contact and Legal Notice
    Privacy Statement · Conference: Fully3D 2021
    Conference Software - ConfTool Pro 2.6.141
    © 2001 - 2021 by Dr. H. Weinreich, Hamburg, Germany