  • Title: Multi-View Spatial Aggregation Framework for Joint Localization and Segmentation of Organs at Risk in Head and Neck CT Images.
    Authors: Liang S, Thung KH, Nie D, Zhang Y, Shen D.
    Journal: IEEE Trans Med Imaging; 2020 Sep; 39(9):2794-2805. PubMed ID: 32091997.
    Abstract:
    Accurate segmentation of organs at risk (OARs) from head and neck (H&N) CT images is crucial for effective H&N cancer radiotherapy. However, existing deep learning methods are often not trained in an end-to-end fashion, i.e., they independently predetermine the regions of the target organs before organ segmentation, which limits information sharing between the related tasks and leads to suboptimal segmentation results. Furthermore, when a conventional segmentation network is used to segment all the OARs simultaneously, the results often favor large OARs over small ones. Existing methods therefore often train a separate model for each OAR, ignoring the correlation between the different segmentation tasks. To address these issues, we propose a new multi-view spatial aggregation framework for joint localization and segmentation of multiple OARs in H&N CT images. The core of our framework is a region-of-interest (ROI)-based fine-grained representation convolutional neural network (CNN), which generates multi-OAR probability maps from each 2D view (i.e., axial, coronal, and sagittal) of the CT images. Specifically, our ROI-based fine-grained representation CNN (1) unifies the OAR localization and segmentation tasks and trains them in an end-to-end fashion, and (2) improves the segmentation of OARs of various sizes via a novel ROI-based fine-grained representation. Our multi-view spatial aggregation framework then spatially aggregates and assembles the generated multi-view, multi-OAR probability maps to segment all the OARs simultaneously. We evaluate our framework on two sets of H&N CT images and achieve competitive and highly robust segmentation performance for OARs of various sizes.
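    A minimal sketch (not the authors' code) of the multi-view aggregation step described above: a 2D segmentation model is run slice-by-slice along each of the three anatomical axes, and the resulting per-view probability volumes are averaged before the per-voxel label is taken. The model stub segment_slice, the class count N_CLASSES, and the simple averaging rule are all illustrative assumptions standing in for the paper's ROI-based fine-grained representation CNN and its aggregation module.

        import numpy as np

        N_CLASSES = 9  # number of OARs; value assumed for illustration

        def segment_slice(slice_2d):
            """Hypothetical 2D model stub: returns per-pixel class
            probabilities of shape (N_CLASSES, H, W)."""
            h, w = slice_2d.shape
            return np.full((N_CLASSES, h, w), 1.0 / N_CLASSES)

        def view_probabilities(volume, axis):
            """Run the 2D model on every slice along `axis` and stack the
            outputs back into a (N_CLASSES, D, H, W) probability volume."""
            probs = [segment_slice(s) for s in np.moveaxis(volume, axis, 0)]
            stacked = np.stack(probs, axis=1)            # (C, n_slices, ., .)
            return np.moveaxis(stacked, 1, axis + 1)     # restore axis order

        def aggregate_views(volume):
            """Average the axial, coronal, and sagittal probability maps and
            take the per-voxel argmax for a joint multi-OAR segmentation."""
            fused = sum(view_probabilities(volume, ax) for ax in (0, 1, 2)) / 3.0
            return fused.argmax(axis=0)                  # (D, H, W) labels

        ct = np.random.rand(32, 64, 64)                  # toy CT volume
        print(aggregate_views(ct).shape)                 # (32, 64, 64)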