PubMed for Handhelds
Search MEDLINE/PubMed
Title: Explainable multi-module semantic guided attention based network for medical image segmentation.
Author: Karri M, Annavarapu CSR, Acharya UR.
Journal: Comput Biol Med; 2022 Dec; 151(Pt A):106231.
PubMed ID: 36335811.
Abstract: Automated segmentation of medical images is crucial for disease diagnosis and treatment planning. Medical image segmentation has improved with convolutional neural network (CNN) models. Unfortunately, these models are still limited in scenarios where the segmentation target varies widely in size, boundary, position, and shape. Moreover, current CNNs have low explainability, restricting their use in clinical decisions. In this paper, we make substantial use of attention mechanisms in a CNN model and present an explainable multi-module semantic guided attention based network (MSGA-Net) for explainable and highly accurate medical image segmentation, which attends to the most significant spatial regions, boundaries, scales, and channels. Specifically, we present a multi-scale attention (MSA) module to extract the most salient features at various scales from medical images. We then propose a semantic region-guided attention (SRGA) mechanism comprising location attention (LAM), channel-wise attention (CWA), and edge attention (EA) modules to extract the most important spatial, channel-wise, and boundary-related features for regions of interest. Moreover, we present a sequence of fine-tuning steps with the SRGA module that gradually weights the significance of regions of interest while simultaneously reducing noise. In this work, we experimented with three different types of medical images: dermoscopic images (HAM10000 dataset), multi-organ CT images (CHAOS 2019 dataset), and brain tumor MRI images (BraTS 2020 dataset). Extensive experiments on all three types of medical images revealed that the proposed MSGA-Net substantially outperforms existing models on all metrics. Moreover, the displayed attention feature maps offer greater explainability than those of state-of-the-art models.
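The abstract names the attention modules but does not describe their implementation. As a rough illustration of what a channel-wise attention (CWA) block of this general kind typically looks like, below is a minimal squeeze-and-excitation-style sketch in PyTorch. The class name, reduction ratio, and layer choices are assumptions for illustration only and are not taken from the paper.

# Minimal sketch of a generic channel-wise attention block (squeeze-and-excitation style).
# Illustrative assumption only; not the paper's actual CWA implementation.
import torch
import torch.nn as nn

class ChannelWiseAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Squeeze: global average pooling collapses each feature map to a single value.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small bottleneck MLP produces one weight per channel.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)        # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1)    # (B, C, 1, 1) attention weights
        return x * w                       # re-weight the channels of the input

# Example: re-weight a batch of 64-channel feature maps.
feats = torch.randn(2, 64, 128, 128)
cwa = ChannelWiseAttention(channels=64)
out = cwa(feats)  # same shape as feats: (2, 64, 128, 128)

Visualizing the learned per-channel (or per-location) weights of such blocks is one common way attention maps are displayed for explainability, which is the kind of inspection the abstract refers to.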