PubMed for Handhelds
Title: Unsupervised MRI motion artifact disentanglement: introducing MAUDGAN.
Author: Safari M, Yang X, Chang CW, Qiu RLJ, Fatemi A, Archambault L.
Journal: Phys Med Biol; 2024 May 30; 69(11). PubMed ID: 38714192.
Abstract:
Objective. This study developed an unsupervised motion artifact reduction method for magnetic resonance imaging (MRI) images of patients with brain tumors. The proposed design uses multi-parametric, multicenter contrast-enhanced T1W (ceT1W) and T2-FLAIR MRI images.
Approach. The proposed framework includes two generators, two discriminators, and two feature extractor networks. A 3-fold cross-validation was used to train and fine-tune the hyperparameters of the proposed model on 230 brain MRI images with tumors; the model was then tested on 148 patients' in-vivo datasets. An ablation study was performed to evaluate the model's components. The model was compared with Pix2pix and CycleGAN. Six evaluation metrics were reported: normalized mean squared error (NMSE), structural similarity index (SSIM), multi-scale SSIM (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multi-scale gradient magnitude similarity deviation (MS-GMSD). Artifact reduction and the consistency of tumor regions, image contrast, and sharpness were rated by three evaluators on Likert scales and compared with ANOVA and Tukey's HSD tests.
Main results. On average, the proposed method outperformed the comparative models in removing heavy motion artifacts, achieving the lowest NMSE (18.34 ± 5.07%) and MS-GMSD (0.07 ± 0.03) at the heavy artifact level. It also produced motion-free images with the highest SSIM (0.93 ± 0.04), PSNR (30.63 ± 4.96), and VIF (0.45 ± 0.05), along with comparable MS-SSIM (0.96 ± 0.31). Similarly, it outperformed the comparative models in removing in-vivo motion artifacts across distortion levels, except for MS-SSIM and VIF, where performance was comparable with CycleGAN. Moreover, its performance was consistent across artifact levels. For the heavy level of motion artifacts, the Likert scores were 2.82 ± 0.52, 1.88 ± 0.71, and 1.02 ± 0.14 (p-values ≪ 0.0001) for the proposed method, CycleGAN, and Pix2pix, respectively, with the proposed method rated highest; similar trends were found for the other artifact levels.
Significance. The proposed unsupervised method was demonstrated to reduce motion artifacts from ceT1W brain images under a multi-parametric framework.
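The abstract describes the framework's layout (two generators, two discriminators, and two feature extractor networks) but not its implementation. The following is only a minimal sketch of how such a CycleGAN-style, two-domain setup could be wired up in PyTorch; all class names, layer choices, and sizes are assumptions for illustration, not the authors' MAUDGAN code.

# Hypothetical sketch of a two-generator / two-discriminator layout like the one
# described in the abstract; architectures and names are assumptions.
import torch.nn as nn

class ConvBlock(nn.Module):
    """3x3 convolution + instance norm + ReLU, a common image-GAN building block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class Generator(nn.Module):
    """Maps a single-channel MRI slice from one domain to the other."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(ConvBlock(1, 32), ConvBlock(32, 32),
                                 nn.Conv2d(32, 1, kernel_size=1))
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic scoring whether a slice belongs to a given domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(ConvBlock(1, 32), ConvBlock(32, 64),
                                 nn.Conv2d(64, 1, kernel_size=1))
    def forward(self, x):
        return self.net(x)

# Two generators translate between the motion-corrupted and artifact-free domains,
# two discriminators judge realism in each domain, and two feature extractors
# (not shown) would supply feature-level losses, per the abstract's description.
G_artifact_to_clean = Generator()
G_clean_to_artifact = Generator()
D_clean = Discriminator()
D_artifact = Discriminator()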
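The evaluation reports six image-quality metrics and compares reader Likert scores with ANOVA and Tukey's HSD. As a hedged illustration, a subset of that evaluation could be reproduced with common Python libraries as below; the specific functions, the stand-in images, and the placeholder Likert scores are assumptions and not the study's pipeline or data.

# Sketch of computing NMSE, SSIM, and PSNR plus ANOVA/Tukey HSD on reader scores;
# an assumed pipeline with placeholder data, not the paper's evaluation code.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def nmse(reference, test):
    """Normalized mean squared error, in percent."""
    return 100.0 * np.mean((reference - test) ** 2) / np.mean(reference ** 2)

# Random stand-in images; real use would load co-registered corrected/reference slices.
rng = np.random.default_rng(0)
reference = rng.random((256, 256))
corrected = reference + 0.01 * rng.standard_normal((256, 256))

print("NMSE (%):", nmse(reference, corrected))
print("SSIM:", structural_similarity(reference, corrected, data_range=1.0))
print("PSNR:", peak_signal_noise_ratio(reference, corrected, data_range=1.0))

# Reader Likert scores for three methods, compared with one-way ANOVA and Tukey's HSD
# (values below are illustrative placeholders, not the study data).
scores = {"MAUDGAN": [3, 3, 2, 3], "CycleGAN": [2, 2, 1, 2], "Pix2pix": [1, 1, 1, 1]}
print(stats.f_oneway(*scores.values()))
values = np.concatenate(list(scores.values()))
labels = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(values, labels))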