PUBMED FOR HANDHELDS

  • Title: Automatic detection of microaneurysms in optical coherence tomography images of retina using convolutional neural networks and transfer learning.
    Author: Almasi R, Vafaei A, Kazeminasab E, Rabbani H.
    Journal: Sci Rep; 2022 Aug 17; 12(1):13975. PubMed ID: 35978087.
    Abstract:
    Microaneurysms (MAs) are pathognomonic signs that help clinicians detect diabetic retinopathy (DR) in its early stages. Automatic detection of MAs in retinal images is an active area of research because of its application in screening for DR, one of the main causes of blindness among the working-age population. Most existing work focuses on automatic MA detection in en face retinal images such as color fundus photographs and Fluorescein Angiography (FA). Detecting MAs from Optical Coherence Tomography (OCT) images, on the other hand, has two main advantages: first, OCT is a non-invasive imaging technique that does not require dye injection and is therefore safer; second, given the proven application of OCT in detecting Age-Related Macular Degeneration, Diabetic Macular Edema, and normal cases, adding MA detection in OCT yields extensive information from a single imaging modality. This research concentrates on diagnosing MAs with deep learning in OCT images, which capture the in-depth structure of the retinal layers. To this end, OCT B-scans are divided into strips and MA patterns are searched for in the resulting strips. Because a dataset of OCT image strips with suitable labels is needed and no such large labelled dataset was available, we created one. For this purpose, an exact registration method is used to align OCT images with FA photographs. Then, with the help of the corresponding FA images, OCT image strips are extracted from OCT B-scans under four labels, namely MA, normal, abnormal, and vessel. Once the dataset of image strips is prepared, a stacked generalization (stacking) ensemble of four fine-tuned, pre-trained convolutional neural networks is trained to classify the OCT image strips into these classes. The FA images are used only once, to create the OCT strips for training, and are not needed in subsequent steps. Once the stacking ensemble model is obtained, it is used to classify OCT strips at test time. The results show that the proposed framework classifies OCT image strips overall, and strips containing MAs, with accuracy scores of 0.982 and 0.987, respectively.
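
The abstract above outlines a transfer-learning pipeline: four ImageNet-pretrained CNNs are fine-tuned to label OCT strips as MA, normal, abnormal, or vessel, and their predictions are combined by a stacked-generalization (stacking) meta-classifier. The sketch below illustrates that idea only; the paper does not name its backbones or meta-learner, so the architectures chosen here (ResNet-18, DenseNet-121, VGG-16, MobileNetV2), the logistic-regression meta-learner, and all hyperparameters are assumptions, and the weights="DEFAULT" argument assumes a recent torchvision (0.13 or later).

import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

NUM_CLASSES = 4  # MA, normal, abnormal, vessel (labels named in the abstract)

def make_base_model(backbone: str) -> nn.Module:
    # Load an ImageNet-pretrained backbone and replace its final layer with
    # a 4-way head (transfer learning); backbone choices are illustrative only.
    if backbone == "resnet18":
        m = models.resnet18(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif backbone == "densenet121":
        m = models.densenet121(weights="DEFAULT")
        m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    elif backbone == "vgg16":
        m = models.vgg16(weights="DEFAULT")
        m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, NUM_CLASSES)
    elif backbone == "mobilenet_v2":
        m = models.mobilenet_v2(weights="DEFAULT")
        m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, NUM_CLASSES)
    else:
        raise ValueError(f"unknown backbone: {backbone}")
    return m

# Level-0 learners of the stacking ensemble.  In practice each one would be
# fine-tuned on the labelled OCT-strip dataset (training loop omitted here)
# and switched to eval() mode before its predictions are stacked.
base_models = [make_base_model(b)
               for b in ("resnet18", "densenet121", "vgg16", "mobilenet_v2")]

@torch.no_grad()
def stacked_features(strips: torch.Tensor) -> torch.Tensor:
    # Concatenate every base model's softmax output into one feature vector
    # per OCT strip (shape: N x 4*NUM_CLASSES); strips are expected as a
    # batch of 3-channel images resized to the backbones' input size.
    outputs = [m(strips).softmax(dim=1) for m in base_models]
    return torch.cat(outputs, dim=1)

# Level-1 (meta) classifier.  The abstract does not say which meta-learner
# the authors used; logistic regression is a common, simple choice.
meta_learner = LogisticRegression(max_iter=1000)

def fit_meta(strips: torch.Tensor, labels: torch.Tensor) -> None:
    # Train the meta-learner on the stacked base-model predictions.
    meta_learner.fit(stacked_features(strips).numpy(), labels.numpy())

def predict(strips: torch.Tensor):
    # Classify OCT strips into {MA, normal, abnormal, vessel}.
    return meta_learner.predict(stacked_features(strips).numpy())

This follows the standard stacking recipe: the base networks' class probabilities become features for a second-stage learner, which typically smooths out the errors of any single fine-tuned backbone.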