Pubmed for Handhelds
Title: Diabetic retinopathy prediction based on vision transformer and modified capsule network.
Author: Oulhadj M, Riffi J, Khodriss C, Mahraz AM, Yahyaouy A, Abdellaoui M, Andaloussi IB, Tairi H.
Journal: Comput Biol Med; 2024 Jun; 175():108523.
PubMed ID: 38701591.
Abstract: Diabetic retinopathy is one of the most common causes of blindness in working-age adults, and the risk of developing it grows the longer a person lives with diabetes. Preserving a patient's sight, or slowing the progression of the disease, depends on early detection and accurate grading of its severity, tasks that ophthalmologists currently perform manually. This manual process demands considerable time and expertise, which makes an automated method to aid in the diagnosis of diabetic retinopathy an urgent need. In this paper, we propose a new hybrid deep learning method based on a fine-tuned vision transformer and a modified capsule network for automatic prediction of diabetic retinopathy severity levels. The preprocessing step applies a set of computer vision operations, including the power law transformation and contrast-limited adaptive histogram equalization (CLAHE). The classification step combines a fine-tuned vision transformer, a modified capsule network, and a classification model. The effectiveness of our approach was evaluated on four datasets (APTOS, Messidor-2, DDR, and EyePACS) for the task of grading diabetic retinopathy severity. We attained test accuracy scores of 88.18%, 87.78%, 80.36%, and 78.64% on these datasets, respectively, outperforming the state of the art.
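The preprocessing the abstract names (a power law, i.e. gamma, transformation followed by CLAHE) can be sketched as below. This is a minimal illustration rather than the authors' code: the gamma value, the CLAHE clip limit and tile grid size, and the preprocess_fundus helper name are all assumptions, not values reported in the paper.

```python
import cv2
import numpy as np

def preprocess_fundus(image_bgr: np.ndarray, gamma: float = 1.5) -> np.ndarray:
    # Power law (gamma) transformation: out = in ** gamma on [0, 1] intensities.
    normalized = image_bgr.astype(np.float32) / 255.0
    gamma_corrected = np.power(normalized, gamma)
    img = (gamma_corrected * 255.0).astype(np.uint8)

    # Apply CLAHE to the lightness channel only, so color balance is preserved.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
```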
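The abstract does not say how the vision transformer and capsule network are combined, so the sketch below assumes one plausible design: a pretrained ViT embedding is concatenated with features from a simplified capsule-style branch and fed to a linear classifier over the five standard severity grades. All layer shapes, the fusion by concatenation, and the HybridDRClassifier name are illustrative assumptions; the paper's modified capsule network is more elaborate.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

def squash(s: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Capsule squashing nonlinearity: keeps direction, bounds length in [0, 1).
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + 1e-8)

class HybridDRClassifier(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Pretrained ViT backbone, fine-tuned end to end.
        self.vit = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
        self.vit.heads = nn.Identity()  # expose the 768-d CLS embedding
        # Simplified primary-capsule branch over the raw image.
        self.conv = nn.Conv2d(3, 64, kernel_size=9, stride=4)
        self.primary_caps = nn.Conv2d(64, 16 * 8, kernel_size=9, stride=4)
        self.caps_pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(768 + 16 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        vit_feat = self.vit(x)                          # (B, 768)
        c = torch.relu(self.conv(x))
        caps = self.caps_pool(self.primary_caps(c))     # (B, 128, 1, 1)
        caps = caps.flatten(1).view(-1, 16, 8)          # 16 capsules of dim 8
        caps = squash(caps).flatten(1)                  # (B, 128)
        return self.classifier(torch.cat([vit_feat, caps], dim=1))

# Usage: logits = HybridDRClassifier()(torch.randn(2, 3, 224, 224))
```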