PUBMED FOR HANDHELDS

Search MEDLINE/PubMed


  • Title: A novel multi-modal machine learning based approach for automatic classification of EEG recordings in dementia.
    Author: Ieracitano C, Mammone N, Hussain A, Morabito FC.
    Journal: Neural Netw; 2020 Mar; 123:176-190. PubMed ID: 31884180.
    Abstract:
    Electroencephalographic (EEG) recordings generate an electrical map of the human brain that is useful for clinical inspection of patients and in biomedical smart Internet-of-Things (IoT) and Brain-Computer Interface (BCI) applications. From a signal processing perspective, EEGs yield a nonlinear, nonstationary, multivariate representation of the underlying neural circuitry interactions. In this paper, a novel multi-modal Machine Learning (ML) based approach is proposed to integrate EEG engineered features for automatic classification of brain states. EEGs are acquired from neurological patients with Mild Cognitive Impairment (MCI) or Alzheimer's disease (AD), and the aim is to discriminate Healthy Control (HC) subjects from patients. Specifically, in order to cope effectively with nonstationarities, 19-channel EEG signals are projected into the time-frequency (TF) domain by means of the Continuous Wavelet Transform (CWT), and a set of appropriate features (denoted as CWT features) is extracted from the δ, θ, α1, α2, and β EEG sub-bands. Furthermore, to exploit nonlinear phase-coupling information in the EEG signals, higher-order statistics (HOS) are extracted from the bispectrum (BiS) representation. The BiS yields a second set of features (denoted as BiS features), which are also evaluated in the five EEG sub-bands. The CWT and BiS features are fed into a number of ML classifiers to perform both 2-way (AD vs. HC, AD vs. MCI, MCI vs. HC) and 3-way (AD vs. MCI vs. HC) classifications. As an experimental benchmark, a balanced EEG dataset that includes 63 AD, 63 MCI, and 63 HC subjects is analyzed. Comparative results show that when the concatenation of CWT and BiS features (denoted as multi-modal (CWT+BiS) features) is used as input, the Multi-Layer Perceptron (MLP) classifier outperforms all other models, specifically the Autoencoder (AE), Logistic Regression (LR), and Support Vector Machine (SVM). Consequently, our proposed multi-modal ML scheme can be considered a viable alternative to state-of-the-art, computationally intensive deep learning approaches.
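
For orientation, below is a minimal Python sketch of the kind of pipeline the abstract describes: per-channel CWT sub-band features plus a crude direct bispectrum estimate, concatenated channel by channel and fed to a Multi-Layer Perceptron for 3-way (AD vs. MCI vs. HC) classification. The sampling rate, band edges, summary statistics, network size, and synthetic data are illustrative assumptions, not the authors' implementation; pywt and scikit-learn stand in for whatever tooling the paper actually used.

# Hedged sketch of a multi-modal (CWT + bispectrum) EEG classification pipeline.
# All numeric settings below are illustrative assumptions, not the paper's values.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

FS = 256  # assumed sampling rate (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha1": (8, 10), "alpha2": (10, 13), "beta": (13, 30)}  # common band edges, assumed

def cwt_band_features(signal, fs=FS):
    """Mean and std of CWT magnitude per EEG sub-band (illustrative 'CWT features')."""
    freqs = np.arange(0.5, 31, 0.5)
    scales = pywt.central_frequency("morl") * fs / freqs  # map target frequencies to scales
    coeffs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
    power = np.abs(coeffs)                                 # shape: (n_freqs, n_samples)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats += [power[mask].mean(), power[mask].std()]
    return np.array(feats)

def bispectrum_features(signal, nfft=256):
    """Crude direct bispectrum estimate (lower triangle only); per-band mean and max magnitude."""
    segs = np.array_split(signal, max(1, len(signal) // nfft))
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in segs:
        X = np.fft.fft(s, nfft)
        for f1 in range(nfft // 4):
            for f2 in range(f1 + 1):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    mag = np.abs(B) / len(segs)
    freqs = np.fft.fftfreq(nfft, d=1 / FS)[: nfft // 2]
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        block = mag[np.ix_(mask, mask)]
        feats += [block.mean(), block.max()]
    return np.array(feats)

def multimodal_features(epoch):
    """Concatenate CWT and BiS features channel by channel (19-channel epoch)."""
    return np.concatenate([np.r_[cwt_band_features(ch), bispectrum_features(ch)]
                           for ch in epoch])

if __name__ == "__main__":
    # Synthetic stand-in data: 30 epochs of 19 channels x 4 s, random HC/MCI/AD labels.
    rng = np.random.default_rng(0)
    X = np.array([multimodal_features(rng.standard_normal((19, 4 * FS)))
                  for _ in range(30)])
    y = rng.integers(0, 3, size=30)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(Xtr, ytr)
    print("3-way accuracy on synthetic data:", clf.score(Xte, yte))

The only step taken directly from the abstract is the concatenation of the two feature sets (the multi-modal CWT+BiS features) before classification; the exact sub-band statistics, bispectrum estimator, and MLP architecture would need to be taken from the full text of the paper.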