Pubmed for Handhelds
Title: Integration of Multikinds Imputation With Covariance Adaptation Based on Evidence Theory.
Author: Huang L, Fan J, Liew AW.
Journal: IEEE Trans Neural Netw Learn Syst; 2024 Jun 25; PP():.
PubMed ID: 38917281.
Abstract: For incomplete data classification, missing attribute values are often estimated by imputation methods before building classifiers. Because the estimated values are not the actual attribute values, the data distribution changes after imputation, and this often degrades classification performance. Here, we propose a new framework called integration of multikinds imputation with covariance adaptation (MICA), based on evidence theory (ET), to effectively handle classification with incomplete training data and complete test data. In MICA, we first employ several different imputation methods to obtain multiple imputed training datasets. In general, the distribution of each imputed training dataset will differ from that of the test dataset. A covariance adaptation module (CAM) is therefore developed to reduce the distribution difference between each imputed training dataset and the test dataset. Multiple classifiers are then learned on the imputed training datasets, and these classifiers complement each other. For a test pattern, the soft classification results yielded by these classifiers are combined via ET to obtain better classification performance. However, the reliabilities/weights of the different imputed training datasets usually differ, so the soft classification results cannot be treated equally during fusion. We propose to estimate the weights from the covariance difference across datasets and the accuracy on the imputed training data. Finally, the soft classification results, discounted by the estimated weights, are combined by ET to make the final class decision.
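The abstract does not give the CAM formula, but a common way to reduce a covariance gap between a (imputed) training set and a test set is CORAL-style alignment: whiten the source features with their own covariance, then re-color them with the target covariance. A minimal sketch of that idea, with synthetic data standing in for the imputed training and test sets (the function name and dataset are illustrative, not from the paper):

```python
import numpy as np

def coral_align(X_src, X_tgt, eps=1e-6):
    """Align source features to the target covariance (CORAL-style):
    X_src @ C_s^{-1/2} @ C_t^{1/2}."""
    # Regularized covariances keep the matrix square roots well-defined.
    C_s = np.cov(X_src, rowvar=False) + eps * np.eye(X_src.shape[1])
    C_t = np.cov(X_tgt, rowvar=False) + eps * np.eye(X_tgt.shape[1])

    def mat_pow(C, p):
        # Matrix power of a symmetric PSD matrix via eigendecomposition.
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(np.maximum(vals, eps) ** p) @ vecs.T

    return X_src @ mat_pow(C_s, -0.5) @ mat_pow(C_t, 0.5)

rng = np.random.default_rng(0)
# Synthetic stand-ins: feature scales differ between "train" and "test".
X_train = rng.normal(size=(200, 3)) * np.array([1.0, 2.0, 0.5])
X_test = rng.normal(size=(200, 3)) * np.array([2.0, 1.0, 1.5])
X_adapted = coral_align(X_train, X_test)
# After alignment, the source covariance matches the target covariance.
print(np.round(np.cov(X_adapted, rowvar=False) - np.cov(X_test, rowvar=False), 3))
```

This aligns second-order statistics only; the paper's CAM may differ in detail, but the whiten-then-recolor transform is the standard building block for this kind of adaptation.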
MICA was compared with a variety of related methods on several datasets, and the experimental results demonstrate that the new method significantly improves classification performance.
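The fusion step described above — discounting each classifier's soft output by the reliability of its imputed training set, then combining by evidence theory — can be sketched with classical Shafer discounting and Dempster's rule over singleton classes plus the frame Θ. The reliability values and soft outputs below are hypothetical; the paper estimates weights from covariance differences and imputed-data accuracy:

```python
def discount(m, w):
    """Shafer discounting: scale each mass by reliability w and
    transfer the remaining 1 - w to the whole frame Theta."""
    m = {k: w * v for k, v in m.items()}
    m["Theta"] = m.get("Theta", 0.0) + (1.0 - w)
    return m

def dempster(m1, m2, classes=("A", "B")):
    """Dempster's rule of combination for singleton classes plus Theta."""
    combined = {c: 0.0 for c in classes}
    combined["Theta"] = 0.0
    conflict = 0.0
    for k1, v1 in m1.items():
        for k2, v2 in m2.items():
            if k1 == "Theta":
                target = k2          # Theta ∩ X = X
            elif k2 == "Theta" or k1 == k2:
                target = k1
            else:                    # disjoint singletons: conflicting mass
                conflict += v1 * v2
                continue
            combined[target] += v1 * v2
    norm = 1.0 - conflict            # renormalize away the conflict
    return {k: v / norm for k, v in combined.items()}

# Soft outputs of two classifiers for one test pattern (hypothetical),
# each discounted by the estimated weight of its imputed training set.
m1 = discount({"A": 0.7, "B": 0.3}, w=0.9)  # more reliable imputation
m2 = discount({"A": 0.4, "B": 0.6}, w=0.5)  # less reliable imputation
fused = dempster(m1, m2)
decision = max(("A", "B"), key=lambda c: fused[c])
print(decision)  # the more reliable classifier dominates the decision
```

Note how the less reliable source (w = 0.5) pushes half its mass onto Θ, so its preference for class B carries less weight in the fused decision than the reliable source's preference for class A.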