BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

116 related articles for article (PubMed ID: 39089155)

  • 1. Decoupled graph knowledge distillation: A general logits-based method for learning MLPs on graphs.
    Tian Y; Xu S; Li M
    Neural Netw; 2024 Nov; 179():106567. PubMed ID: 39089155

  • 2. Enhanced Scalable Graph Neural Network via Knowledge Distillation.
    Mai C; Chang Y; Chen C; Zheng Z
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37999962

  • 3. Fine-Grained Learning Behavior-Oriented Knowledge Distillation for Graph Neural Networks.
    Liu K; Huang Z; Wang CD; Gao B; Chen Y
    IEEE Trans Neural Netw Learn Syst; 2024 Jul; PP():. PubMed ID: 39012738

  • 4. On Representation Knowledge Distillation for Graph Neural Networks.
    Joshi CK; Liu F; Xun X; Lin J; Foo CS
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; 35(4):4656-4667. PubMed ID: 36459610

  • 5. Graph Flow: Cross-Layer Graph Flow Distillation for Dual Efficient Medical Image Segmentation.
    Zou W; Qi X; Zhou W; Sun M; Sun Z; Shan C
    IEEE Trans Med Imaging; 2023 Apr; 42(4):1159-1171. PubMed ID: 36423314

  • 6. MSKD: Structured knowledge distillation for efficient medical image segmentation.
    Zhao L; Qian X; Guo Y; Song J; Hou J; Gong J
    Comput Biol Med; 2023 Sep; 164():107284. PubMed ID: 37572439

  • 7. Localization Distillation for Object Detection.
    Zheng Z; Ye R; Hou Q; Ren D; Wang P; Zuo W; Cheng MM
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):10070-10083. PubMed ID: 37027640

  • 8. Graph Transformer Networks: Learning meta-path graphs to improve GNNs.
    Yun S; Jeong M; Yoo S; Lee S; Yi SS; Kim R; Kang J; Kim HJ
    Neural Netw; 2022 Sep; 153():104-119. PubMed ID: 35716619

  • 9. Position-Sensing Graph Neural Networks: Proactively Learning Nodes Relative Positions.
    Zhang Y; Qin Z; Anwar S; Kim D; Liu Y; Ji P; Gedeon T
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP():. PubMed ID: 38530723

  • 10. A Block-Based Adaptive Decoupling Framework for Graph Neural Networks.
    Shen X; Zhang Y; Xie Y; Wong KC; Peng C
    Entropy (Basel); 2022 Aug; 24(9):. PubMed ID: 36141076

  • 11. Knowledge Distillation Guided Interpretable Brain Subgraph Neural Networks for Brain Disorder Exploration.
    Luo X; Wu J; Yang J; Chen H; Li Z; Peng H; Zhou C
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; PP():. PubMed ID: 38356216

  • 12. GNNExplainer: Generating Explanations for Graph Neural Networks.
    Ying R; Bourgeois D; You J; Zitnik M; Leskovec J
    Adv Neural Inf Process Syst; 2019 Dec; 32():9240-9251. PubMed ID: 32265580

  • 13. Spot-Adaptive Knowledge Distillation.
    Song J; Chen Y; Ye J; Song M
    IEEE Trans Image Process; 2022; 31():3359-3370. PubMed ID: 35503832

  • 14. Exploiting Neighbor Effect: Conv-Agnostic GNN Framework for Graphs With Heterophily.
    Chen J; Chen S; Gao J; Huang Z; Zhang J; Pu J
    IEEE Trans Neural Netw Learn Syst; 2023 May; PP():. PubMed ID: 37195851

  • 15. Toward Quantized Model Parallelism for Graph-Augmented MLPs Based on Gradient-Free ADMM Framework.
    Wang J; Li H; Chai Z; Wang Y; Cheng Y; Zhao L
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; 35(4):4491-4501. PubMed ID: 37015438

  • 16. Layer-Specific Knowledge Distillation for Class Incremental Semantic Segmentation.
    Wang Q; Wu Y; Yang L; Zuo W; Hu Q
    IEEE Trans Image Process; 2024; 33():1977-1989. PubMed ID: 38451756

  • 17. Distilling Knowledge by Mimicking Features.
    Wang GH; Ge Y; Wu J
    IEEE Trans Pattern Anal Mach Intell; 2022 Nov; 44(11):8183-8195. PubMed ID: 34379588

  • 18. Augmented Graph Neural Network with hierarchical global-based residual connections.
    Rassil A; Chougrad H; Zouaki H
    Neural Netw; 2022 Jun; 150():149-166. PubMed ID: 35313247

  • 19. NTCE-KD: Non-Target-Class-Enhanced Knowledge Distillation.
    Li C; Teng X; Ding Y; Lan L
    Sensors (Basel); 2024 Jun; 24(11):. PubMed ID: 38894408

  • 20. A General Dynamic Knowledge Distillation Method for Visual Analytics.
    Tu Z; Liu X; Xiao X
    IEEE Trans Image Process; 2022 Oct; PP():. PubMed ID: 36227819
