These tools are no longer maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

114 related articles for article (PubMed ID: 37692884)

  • 1. A novel approach to attention mechanism using kernel functions: Kerformer.
    Gan Y; Fu Y; Wang D; Li Y
    Front Neurorobot; 2023; 17():1214203. PubMed ID: 37692884

  • 2. Vicinity Vision Transformer.
    Sun W; Qin Z; Deng H; Wang J; Zhang Y; Zhang K; Barnes N; Birchfield S; Kong L; Zhong Y
    IEEE Trans Pattern Anal Mach Intell; 2023 Oct; 45(10):12635-12649. PubMed ID: 37310842

  • 3. Leveraging transformers-based language models in proteome bioinformatics.
    Le NQK
    Proteomics; 2023 Dec; 23(23-24):e2300011. PubMed ID: 37381841

  • 4. Towards Lightweight Transformer Via Group-Wise Transformation for Vision-and-Language Tasks.
    Luo G; Zhou Y; Sun X; Wang Y; Cao L; Wu Y; Huang F; Ji R
    IEEE Trans Image Process; 2022; 31():3386-3398. PubMed ID: 35471883

  • 5. Diffusion Kernel Attention Network for Brain Disorder Classification.
    Zhang J; Zhou L; Wang L; Liu M; Shen D
    IEEE Trans Med Imaging; 2022 Oct; 41(10):2814-2827. PubMed ID: 35471877

  • 6. MAE-TransRNet: An improved transformer-ConvNet architecture with masked autoencoder for cardiac MRI registration.
    Xiao X; Dong S; Yu Y; Li Y; Yang G; Qiu Z
    Front Med (Lausanne); 2023; 10():1114571. PubMed ID: 36968818

  • 7. PSLT: A Light-Weight Vision Transformer With Ladder Self-Attention and Progressive Shift.
    Wu G; Zheng WS; Lu Y; Tian Q
    IEEE Trans Pattern Anal Mach Intell; 2023 Sep; 45(9):11120-11135. PubMed ID: 37027255

  • 8. Vision Transformer-based recognition of diabetic retinopathy grade.
    Wu J; Hu R; Xiao Z; Chen J; Liu J
    Med Phys; 2021 Dec; 48(12):7850-7863. PubMed ID: 34693536

  • 9. A comparative study of pretrained language models for long clinical text.
    Li Y; Wehbe RM; Ahmad FS; Wang H; Luo Y
    J Am Med Inform Assoc; 2023 Jan; 30(2):340-347. PubMed ID: 36451266

  • 10. BAT: Block and token self-attention for speech emotion recognition.
    Lei J; Zhu X; Wang Y
    Neural Netw; 2022 Dec; 156():67-80. PubMed ID: 36242835

  • 11. Contextual Transformer Networks for Visual Recognition.
    Li Y; Yao T; Pan Y; Mei T
    IEEE Trans Pattern Anal Mach Intell; 2023 Feb; 45(2):1489-1500. PubMed ID: 35363608

  • 12. Nyströmformer: A Nyström-based Algorithm for Approximating Self-Attention.
    Xiong Y; Zeng Z; Chakraborty R; Tan M; Fung G; Li Y; Singh V
    Proc AAAI Conf Artif Intell; 2021; 35(16):14138-14148. PubMed ID: 34745767

  • 13. Relation is an option for processing context information.
    Yamada KD; Baladram MS; Lin F
    Front Artif Intell; 2022; 5():924688. PubMed ID: 36304959

  • 14. VOLO: Vision Outlooker for Visual Recognition.
    Yuan L; Hou Q; Jiang Z; Feng J; Yan S
    IEEE Trans Pattern Anal Mach Intell; 2023 May; 45(5):6575-6586. PubMed ID: 36094970

  • 15. Image classification model based on large kernel attention mechanism and relative position self-attention mechanism.
    Liu S; Wei J; Liu G; Zhou B
    PeerJ Comput Sci; 2023; 9():e1344. PubMed ID: 37346614

  • 16. P2T: Pyramid Pooling Transformer for Scene Understanding.
    Wu YH; Liu Y; Zhan X; Cheng MM
    IEEE Trans Pattern Anal Mach Intell; 2023 Nov; 45(11):12760-12771. PubMed ID: 36040936

  • 17. End-to-End Multitask Learning With Vision Transformer.
    Tian Y; Bai K
    IEEE Trans Neural Netw Learn Syst; 2024 Jul; 35(7):9579-9590. PubMed ID: 37018576

  • 18. An Efficient Transformer Based on Global and Local Self-attention for Face Photo-Sketch Synthesis.
    Yu W; Zhu M; Wang N; Wang X; Gao X
    IEEE Trans Image Process; 2022 Dec; PP():. PubMed ID: 37015434

  • 19. Transformers-sklearn: a toolkit for medical language understanding with transformer-based models.
    Yang F; Wang X; Ma H; Li J
    BMC Med Inform Decis Mak; 2021 Jul; 21(Suppl 2):90. PubMed ID: 34330244

  • 20. PLG-ViT: Vision Transformer with Parallel Local and Global Self-Attention.
    Ebert N; Stricker D; Wasenmüller O
    Sensors (Basel); 2023 Mar; 23(7):. PubMed ID: 37050507
