32 related articles for article (PubMed ID: 38610547)
1. Multiresolution Discriminative Mixup Network for Fine-Grained Visual Categorization.
Xu K; Lai R; Gu L; Li Y
IEEE Trans Neural Netw Learn Syst; 2023 Jul; 34(7):3488-3500. PubMed ID: 34606464
[TBL] [Abstract][Full Text] [Related]
2. MIST: Multi-instance selective transformer for histopathological subtype prediction.
Zhao R; Xi Z; Liu H; Jian X; Zhang J; Zhang Z; Li S
Med Image Anal; 2024 Jun; 97():103251. PubMed ID: 38954942
[TBL] [Abstract][Full Text] [Related]
3. MetaV: A Pioneer in feature Augmented Meta-Learning Based Vision Transformer for Medical Image Classification.
Ansari SA; Agrawal AP; Wajid MA; Wajid MS; Zafar A
Interdiscip Sci; 2024 Jun; ():. PubMed ID: 38951382
[TBL] [Abstract][Full Text] [Related]
4. Do we really need a large number of visual prompts?
Kim Y; Li Y; Moitra A; Yin R; Panda P
Neural Netw; 2024 Sep; 177():106390. PubMed ID: 38805797
[TBL] [Abstract][Full Text] [Related]
5. How Does Attention Work in Vision Transformers? A Visual Analytics Attempt.
Li Y; Wang J; Dai X; Wang L; Yeh CM; Zheng Y; Zhang W; Ma KL
IEEE Trans Vis Comput Graph; 2023 Jun; 29(6):2888-2900. PubMed ID: 37027263
[TBL] [Abstract][Full Text] [Related]
6. Dynamic Weighting Network for Person Re-Identification.
Li G; Liu P; Cao X; Liu C
Sensors (Basel); 2023 Jun; 23(12):. PubMed ID: 37420745
[TBL] [Abstract][Full Text] [Related]
7. Location-enhanced syntactic knowledge for biomedical relation extraction.
Zhang Y; Yang Z; Yang Y; Lin H; Wang J
J Biomed Inform; 2024 Jun; 156():104676. PubMed ID: 38876451
[TBL] [Abstract][Full Text] [Related]
8. Perception-Aware Texture Similarity Prediction.
Wang W; Dong X
IEEE Trans Image Process; 2024; 33():3536-3549. PubMed ID: 38814771
[TBL] [Abstract][Full Text] [Related]
9. Do it the transformer way: A comprehensive review of brain and vision transformers for autism spectrum disorder diagnosis and classification.
Alharthi AG; Alzahrani SM
Comput Biol Med; 2023 Dec; 167():107667. PubMed ID: 37939407
[TBL] [Abstract][Full Text] [Related]
10. Dual-Dependency Attention Transformer for Fine-Grained Visual Classification.
Cui S; Hui B
Sensors (Basel); 2024 Apr; 24(7):. PubMed ID: 38610547
[TBL] [Abstract][Full Text] [Related]
11. Fine-grained image classification method based on hybrid attention module.
Lu W; Yang Y; Yang L
Front Neurorobot; 2024; 18():1391791. PubMed ID: 38765871
[TBL] [Abstract][Full Text] [Related]
12. Factorization Vision Transformer: Modeling Long-Range Dependency With Local Window Cost.
Qin H; Zhou D; Xu T; Bian Z; Li J
IEEE Trans Neural Netw Learn Syst; 2023 Dec; PP():. PubMed ID: 38153834
[TBL] [Abstract][Full Text] [Related]
13. Token Selection is a Simple Booster for Vision Transformers.
Zhou D; Hou Q; Yang L; Jin X; Feng J
IEEE Trans Pattern Anal Mach Intell; 2023 Nov; 45(11):12738-12746. PubMed ID: 36155475
[TBL] [Abstract][Full Text] [Related]
14. Fine-Grained Recognition With Learnable Semantic Data Augmentation.
Pu Y; Han Y; Wang Y; Feng J; Deng C; Huang G
IEEE Trans Image Process; 2024; 33():3130-3144. PubMed ID: 38662557
[TBL] [Abstract][Full Text] [Related]