113 related articles for article (PubMed ID: 39356597)
1. Adapting Vision-Language Models via Learning to Inject Knowledge. Xuan S; Yang M; Zhang S. IEEE Trans Image Process; 2024; 33:5798-5809. PubMed ID: 39356597
2. Learning Domain Invariant Prompt for Vision-Language Models. Zhao C; Wang Y; Jiang X; Shen Y; Song K; Li D; Miao D. IEEE Trans Image Process; 2024; 33:1348-1360. PubMed ID: 38335087
3. X2-VLM: All-in-One Pre-Trained Model for Vision-Language Tasks. Zeng Y; Zhang X; Li H; Wang J; Zhang J; Zhou W. IEEE Trans Pattern Anal Mach Intell; 2024 May; 46(5):3156-3168. PubMed ID: 38090826
4. Vision-Language Models for Vision Tasks: A Survey. Zhang J; Huang J; Jin S; Lu S. IEEE Trans Pattern Anal Mach Intell; 2024 Aug; 46(8):5625-5644. PubMed ID: 38408000
5. Zero-shot prompt-based video encoder for surgical gesture recognition. Rao M; Qin Y; Kolouri S; Wu JY; Moyer D. Int J Comput Assist Radiol Surg; 2024 Sep. PubMed ID: 39287713
6. MCPL: Multi-modal Collaborative Prompt Learning for Medical Vision-Language Model. Wang P; Zhang H; Yuan Y. IEEE Trans Med Imaging; 2024 Jun; PP. PubMed ID: 38913527
7. Fine-Grained Visual-Text Prompt-Driven Self-Training for Open-Vocabulary Object Detection. Long Y; Han J; Huang R; Xu H; Zhu Y; Xu C; Liang X. IEEE Trans Neural Netw Learn Syst; 2024 Nov; 35(11):16277-16287. PubMed ID: 37506020
8. Prompt-guided and multimodal landscape scenicness assessments with vision-language models. Levering A; Marcos D; Jacobs N; Tuia D. PLoS One; 2024; 19(9):e0307083. PubMed ID: 39348404
9. A Foundation Language-Image Model of the Retina (FLAIR): encoding expert knowledge in text supervision. Silva-Rodríguez J; Chakor H; Kobbi R; Dolz J; Ben Ayed I. Med Image Anal; 2025 Jan; 99:103357. PubMed ID: 39418828
10. Utilizing Geographical Distribution Statistical Data to Improve Zero-Shot Species Recognition. Liu L; Han B; Chen F; Mou C; Xu F. Animals (Basel); 2024 Jun; 14(12). PubMed ID: 38929335
11. Proto-Adapter: Efficient Training-Free CLIP-Adapter for Few-Shot Image Classification. Kato N; Nota Y; Aoki Y. Sensors (Basel); 2024 Jun; 24(11). PubMed ID: 38894415
12. Turning a CLIP Model Into a Scene Text Spotter. Yu W; Liu Y; Zhu X; Cao H; Sun X; Bai X. IEEE Trans Pattern Anal Mach Intell; 2024 Sep; 46(9):6040-6054. PubMed ID: 38507385
14. Meta-Prototypical Learning for Domain-Agnostic Few-Shot Recognition. Wang RQ; Zhang XY; Liu CL. IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6990-6996. PubMed ID: 34097618
15. Prompt-and-Transfer: Dynamic Class-aware Enhancement for Few-shot Segmentation. Bi H; Feng Y; Diao W; Wang P; Mao Y; Fu K; Wang H; Sun X. IEEE Trans Pattern Anal Mach Intell; 2024 Sep; PP. PubMed ID: 39288051
16. ZeroNLG: Aligning and Autoencoding Domains for Zero-Shot Multimodal and Multilingual Natural Language Generation. Yang B; Liu F; Zou Y; Wu X; Wang Y; Clifton DA. IEEE Trans Pattern Anal Mach Intell; 2024 Aug; 46(8):5712-5724. PubMed ID: 38421845
17. AttriPrompter: Auto-Prompting with Attribute Semantics for Zero-shot Nuclei Detection via Visual-Language Pre-trained Models. Wu Y; Zhou Y; Saiyin J; Wei B; Lai M; Shou J; Xu Y. IEEE Trans Med Imaging; 2024 Oct; PP. PubMed ID: 39361456