

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

147 related articles for the article with PubMed ID 38570373

  • 1. EndoViT: pretraining vision transformers on a large collection of endoscopic images.
    Batić D; Holm F; Özsoy E; Czempiel T; Navab N
    Int J Comput Assist Radiol Surg; 2024 Jun; 19(6):1085-1091. PubMed ID: 38570373

  • 2. SurgNet: Self-Supervised Pretraining With Semantic Consistency for Vessel and Instrument Segmentation in Surgical Images.
    Chen J; Li M; Han H; Zhao Z; Chen X
    IEEE Trans Med Imaging; 2024 Apr; 43(4):1513-1525. PubMed ID: 38090838

  • 3. Enhancing diagnostic deep learning via self-supervised pretraining on large-scale, unlabeled non-medical images.
    Tayebi Arasteh S; Misera L; Kather JN; Truhn D; Nebelung S
    Eur Radiol Exp; 2024 Feb; 8(1):10. PubMed ID: 38326501

  • 4. Transformer-based unsupervised contrastive learning for histopathological image classification.
    Wang X; Yang S; Zhang J; Wang M; Zhang J; Yang W; Huang J; Han X
    Med Image Anal; 2022 Oct; 81:102559. PubMed ID: 35952419

  • 5. Optimizing Vision Transformers for Histopathology: Pretraining and Normalization in Breast Cancer Classification.
    Baroni GL; Rasotto L; Roitero K; Tulisso A; Di Loreto C; Della Mea V
    J Imaging; 2024 Apr; 10(5). PubMed ID: 38786562

  • 6. AMMU: A survey of transformer-based biomedical pretrained language models.
    Kalyan KS; Rajasekharan A; Sangeetha S
    J Biomed Inform; 2022 Feb; 126:103982. PubMed ID: 34974190

  • 7. Exploiting the potential of unlabeled endoscopic video data with self-supervised learning.
    Ross T; Zimmerer D; Vemuri A; Isensee F; Wiesenfarth M; Bodenstedt S; Both F; Kessler P; Wagner M; Müller B; Kenngott H; Speidel S; Kopp-Schneider A; Maier-Hein K; Maier-Hein L
    Int J Comput Assist Radiol Surg; 2018 Jun; 13(6):925-933. PubMed ID: 29704196

  • 8. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT.
    Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y
    Artif Intell Med; 2024 Jul; 153:102889. PubMed ID: 38728811

  • 9. POPAR: Patch Order Prediction and Appearance Recovery for Self-supervised Medical Image Analysis.
    Pang J; Haghighi F; Ma D; Islam NU; Taher MRH; Gotway MB; Liang J
    Domain Adapt Represent Transf (2022); 2022 Sep; 13542:77-87. PubMed ID: 36507898

  • 10. Gait Recognition with Self-Supervised Learning of Gait Features Based on Vision Transformers.
    Pinčić D; Sušanj D; Lenac K
    Sensors (Basel); 2022 Sep; 22(19). PubMed ID: 36236238

  • 11. Nucleus-Aware Self-Supervised Pretraining Using Unpaired Image-to-Image Translation for Histopathology Images.
    Song Z; Du P; Yan J; Li K; Shou J; Lai M; Fan Y; Xu Y
    IEEE Trans Med Imaging; 2024 Jan; 43(1):459-472. PubMed ID: 37647175

  • 12. CellViT: Vision Transformers for precise cell segmentation and classification.
    Hörst F; Rempe M; Heine L; Seibold C; Keyl J; Baldini G; Ugurel S; Siveke J; Grünwald B; Egger J; Kleesiek J
    Med Image Anal; 2024 May; 94:103143. PubMed ID: 38507894

  • 13. Advantages of transformer and its application for medical image segmentation: a survey.
    Pu Q; Xi Z; Yin S; Zhao Z; Zhao L
    Biomed Eng Online; 2024 Feb; 23(1):14. PubMed ID: 38310297

  • 14. Speed Improvement in Image Stitching for Panoramic Dynamic Images during Minimally Invasive Surgery.
    Kim DT; Nguyen VT; Cheng CH; Liu DG; Liu KCJ; Huang KCJ
    J Healthc Eng; 2018; 2018:3654210. PubMed ID: 30631411

  • 15. Self-supervised-RCNN for medical image segmentation with limited data annotation.
    Felfeliyan B; Forkert ND; Hareendranathan A; Cornel D; Zhou Y; Kuntze G; Jaremko JL; Ronsky JL
    Comput Med Imaging Graph; 2023 Oct; 109:102297. PubMed ID: 37729826

  • 16. Advancing Accuracy in Multimodal Medical Tasks Through Bootstrapped Language-Image Pretraining (BioMedBLIP): Performance Evaluation Study.
    Naseem U; Thapa S; Masood A
    JMIR Med Inform; 2024 Aug; 12:e56627. PubMed ID: 39102281

  • 17. Simulation-to-real domain adaptation with teacher-student learning for endoscopic instrument segmentation.
    Sahu M; Mukhopadhyay A; Zachow S
    Int J Comput Assist Radiol Surg; 2021 May; 16(5):849-859. PubMed ID: 33982232

  • 18. Efficient Supervised Pretraining of Swin-Transformer for Virtual Staining of Microscopy Images.
    Ma J; Chen H
    IEEE Trans Med Imaging; 2024 Apr; 43(4):1388-1399. PubMed ID: 38010933

  • 19. Reducing annotation burden in MR: A novel MR-contrast guided contrastive learning approach for image segmentation.
    Umapathy L; Brown T; Mushtaq R; Greenhill M; Lu J; Martin D; Altbach M; Bilgin A
    Med Phys; 2024 Apr; 51(4):2707-2720. PubMed ID: 37956263

  • 20. Generalizability of Self-Supervised Training Models for Digital Pathology: A Multicountry Comparison in Colorectal Cancer.
    Shao Z; Dai L; Jonnagaddala J; Chen Y; Wang Y; Fang Z; Zhang Y
    JCO Clin Cancer Inform; 2023 Sep; 7:e2200178. PubMed ID: 37703507
