
200 articles related to the article with PubMed ID 36011135

  • 1. Comparison of Pretraining Models and Strategies for Health-Related Social Media Text Classification.
    Guo Y; Ge Y; Yang YC; Al-Garadi MA; Sarker A
    Healthcare (Basel); 2022 Aug; 10(8):. PubMed ID: 36011135

  • 2. RadBERT: Adapting Transformer-based Language Models to Radiology.
    Yan A; McAuley J; Lu X; Du J; Chang EY; Gentili A; Hsu CN
    Radiol Artif Intell; 2022 Jul; 4(4):e210258. PubMed ID: 35923376

  • 3. Comparison of pretrained transformer-based models for influenza and COVID-19 detection using social media text data in Saskatchewan, Canada.
    Tian Y; Zhang W; Duan L; McDonald W; Osgood N
    Front Digit Health; 2023; 5():1203874. PubMed ID: 37448834

  • 4. Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks: Algorithm Development and Validation Study.
    Oniani D; Chandrasekar P; Sivarajkumar S; Wang Y
    JMIR AI; 2023 May; 2():e44293. PubMed ID: 38875537

  • 5. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT.
    Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y
    Artif Intell Med; 2024 Jul; 153():102889. PubMed ID: 38728811

  • 6. Text classification models for the automatic detection of nonmedical prescription medication use from social media.
    Al-Garadi MA; Yang YC; Cai H; Ruan Y; O'Connor K; Graciela GH; Perrone J; Sarker A
    BMC Med Inform Decis Mak; 2021 Jan; 21(1):27. PubMed ID: 33499852

  • 7. Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study.
    Sun Y; Gao D; Shen X; Li M; Nan J; Zhang W
    JMIR Med Inform; 2022 Apr; 10(4):e35606. PubMed ID: 35451969

  • 8. BioBERT and Similar Approaches for Relation Extraction.
    Bhasuran B
    Methods Mol Biol; 2022; 2496():221-235. PubMed ID: 35713867

  • 9. Training a Deep Contextualized Language Model for International Classification of Diseases, 10th Revision Classification via Federated Learning: Model Development and Validation Study.
    Chen PF; He TL; Lin SC; Chu YC; Kuo CT; Lai F; Wang SM; Zhu WX; Chen KC; Kuo LC; Hung FM; Lin YC; Tsai IC; Chiu CH; Chang SC; Yang CY
    JMIR Med Inform; 2022 Nov; 10(11):e41342. PubMed ID: 36355417

  • 10. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
    Li X; Yuan W; Peng D; Mei Q; Wang Y
    BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811

  • 11. Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction.
    Rasmy L; Xiang Y; Xie Z; Tao C; Zhi D
    NPJ Digit Med; 2021 May; 4(1):86. PubMed ID: 34017034

  • 12. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
    Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
    Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885

  • 13. Sequence-to-sequence pretraining for a less-resourced Slovenian language.
    Ulčar M; Robnik-Šikonja M
    Front Artif Intell; 2023; 6():932519. PubMed ID: 37056912

  • 14. Enhancing clinical concept extraction with contextual embeddings.
    Si Y; Wang J; Xu H; Roberts K
    J Am Med Inform Assoc; 2019 Nov; 26(11):1297-1304. PubMed ID: 31265066

  • 15. Measurement of Semantic Textual Similarity in Clinical Texts: Comparison of Transformer-Based Models.
    Yang X; He X; Zhang H; Ma Y; Bian J; Wu Y
    JMIR Med Inform; 2020 Nov; 8(11):e19735. PubMed ID: 33226350

  • 16. Depression Risk Prediction for Chinese Microblogs via Deep-Learning Methods: Content Analysis.
    Wang X; Chen S; Li T; Li W; Zhou Y; Zheng J; Chen Q; Yan J; Tang B
    JMIR Med Inform; 2020 Jul; 8(7):e17958. PubMed ID: 32723719

  • 17. Identifying the Perceived Severity of Patient-Generated Telemedical Queries Regarding COVID: Developing and Evaluating a Transfer Learning-Based Solution.
    Gatto J; Seegmiller P; Johnston G; Preum SM
    JMIR Med Inform; 2022 Sep; 10(9):e37770. PubMed ID: 35981230

  • 18. BioBERTurk: Exploring Turkish Biomedical Language Model Development Strategies in Low-Resource Setting.
    Türkmen H; Dikenelli O; Eraslan C; Çallı MC; Özbek SS
    J Healthc Inform Res; 2023 Dec; 7(4):433-446. PubMed ID: 37927378

  • 19. COVID-Twitter-BERT: A natural language processing model to analyse COVID-19 content on Twitter.
    Müller M; Salathé M; Kummervold PE
    Front Artif Intell; 2023; 6():1023281. PubMed ID: 36998290

  • 20. Improving model transferability for clinical note section classification models using continued pretraining.
    Zhou W; Yetisgen M; Afshar M; Gao Y; Savova G; Miller TA
    J Am Med Inform Assoc; 2023 Dec; 31(1):89-97. PubMed ID: 37725927
