These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

193 related articles for article (PubMed ID: 36011135)

  • 21. Clinical concept extraction using transformers.
    Yang X; Bian J; Hogan WR; Wu Y
    J Am Med Inform Assoc; 2020 Dec; 27(12):1935-1942. PubMed ID: 33120431

  • 22. Evaluation of clinical named entity recognition methods for Serbian electronic health records.
    Kaplar A; Stošović M; Kaplar A; Brković V; Naumović R; Kovačević A
    Int J Med Inform; 2022 Aug; 164():104805. PubMed ID: 35653828

  • 23. Toward a clinical text encoder: pretraining for clinical natural language processing with applications to substance misuse.
    Dligach D; Afshar M; Miller T
    J Am Med Inform Assoc; 2019 Nov; 26(11):1272-1278. PubMed ID: 31233140

  • 24. Relation Classification for Bleeding Events From Electronic Health Records Using Deep Learning Systems: An Empirical Study.
    Mitra A; Rawat BPS; McManus DD; Yu H
    JMIR Med Inform; 2021 Jul; 9(7):e27527. PubMed ID: 34255697

  • 25. Bioformer: an efficient transformer language model for biomedical text mining.
    Fang L; Chen Q; Wei CH; Lu Z; Wang K
    ArXiv; 2023 Feb. PubMed ID: 36945685

  • 26. Pretrained Transformer Language Models Versus Pretrained Word Embeddings for the Detection of Accurate Health Information on Arabic Social Media: Comparative Study.
    Albalawi Y; Nikolov NS; Buckley J
    JMIR Form Res; 2022 Jun; 6(6):e34834. PubMed ID: 35767322

  • 27. An Ensemble Learning Strategy for Eligibility Criteria Text Classification for Clinical Trial Recruitment: Algorithm Development and Validation.
    Zeng K; Pan Z; Xu Y; Qu Y
    JMIR Med Inform; 2020 Jul; 8(7):e17832. PubMed ID: 32609092

  • 28. Deep contextualized embeddings for quantifying the informative content in biomedical text summarization.
    Moradi M; Dorffner G; Samwald M
    Comput Methods Programs Biomed; 2020 Feb; 184():105117. PubMed ID: 31627150

  • 29. AMMU: A survey of transformer-based biomedical pretrained language models.
    Kalyan KS; Rajasekharan A; Sangeetha S
    J Biomed Inform; 2022 Feb; 126():103982. PubMed ID: 34974190

  • 30. PharmBERT: a domain-specific BERT model for drug labels.
    ValizadehAslani T; Shi Y; Ren P; Wang J; Zhang Y; Hu M; Zhao L; Liang H
    Brief Bioinform; 2023 Jul; 24(4). PubMed ID: 37317617

  • 31. Fake or real news about COVID-19? Pretrained transformer model to detect potential misleading news.
    Malla S; Alphonse PJA
    Eur Phys J Spec Top; 2022; 231(18-20):3347-3356. PubMed ID: 35039760

  • 32. RxBERT: Enhancing drug labeling text mining and analysis with AI language modeling.
    Wu L; Gray M; Dang O; Xu J; Fang H; Tong W
    Exp Biol Med (Maywood); 2023 Nov; 248(21):1937-1943. PubMed ID: 38166420

  • 33. Improving Model Transferability for Clinical Note Section Classification Models Using Continued Pretraining.
    Zhou W; Yetisgen M; Afshar M; Gao Y; Savova G; Miller TA
    medRxiv; 2023 Apr. PubMed ID: 37162963

  • 34. A BERT-based pretraining model for extracting molecular structural information from a SMILES sequence.
    Zheng X; Tomiura Y
    J Cheminform; 2024 Jun; 16(1):71. PubMed ID: 38898528

  • 35. Extracting comprehensive clinical information for breast cancer using deep learning methods.
    Zhang X; Zhang Y; Zhang Q; Ren Y; Qiu T; Ma J; Sun Q
    Int J Med Inform; 2019 Dec; 132():103985. PubMed ID: 31627032

  • 36. A comparative study on deep learning models for text classification of unstructured medical notes with various levels of class imbalance.
    Lu H; Ehwerhemuepha L; Rakovski C
    BMC Med Res Methodol; 2022 Jul; 22(1):181. PubMed ID: 35780100

  • 37. Comparing Pre-trained and Feature-Based Models for Prediction of Alzheimer's Disease Based on Speech.
    Balagopalan A; Eyre B; Robin J; Rudzicz F; Novikova J
    Front Aging Neurosci; 2021; 13():635945. PubMed ID: 33986655

  • 38. A comparative study of pretrained language models for long clinical text.
    Li Y; Wehbe RM; Ahmad FS; Wang H; Luo Y
    J Am Med Inform Assoc; 2023 Jan; 30(2):340-347. PubMed ID: 36451266

  • 39. Transformer versus traditional natural language processing: how much data is enough for automated radiology report classification?
    Yang E; Li MD; Raghavan S; Deng F; Lang M; Succi MD; Huang AJ; Kalpathy-Cramer J
    Br J Radiol; 2023 Sep; 96(1149):20220769. PubMed ID: 37162253

  • 40. Automatic International Classification of Diseases Coding System: Deep Contextualized Language Model With Rule-Based Approaches.
    Chen PF; Chen KC; Liao WC; Lai F; He TL; Lin SC; Chen WJ; Yang CY; Lin YC; Tsai IC; Chiu CH; Chang SC; Hung FM
    JMIR Med Inform; 2022 Jun; 10(6):e37557. PubMed ID: 35767353
