

BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

242 related articles for article (PubMed ID: 37468830)

  • 21. Deep learning to refine the identification of high-quality clinical research articles from the biomedical literature: Performance evaluation.
    Lokker C; Bagheri E; Abdelkader W; Parrish R; Afzal M; Navarro T; Cotoi C; Germini F; Linkins L; Haynes RB; Chu L; Iorio A
    J Biomed Inform; 2023 Jun; 142():104384. PubMed ID: 37164244

  • 22. Classifying social determinants of health from unstructured electronic health records using deep learning-based natural language processing.
    Han S; Zhang RF; Shi L; Richie R; Liu H; Tseng A; Quan W; Ryan N; Brent D; Tsui FR
    J Biomed Inform; 2022 Mar; 127():103984. PubMed ID: 35007754

  • 23. Classifying the lifestyle status for Alzheimer's disease from clinical notes using deep learning with weak supervision.
    Shen Z; Schutte D; Yi Y; Bompelli A; Yu F; Wang Y; Zhang R
    BMC Med Inform Decis Mak; 2022 Jul; 22(Suppl 1):88. PubMed ID: 35799294

  • 24. A Large Language Model-Based Generative Natural Language Processing Framework Finetuned on Clinical Notes Accurately Extracts Headache Frequency from Electronic Health Records.
    Chiang CC; Luo M; Dumkrieger G; Trivedi S; Chen YC; Chao CJ; Schwedt TJ; Sarker A; Banerjee I
    medRxiv; 2023 Oct; ():. PubMed ID: 37873417

  • 25. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
    Li X; Yuan W; Peng D; Mei Q; Wang Y
    BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811

  • 26. Transfer Learning for Sentiment Analysis Using BERT Based Supervised Fine-Tuning.
    Prottasha NJ; Sami AA; Kowsher M; Murad SA; Bairagi AK; Masud M; Baz M
    Sensors (Basel); 2022 May; 22(11):. PubMed ID: 35684778

  • 27. Automatic extraction of 12 cardiovascular concepts from German discharge letters using pre-trained language models.
    Richter-Pechanski P; Geis NA; Kiriakou C; Schwab DM; Dieterich C
    Digit Health; 2021; 7():20552076211057662. PubMed ID: 34868618

  • 28. Deep contextualized embeddings for quantifying the informative content in biomedical text summarization.
    Moradi M; Dorffner G; Samwald M
    Comput Methods Programs Biomed; 2020 Feb; 184():105117. PubMed ID: 31627150

  • 29. Relation Extraction from Clinical Narratives Using Pre-trained Language Models.
    Wei Q; Ji Z; Si Y; Du J; Wang J; Tiryaki F; Wu S; Tao C; Roberts K; Xu H
    AMIA Annu Symp Proc; 2019; 2019():1236-1245. PubMed ID: 32308921

  • 30. Discovering novel drug-supplement interactions using SuppKG generated from the biomedical literature.
    Schutte D; Vasilakes J; Bompelli A; Zhou Y; Fiszman M; Xu H; Kilicoglu H; Bishop JR; Adam T; Zhang R
    J Biomed Inform; 2022 Jul; 131():104120. PubMed ID: 35709900

  • 31. MLM-based typographical error correction of unstructured medical texts for named entity recognition.
    Lee EB; Heo GE; Choi CM; Song M
    BMC Bioinformatics; 2022 Nov; 23(1):486. PubMed ID: 36384464

  • 32. Identifying and Extracting Rare Diseases and Their Phenotypes with Large Language Models.
    Shyr C; Hu Y; Bastarache L; Cheng A; Hamid R; Harris P; Xu H
    J Healthc Inform Res; 2024 Jun; 8(2):438-461. PubMed ID: 38681753

  • 33. Improving large language models for clinical named entity recognition via prompt engineering.
    Hu Y; Chen Q; Du J; Peng X; Keloth VK; Zuo X; Zhou Y; Li Z; Jiang X; Lu Z; Roberts K; Xu H
    J Am Med Inform Assoc; 2024 Jan; ():. PubMed ID: 38281112

  • 34. A Question-and-Answer System to Extract Data From Free-Text Oncological Pathology Reports (CancerBERT Network): Development Study.
    Mitchell JR; Szepietowski P; Howard R; Reisman P; Jones JD; Lewis P; Fridley BL; Rollison DE
    J Med Internet Res; 2022 Mar; 24(3):e27210. PubMed ID: 35319481

  • 35. Benchmarking for biomedical natural language processing tasks with a domain specific ALBERT.
    Naseem U; Dunn AG; Khushi M; Kim J
    BMC Bioinformatics; 2022 Apr; 23(1):144. PubMed ID: 35448946

  • 36. Dependency parsing of biomedical text with BERT.
    Kanerva J; Ginter F; Pyysalo S
    BMC Bioinformatics; 2020 Dec; 21(Suppl 23):580. PubMed ID: 33372589

  • 37. PICO entity extraction for preclinical animal literature.
    Wang Q; Liao J; Lapata M; Macleod M
    Syst Rev; 2022 Sep; 11(1):209. PubMed ID: 36180888

  • 38. BioInstruct: instruction tuning of large language models for biomedical natural language processing.
    Tran H; Yang Z; Yao Z; Yu H
    J Am Med Inform Assoc; 2024 Jun; ():. PubMed ID: 38833265

  • 39. deepBioWSD: effective deep neural word sense disambiguation of biomedical text data.
    Pesaranghader A; Matwin S; Sokolova M; Pesaranghader A
    J Am Med Inform Assoc; 2019 May; 26(5):438-446. PubMed ID: 30811548

  • 40. Adapting State-of-the-Art Deep Language Models to Clinical Information Extraction Systems: Potentials, Challenges, and Solutions.
    Zhou L; Suominen H; Gedeon T
    JMIR Med Inform; 2019 Apr; 7(2):e11499. PubMed ID: 31021325
