

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine*

326 related articles for article (PubMed ID: 34017034)

  • 21. Semantic Textual Similarity in Japanese Clinical Domain Texts Using BERT.
    Mutinda FW; Yada S; Wakamiya S; Aramaki E
    Methods Inf Med; 2021 Jun; 60(S 01):e56-e64. PubMed ID: 34237783
    [TBL] [Abstract][Full Text] [Related]  

  • 22. Does BERT need domain adaptation for clinical negation detection?
    Lin C; Bethard S; Dligach D; Sadeque F; Savova G; Miller TA
    J Am Med Inform Assoc; 2020 Apr; 27(4):584-591. PubMed ID: 32044989
    [TBL] [Abstract][Full Text] [Related]  

  • 23. Predicting Postoperative Mortality With Deep Neural Networks and Natural Language Processing: Model Development and Validation.
    Chen PF; Chen L; Lin YK; Li GH; Lai F; Lu CW; Yang CY; Chen KC; Lin TY
    JMIR Med Inform; 2022 May; 10(5):e38241. PubMed ID: 35536634
    [TBL] [Abstract][Full Text] [Related]  

  • 24. FG-BERT: a generalized and self-supervised functional group-based molecular representation learning framework for properties prediction.
    Li B; Lin M; Chen T; Wang L
    Brief Bioinform; 2023 Sep; 24(6):. PubMed ID: 37930026
    [TBL] [Abstract][Full Text] [Related]  

  • 25. Classifying the lifestyle status for Alzheimer's disease from clinical notes using deep learning with weak supervision.
    Shen Z; Schutte D; Yi Y; Bompelli A; Yu F; Wang Y; Zhang R
    BMC Med Inform Decis Mak; 2022 Jul; 22(Suppl 1):88. PubMed ID: 35799294
    [TBL] [Abstract][Full Text] [Related]  

  • 26. BERT-Kgly: A Bidirectional Encoder Representations From Transformers (BERT)-Based Model for Predicting Lysine Glycation Site for Homo sapiens.
    Liu Y; Liu Y; Wang GA; Cheng Y; Bi S; Zhu X
    Front Bioinform; 2022; 2():834153. PubMed ID: 36304324
    [TBL] [Abstract][Full Text] [Related]  

  • 27. Comparing Pre-trained and Feature-Based Models for Prediction of Alzheimer's Disease Based on Speech.
    Balagopalan A; Eyre B; Robin J; Rudzicz F; Novikova J
    Front Aging Neurosci; 2021; 13():635945. PubMed ID: 33986655
    [No Abstract]   [Full Text] [Related]  

  • 28. Fine-tuning of BERT Model to Accurately Predict Drug-Target Interactions.
    Kang H; Goo S; Lee H; Chae JW; Yun HY; Jung S
    Pharmaceutics; 2022 Aug; 14(8):. PubMed ID: 36015336
    [TBL] [Abstract][Full Text] [Related]  

  • 29. BioBERT and Similar Approaches for Relation Extraction.
    Bhasuran B
    Methods Mol Biol; 2022; 2496():221-235. PubMed ID: 35713867
    [TBL] [Abstract][Full Text] [Related]  

  • 30. Symptom-BERT: Enhancing Cancer Symptom Detection in EHR Clinical Notes.
    Zeinali N; Albashayreh A; Fan W; White SG
    J Pain Symptom Manage; 2024 Aug; 68(2):190-198.e1. PubMed ID: 38789092
    [TBL] [Abstract][Full Text] [Related]  

  • 31. Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques.
    Jaber A; Martínez P
    Methods Inf Med; 2022 Jun; 61(S 01):e28-e34. PubMed ID: 35104909
    [TBL] [Abstract][Full Text] [Related]  

  • 32. GT-Finder: Classify the family of glucose transporters with pre-trained BERT language models.
    Ali Shah SM; Taju SW; Ho QT; Nguyen TT; Ou YY
    Comput Biol Med; 2021 Apr; 131():104259. PubMed ID: 33581474
    [TBL] [Abstract][Full Text] [Related]  

  • 33. Application of Entity-BERT model based on neuroscience and brain-like cognition in electronic medical record entity recognition.
    Lu W; Jiang J; Shi Y; Zhong X; Gu J; Huangfu L; Gong M
    Front Neurosci; 2023; 17():1259652. PubMed ID: 37799340
    [TBL] [Abstract][Full Text] [Related]  

  • 34. Development and External Validation of an Artificial Intelligence Model for Identifying Radiology Reports Containing Recommendations for Additional Imaging.
    Abbasi N; Lacson R; Kapoor N; Licaros A; Guenette JP; Burk KS; Hammer M; Desai S; Eappen S; Saini S; Khorasani R
    AJR Am J Roentgenol; 2023 Sep; 221(3):377-385. PubMed ID: 37073901
    [No Abstract]   [Full Text] [Related]  

  • 35. Deep learning to refine the identification of high-quality clinical research articles from the biomedical literature: Performance evaluation.
    Lokker C; Bagheri E; Abdelkader W; Parrish R; Afzal M; Navarro T; Cotoi C; Germini F; Linkins L; Haynes RB; Chu L; Iorio A
    J Biomed Inform; 2023 Jun; 142():104384. PubMed ID: 37164244
    [TBL] [Abstract][Full Text] [Related]  

  • 36. Modified Bidirectional Encoder Representations From Transformers Extractive Summarization Model for Hospital Information Systems Based on Character-Level Tokens (AlphaBERT): Development and Performance Evaluation.
    Chen YP; Chen YY; Lin JJ; Huang CH; Lai F
    JMIR Med Inform; 2020 Apr; 8(4):e17787. PubMed ID: 32347806
    [TBL] [Abstract][Full Text] [Related]  

  • 37. Improving text mining in plant health domain with GAN and/or pre-trained language model.
    Jiang S; Cormier S; Angarita R; Rousseaux F
    Front Artif Intell; 2023; 6():1072329. PubMed ID: 36895200
    [TBL] [Abstract][Full Text] [Related]  

  • 38. The Impact of Pretrained Language Models on Negation and Speculation Detection in Cross-Lingual Medical Text: Comparative Study.
    Rivera Zavala R; Martinez P
    JMIR Med Inform; 2020 Dec; 8(12):e18953. PubMed ID: 33270027
    [TBL] [Abstract][Full Text] [Related]  

  • 39. Use of BERT (Bidirectional Encoder Representations from Transformers)-Based Deep Learning Method for Extracting Evidences in Chinese Radiology Reports: Development of a Computer-Aided Liver Cancer Diagnosis Framework.
    Liu H; Zhang Z; Xu Y; Wang N; Huang Y; Yang Z; Jiang R; Chen H
    J Med Internet Res; 2021 Jan; 23(1):e19689. PubMed ID: 33433395
    [TBL] [Abstract][Full Text] [Related]  

  • 40. An Evaluation of Pretrained BERT Models for Comparing Semantic Similarity Across Unstructured Clinical Trial Texts.
    Patricoski J; Kreimeyer K; Balan A; Hardart K; Tao J; Anagnostou V; Botsis T
    Stud Health Technol Inform; 2022 Jan; 289():18-21. PubMed ID: 35062081
    [TBL] [Abstract][Full Text] [Related]  
