

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

183 related articles for the article with PubMed ID 35265939

  • 21. Automatic ICD-10 Coding and Training System: Deep Neural Network Based on Supervised Learning.
    Chen PF; Wang SM; Liao WC; Kuo LC; Chen KC; Lin YC; Yang CY; Chiu CH; Chang SC; Lai F
    JMIR Med Inform; 2021 Aug; 9(8):e23230. PubMed ID: 34463639

  • 22. Analyzing Transfer Learning of Vision Transformers for Interpreting Chest Radiography.
    Usman M; Zia T; Tariq A
    J Digit Imaging; 2022 Dec; 35(6):1445-1462. PubMed ID: 35819537

  • 23. Classification of the Disposition of Patients Hospitalized with COVID-19: Reading Discharge Summaries Using Natural Language Processing.
    Fernandes M; Sun H; Jain A; Alabsi HS; Brenner LN; Ye E; Ge W; Collens SI; Leone MJ; Das S; Robbins GK; Mukerji SS; Westover MB
    JMIR Med Inform; 2021 Feb; 9(2):e25457. PubMed ID: 33449908

  • 24. Predicting near-term glaucoma progression: An artificial intelligence approach using clinical free-text notes and data from electronic health records.
    Jalamangala Shivananjaiah SK; Kumari S; Majid I; Wang SY
    Front Med (Lausanne); 2023; 10():1157016. PubMed ID: 37122330

  • 25. Stacked DeBERT: All attention in incomplete data for text classification.
    Cunha Sergio G; Lee M
    Neural Netw; 2021 Apr; 136():87-96. PubMed ID: 33453522

  • 26. A clinical text classification paradigm using weak supervision and deep representation.
    Wang Y; Sohn S; Liu S; Shen F; Wang L; Atkinson EJ; Amin S; Liu H
    BMC Med Inform Decis Mak; 2019 Jan; 19(1):1. PubMed ID: 30616584

  • 27. Predicting Semantic Similarity Between Clinical Sentence Pairs Using Transformer Models: Evaluation and Representational Analysis.
    Ormerod M; Martínez Del Rincón J; Devereux B
    JMIR Med Inform; 2021 May; 9(5):e23099. PubMed ID: 34037527

  • 28. Critical assessment of transformer-based AI models for German clinical notes.
    Lentzen M; Madan S; Lage-Rupprecht V; Kühnel L; Fluck J; Jacobs M; Mittermaier M; Witzenrath M; Brunecker P; Hofmann-Apitius M; Weber J; Fröhlich H
    JAMIA Open; 2022 Dec; 5(4):ooac087. PubMed ID: 36380848

  • 29. Utilizing Text Mining, Data Linkage and Deep Learning in Police and Health Records to Predict Future Offenses in Family and Domestic Violence.
    Karystianis G; Cabral RC; Han SC; Poon J; Butler T
    Front Digit Health; 2021; 3():602683. PubMed ID: 34713088

  • 30. Highly accurate classification of chest radiographic reports using a deep learning natural language model pre-trained on 3.8 million text reports.
    Bressem KK; Adams LC; Gaudin RA; Tröltzsch D; Hamm B; Makowski MR; Schüle CY; Vahldiek JL; Niehues SM
    Bioinformatics; 2021 Jan; 36(21):5255-5261. PubMed ID: 32702106

  • 31. Investigating the impact of pre-processing techniques and pre-trained word embeddings in detecting Arabic health information on social media.
    Albalawi Y; Buckley J; Nikolov NS
    J Big Data; 2021; 8(1):95. PubMed ID: 34249602

  • 32. A comparative study of pretrained language models for long clinical text.
    Li Y; Wehbe RM; Ahmad FS; Wang H; Luo Y
    J Am Med Inform Assoc; 2023 Jan; 30(2):340-347. PubMed ID: 36451266

  • 33. Comparing Pre-trained and Feature-Based Models for Prediction of Alzheimer's Disease Based on Speech.
    Balagopalan A; Eyre B; Robin J; Rudzicz F; Novikova J
    Front Aging Neurosci; 2021; 13():635945. PubMed ID: 33986655

  • 34. Does BERT need domain adaptation for clinical negation detection?
    Lin C; Bethard S; Dligach D; Sadeque F; Savova G; Miller TA
    J Am Med Inform Assoc; 2020 Apr; 27(4):584-591. PubMed ID: 32044989

  • 35. Pre-training phenotyping classifiers.
    Dligach D; Afshar M; Miller T
    J Biomed Inform; 2021 Jan; 113():103626. PubMed ID: 33259943

  • 36. Deep learning approaches for extracting adverse events and indications of dietary supplements from clinical text.
    Fan Y; Zhou S; Li Y; Zhang R
    J Am Med Inform Assoc; 2021 Mar; 28(3):569-577. PubMed ID: 33150942

  • 37. Text Sentiment Classification Based on BERT Embedding and Sliced Multi-Head Self-Attention Bi-GRU.
    Zhang X; Wu Z; Liu K; Zhao Z; Wang J; Wu C
    Sensors (Basel); 2023 Jan; 23(3):. PubMed ID: 36772522

  • 38. A study of deep learning methods for de-identification of clinical notes in cross-institute settings.
    Yang X; Lyu T; Li Q; Lee CY; Bian J; Hogan WR; Wu Y
    BMC Med Inform Decis Mak; 2019 Dec; 19(Suppl 5):232. PubMed ID: 31801524

  • 39. Identifying and Predicting Intentional Self-Harm in Electronic Health Record Clinical Notes: Deep Learning Approach.
    Obeid JS; Dahne J; Christensen S; Howard S; Crawford T; Frey LJ; Stecker T; Bunnell BE
    JMIR Med Inform; 2020 Jul; 8(7):e17784. PubMed ID: 32729840

  • 40. A Hybrid Model for Family History Information Identification and Relation Extraction: Development and Evaluation of an End-to-End Information Extraction System.
    Kim Y; Heider PM; Lally IR; Meystre SM
    JMIR Med Inform; 2021 Apr; 9(4):e22797. PubMed ID: 33885370
