BIOMARKERS: Molecular Biopsy of Human Tumors - a resource for Precision Medicine
235 related articles for article (PubMed ID: 32044989)

  • 1. Does BERT need domain adaptation for clinical negation detection?
    Lin C; Bethard S; Dligach D; Sadeque F; Savova G; Miller TA
    J Am Med Inform Assoc; 2020 Apr; 27(4):584-591. PubMed ID: 32044989

  • 2. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT.
    Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y
    Artif Intell Med; 2024 Jul; 153():102889. PubMed ID: 38728811

  • 3. Extracting comprehensive clinical information for breast cancer using deep learning methods.
    Zhang X; Zhang Y; Zhang Q; Ren Y; Qiu T; Ma J; Sun Q
    Int J Med Inform; 2019 Dec; 132():103985. PubMed ID: 31627032

  • 4. Automatic text classification of actionable radiology reports of tinnitus patients using bidirectional encoder representations from transformer (BERT) and in-domain pre-training (IDPT).
    Li J; Lin Y; Zhao P; Liu W; Cai L; Sun J; Zhao L; Yang Z; Song H; Lv H; Wang Z
    BMC Med Inform Decis Mak; 2022 Jul; 22(1):200. PubMed ID: 35907966

  • 5. The Impact of Pretrained Language Models on Negation and Speculation Detection in Cross-Lingual Medical Text: Comparative Study.
    Rivera Zavala R; Martinez P
    JMIR Med Inform; 2020 Dec; 8(12):e18953. PubMed ID: 33270027

  • 6. A Fine-Tuned Bidirectional Encoder Representations From Transformers Model for Food Named-Entity Recognition: Algorithm Development and Validation.
    Stojanov R; Popovski G; Cenikj G; Koroušić Seljak B; Eftimov T
    J Med Internet Res; 2021 Aug; 23(8):e28229. PubMed ID: 34383671

  • 7. Deep Learning Approach for Negation and Speculation Detection for Automated Important Finding Flagging and Extraction in Radiology Report: Internal Validation and Technique Comparison Study.
    Weng KH; Liu CF; Chen CJ
    JMIR Med Inform; 2023 Apr; 11():e46348. PubMed ID: 37097731

  • 8. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
    Li X; Yuan W; Peng D; Mei Q; Wang Y
    BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811

  • 9. Classifying the lifestyle status for Alzheimer's disease from clinical notes using deep learning with weak supervision.
    Shen Z; Schutte D; Yi Y; Bompelli A; Yu F; Wang Y; Zhang R
    BMC Med Inform Decis Mak; 2022 Jul; 22(Suppl 1):88. PubMed ID: 35799294

  • 10. Semantic Textual Similarity in Japanese Clinical Domain Texts Using BERT.
    Mutinda FW; Yada S; Wakamiya S; Aramaki E
    Methods Inf Med; 2021 Jun; 60(S 01):e56-e64. PubMed ID: 34237783

  • 11. BioBERT and Similar Approaches for Relation Extraction.
    Bhasuran B
    Methods Mol Biol; 2022; 2496():221-235. PubMed ID: 35713867

  • 12. Relation Extraction from Clinical Narratives Using Pre-trained Language Models.
    Wei Q; Ji Z; Si Y; Du J; Wang J; Tiryaki F; Wu S; Tao C; Roberts K; Xu H
    AMIA Annu Symp Proc; 2019; 2019():1236-1245. PubMed ID: 32308921

  • 13. Highly accurate classification of chest radiographic reports using a deep learning natural language model pre-trained on 3.8 million text reports.
    Bressem KK; Adams LC; Gaudin RA; Tröltzsch D; Hamm B; Makowski MR; Schüle CY; Vahldiek JL; Niehues SM
    Bioinformatics; 2021 Jan; 36(21):5255-5261. PubMed ID: 32702106

  • 14. A comparative study on deep learning models for text classification of unstructured medical notes with various levels of class imbalance.
    Lu H; Ehwerhemuepha L; Rakovski C
    BMC Med Res Methodol; 2022 Jul; 22(1):181. PubMed ID: 35780100

  • 15. Use of BERT (Bidirectional Encoder Representations from Transformers)-Based Deep Learning Method for Extracting Evidences in Chinese Radiology Reports: Development of a Computer-Aided Liver Cancer Diagnosis Framework.
    Liu H; Zhang Z; Xu Y; Wang N; Huang Y; Yang Z; Jiang R; Chen H
    J Med Internet Res; 2021 Jan; 23(1):e19689. PubMed ID: 33433395

  • 16. Disease Concept-Embedding Based on the Self-Supervised Method for Medical Information Extraction from Electronic Health Records and Disease Retrieval: Algorithm Development and Validation Study.
    Chen YP; Lo YH; Lai F; Huang CH
    J Med Internet Res; 2021 Jan; 23(1):e25113. PubMed ID: 33502324

  • 17. Korean clinical entity recognition from diagnosis text using BERT.
    Kim YM; Lee TH
    BMC Med Inform Decis Mak; 2020 Sep; 20(Suppl 7):242. PubMed ID: 32998724

  • 18. Traditional Chinese medicine clinical records classification with BERT and domain specific corpora.
    Yao L; Jin Z; Mao C; Zhang Y; Luo Y
    J Am Med Inform Assoc; 2019 Dec; 26(12):1632-1636. PubMed ID: 31550356

  • 19. Comparing Pre-trained and Feature-Based Models for Prediction of Alzheimer's Disease Based on Speech.
    Balagopalan A; Eyre B; Robin J; Rudzicz F; Novikova J
    Front Aging Neurosci; 2021; 13():635945. PubMed ID: 33986655

  • 20. Toward a clinical text encoder: pretraining for clinical natural language processing with applications to substance misuse.
    Dligach D; Afshar M; Miller T
    J Am Med Inform Assoc; 2019 Nov; 26(11):1272-1278. PubMed ID: 31233140
