These tools are no longer maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.
527 related articles for article (PubMed ID: 32477646)
1. BERT-based Ranking for Biomedical Entity Normalization. Ji Z; Wei Q; Xu H. AMIA Jt Summits Transl Sci Proc. 2020;2020:269-277. PubMed ID: 32477646
2. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J. Bioinformatics. 2020 Feb;36(4):1234-1240. PubMed ID: 31501885
3. Fine-Tuning Bidirectional Encoder Representations From Transformers (BERT)-Based Models on Large-Scale Electronic Health Record Notes: An Empirical Study. Li F; Jin Y; Liu W; Rawat BPS; Cai P; Yu H. JMIR Med Inform. 2019 Sep;7(3):e14830. PubMed ID: 31516126
4. Extracting comprehensive clinical information for breast cancer using deep learning methods. Zhang X; Zhang Y; Zhang Q; Ren Y; Qiu T; Ma J; Sun Q. Int J Med Inform. 2019 Dec;132:103985. PubMed ID: 31627032
5. Deep contextualized embeddings for quantifying the informative content in biomedical text summarization. Moradi M; Dorffner G; Samwald M. Comput Methods Programs Biomed. 2020 Feb;184:105117. PubMed ID: 31627150
6. A Fine-Tuned Bidirectional Encoder Representations From Transformers Model for Food Named-Entity Recognition: Algorithm Development and Validation. Stojanov R; Popovski G; Cenikj G; Koroušić Seljak B; Eftimov T. J Med Internet Res. 2021 Aug;23(8):e28229. PubMed ID: 34383671
7. BioBERT and Similar Approaches for Relation Extraction. Bhasuran B. Methods Mol Biol. 2022;2496:221-235. PubMed ID: 35713867
8. Relation Classification for Bleeding Events From Electronic Health Records Using Deep Learning Systems: An Empirical Study. Mitra A; Rawat BPS; McManus DD; Yu H. JMIR Med Inform. 2021 Jul;9(7):e27527. PubMed ID: 34255697
9. Drug knowledge discovery via multi-task learning and pre-trained models. Li D; Xiong Y; Hu B; Tang B; Peng W; Chen Q. BMC Med Inform Decis Mak. 2021 Nov;21(Suppl 9):251. PubMed ID: 34789238
10. Identification of Semantically Similar Sentences in Clinical Notes: Iterative Intermediate Training Using Multi-Task Learning. Mahajan D; Poddar A; Liang JJ; Lin YT; Prager JM; Suryanarayanan P; Raghavan P; Tsou CH. JMIR Med Inform. 2020 Nov;8(11):e22508. PubMed ID: 33245284
11. Stacking-BERT model for Chinese medical procedure entity normalization. Li L; Zhai Y; Gao J; Wang L; Hou L; Zhao J. Math Biosci Eng. 2023 Jan;20(1):1018-1036. PubMed ID: 36650800
12. Training a Deep Contextualized Language Model for International Classification of Diseases, 10th Revision Classification via Federated Learning: Model Development and Validation Study. Chen PF; He TL; Lin SC; Chu YC; Kuo CT; Lai F; Wang SM; Zhu WX; Chen KC; Kuo LC; Hung FM; Lin YC; Tsai IC; Chiu CH; Chang SC; Yang CY. JMIR Med Inform. 2022 Nov;10(11):e41342. PubMed ID: 36355417
13. Evaluating Medical Entity Recognition in Health Care: Entity Model Quantitative Study. Liu S; Wang A; Xiu X; Zhong M; Wu S. JMIR Med Inform. 2024 Oct;12:e59782. PubMed ID: 39419501
14. Bioformer: an efficient transformer language model for biomedical text mining. Fang L; Chen Q; Wei CH; Lu Z; Wang K. ArXiv. 2023 Feb. PubMed ID: 36945685
15. Relation Extraction from Clinical Narratives Using Pre-trained Language Models. Wei Q; Ji Z; Si Y; Du J; Wang J; Tiryaki F; Wu S; Tao C; Roberts K; Xu H. AMIA Annu Symp Proc. 2019;2019:1236-1245. PubMed ID: 32308921
16. Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study. Sun Y; Gao D; Shen X; Li M; Nan J; Zhang W. JMIR Med Inform. 2022 Apr;10(4):e35606. PubMed ID: 35451969
17. GT-Finder: Classify the family of glucose transporters with pre-trained BERT language models. Ali Shah SM; Taju SW; Ho QT; Nguyen TT; Ou YY. Comput Biol Med. 2021 Apr;131:104259. PubMed ID: 33581474
18. Discovering Thematically Coherent Biomedical Documents Using Contextualized Bidirectional Encoder Representations from Transformers-Based Clustering. Davagdorj K; Wang L; Li M; Pham VH; Ryu KH; Theera-Umpon N. Int J Environ Res Public Health. 2022 May;19(10). PubMed ID: 35627429
19. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT. Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y. Artif Intell Med. 2024 Jul;153:102889. PubMed ID: 38728811
20. Modified Bidirectional Encoder Representations From Transformers Extractive Summarization Model for Hospital Information Systems Based on Character-Level Tokens (AlphaBERT): Development and Performance Evaluation. Chen YP; Chen YY; Lin JJ; Huang CH; Lai F. JMIR Med Inform. 2020 Apr;8(4):e17787. PubMed ID: 32347806
19. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT. Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y Artif Intell Med; 2024 Jul; 153():102889. PubMed ID: 38728811 [TBL] [Abstract][Full Text] [Related]
20. Modified Bidirectional Encoder Representations From Transformers Extractive Summarization Model for Hospital Information Systems Based on Character-Level Tokens (AlphaBERT): Development and Performance Evaluation. Chen YP; Chen YY; Lin JJ; Huang CH; Lai F JMIR Med Inform; 2020 Apr; 8(4):e17787. PubMed ID: 32347806 [TBL] [Abstract][Full Text] [Related] [Next] [New Search]