484 related articles for article (PubMed ID: 35713867)
1. BioBERT and Similar Approaches for Relation Extraction.
Bhasuran B
Methods Mol Biol; 2022; 2496():221-235. PubMed ID: 35713867
2. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885
3. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT.
Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y
Artif Intell Med; 2024 Jul; 153():102889. PubMed ID: 38728811
4. Bioformer: an efficient transformer language model for biomedical text mining.
Fang L; Chen Q; Wei CH; Lu Z; Wang K
ArXiv; 2023 Feb. PubMed ID: 36945685
5. Extracting comprehensive clinical information for breast cancer using deep learning methods.
Zhang X; Zhang Y; Zhang Q; Ren Y; Qiu T; Ma J; Sun Q
Int J Med Inform; 2019 Dec; 132():103985. PubMed ID: 31627032
6. Drug knowledge discovery via multi-task learning and pre-trained models.
Li D; Xiong Y; Hu B; Tang B; Peng W; Chen Q
BMC Med Inform Decis Mak; 2021 Nov; 21(Suppl 9):251. PubMed ID: 34789238
7. Investigation of improving the pre-training and fine-tuning of BERT model for biomedical relation extraction.
Su P; Vijay-Shanker K
BMC Bioinformatics; 2022 Apr; 23(1):120. PubMed ID: 35379166
8. Fine-Tuning Bidirectional Encoder Representations From Transformers (BERT)-Based Models on Large-Scale Electronic Health Record Notes: An Empirical Study.
Li F; Jin Y; Liu W; Rawat BPS; Cai P; Yu H
JMIR Med Inform; 2019 Sep; 7(3):e14830. PubMed ID: 31516126
9. PharmBERT: a domain-specific BERT model for drug labels.
ValizadehAslani T; Shi Y; Ren P; Wang J; Zhang Y; Hu M; Zhao L; Liang H
Brief Bioinform; 2023 Jul; 24(4). PubMed ID: 37317617
10. A Fine-Tuned Bidirectional Encoder Representations From Transformers Model for Food Named-Entity Recognition: Algorithm Development and Validation.
Stojanov R; Popovski G; Cenikj G; Koroušić Seljak B; Eftimov T
J Med Internet Res; 2021 Aug; 23(8):e28229. PubMed ID: 34383671
11. Do syntactic trees enhance Bidirectional Encoder Representations from Transformers (BERT) models for chemical-drug relation extraction?
Tang A; Deléger L; Bossy R; Zweigenbaum P; Nédellec C
Database (Oxford); 2022 Aug; 2022. PubMed ID: 36006843
12. Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks: Algorithm Development and Validation Study.
Oniani D; Chandrasekar P; Sivarajkumar S; Wang Y
JMIR AI; 2023 May; 2:e44293. PubMed ID: 38875537
13. BERT-based Ranking for Biomedical Entity Normalization.
Ji Z; Wei Q; Xu H
AMIA Jt Summits Transl Sci Proc; 2020; 2020():269-277. PubMed ID: 32477646
14. Benchmarking for biomedical natural language processing tasks with a domain-specific ALBERT.
Naseem U; Dunn AG; Khushi M; Kim J
BMC Bioinformatics; 2022 Apr; 23(1):144. PubMed ID: 35448946
15. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
Li X; Yuan W; Peng D; Mei Q; Wang Y
BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811
16. Relation Classification for Bleeding Events From Electronic Health Records Using Deep Learning Systems: An Empirical Study.
Mitra A; Rawat BPS; McManus DD; Yu H
JMIR Med Inform; 2021 Jul; 9(7):e27527. PubMed ID: 34255697
17. Relation Extraction from Clinical Narratives Using Pre-trained Language Models.
Wei Q; Ji Z; Si Y; Du J; Wang J; Tiryaki F; Wu S; Tao C; Roberts K; Xu H
AMIA Annu Symp Proc; 2019; 2019():1236-1245. PubMed ID: 32308921
18. Use of BERT (Bidirectional Encoder Representations from Transformers)-Based Deep Learning Method for Extracting Evidences in Chinese Radiology Reports: Development of a Computer-Aided Liver Cancer Diagnosis Framework.
Liu H; Zhang Z; Xu Y; Wang N; Huang Y; Yang Z; Jiang R; Chen H
J Med Internet Res; 2021 Jan; 23(1):e19689. PubMed ID: 33433395
19. BERT-GT: cross-sentence n-ary relation extraction with BERT and Graph Transformer.
Lai PT; Lu Z
Bioinformatics; 2021 Apr; 36(24):5678-5685. PubMed ID: 33416851
20. Transformers-sklearn: a toolkit for medical language understanding with transformer-based models.
Yang F; Wang X; Ma H; Li J
BMC Med Inform Decis Mak; 2021 Jul; 21(Suppl 2):90. PubMed ID: 34330244