237 related articles for PubMed ID 35062081
1. An Evaluation of Pretrained BERT Models for Comparing Semantic Similarity Across Unstructured Clinical Trial Texts.
Patricoski J; Kreimeyer K; Balan A; Hardart K; Tao J; Anagnostou V; Botsis T
Stud Health Technol Inform; 2022 Jan; 289():18-21. PubMed ID: 35062081
2. Semantic Textual Similarity in Japanese Clinical Domain Texts Using BERT.
Mutinda FW; Yada S; Wakamiya S; Aramaki E
Methods Inf Med; 2021 Jun; 60(S 01):e56-e64. PubMed ID: 34237783
3. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT.
Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y
Artif Intell Med; 2024 Jul; 153():102889. PubMed ID: 38728811
4. A Fine-Tuned Bidirectional Encoder Representations From Transformers Model for Food Named-Entity Recognition: Algorithm Development and Validation.
Stojanov R; Popovski G; Cenikj G; Koroušić Seljak B; Eftimov T
J Med Internet Res; 2021 Aug; 23(8):e28229. PubMed ID: 34383671
5. Extracting comprehensive clinical information for breast cancer using deep learning methods.
Zhang X; Zhang Y; Zhang Q; Ren Y; Qiu T; Ma J; Sun Q
Int J Med Inform; 2019 Dec; 132():103985. PubMed ID: 31627032
6. Deep contextualized embeddings for quantifying the informative content in biomedical text summarization.
Moradi M; Dorffner G; Samwald M
Comput Methods Programs Biomed; 2020 Feb; 184():105117. PubMed ID: 31627150
7. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
Li X; Yuan W; Peng D; Mei Q; Wang Y
BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811
8. A comparative study of pre-trained language models for named entity recognition in clinical trial eligibility criteria from multiple corpora.
Li J; Wei Q; Ghiasvand O; Chen M; Lobanov V; Weng C; Xu H
BMC Med Inform Decis Mak; 2022 Sep; 22(Suppl 3):235. PubMed ID: 36068551
9. Extracting clinical named entity for pituitary adenomas from Chinese electronic medical records.
Fang A; Hu J; Zhao W; Feng M; Fu J; Feng S; Lou P; Ren H; Chen X
BMC Med Inform Decis Mak; 2022 Mar; 22(1):72. PubMed ID: 35321705
10. GT-Finder: Classify the family of glucose transporters with pre-trained BERT language models.
Ali Shah SM; Taju SW; Ho QT; Nguyen TT; Ou YY
Comput Biol Med; 2021 Apr; 131():104259. PubMed ID: 33581474
11. Incorporating Domain Knowledge Into Language Models by Using Graph Convolutional Networks for Assessing Semantic Textual Similarity: Model Development and Performance Comparison.
Chang D; Lin E; Brandt C; Taylor RA
JMIR Med Inform; 2021 Nov; 9(11):e23101. PubMed ID: 34842531
12. Adapting Bidirectional Encoder Representations from Transformers (BERT) to Assess Clinical Semantic Textual Similarity: Algorithm Development and Validation Study.
Kades K; Sellner J; Koehler G; Full PM; Lai TYE; Kleesiek J; Maier-Hein KH
JMIR Med Inform; 2021 Feb; 9(2):e22795. PubMed ID: 33533728
13. Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study.
Sun Y; Gao D; Shen X; Li M; Nan J; Zhang W
JMIR Med Inform; 2022 Apr; 10(4):e35606. PubMed ID: 35451969
14. Exploration of text matching methods in Chinese disease Q&A systems: A method using ensemble based on BERT and boosted tree models.
Wu Z; Liang J; Zhang Z; Lei J
J Biomed Inform; 2021 Mar; 115():103683. PubMed ID: 33484938
15. A Natural Language Processing Model for COVID-19 Detection Based on Dutch General Practice Electronic Health Records by Using Bidirectional Encoder Representations From Transformers: Development and Validation Study.
Homburg M; Meijer E; Berends M; Kupers T; Olde Hartman T; Muris J; de Schepper E; Velek P; Kuiper J; Berger M; Peters L
J Med Internet Res; 2023 Oct; 25():e49944. PubMed ID: 37792444
16. BioBERT and Similar Approaches for Relation Extraction.
Bhasuran B
Methods Mol Biol; 2022; 2496():221-235. PubMed ID: 35713867
17. Multi-Ontology Refined Embeddings (MORE): A hybrid multi-ontology and corpus-based semantic representation model for biomedical concepts.
Jiang S; Wu W; Tomita N; Ganoe C; Hassanpour S
J Biomed Inform; 2020 Nov; 111():103581. PubMed ID: 33010425
18. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885
19. A comparison of word embeddings for the biomedical natural language processing.
Wang Y; Liu S; Afzal N; Rastegar-Mojarad M; Wang L; Shen F; Kingsbury P; Liu H
J Biomed Inform; 2018 Nov; 87():12-20. PubMed ID: 30217670
20. Does BERT need domain adaptation for clinical negation detection?
Lin C; Bethard S; Dligach D; Sadeque F; Savova G; Miller TA
J Am Med Inform Assoc; 2020 Apr; 27(4):584-591. PubMed ID: 32044989