

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

288 related articles for article (PubMed ID: 36380848)

  • 1. Critical assessment of transformer-based AI models for German clinical notes.
    Lentzen M; Madan S; Lage-Rupprecht V; Kühnel L; Fluck J; Jacobs M; Mittermaier M; Witzenrath M; Brunecker P; Hofmann-Apitius M; Weber J; Fröhlich H
    JAMIA Open; 2022 Dec; 5(4):ooac087. PubMed ID: 36380848

  • 2. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT.
    Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y
    Artif Intell Med; 2024 Jul; 153():102889. PubMed ID: 38728811

  • 3. Transformers-sklearn: a toolkit for medical language understanding with transformer-based models.
    Yang F; Wang X; Ma H; Li J
    BMC Med Inform Decis Mak; 2021 Jul; 21(Suppl 2):90. PubMed ID: 34330244

  • 4. Fine-Tuning Bidirectional Encoder Representations From Transformers (BERT)-Based Models on Large-Scale Electronic Health Record Notes: An Empirical Study.
    Li F; Jin Y; Liu W; Rawat BPS; Cai P; Yu H
    JMIR Med Inform; 2019 Sep; 7(3):e14830. PubMed ID: 31516126

  • 5. Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks: Algorithm Development and Validation Study.
    Oniani D; Chandrasekar P; Sivarajkumar S; Wang Y
    JMIR AI; 2023 May; 2():e44293. PubMed ID: 38875537

  • 6. Clinical concept extraction using transformers.
    Yang X; Bian J; Hogan WR; Wu Y
    J Am Med Inform Assoc; 2020 Dec; 27(12):1935-1942. PubMed ID: 33120431

  • 7. Measurement of Semantic Textual Similarity in Clinical Texts: Comparison of Transformer-Based Models.
    Yang X; He X; Zhang H; Ma Y; Bian J; Wu Y
    JMIR Med Inform; 2020 Nov; 8(11):e19735. PubMed ID: 33226350

  • 8. Benchmarking for biomedical natural language processing tasks with a domain specific ALBERT.
    Naseem U; Dunn AG; Khushi M; Kim J
    BMC Bioinformatics; 2022 Apr; 23(1):144. PubMed ID: 35448946

  • 9. A comparative study of pre-trained language models for named entity recognition in clinical trial eligibility criteria from multiple corpora.
    Li J; Wei Q; Ghiasvand O; Chen M; Lobanov V; Weng C; Xu H
    BMC Med Inform Decis Mak; 2022 Sep; 22(Suppl 3):235. PubMed ID: 36068551

  • 10. Automatic extraction of 12 cardiovascular concepts from German discharge letters using pre-trained language models.
    Richter-Pechanski P; Geis NA; Kiriakou C; Schwab DM; Dieterich C
    Digit Health; 2021; 7():20552076211057662. PubMed ID: 34868618

  • 11. Extracting comprehensive clinical information for breast cancer using deep learning methods.
    Zhang X; Zhang Y; Zhang Q; Ren Y; Qiu T; Ma J; Sun Q
    Int J Med Inform; 2019 Dec; 132():103985. PubMed ID: 31627032

  • 12. Bioformer: an efficient transformer language model for biomedical text mining.
    Fang L; Chen Q; Wei CH; Lu Z; Wang K
    ArXiv; 2023 Feb. PubMed ID: 36945685

  • 13. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
    Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
    Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885

  • 14. Identify diabetic retinopathy-related clinical concepts and their attributes using transformer-based natural language processing methods.
    Yu Z; Yang X; Sweeting GL; Ma Y; Stolte SE; Fang R; Wu Y
    BMC Med Inform Decis Mak; 2022 Sep; 22(Suppl 3):255. PubMed ID: 36167551

  • 15. BioBERT and Similar Approaches for Relation Extraction.
    Bhasuran B
    Methods Mol Biol; 2022; 2496():221-235. PubMed ID: 35713867

  • 16. RadBERT: Adapting Transformer-based Language Models to Radiology.
    Yan A; McAuley J; Lu X; Du J; Chang EY; Gentili A; Hsu CN
    Radiol Artif Intell; 2022 Jul; 4(4):e210258. PubMed ID: 35923376

  • 17. Identification of Semantically Similar Sentences in Clinical Notes: Iterative Intermediate Training Using Multi-Task Learning.
    Mahajan D; Poddar A; Liang JJ; Lin YT; Prager JM; Suryanarayanan P; Raghavan P; Tsou CH
    JMIR Med Inform; 2020 Nov; 8(11):e22508. PubMed ID: 33245284

  • 18. Predicting Semantic Similarity Between Clinical Sentence Pairs Using Transformer Models: Evaluation and Representational Analysis.
    Ormerod M; Martínez Del Rincón J; Devereux B
    JMIR Med Inform; 2021 May; 9(5):e23099. PubMed ID: 34037527

  • 19. Sample Size Considerations for Fine-Tuning Large Language Models for Named Entity Recognition Tasks: Methodological Study.
    Majdik ZP; Graham SS; Shiva Edward JC; Rodriguez SN; Karnes MS; Jensen JT; Barbour JB; Rousseau JF
    JMIR AI; 2024 May; 3():e52095. PubMed ID: 38875593

  • 20. Classifying social determinants of health from unstructured electronic health records using deep learning-based natural language processing.
    Han S; Zhang RF; Shi L; Richie R; Liu H; Tseng A; Quan W; Ryan N; Brent D; Tsui FR
    J Biomed Inform; 2022 Mar; 127():103984. PubMed ID: 35007754
