BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

1145 related articles for article (PubMed ID: 34330244)

  • 1. Transformers-sklearn: a toolkit for medical language understanding with transformer-based models.
    Yang F; Wang X; Ma H; Li J
    BMC Med Inform Decis Mak; 2021 Jul; 21(Suppl 2):90. PubMed ID: 34330244

  • 2. Predicting Semantic Similarity Between Clinical Sentence Pairs Using Transformer Models: Evaluation and Representational Analysis.
    Ormerod M; Martínez Del Rincón J; Devereux B
    JMIR Med Inform; 2021 May; 9(5):e23099. PubMed ID: 34037527

  • 3. Extracting comprehensive clinical information for breast cancer using deep learning methods.
    Zhang X; Zhang Y; Zhang Q; Ren Y; Qiu T; Ma J; Sun Q
    Int J Med Inform; 2019 Dec; 132():103985. PubMed ID: 31627032

  • 4. Clinical concept extraction using transformers.
    Yang X; Bian J; Hogan WR; Wu Y
    J Am Med Inform Assoc; 2020 Dec; 27(12):1935-1942. PubMed ID: 33120431

  • 5. Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks: Algorithm Development and Validation Study.
    Oniani D; Chandrasekar P; Sivarajkumar S; Wang Y
    JMIR AI; 2023 May; 2():e44293. PubMed ID: 38875537

  • 6. Identify diabetic retinopathy-related clinical concepts and their attributes using transformer-based natural language processing methods.
    Yu Z; Yang X; Sweeting GL; Ma Y; Stolte SE; Fang R; Wu Y
    BMC Med Inform Decis Mak; 2022 Sep; 22(Suppl 3):255. PubMed ID: 36167551

  • 7. Evaluation of clinical named entity recognition methods for Serbian electronic health records.
    Kaplar A; Stošović M; Kaplar A; Brković V; Naumović R; Kovačević A
    Int J Med Inform; 2022 Aug; 164():104805. PubMed ID: 35653828

  • 8. RadBERT: Adapting Transformer-based Language Models to Radiology.
    Yan A; McAuley J; Lu X; Du J; Chang EY; Gentili A; Hsu CN
    Radiol Artif Intell; 2022 Jul; 4(4):e210258. PubMed ID: 35923376

  • 9. A Question-and-Answer System to Extract Data From Free-Text Oncological Pathology Reports (CancerBERT Network): Development Study.
    Mitchell JR; Szepietowski P; Howard R; Reisman P; Jones JD; Lewis P; Fridley BL; Rollison DE
    J Med Internet Res; 2022 Mar; 24(3):e27210. PubMed ID: 35319481

  • 10. Sample Size Considerations for Fine-Tuning Large Language Models for Named Entity Recognition Tasks: Methodological Study.
    Majdik ZP; Graham SS; Shiva Edward JC; Rodriguez SN; Karnes MS; Jensen JT; Barbour JB; Rousseau JF
    JMIR AI; 2024 May; 3():e52095. PubMed ID: 38875593

  • 11. A comparative study of pretrained language models for long clinical text.
    Li Y; Wehbe RM; Ahmad FS; Wang H; Luo Y
    J Am Med Inform Assoc; 2023 Jan; 30(2):340-347. PubMed ID: 36451266

  • 12. Transformers for extracting breast cancer information from Spanish clinical narratives.
    Solarte-Pabón O; Montenegro O; García-Barragán A; Torrente M; Provencio M; Menasalvas E; Robles V
    Artif Intell Med; 2023 Sep; 143():102625. PubMed ID: 37673566

  • 13. Vocabulary Matters: An Annotation Pipeline and Four Deep Learning Algorithms for Enzyme Named Entity Recognition.
    Wang M; Vijayaraghavan A; Beck T; Posma JM
    J Proteome Res; 2024 Jun; 23(6):1915-1925. PubMed ID: 38733346

  • 14. Fine-Tuning Bidirectional Encoder Representations From Transformers (BERT)-Based Models on Large-Scale Electronic Health Record Notes: An Empirical Study.
    Li F; Jin Y; Liu W; Rawat BPS; Cai P; Yu H
    JMIR Med Inform; 2019 Sep; 7(3):e14830. PubMed ID: 31516126

  • 15. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT.
    Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y
    Artif Intell Med; 2024 Jul; 153():102889. PubMed ID: 38728811

  • 16. Explainable clinical coding with in-domain adapted transformers.
    López-García G; Jerez JM; Ribelles N; Alba E; Veredas FJ
    J Biomed Inform; 2023 Mar; 139():104323. PubMed ID: 36813154

  • 17. Benchmarking for biomedical natural language processing tasks with a domain specific ALBERT.
    Naseem U; Dunn AG; Khushi M; Kim J
    BMC Bioinformatics; 2022 Apr; 23(1):144. PubMed ID: 35448946

  • 18. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
    Li X; Yuan W; Peng D; Mei Q; Wang Y
    BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811

  • 19. Multilabel classification of medical concepts for patient clinical profile identification.
    Gérardin C; Wajsbürt P; Vaillant P; Bellamine A; Carrat F; Tannier X
    Artif Intell Med; 2022 Jun; 128():102311. PubMed ID: 35534148

  • 20. AMMU: A survey of transformer-based biomedical pretrained language models.
    Kalyan KS; Rajasekharan A; Sangeetha S
    J Biomed Inform; 2022 Feb; 126():103982. PubMed ID: 34974190
