BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

161 related articles for article (PubMed ID: 37056912)

  • 1. Sequence-to-sequence pretraining for a less-resourced Slovenian language.
    Ulčar M; Robnik-Šikonja M
    Front Artif Intell; 2023; 6():932519. PubMed ID: 37056912

  • 2. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
    Li X; Yuan W; Peng D; Mei Q; Wang Y
    BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811

  • 3. Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study.
    Sun Y; Gao D; Shen X; Li M; Nan J; Zhang W
    JMIR Med Inform; 2022 Apr; 10(4):e35606. PubMed ID: 35451969

  • 4. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT.
    Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y
    Artif Intell Med; 2024 Jul; 153():102889. PubMed ID: 38728811

  • 5. Evaluation of clinical named entity recognition methods for Serbian electronic health records.
    Kaplar A; Stošović M; Kaplar A; Brković V; Naumović R; Kovačević A
    Int J Med Inform; 2022 Aug; 164():104805. PubMed ID: 35653828

  • 6. On cross-lingual retrieval with multilingual text encoders.
    Litschko R; Vulić I; Ponzetto SP; Glavaš G
    Inf Retr Boston; 2022; 25(2):149-183. PubMed ID: 35573078

  • 7. Transformers-sklearn: a toolkit for medical language understanding with transformer-based models.
    Yang F; Wang X; Ma H; Li J
    BMC Med Inform Decis Mak; 2021 Jul; 21(Suppl 2):90. PubMed ID: 34330244

  • 8. Deep learning based sentiment analysis and offensive language identification on multilingual code-mixed data.
    Shanmugavadivel K; Sathishkumar VE; Raja S; Lingaiah TB; Neelakandan S; Subramanian M
    Sci Rep; 2022 Dec; 12(1):21557. PubMed ID: 36513786

  • 9. RadBERT: Adapting Transformer-based Language Models to Radiology.
    Yan A; McAuley J; Lu X; Du J; Chang EY; Gentili A; Hsu CN
    Radiol Artif Intell; 2022 Jul; 4(4):e210258. PubMed ID: 35923376

  • 10. Survey of transformers and towards ensemble learning using transformers for natural language processing.
    Zhang H; Shafiq MO
    J Big Data; 2024; 11(1):25. PubMed ID: 38321999

  • 11. AmericasNLI: Machine translation and natural language inference systems for Indigenous languages of the Americas.
    Kann K; Ebrahimi A; Mager M; Oncevay A; Ortega JE; Rios A; Fan A; Gutierrez-Vasques X; Chiruzzo L; Giménez-Lugo GA; Ramos R; Meza Ruiz IV; Mager E; Chaudhary V; Neubig G; Palmer A; Coto-Solano R; Vu NT
    Front Artif Intell; 2022; 5():995667. PubMed ID: 36530357

  • 12. BioBERTurk: Exploring Turkish Biomedical Language Model Development Strategies in Low-Resource Setting.
    Türkmen H; Dikenelli O; Eraslan C; Çallı MC; Özbek SS
    J Healthc Inform Res; 2023 Dec; 7(4):433-446. PubMed ID: 37927378

  • 13. Comparison of Pretraining Models and Strategies for Health-Related Social Media Text Classification.
    Guo Y; Ge Y; Yang YC; Al-Garadi MA; Sarker A
    Healthcare (Basel); 2022 Aug; 10(8):. PubMed ID: 36011135

  • 14. Multilingual text categorization and sentiment analysis: a comparative analysis of the utilization of multilingual approaches for classifying Twitter data.
    Manias G; Mavrogiorgou A; Kiourtis A; Symvoulidis C; Kyriazis D
    Neural Comput Appl; 2023 May; ():1-17. PubMed ID: 37362579

  • 15. Bioformer: an efficient transformer language model for biomedical text mining.
    Fang L; Chen Q; Wei CH; Lu Z; Wang K
    ArXiv; 2023 Feb; ():. PubMed ID: 36945685

  • 16. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
    Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
    Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885

  • 17. Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction.
    Rasmy L; Xiang Y; Xie Z; Tao C; Zhi D
    NPJ Digit Med; 2021 May; 4(1):86. PubMed ID: 34017034

  • 18. Semantic Textual Similarity in Japanese Clinical Domain Texts Using BERT.
    Mutinda FW; Yada S; Wakamiya S; Aramaki E
    Methods Inf Med; 2021 Jun; 60(S 01):e56-e64. PubMed ID: 34237783

  • 19. Pashto offensive language detection: a benchmark dataset and monolingual Pashto BERT.
    Haq I; Qiu W; Guo J; Tang P
    PeerJ Comput Sci; 2023; 9():e1617. PubMed ID: 38077561

  • 20. COVID-Twitter-BERT: A natural language processing model to analyse COVID-19 content on Twitter.
    Müller M; Salathé M; Kummervold PE
    Front Artif Intell; 2023; 6():1023281. PubMed ID: 36998290
