BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

180 related articles for article (PubMed ID: 37742294)

  • 21. A publication-wide association study (PWAS), historical language models to prioritise novel therapeutic drug targets.
    Narganes-Carlón D; Crowther DJ; Pearson ER
    Sci Rep; 2023 May; 13(1):8366. PubMed ID: 37225853
    [TBL] [Abstract][Full Text] [Related]  

  • 22. Transformer-based structuring of free-text radiology report databases.
    Nowak S; Biesner D; Layer YC; Theis M; Schneider H; Block W; Wulff B; Attenberger UI; Sifa R; Sprinkart AM
    Eur Radiol; 2023 Jun; 33(6):4228-4236. PubMed ID: 36905469
    [TBL] [Abstract][Full Text] [Related]  

  • 23. AMMU: A survey of transformer-based biomedical pretrained language models.
    Kalyan KS; Rajasekharan A; Sangeetha S
    J Biomed Inform; 2022 Feb; 126():103982. PubMed ID: 34974190
    [TBL] [Abstract][Full Text] [Related]  

  • 24. Stacked DeBERT: All attention in incomplete data for text classification.
    Cunha Sergio G; Lee M
    Neural Netw; 2021 Apr; 136():87-96. PubMed ID: 33453522
    [TBL] [Abstract][Full Text] [Related]  

  • 25. Generative design of compounds with desired potency from target protein sequences using a multimodal biochemical language model.
    Chen H; Bajorath J
    J Cheminform; 2024 May; 16(1):55. PubMed ID: 38778425
    [TBL] [Abstract][Full Text] [Related]  

  • 26. Identifying Patient Populations in Texts Describing Drug Approvals Through Deep Learning-Based Information Extraction: Development of a Natural Language Processing Algorithm.
    Gendrin A; Souliotis L; Loudon-Griffiths J; Aggarwal R; Amoako D; Desouza G; Dimitrievska S; Metcalfe P; Louvet E; Sahni H
    JMIR Form Res; 2023 Jun; 7():e44876. PubMed ID: 37347514
    [TBL] [Abstract][Full Text] [Related]  

  • 27. Comparing Pre-trained and Feature-Based Models for Prediction of Alzheimer's Disease Based on Speech.
    Balagopalan A; Eyre B; Robin J; Rudzicz F; Novikova J
    Front Aging Neurosci; 2021; 13():635945. PubMed ID: 33986655
    [No Abstract]   [Full Text] [Related]  

  • 28. A Review of Recent Work in Transfer Learning and Domain Adaptation for Natural Language Processing of Electronic Health Records.
    Laparra E; Mascio A; Velupillai S; Miller T
    Yearb Med Inform; 2021 Aug; 30(1):239-244. PubMed ID: 34479396
    [TBL] [Abstract][Full Text] [Related]  

  • 29. The Expanding Role of ChatGPT (Chat-Generative Pre-Trained Transformer) in Neurosurgery: A Systematic Review of Literature and Conceptual Framework.
    Roman A; Al-Sharif L; Al Gharyani M
    Cureus; 2023 Aug; 15(8):e43502. PubMed ID: 37719492
    [TBL] [Abstract][Full Text] [Related]  

  • 30. Roman Urdu Hate Speech Detection Using Transformer-Based Model for Cyber Security Applications.
    Bilal M; Khan A; Jan S; Musa S; Ali S
    Sensors (Basel); 2023 Apr; 23(8):. PubMed ID: 37112249
    [TBL] [Abstract][Full Text] [Related]  

  • 31. Leveraging pre-trained language models for mining microbiome-disease relationships.
    Karkera N; Acharya S; Palaniappan SK
    BMC Bioinformatics; 2023 Jul; 24(1):290. PubMed ID: 37468830
    [TBL] [Abstract][Full Text] [Related]  

  • 32. Evaluating the Usefulness of a Large Language Model as a Wholesome Tool for De Novo Polymerase Chain Reaction (PCR) Primer Design.
    Jorapur S; Srivastava A; Kulkarni S
    Cureus; 2023 Oct; 15(10):e47711. PubMed ID: 38021866
    [TBL] [Abstract][Full Text] [Related]  

  • 33. BioVAE: a pre-trained latent variable language model for biomedical text mining.
    Trieu HL; Miwa M; Ananiadou S
    Bioinformatics; 2022 Jan; 38(3):872-874. PubMed ID: 34636886
    [TBL] [Abstract][Full Text] [Related]  

  • 34. COVID-Twitter-BERT: A natural language processing model to analyse COVID-19 content on Twitter.
    Müller M; Salathé M; Kummervold PE
    Front Artif Intell; 2023; 6():1023281. PubMed ID: 36998290
    [TBL] [Abstract][Full Text] [Related]  

  • 35. Deep contextualized embeddings for quantifying the informative content in biomedical text summarization.
    Moradi M; Dorffner G; Samwald M
    Comput Methods Programs Biomed; 2020 Feb; 184():105117. PubMed ID: 31627150
    [TBL] [Abstract][Full Text] [Related]  

  • 36. Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer.
    Watters C; Lemanski MK
    Front Big Data; 2023; 6():1224976. PubMed ID: 37680954
    [TBL] [Abstract][Full Text] [Related]  

  • 37. Transformer-based models for ICD-10 coding of death certificates with Portuguese text.
    Coutinho I; Martins B
    J Biomed Inform; 2022 Dec; 136():104232. PubMed ID: 36307020
    [TBL] [Abstract][Full Text] [Related]  

  • 38. Transformer versus traditional natural language processing: how much data is enough for automated radiology report classification?
    Yang E; Li MD; Raghavan S; Deng F; Lang M; Succi MD; Huang AJ; Kalpathy-Cramer J
    Br J Radiol; 2023 Sep; 96(1149):20220769. PubMed ID: 37162253
    [TBL] [Abstract][Full Text] [Related]  

  • 39. From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing.
    Dergaa I; Chamari K; Zmijewski P; Ben Saad H
    Biol Sport; 2023 Apr; 40(2):615-622. PubMed ID: 37077800
    [TBL] [Abstract][Full Text] [Related]  

  • 40. Explainable clinical coding with in-domain adapted transformers.
    López-García G; Jerez JM; Ribelles N; Alba E; Veredas FJ
    J Biomed Inform; 2023 Mar; 139():104323. PubMed ID: 36813154
    [TBL] [Abstract][Full Text] [Related]  
