BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

180 related articles for article (PubMed ID: 37742294)

  • 1. Biomedical generative pre-trained based transformer language model for age-related disease target discovery.
    Zagirova D; Pushkov S; Leung GHD; Liu BHM; Urban A; Sidorenko D; Kalashnikov A; Kozlova E; Naumov V; Pun FW; Ozerov IV; Aliper A; Zhavoronkov A
    Aging (Albany NY); 2023 Sep; 15(18):9293-9309. PubMed ID: 37742294

  • 2. BioGPT: generative pre-trained transformer for biomedical text generation and mining.
    Luo R; Sun L; Xia Y; Qin T; Zhang S; Poon H; Liu TY
    Brief Bioinform; 2022 Nov; 23(6):. PubMed ID: 36156661

  • 3. ChIP-GPT: a managed large language model for robust data extraction from biomedical database records.
    Cinquin O
    Brief Bioinform; 2024 Jan; 25(2):. PubMed ID: 38314912

  • 4. Leveraging Large Language Models for Clinical Abbreviation Disambiguation.
    Hosseini M; Hosseini M; Javidan R
    J Med Syst; 2024 Feb; 48(1):27. PubMed ID: 38411689

  • 5. Generative large language models are all-purpose text analytics engines: text-to-text learning is all your need.
    Peng C; Yang X; Chen A; Yu Z; Smith KE; Costa AB; Flores MG; Bian J; Wu Y
    J Am Med Inform Assoc; 2024 Apr; ():. PubMed ID: 38630580

  • 6. AI chatbots not yet ready for clinical use.
    Au Yeung J; Kraljevic Z; Luintel A; Balston A; Idowu E; Dobson RJ; Teo JT
    Front Digit Health; 2023; 5():1161098. PubMed ID: 37122812

  • 7. Critical assessment of transformer-based AI models for German clinical notes.
    Lentzen M; Madan S; Lage-Rupprecht V; Kühnel L; Fluck J; Jacobs M; Mittermaier M; Witzenrath M; Brunecker P; Hofmann-Apitius M; Weber J; Fröhlich H
    JAMIA Open; 2022 Dec; 5(4):ooac087. PubMed ID: 36380848

  • 8. ChatGPT and large language model (LLM) chatbots: The current state of acceptability and a proposal for guidelines on utilization in academic medicine.
    Kim JK; Chua M; Rickard M; Lorenzo A
    J Pediatr Urol; 2023 Oct; 19(5):598-604. PubMed ID: 37328321

  • 9. Transformers-sklearn: a toolkit for medical language understanding with transformer-based models.
    Yang F; Wang X; Ma H; Li J
    BMC Med Inform Decis Mak; 2021 Jul; 21(Suppl 2):90. PubMed ID: 34330244

  • 10. Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers.
    Roussinov D; Conkie A; Patterson A; Sainsbury C
    Front Digit Health; 2021; 3():810260. PubMed ID: 35265939

  • 11. Transformer-based deep neural network language models for Alzheimer's disease risk assessment from targeted speech.
    Roshanzamir A; Aghajan H; Soleymani Baghshah M
    BMC Med Inform Decis Mak; 2021 Mar; 21(1):92. PubMed ID: 33750385

  • 12. BactInt: A domain driven transfer learning approach for extracting inter-bacterial associations from biomedical text.
    Das Baksi K; Pokhrel V; Pudavar AE; Mande SS; Kuntal BK
    Comput Biol Chem; 2024 Apr; 109():108012. PubMed ID: 38198963

  • 13. Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora's Box Has Been Opened.
    Májovský M; Černý M; Kasal M; Komarc M; Netuka D
    J Med Internet Res; 2023 May; 25():e46924. PubMed ID: 37256685

  • 14. Precious1GPT: multimodal transformer-based transfer learning for aging clock development and feature importance analysis for aging and age-related disease target discovery.
    Urban A; Sidorenko D; Zagirova D; Kozlova E; Kalashnikov A; Pushkov S; Naumov V; Sarkisova V; Leung GHD; Leung HW; Pun FW; Ozerov IV; Aliper A; Ren F; Zhavoronkov A
    Aging (Albany NY); 2023 Jun; 15(11):4649-4666. PubMed ID: 37315204

  • 15. Do syntactic trees enhance Bidirectional Encoder Representations from Transformers (BERT) models for chemical-drug relation extraction?
    Tang A; Deléger L; Bossy R; Zweigenbaum P; Nédellec C
    Database (Oxford); 2022 Aug; 2022():. PubMed ID: 36006843

  • 16. Deep learning to refine the identification of high-quality clinical research articles from the biomedical literature: Performance evaluation.
    Lokker C; Bagheri E; Abdelkader W; Parrish R; Afzal M; Navarro T; Cotoi C; Germini F; Linkins L; Haynes RB; Chu L; Iorio A
    J Biomed Inform; 2023 Jun; 142():104384. PubMed ID: 37164244

  • 17. Drug knowledge discovery via multi-task learning and pre-trained models.
    Li D; Xiong Y; Hu B; Tang B; Peng W; Chen Q
    BMC Med Inform Decis Mak; 2021 Nov; 21(Suppl 9):251. PubMed ID: 34789238

  • 18. Predicting Semantic Similarity Between Clinical Sentence Pairs Using Transformer Models: Evaluation and Representational Analysis.
    Ormerod M; Martínez Del Rincón J; Devereux B
    JMIR Med Inform; 2021 May; 9(5):e23099. PubMed ID: 34037527

  • 19. Discovering Thematically Coherent Biomedical Documents Using Contextualized Bidirectional Encoder Representations from Transformers-Based Clustering.
    Davagdorj K; Wang L; Li M; Pham VH; Ryu KH; Theera-Umpon N
    Int J Environ Res Public Health; 2022 May; 19(10):. PubMed ID: 35627429

  • 20. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
    Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
    Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885
