156 related articles for PubMed ID 37722445 (first 20 listed below).

  • 1. A self-supervised language model selection strategy for biomedical question answering.
    Arabzadeh N; Bagheri E
    J Biomed Inform; 2023 Oct; 146():104486. PubMed ID: 37722445

  • 2. KEBLM: Knowledge-Enhanced Biomedical Language Models.
    Lai TM; Zhai C; Ji H
    J Biomed Inform; 2023 Jul; 143():104392. PubMed ID: 37211194

  • 3. External features enriched model for biomedical question answering.
    Xu G; Rong W; Wang Y; Ouyang Y; Xiong Z
    BMC Bioinformatics; 2021 May; 22(1):272. PubMed ID: 34039273

  • 4. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
    Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
    Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885

  • 5. Word embeddings and external resources for answer processing in biomedical factoid question answering.
    Dimitriadis D; Tsoumakas G
    J Biomed Inform; 2019 Apr; 92():103118. PubMed ID: 30753948

  • 6. Bioformer: an efficient transformer language model for biomedical text mining.
    Fang L; Chen Q; Wei CH; Lu Z; Wang K
    ArXiv; 2023 Feb. PubMed ID: 36945685

  • 7. Pre-trained language models in medicine: A survey.
    Luo X; Deng Z; Yang B; Luo MY
    Artif Intell Med; 2024 Jun; 154():102904. PubMed ID: 38917600

  • 8. AMMU: A survey of transformer-based biomedical pretrained language models.
    Kalyan KS; Rajasekharan A; Sangeetha S
    J Biomed Inform; 2022 Feb; 126():103982. PubMed ID: 34974190

  • 9. A comparison of word embeddings for the biomedical natural language processing.
    Wang Y; Liu S; Afzal N; Rastegar-Mojarad M; Wang L; Shen F; Kingsbury P; Liu H
    J Biomed Inform; 2018 Nov; 87():12-20. PubMed ID: 30217670

  • 10. SemBioNLQA: A semantic biomedical question answering system for retrieving exact and ideal answers to natural language questions.
    Sarrouti M; Ouatik El Alaoui S
    Artif Intell Med; 2020 Jan; 102():101767. PubMed ID: 31980104

  • 11. Benchmarking for biomedical natural language processing tasks with a domain specific ALBERT.
    Naseem U; Dunn AG; Khushi M; Kim J
    BMC Bioinformatics; 2022 Apr; 23(1):144. PubMed ID: 35448946

  • 12. Named Entity Aware Transfer Learning for Biomedical Factoid Question Answering.
    Peng K; Yin C; Rong W; Lin C; Zhou D; Xiong Z
    IEEE/ACM Trans Comput Biol Bioinform; 2022; 19(4):2365-2376. PubMed ID: 33974546

  • 13. Critical assessment of transformer-based AI models for German clinical notes.
    Lentzen M; Madan S; Lage-Rupprecht V; Kühnel L; Fluck J; Jacobs M; Mittermaier M; Witzenrath M; Brunecker P; Hofmann-Apitius M; Weber J; Fröhlich H
    JAMIA Open; 2022 Dec; 5(4):ooac087. PubMed ID: 36380848

  • 14. BioVAE: a pre-trained latent variable language model for biomedical text mining.
    Trieu HL; Miwa M; Ananiadou S
    Bioinformatics; 2022 Jan; 38(3):872-874. PubMed ID: 34636886

  • 15. Taiyi: a bilingual fine-tuned large language model for diverse biomedical tasks.
    Luo L; Ning J; Zhao Y; Wang Z; Ding Z; Chen P; Fu W; Han Q; Xu G; Qiu Y; Pan D; Li J; Li H; Feng W; Tu S; Liu Y; Yang Z; Wang J; Sun Y; Lin H
    J Am Med Inform Assoc; 2024 Feb. PubMed ID: 38422367

  • 16. SentiMedQAer: A Transfer Learning-Based Sentiment-Aware Model for Biomedical Question Answering.
    Zhu X; Chen Y; Gu Y; Xiao Z
    Front Neurorobot; 2022; 16():773329. PubMed ID: 35360832

  • 17. Incorporating entity-level knowledge in pretrained language model for biomedical dense retrieval.
    Tan J; Hu J; Dong S
    Comput Biol Med; 2023 Nov; 166():107535. PubMed ID: 37788508

  • 18. Improving Biomedical Question Answering by Data Augmentation and Model Weighting.
    Du Y; Yan J; Lu Y; Zhao Y; Jin X
    IEEE/ACM Trans Comput Biol Bioinform; 2023; 20(2):1114-1124. PubMed ID: 35486563

  • 19. A comparative study of pre-trained language models for named entity recognition in clinical trial eligibility criteria from multiple corpora.
    Li J; Wei Q; Ghiasvand O; Chen M; Lobanov V; Weng C; Xu H
    BMC Med Inform Decis Mak; 2022 Sep; 22(Suppl 3):235. PubMed ID: 36068551

  • 20. COVID-Twitter-BERT: A natural language processing model to analyse COVID-19 content on Twitter.
    Müller M; Salathé M; Kummervold PE
    Front Artif Intell; 2023; 6():1023281. PubMed ID: 36998290
