These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

175 related articles for article (PubMed ID: 36380848)

  • 21. AMMU: A survey of transformer-based biomedical pretrained language models.
    Kalyan KS; Rajasekharan A; Sangeetha S
    J Biomed Inform; 2022 Feb; 126():103982. PubMed ID: 34974190

  • 22. Adversarial active learning for the identification of medical concepts and annotation inconsistency.
    Yu G; Yang Y; Wang X; Zhen H; He G; Li Z; Zhao Y; Shu Q; Shu L
    J Biomed Inform; 2020 Aug; 108():103481. PubMed ID: 32687985

  • 23. The Impact of Pretrained Language Models on Negation and Speculation Detection in Cross-Lingual Medical Text: Comparative Study.
    Rivera Zavala R; Martinez P
    JMIR Med Inform; 2020 Dec; 8(12):e18953. PubMed ID: 33270027

  • 24. Analyzing transfer learning impact in biomedical cross-lingual named entity recognition and normalization.
    Rivera-Zavala RM; Martínez P
    BMC Bioinformatics; 2021 Dec; 22(Suppl 1):601. PubMed ID: 34920703

  • 25. A Question-and-Answer System to Extract Data From Free-Text Oncological Pathology Reports (CancerBERT Network): Development Study.
    Mitchell JR; Szepietowski P; Howard R; Reisman P; Jones JD; Lewis P; Fridley BL; Rollison DE
    J Med Internet Res; 2022 Mar; 24(3):e27210. PubMed ID: 35319481

  • 26. Roman Urdu Hate Speech Detection Using Transformer-Based Model for Cyber Security Applications.
    Bilal M; Khan A; Jan S; Musa S; Ali S
    Sensors (Basel); 2023 Apr; 23(8):. PubMed ID: 37112249

  • 27. A comparative study of pretrained language models for long clinical text.
    Li Y; Wehbe RM; Ahmad FS; Wang H; Luo Y
    J Am Med Inform Assoc; 2023 Jan; 30(2):340-347. PubMed ID: 36451266

  • 28. Do syntactic trees enhance Bidirectional Encoder Representations from Transformers (BERT) models for chemical-drug relation extraction?
    Tang A; Deléger L; Bossy R; Zweigenbaum P; Nédellec C
    Database (Oxford); 2022 Aug; 2022():. PubMed ID: 36006843

  • 29. Contextualized medication information extraction using Transformer-based deep learning architectures.
    Chen A; Yu Z; Yang X; Guo Y; Bian J; Wu Y
    J Biomed Inform; 2023 Jun; 142():104370. PubMed ID: 37100106

  • 30. Are synthetic clinical notes useful for real natural language processing tasks: A case study on clinical entity recognition.
    Li J; Zhou Y; Jiang X; Natarajan K; Pakhomov SV; Liu H; Xu H
    J Am Med Inform Assoc; 2021 Sep; 28(10):2193-2201. PubMed ID: 34272955

  • 31. Transformer-based deep neural network language models for Alzheimer's disease risk assessment from targeted speech.
    Roshanzamir A; Aghajan H; Soleymani Baghshah M
    BMC Med Inform Decis Mak; 2021 Mar; 21(1):92. PubMed ID: 33750385

  • 32. Automatic text classification of actionable radiology reports of tinnitus patients using bidirectional encoder representations from transformer (BERT) and in-domain pre-training (IDPT).
    Li J; Lin Y; Zhao P; Liu W; Cai L; Sun J; Zhao L; Yang Z; Song H; Lv H; Wang Z
    BMC Med Inform Decis Mak; 2022 Jul; 22(1):200. PubMed ID: 35907966

  • 33. Does BERT need domain adaptation for clinical negation detection?
    Lin C; Bethard S; Dligach D; Sadeque F; Savova G; Miller TA
    J Am Med Inform Assoc; 2020 Apr; 27(4):584-591. PubMed ID: 32044989

  • 34. A Fine-Tuned Bidirectional Encoder Representations From Transformers Model for Food Named-Entity Recognition: Algorithm Development and Validation.
    Stojanov R; Popovski G; Cenikj G; Koroušić Seljak B; Eftimov T
    J Med Internet Res; 2021 Aug; 23(8):e28229. PubMed ID: 34383671

  • 35. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
    Li X; Yuan W; Peng D; Mei Q; Wang Y
    BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811

  • 36. Application of Deep Learning in Generating Structured Radiology Reports: A Transformer-Based Technique.
    Moezzi SAR; Ghaedi A; Rahmanian M; Mousavi SZ; Sami A
    J Digit Imaging; 2023 Feb; 36(1):80-90. PubMed ID: 36002778

  • 37. Explainable clinical coding with in-domain adapted transformers.
    López-García G; Jerez JM; Ribelles N; Alba E; Veredas FJ
    J Biomed Inform; 2023 Mar; 139():104323. PubMed ID: 36813154

  • 38. Transformer-Based Named Entity Recognition for Parsing Clinical Trial Eligibility Criteria.
    Tian S; Erdengasileng A; Yang X; Guo Y; Wu Y; Zhang J; Bian J; He Z
    ACM BCB; 2021 Aug; 2021():. PubMed ID: 34414397

  • 39. Towards Transfer Learning Techniques-BERT, DistilBERT, BERTimbau, and DistilBERTimbau for Automatic Text Classification from Different Languages: A Case Study.
    Silva Barbon R; Akabane AT
    Sensors (Basel); 2022 Oct; 22(21):. PubMed ID: 36365883

  • 40. Classifying the lifestyle status for Alzheimer's disease from clinical notes using deep learning with weak supervision.
    Shen Z; Schutte D; Yi Y; Bompelli A; Yu F; Wang Y; Zhang R
    BMC Med Inform Decis Mak; 2022 Jul; 22(Suppl 1):88. PubMed ID: 35799294
