These tools are no longer maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

448 related articles for article (PubMed ID: 33581474)
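Since the PubMed4Hh tools are discontinued, a comparable related-articles list can still be retrieved from NCBI's standard E-utilities ELink endpoint. Below is a minimal Python sketch; the related_pmids function name and the limit parameter are illustrative choices, and the default pubmed_pubmed link set may rank results differently than PubMed4Hh did.

    # Minimal sketch: fetch PubMed "related articles" for a PMID via NCBI E-utilities.
    # ELink and the pubmed_pubmed linkname are standard E-utilities features; the
    # parsing below is a simplification for illustration.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

    def related_pmids(pmid: str, limit: int = 20) -> list:
        """Return PMIDs that PubMed's related-articles link set ties to `pmid`."""
        params = urllib.parse.urlencode({
            "dbfrom": "pubmed",
            "db": "pubmed",
            "id": pmid,
            "linkname": "pubmed_pubmed",  # standard related-articles link set
        })
        with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
            tree = ET.parse(resp)
        ids = [el.text for el in tree.iter("Id")]
        # The response echoes the query PMID in its IdList; skip it.
        return [i for i in ids if i != pmid][:limit]

    if __name__ == "__main__":
        for pmid in related_pmids("33581474"):
            print(pmid)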

  • 1. GT-Finder: Classify the family of glucose transporters with pre-trained BERT language models.
    Ali Shah SM; Taju SW; Ho QT; Nguyen TT; Ou YY
    Comput Biol Med; 2021 Apr; 131():104259. PubMed ID: 33581474

  • 2. A transformer architecture based on BERT and 2D convolutional neural network to identify DNA enhancers from sequence information.
    Le NQK; Ho QT; Nguyen TT; Ou YY
    Brief Bioinform; 2021 Sep; 22(5):. PubMed ID: 33539511

  • 3. TRP-BERT: Discrimination of transient receptor potential (TRP) channels using contextual representations from deep bidirectional transformer based on BERT.
    Ali Shah SM; Ou YY
    Comput Biol Med; 2021 Oct; 137():104821. PubMed ID: 34508974

  • 4. ActTRANS: Functional classification in active transport proteins based on transfer learning and contextual representations.
    Taju SW; Shah SMA; Ou YY
    Comput Biol Chem; 2021 Aug; 93():107537. PubMed ID: 34217007

  • 5. Identification of efflux proteins based on contextual representations with deep bidirectional transformer encoders.
    Taju SW; Shah SMA; Ou YY
    Anal Biochem; 2021 Nov; 633():114416. PubMed ID: 34656612

  • 6. BERT-based Ranking for Biomedical Entity Normalization.
    Ji Z; Wei Q; Xu H
    AMIA Jt Summits Transl Sci Proc; 2020; 2020():269-277. PubMed ID: 32477646

  • 7. Deep contextualized embeddings for quantifying the informative content in biomedical text summarization.
    Moradi M; Dorffner G; Samwald M
    Comput Methods Programs Biomed; 2020 Feb; 184():105117. PubMed ID: 31627150

  • 8. BERT-Promoter: An improved sequence-based predictor of DNA promoter using BERT pre-trained model and SHAP feature selection.
    Le NQK; Ho QT; Nguyen VN; Chang JS
    Comput Biol Chem; 2022 Aug; 99():107732. PubMed ID: 35863177

  • 9. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
    Li X; Yuan W; Peng D; Mei Q; Wang Y
    BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811

  • 10. Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study.
    Sun Y; Gao D; Shen X; Li M; Nan J; Zhang W
    JMIR Med Inform; 2022 Apr; 10(4):e35606. PubMed ID: 35451969

  • 11. Automatic text classification of actionable radiology reports of tinnitus patients using bidirectional encoder representations from transformer (BERT) and in-domain pre-training (IDPT).
    Li J; Lin Y; Zhao P; Liu W; Cai L; Sun J; Zhao L; Yang Z; Song H; Lv H; Wang Z
    BMC Med Inform Decis Mak; 2022 Jul; 22(1):200. PubMed ID: 35907966

  • 12. BERT-Kcr: prediction of lysine crotonylation sites by a transfer learning method with pre-trained BERT models.
    Qiao Y; Zhu X; Gong H
    Bioinformatics; 2022 Jan; 38(3):648-654. PubMed ID: 34643684

  • 13. Relation Extraction from Clinical Narratives Using Pre-trained Language Models.
    Wei Q; Ji Z; Si Y; Du J; Wang J; Tiryaki F; Wu S; Tao C; Roberts K; Xu H
    AMIA Annu Symp Proc; 2019; 2019():1236-1245. PubMed ID: 32308921

  • 14. FAD-BERT: Improved prediction of FAD binding sites using pre-training of deep bidirectional transformers.
    Ho QT; Nguyen TT; Khanh Le NQ; Ou YY
    Comput Biol Med; 2021 Apr; 131():104258. PubMed ID: 33601085

  • 15. A comparison of word embeddings for the biomedical natural language processing.
    Wang Y; Liu S; Afzal N; Rastegar-Mojarad M; Wang L; Shen F; Kingsbury P; Liu H
    J Biomed Inform; 2018 Nov; 87():12-20. PubMed ID: 30217670

  • 16. BioBERT and Similar Approaches for Relation Extraction.
    Bhasuran B
    Methods Mol Biol; 2022; 2496():221-235. PubMed ID: 35713867

  • 17. Stacked DeBERT: All attention in incomplete data for text classification.
    Cunha Sergio G; Lee M
    Neural Netw; 2021 Apr; 136():87-96. PubMed ID: 33453522

  • 18. A Fine-Tuned Bidirectional Encoder Representations From Transformers Model for Food Named-Entity Recognition: Algorithm Development and Validation.
    Stojanov R; Popovski G; Cenikj G; Koroušić Seljak B; Eftimov T
    J Med Internet Res; 2021 Aug; 23(8):e28229. PubMed ID: 34383671

  • 19. Extracting comprehensive clinical information for breast cancer using deep learning methods.
    Zhang X; Zhang Y; Zhang Q; Ren Y; Qiu T; Ma J; Sun Q
    Int J Med Inform; 2019 Dec; 132():103985. PubMed ID: 31627032

  • 20. Improved biomedical word embeddings in the transformer era.
    Noh J; Kavuluru R
    J Biomed Inform; 2021 Aug; 120():103867. PubMed ID: 34284119

Page 1 of 23.