

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

304 related articles for the article with PubMed ID 33538820

  • 1. DNABERT: pre-trained Bidirectional Encoder Representations from Transformers model for DNA-language in genome.
    Ji Y; Zhou Z; Liu H; Davuluri RV
    Bioinformatics; 2021 Aug; 37(15):2112-2120. PubMed ID: 33538820

  • 2. 4mC site recognition algorithm based on pruned pre-trained DNABert-Pruning model and fused artificial feature encoding.
    Xie GB; Yu Y; Lin ZY; Chen RB; Xie JH; Liu ZG
    Anal Biochem; 2024 Jun; 689():115492. PubMed ID: 38458307

  • 3. DNABERT-S: learning species-aware DNA embedding with genome foundation models.
    Zhou Z; Wu W; Ho H; Wang J; Shi L; Davuluri RV; Wang Z; Liu H
    ArXiv; 2024 Feb; ():. PubMed ID: 38410647

  • 4. BERT-TFBS: a novel BERT-based model for predicting transcription factor binding sites by transfer learning.
    Wang K; Zeng X; Zhou J; Liu F; Luan X; Wang X
    Brief Bioinform; 2024 Mar; 25(3):. PubMed ID: 38701417

  • 5. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
    Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
    Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885

  • 6. Investigation of the BERT model on nucleotide sequences with non-standard pre-training and evaluation of different k-mer embeddings.
    Zhang YZ; Bai Z; Imoto S
    Bioinformatics; 2023 Oct; 39(10):. PubMed ID: 37815839

  • 7. MRM-BERT: a novel deep neural network predictor of multiple RNA modifications by fusing BERT representation and sequence features.
    Wang L; Zhou Y
    RNA Biol; 2024 Jan; 21(1):1-10. PubMed ID: 38357904

  • 8. Highly accurate classification of chest radiographic reports using a deep learning natural language model pre-trained on 3.8 million text reports.
    Bressem KK; Adams LC; Gaudin RA; Tröltzsch D; Hamm B; Makowski MR; Schüle CY; Vahldiek JL; Niehues SM
    Bioinformatics; 2021 Jan; 36(21):5255-5261. PubMed ID: 32702106

  • 9. A Fine-Tuned Bidirectional Encoder Representations From Transformers Model for Food Named-Entity Recognition: Algorithm Development and Validation.
    Stojanov R; Popovski G; Cenikj G; Koroušić Seljak B; Eftimov T
    J Med Internet Res; 2021 Aug; 23(8):e28229. PubMed ID: 34383671

  • 10. Prediction of RNA-protein interactions using a nucleotide language model.
    Yamada K; Hamada M
    Bioinform Adv; 2022; 2(1):vbac023. PubMed ID: 36699410

  • 11. A span-based joint model for extracting entities and relations of bacteria biotopes.
    Zuo M; Zhang Y
    Bioinformatics; 2021 Dec; 38(1):220-227. PubMed ID: 34398194

  • 12. miRe2e: a full end-to-end deep model based on transformers for prediction of pre-miRNAs.
    Raad J; Bugnon LA; Milone DH; Stegmayer G
    Bioinformatics; 2022 Feb; 38(5):1191-1197. PubMed ID: 34875006

  • 13. Employing bimodal representations to predict DNA bendability within a self-supervised pre-trained framework.
    Yang M; Zhang S; Zheng Z; Zhang P; Liang Y; Tang S
    Nucleic Acids Res; 2024 Apr; 52(6):e33. PubMed ID: 38375921

  • 14. Predicting protein-peptide binding residues via interpretable deep learning.
    Wang R; Jin J; Zou Q; Nakai K; Wei L
    Bioinformatics; 2022 Jun; 38(13):3351-3360. PubMed ID: 35604077

  • 15. LBERT: Lexically aware Transformer-based Bidirectional Encoder Representation model for learning universal bio-entity relations.
    Warikoo N; Chang YC; Hsu WL
    Bioinformatics; 2021 Apr; 37(3):404-412. PubMed ID: 32810217

  • 16. BERT-Kcr: prediction of lysine crotonylation sites by a transfer learning method with pre-trained BERT models.
    Qiao Y; Zhu X; Gong H
    Bioinformatics; 2022 Jan; 38(3):648-654. PubMed ID: 34643684

  • 17. Chinese Clinical Named Entity Recognition From Electronic Medical Records Based on Multisemantic Features by Using Robustly Optimized Bidirectional Encoder Representation From Transformers Pretraining Approach Whole Word Masking and Convolutional Neural Networks: Model Development and Validation.
    Wang W; Li X; Ren H; Gao D; Fang A
    JMIR Med Inform; 2023 May; 11():e44597. PubMed ID: 37163343

  • 18. Continually adapting pre-trained language model to universal annotation of single-cell RNA-seq data.
    Wan H; Yuan M; Fu Y; Deng M
    Brief Bioinform; 2024 Jan; 25(2):. PubMed ID: 38388681

  • 19. Multi-Label Classification in Patient-Doctor Dialogues With the RoBERTa-WWM-ext + CNN (Robustly Optimized Bidirectional Encoder Representations From Transformers Pretraining Approach With Whole Word Masking Extended Combining a Convolutional Neural Network) Model: Named Entity Study.
    Sun Y; Gao D; Shen X; Li M; Nan J; Zhang W
    JMIR Med Inform; 2022 Apr; 10(4):e35606. PubMed ID: 35451969

  • 20. Pretrained Transformer Language Models Versus Pretrained Word Embeddings for the Detection of Accurate Health Information on Arabic Social Media: Comparative Study.
    Albalawi Y; Nikolov NS; Buckley J
    JMIR Form Res; 2022 Jun; 6(6):e34834. PubMed ID: 35767322
