BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

945 related articles for article (PubMed ID: 35319481)

  • 41. DeBERTa-BiLSTM: A multi-label classification model of Arabic medical questions using pre-trained models and deep learning.
    Al-Smadi BS
    Comput Biol Med; 2024 Mar; 170():107921. PubMed ID: 38295474

  • 42. Reading comprehension based question answering system in Bangla language with transformer-based learning.
    Aurpa TT; Rifat RK; Ahmed MS; Anwar MM; Ali ABMS
    Heliyon; 2022 Oct; 8(10):e11052. PubMed ID: 36254291

  • 43. Automated labelling of radiology reports using natural language processing: Comparison of traditional and newer methods.
    Chng SY; Tern PJW; Kan MRX; Cheng LTE
    Health Care Sci; 2023 Apr; 2(2):120-128. PubMed ID: 38938764

  • 44. Application of Entity-BERT model based on neuroscience and brain-like cognition in electronic medical record entity recognition.
    Lu W; Jiang J; Shi Y; Zhong X; Gu J; Huangfu L; Gong M
    Front Neurosci; 2023; 17():1259652. PubMed ID: 37799340

  • 45. Does BERT need domain adaptation for clinical negation detection?
    Lin C; Bethard S; Dligach D; Sadeque F; Savova G; Miller TA
    J Am Med Inform Assoc; 2020 Apr; 27(4):584-591. PubMed ID: 32044989

  • 46. Ontology-driven and weakly supervised rare disease identification from clinical notes.
    Dong H; Suárez-Paniagua V; Zhang H; Wang M; Casey A; Davidson E; Chen J; Alex B; Whiteley W; Wu H
    BMC Med Inform Decis Mak; 2023 May; 23(1):86. PubMed ID: 37147628

  • 47. Benchmarking for biomedical natural language processing tasks with a domain specific ALBERT.
    Naseem U; Dunn AG; Khushi M; Kim J
    BMC Bioinformatics; 2022 Apr; 23(1):144. PubMed ID: 35448946

  • 48. Adapting Bidirectional Encoder Representations from Transformers (BERT) to Assess Clinical Semantic Textual Similarity: Algorithm Development and Validation Study.
    Kades K; Sellner J; Koehler G; Full PM; Lai TYE; Kleesiek J; Maier-Hein KH
    JMIR Med Inform; 2021 Feb; 9(2):e22795. PubMed ID: 33533728

  • 49. Incorporating natural language processing to improve classification of axial spondyloarthritis using electronic health records.
    Zhao SS; Hong C; Cai T; Xu C; Huang J; Ermann J; Goodson NJ; Solomon DH; Cai T; Liao KP
    Rheumatology (Oxford); 2020 May; 59(5):1059-1065. PubMed ID: 31535693

  • 50. Extracting Multiple Worries From Breast Cancer Patient Blogs Using Multilabel Classification With the Natural Language Processing Model Bidirectional Encoder Representations From Transformers: Infodemiology Study of Blogs.
    Watanabe T; Yada S; Aramaki E; Yajima H; Kizaki H; Hori S
    JMIR Cancer; 2022 Jun; 8(2):e37840. PubMed ID: 35657664

  • 51. BERT-Kcr: prediction of lysine crotonylation sites by a transfer learning method with pre-trained BERT models.
    Qiao Y; Zhu X; Gong H
    Bioinformatics; 2022 Jan; 38(3):648-654. PubMed ID: 34643684

  • 52. Limitations of Transformers on Clinical Text Classification.
    Gao S; Alawad M; Young MT; Gounley J; Schaefferkoetter N; Yoon HJ; Wu XC; Durbin EB; Doherty J; Stroup A; Coyle L; Tourassi G
    IEEE J Biomed Health Inform; 2021 Sep; 25(9):3596-3607. PubMed ID: 33635801

  • 53. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT.
    Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y
    Artif Intell Med; 2024 Jul; 153():102889. PubMed ID: 38728811

  • 54. GT-Finder: Classify the family of glucose transporters with pre-trained BERT language models.
    Ali Shah SM; Taju SW; Ho QT; Nguyen TT; Ou YY
    Comput Biol Med; 2021 Apr; 131():104259. PubMed ID: 33581474

  • 55. BERT2OME: Prediction of 2'-O-Methylation Modifications From RNA Sequence by Transformer Architecture Based on BERT.
    Soylu NN; Sefer E
    IEEE/ACM Trans Comput Biol Bioinform; 2023; 20(3):2177-2189. PubMed ID: 37819796

  • 56. Sample Size Considerations for Fine-Tuning Large Language Models for Named Entity Recognition Tasks: Methodological Study.
    Majdik ZP; Graham SS; Shiva Edward JC; Rodriguez SN; Karnes MS; Jensen JT; Barbour JB; Rousseau JF
    JMIR AI; 2024 May; 3():e52095. PubMed ID: 38875593

  • 57. PharmBERT: a domain-specific BERT model for drug labels.
    ValizadehAslani T; Shi Y; Ren P; Wang J; Zhang Y; Hu M; Zhao L; Liang H
    Brief Bioinform; 2023 Jul; 24(4):. PubMed ID: 37317617

  • 58. A novel deep learning approach to extract Chinese clinical entities for lung cancer screening and staging.
    Zhang H; Hu D; Duan H; Li S; Wu N; Lu X
    BMC Med Inform Decis Mak; 2021 Jul; 21(Suppl 2):214. PubMed ID: 34330277

  • 59. OpticalBERT and OpticalTable-SQA: Text- and Table-Based Language Models for the Optical-Materials Domain.
    Zhao J; Huang S; Cole JM
    J Chem Inf Model; 2023 Apr; 63(7):1961-1981. PubMed ID: 36940385

  • 60. Natural language processing in urology: Automated extraction of clinical information from histopathology reports of uro-oncology procedures.
    Huang H; Lim FXY; Gu GT; Han MJ; Fang AHS; Chia EHS; Bei EYT; Tham SZ; Ho HSS; Yuen JSP; Sun A; Lim JKS
    Heliyon; 2023 Apr; 9(4):e14793. PubMed ID: 37025805
