BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

123 related articles for article (PubMed ID: 37740227)

  • 1. Semi-automating abstract screening with a natural language model pretrained on biomedical literature.
    Ng SH; Teow KL; Ang GY; Tan WS; Hum A
    Syst Rev; 2023 Sep; 12(1):172. PubMed ID: 37740227

  • 2. Text mining to support abstract screening for knowledge syntheses: a semi-automated workflow.
    Pham B; Jovanovic J; Bagheri E; Antony J; Ashoor H; Nguyen TT; Rios P; Robson R; Thomas SM; Watt J; Straus SE; Tricco AC
    Syst Rev; 2021 May; 10(1):156. PubMed ID: 34039433

  • 3. Automated Paper Screening for Clinical Reviews Using Large Language Models: Data Analysis Study.
    Guo E; Gupta M; Deng J; Park YJ; Paget M; Naugler C
    J Med Internet Res; 2024 Jan; 26():e48996. PubMed ID: 38214966

  • 4. Comparison of a traditional systematic review approach with review-of-reviews and semi-automation as strategies to update the evidence.
    Reddy SM; Patel S; Weyrich M; Fenton J; Viswanathan M
    Syst Rev; 2020 Oct; 9(1):243. PubMed ID: 33076975

  • 5. Assessing the accuracy of machine-assisted abstract screening with DistillerAI: a user study.
    Gartlehner G; Wagner G; Lux L; Affengruber L; Dobrescu A; Kaminski-Hartenthaler A; Viswanathan M
    Syst Rev; 2019 Nov; 8(1):277. PubMed ID: 31727159

  • 6. Natural language processing was effective in assisting rapid title and abstract screening when updating systematic reviews.
    Qin X; Liu J; Wang Y; Liu Y; Deng K; Ma Y; Zou K; Li L; Sun X
    J Clin Epidemiol; 2021 May; 133():121-129. PubMed ID: 33485929

  • 7. PICO entity extraction for preclinical animal literature.
    Wang Q; Liao J; Lapata M; Macleod M
    Syst Rev; 2022 Sep; 11(1):209. PubMed ID: 36180888

  • 8. Pretrained Transformer Language Models Versus Pretrained Word Embeddings for the Detection of Accurate Health Information on Arabic Social Media: Comparative Study.
    Albalawi Y; Nikolov NS; Buckley J
    JMIR Form Res; 2022 Jun; 6(6):e34834. PubMed ID: 35767322

  • 9. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
    Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
    Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885

  • 10. Bioformer: an efficient transformer language model for biomedical text mining.
    Fang L; Chen Q; Wei CH; Lu Z; Wang K
ArXiv [Preprint]; 2023 Feb. PubMed ID: 36945685

  • 11. Evaluation of text mining to reduce screening workload for injury-focused systematic reviews.
    Giummarra MJ; Lau G; Gabbe BJ
    Inj Prev; 2020 Feb; 26(1):55-60. PubMed ID: 31451565

  • 12. BioBERTurk: Exploring Turkish Biomedical Language Model Development Strategies in Low-Resource Setting.
    Türkmen H; Dikenelli O; Eraslan C; Çallı MC; Özbek SS
    J Healthc Inform Res; 2023 Dec; 7(4):433-446. PubMed ID: 37927378

  • 13. Semi-automated screening of biomedical citations for systematic reviews.
    Wallace BC; Trikalinos TA; Lau J; Brodley C; Schmid CH
    BMC Bioinformatics; 2010 Jan; 11():55. PubMed ID: 20102628

  • 14. Technology-assisted title and abstract screening for systematic reviews: a retrospective evaluation of the Abstrackr machine learning tool.
    Gates A; Johnson C; Hartling L
    Syst Rev; 2018 Mar; 7(1):45. PubMed ID: 29530097

  • 15. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
    Li X; Yuan W; Peng D; Mei Q; Wang Y
    BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811

  • 16. Deep learning to refine the identification of high-quality clinical research articles from the biomedical literature: Performance evaluation.
    Lokker C; Bagheri E; Abdelkader W; Parrish R; Afzal M; Navarro T; Cotoi C; Germini F; Linkins L; Haynes RB; Chu L; Iorio A
    J Biomed Inform; 2023 Jun; 142():104384. PubMed ID: 37164244

  • 17. Using the contextual language model BERT for multi-criteria classification of scientific articles.
    Ambalavanan AK; Devarakonda MV
    J Biomed Inform; 2020 Dec; 112():103578. PubMed ID: 33059047

  • 18. Depression Risk Prediction for Chinese Microblogs via Deep-Learning Methods: Content Analysis.
    Wang X; Chen S; Li T; Li W; Zhou Y; Zheng J; Chen Q; Yan J; Tang B
    JMIR Med Inform; 2020 Jul; 8(7):e17958. PubMed ID: 32723719

  • 19. Beyond the black stump: rapid reviews of health research issues affecting regional, rural and remote Australia.
    Osborne SR; Alston LV; Bolton KA; Whelan J; Reeve E; Wong Shee A; Browne J; Walker T; Versace VL; Allender S; Nichols M; Backholer K; Goodwin N; Lewis S; Dalton H; Prael G; Curtin M; Brooks R; Verdon S; Crockett J; Hodgins G; Walsh S; Lyle DM; Thompson SC; Browne LJ; Knight S; Pit SW; Jones M; Gillam MH; Leach MJ; Gonzalez-Chica DA; Muyambi K; Eshetie T; Tran K; May E; Lieschke G; Parker V; Smith A; Hayes C; Dunlop AJ; Rajappa H; White R; Oakley P; Holliday S
    Med J Aust; 2020 Dec; 213 Suppl 11():S3-S32.e1. PubMed ID: 33314144
