BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

337 related articles for article (PubMed ID: 36080961)

  • 21. Parts2Whole: Self-supervised Contrastive Learning via Reconstruction.
    Feng R; Zhou Z; Gotway MB; Liang J
    Domain Adapt Represent Transf Distrib Collab Learn (2020); 2020 Oct; 12444():85-95. PubMed ID: 35713588

  • 22. Contrastive Multiple Instance Learning: An Unsupervised Framework for Learning Slide-Level Representations of Whole Slide Histopathology Images without Labels.
    Tavolara TE; Gurcan MN; Niazi MKK
    Cancers (Basel); 2022 Nov; 14(23):. PubMed ID: 36497258

  • 23. RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm.
    Gharagozlou H; Mohammadzadeh J; Bastanfard A; Ghidary SS
    Comput Intell Neurosci; 2022; 2022():7839840. PubMed ID: 35571722

  • 24. Representation learning for continuous action spaces is beneficial for efficient policy learning.
    Zhao T; Wang Y; Sun W; Chen Y; Niu G; Sugiyama M
    Neural Netw; 2023 Feb; 159():137-152. PubMed ID: 36566604

  • 25. Vision-Based Efficient Robotic Manipulation with a Dual-Streaming Compact Convolutional Transformer.
    Guo H; Song M; Ding Z; Yi C; Jiang F
    Sensors (Basel); 2023 Jan; 23(1):. PubMed ID: 36617113

  • 26. What matters in reinforcement learning for tractography.
    Théberge A; Desrosiers C; Boré A; Descoteaux M; Jodoin PM
    Med Image Anal; 2024 Apr; 93():103085. PubMed ID: 38219499

  • 27. Emergence of integrated behaviors through direct optimization for homeostasis.
    Yoshida N; Daikoku T; Nagai Y; Kuniyoshi Y
    Neural Netw; 2024 Sep; 177():106379. PubMed ID: 38762941

  • 28. Kernel Temporal Difference based Reinforcement Learning for Brain Machine Interfaces.
    Shen X; Zhang X; Wang Y
    Annu Int Conf IEEE Eng Med Biol Soc; 2021 Nov; 2021():6721-6724. PubMed ID: 34892650

  • 29. A Hybrid Online Off-Policy Reinforcement Learning Agent Framework Supported by Transformers.
    Villarrubia-Martin EA; Rodriguez-Benitez L; Jimenez-Linares L; Muñoz-Valero D; Liu J
    Int J Neural Syst; 2023 Dec; 33(12):2350065. PubMed ID: 37857407

  • 30. Temporal-difference reinforcement learning with distributed representations.
    Kurth-Nelson Z; Redish AD
    PLoS One; 2009 Oct; 4(10):e7362. PubMed ID: 19841749

  • 31. Asymmetric and adaptive reward coding via normalized reinforcement learning.
    Louie K
    PLoS Comput Biol; 2022 Jul; 18(7):e1010350. PubMed ID: 35862443

  • 32. Continuous action deep reinforcement learning for propofol dosing during general anesthesia.
    Schamberg G; Badgeley M; Meschede-Krasa B; Kwon O; Brown EN
    Artif Intell Med; 2022 Jan; 123():102227. PubMed ID: 34998516

  • 33. Efficient Reinforcement Learning from Demonstration via Bayesian Network-Based Knowledge Extraction.
    Zhang Y; Lan Y; Fang Q; Xu X; Li J; Zeng Y
    Comput Intell Neurosci; 2021; 2021():7588221. PubMed ID: 34603434

  • 34. Multiple Self-Supervised Auxiliary Tasks for Target-Driven Visual Navigation Using Deep Reinforcement Learning.
    Zhang W; He L; Wang H; Yuan L; Xiao W
    Entropy (Basel); 2023 Jun; 25(7):. PubMed ID: 37509957

  • 35. Curiosity-driven recommendation strategy for adaptive learning via deep reinforcement learning.
    Han R; Chen K; Tan C
    Br J Math Stat Psychol; 2020 Nov; 73(3):522-540. PubMed ID: 32080828

  • 36. Graph contrastive learning with implicit augmentations.
    Liang H; Du X; Zhu B; Ma Z; Chen K; Gao J
    Neural Netw; 2023 Jun; 163():156-164. PubMed ID: 37054514

  • 37. A reinforcement learning algorithm acquires demonstration from the training agent by dividing the task space.
    Zu L; He X; Yang J; Liu L; Wang W
    Neural Netw; 2023 Jul; 164():419-427. PubMed ID: 37187108

  • 38. Reinforcement Learning with Side Information for the Uncertainties.
    Yang J
    Sensors (Basel); 2022 Dec; 22(24):. PubMed ID: 36560180

  • 39. Combining STDP and binary networks for reinforcement learning from images and sparse rewards.
    Chevtchenko SF; Ludermir TB
    Neural Netw; 2021 Dec; 144():496-506. PubMed ID: 34601362

  • 40. MOSAIC for multiple-reward environments.
    Sugimoto N; Haruno M; Doya K; Kawato M
    Neural Comput; 2012 Mar; 24(3):577-606. PubMed ID: 22168558
