These tools will no longer be maintained as of December 31, 2024.



333 related articles for article (PubMed ID: 36080961)

  • 1. Visual Pretraining via Contrastive Predictive Model for Pixel-Based Reinforcement Learning.
    Luu TM; Vu T; Nguyen T; Yoo CD
    Sensors (Basel); 2022 Aug; 22(17):. PubMed ID: 36080961

  • 2. Action-driven contrastive representation for reinforcement learning.
    Kim M; Rho K; Kim YD; Jung K
    PLoS One; 2022; 17(3):e0265456. PubMed ID: 35303031

  • 3. STACoRe: Spatio-temporal and action-based contrastive representations for reinforcement learning in Atari.
    Lee YJ; Kim J; Kwak M; Park YJ; Kim SB
    Neural Netw; 2023 Mar; 160():1-11. PubMed ID: 36587439

  • 4. Reward-predictive representations generalize across tasks in reinforcement learning.
    Lehnert L; Littman ML; Frank MJ
    PLoS Comput Biol; 2020 Oct; 16(10):e1008317. PubMed ID: 33057329

  • 5. Discovering diverse solutions in deep reinforcement learning by maximizing state-action-based mutual information.
    Osa T; Tangkaratt V; Sugiyama M
    Neural Netw; 2022 Aug; 152():90-104. PubMed ID: 35523085

  • 6. Selective particle attention: Rapidly and flexibly selecting features for deep reinforcement learning.
    Blakeman S; Mareschal D
    Neural Netw; 2022 Jun; 150():408-421. PubMed ID: 35358888

  • 7. Masked Contrastive Representation Learning for Reinforcement Learning.
    Zhu J; Xia Y; Wu L; Deng J; Zhou W; Qin T; Liu TY; Li H
    IEEE Trans Pattern Anal Mach Intell; 2023 Mar; 45(3):3421-3433. PubMed ID: 35594229

  • 8. Context meta-reinforcement learning via neuromodulation.
    Ben-Iwhiwhu E; Dick J; Ketz NA; Pilly PK; Soltoggio A
    Neural Netw; 2022 Aug; 152():70-79. PubMed ID: 35512540

  • 9. Forward and inverse reinforcement learning sharing network weights and hyperparameters.
    Uchibe E; Doya K
    Neural Netw; 2021 Dec; 144():138-153. PubMed ID: 34492548

  • 10. Exploratory State Representation Learning.
    Merckling A; Perrin-Gilbert N; Coninx A; Doncieux S
    Front Robot AI; 2022; 9():762051. PubMed ID: 35237669

  • 11. Action-Driven Visual Object Tracking With Deep Reinforcement Learning.
    Yun S; Choi J; Yoo Y; Yun K; Choi JY
    IEEE Trans Neural Netw Learn Syst; 2018 Jun; 29(6):2239-2252. PubMed ID: 29771675

  • 12. Inverse reinforcement learning for intelligent mechanical ventilation and sedative dosing in intensive care units.
    Yu C; Liu J; Zhao H
    BMC Med Inform Decis Mak; 2019 Apr; 19(Suppl 2):57. PubMed ID: 30961594

  • 13. Multimodal information bottleneck for deep reinforcement learning with multiple sensors.
    You B; Liu H
    Neural Netw; 2024 Aug; 176():106347. PubMed ID: 38688069

  • 14. Inverse Reinforcement Learning in Tracking Control Based on Inverse Optimal Control.
    Xue W; Kolaric P; Fan J; Lian B; Chai T; Lewis FL
    IEEE Trans Cybern; 2022 Oct; 52(10):10570-10581. PubMed ID: 33877993

  • 15. Lessons from reinforcement learning for biological representations of space.
    Muryy A; Siddharth N; Nardelli N; Glennerster A; Torr PHS
    Vision Res; 2020 Sep; 174():79-93. PubMed ID: 32683096

  • 16. Strangeness-driven exploration in multi-agent reinforcement learning.
    Kim JB; Choi HB; Han YH
    Neural Netw; 2024 Apr; 172():106149. PubMed ID: 38306786

  • 17. Adaptive Discount Factor for Deep Reinforcement Learning in Continuing Tasks with Uncertainty.
    Kim M; Kim JS; Choi MS; Park JH
    Sensors (Basel); 2022 Sep; 22(19):. PubMed ID: 36236366

  • 18. Vision-Based Robot Navigation through Combining Unsupervised Learning and Hierarchical Reinforcement Learning.
    Zhou X; Bai T; Gao Y; Han Y
    Sensors (Basel); 2019 Apr; 19(7):. PubMed ID: 30939807

  • 19. LJIR: Learning Joint-Action Intrinsic Reward in cooperative multi-agent reinforcement learning.
    Chen Z; Luo B; Hu T; Xu X
    Neural Netw; 2023 Oct; 167():450-459. PubMed ID: 37683459

  • 20. C2RL: Convolutional-Contrastive Learning for Reinforcement Learning Based on Self-Pretraining for Strong Augmentation.
    Park S; Kim J; Jeong HY; Kim TK; Yoo J
    Sensors (Basel); 2023 May; 23(10):. PubMed ID: 37430860
