146 related articles for the article with PubMed ID 34350213

  • 1. Compositional RL Agents That Follow Language Commands in Temporal Logic.
    Kuo YL; Katz B; Barbu A
    Front Robot AI; 2021; 8():689550. PubMed ID: 34350213

  • 2. Hierarchical clustering optimizes the tradeoff between compositionality and expressivity of task structures for flexible reinforcement learning.
    Liu RG; Frank MJ
    Artif Intell; 2022 Nov; 312():. PubMed ID: 36711165

  • 3. Hierarchical planning with state abstractions for temporal task specifications.
    Oh Y; Patel R; Nguyen T; Huang B; Berg M; Pavlick E; Tellex S
    Auton Robots; 2022; 46(6):667-683. PubMed ID: 35692555

  • 4. Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions.
    Yamada T; Murata S; Arie H; Ogata T
    Front Neurorobot; 2017; 11():70. PubMed ID: 29311891

  • 5. State-Temporal Compression in Reinforcement Learning With the Reward-Restricted Geodesic Metric.
    Guo S; Yan Q; Su X; Hu X; Chen F
    IEEE Trans Pattern Anal Mach Intell; 2022 Sep; 44(9):5572-5589. PubMed ID: 33764874

  • 6. Safe reinforcement learning under temporal logic with reward design and quantum action selection.
    Cai M; Xiao S; Li J; Kan Z
    Sci Rep; 2023 Feb; 13(1):1925. PubMed ID: 36732441

  • 7. Transfer of Temporal Logic Formulas in Reinforcement Learning.
    Xu Z; Topcu U
    IJCAI (U S); 2019; 28():4010-4018. PubMed ID: 31631953

  • 8. Human-level control through deep reinforcement learning.
    Mnih V; Kavukcuoglu K; Silver D; Rusu AA; Veness J; Bellemare MG; Graves A; Riedmiller M; Fidjeland AK; Ostrovski G; Petersen S; Beattie C; Sadik A; Antonoglou I; King H; Kumaran D; Wierstra D; Legg S; Hassabis D
    Nature; 2015 Feb; 518(7540):529-33. PubMed ID: 25719670

  • 9. MetaDrive: Composing Diverse Driving Scenarios for Generalizable Reinforcement Learning.
    Li Q; Peng Z; Feng L; Zhang Q; Xue Z; Zhou B
    IEEE Trans Pattern Anal Mach Intell; 2023 Mar; 45(3):3461-3475. PubMed ID: 35830412

  • 10. Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks.
    Han D; Doya K; Tani J
    Neural Netw; 2020 Sep; 129():149-162. PubMed ID: 32534378

  • 11. Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding.
    Li J; Tang S; Zhu L; Zhang W; Yang Y; Chua TS; Wu F; Zhuang Y
    IEEE Trans Pattern Anal Mach Intell; 2023 Oct; 45(10):12601-12617. PubMed ID: 37155378

  • 12. Meta-Reinforcement Learning in Nonstationary and Nonparametric Environments.
    Bing Z; Knak L; Cheng L; Morin FO; Huang K; Knoll A
    IEEE Trans Neural Netw Learn Syst; 2023 May; PP():. PubMed ID: 37224358

  • 13. Curriculum learning for human compositional generalization.
    Dekker RB; Otto F; Summerfield C
    Proc Natl Acad Sci U S A; 2022 Oct; 119(41):e2205582119. PubMed ID: 36191191

  • 14. NeuroLISP: High-level symbolic programming with attractor neural networks.
    Davis GP; Katz GE; Gentili RJ; Reggia JA
    Neural Netw; 2022 Feb; 146():200-219. PubMed ID: 34894482

  • 15. Multi-label zero-shot learning with graph convolutional networks.
    Ou G; Yu G; Domeniconi C; Lu X; Zhang X
    Neural Netw; 2020 Dec; 132():333-341. PubMed ID: 32977278

  • 16. Generalizing to generalize: Humans flexibly switch between compositional and conjunctive structures during reinforcement learning.
    Franklin NT; Frank MJ
    PLoS Comput Biol; 2020 Apr; 16(4):e1007720. PubMed ID: 32282795

  • 17. Human-like systematic generalization through a meta-learning neural network.
    Lake BM; Baroni M
    Nature; 2023 Nov; 623(7985):115-121. PubMed ID: 37880371

  • 18. Compositional memory in attractor neural networks with one-step learning.
    Davis GP; Katz GE; Gentili RJ; Reggia JA
    Neural Netw; 2021 Jun; 138():78-97. PubMed ID: 33631609

  • 19. Multi-label zero-shot human action recognition via joint latent ranking embedding.
    Wang Q; Chen K
    Neural Netw; 2020 Feb; 122():1-23. PubMed ID: 31675624

  • 20. Context-Based Meta-Reinforcement Learning with Bayesian Nonparametric Models.
    Bing Z; Yun Y; Huang K; Knoll A
    IEEE Trans Pattern Anal Mach Intell; 2024 Apr; PP():. PubMed ID: 38593010

Page 1 of 8.