These tools will no longer be maintained as of December 31, 2024. Archived website can be found here. PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

123 related articles for article (PubMed ID: 33481721)

  • 1. Scalable Inverse Reinforcement Learning Through Multifidelity Bayesian Optimization.
    Imani M; Ghoreishi SF
    IEEE Trans Neural Netw Learn Syst; 2022 Aug; 33(8):4125-4132. PubMed ID: 33481721

  • 2. Two-Stage Bayesian Optimization for Scalable Inference in State-Space Models.
    Imani M; Ghoreishi SF
    IEEE Trans Neural Netw Learn Syst; 2022 Oct; 33(10):5138-5149. PubMed ID: 33819163

  • 3. Graph-Based Bayesian Optimization for Large-Scale Objective-Based Experimental Design.
    Imani M; Ghoreishi SF
    IEEE Trans Neural Netw Learn Syst; 2022 Oct; 33(10):5913-5925. PubMed ID: 33877989

  • 4. Emergence of belief-like representations through reinforcement learning.
    Hennig JA; Romero Pinto SA; Yamaguchi T; Linderman SW; Uchida N; Gershman SJ
    PLoS Comput Biol; 2023 Sep; 19(9):e1011067. PubMed ID: 37695776

  • 5. Exploration in neo-Hebbian reinforcement learning: Computational approaches to the exploration-exploitation balance with bio-inspired neural networks.
    Triche A; Maida AS; Kumar A
    Neural Netw; 2022 Jul; 151():16-33. PubMed ID: 35367735

  • 6. Orientation-Preserving Rewards' Balancing in Reinforcement Learning.
    Ren J; Guo S; Chen F
    IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6458-6472. PubMed ID: 34115593

  • 7. Continuous Action Reinforcement Learning From a Mixture of Interpretable Experts.
    Akrour R; Tateo D; Peters J
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):6795-6806. PubMed ID: 34375280

  • 8. Energy-efficient and damage-recovery slithering gait design for a snake-like robot based on reinforcement learning and inverse reinforcement learning.
    Bing Z; Lemke C; Cheng L; Huang K; Knoll A
    Neural Netw; 2020 Sep; 129():323-333. PubMed ID: 32593929

  • 9. State-space Model Based Inverse Reinforcement Learning for Reward Function Estimation in Brain-machine Interfaces.
    Tan J; Zhang X; Wu S; Wang Y
    Annu Int Conf IEEE Eng Med Biol Soc; 2023 Jul; 2023():1-4. PubMed ID: 38083150

  • 10. Efficient Reinforcement Learning from Demonstration via Bayesian Network-Based Knowledge Extraction.
    Zhang Y; Lan Y; Fang Q; Xu X; Li J; Zeng Y
    Comput Intell Neurosci; 2021; 2021():7588221. PubMed ID: 34603434

  • 11. Social learning across adolescence: A Bayesian neurocognitive perspective.
    Hofmans L; van den Bos W
    Dev Cogn Neurosci; 2022 Dec; 58():101151. PubMed ID: 36183664

  • 12. Examining multi-objective deep reinforcement learning frameworks for molecular design.
    Al-Jumaily A; Mukaidaisi M; Vu A; Tchagang A; Li Y
    Biosystems; 2023 Oct; 232():104989. PubMed ID: 37544406

  • 13. A Generalized Framework of Multifidelity Max-Value Entropy Search Through Joint Entropy.
    Takeno S; Fukuoka H; Tsukada Y; Koyama T; Shiga M; Takeuchi I; Karasuyama M
    Neural Comput; 2022 Sep; 34(10):2145-2203. PubMed ID: 36027725

  • 14. Bayesian reinforcement learning: A basic overview.
    Kang P; Tobler PN; Dayan P
    Neurobiol Learn Mem; 2024 May; 211():107924. PubMed ID: 38579896

  • 15. Deep inverse reinforcement learning for structural evolution of small molecules.
    Agyemang B; Wu WP; Addo D; Kpiebaareh MY; Nanor E; Roland Haruna C
    Brief Bioinform; 2021 Jul; 22(4):. PubMed ID: 33348357

  • 16. Learning the Dynamic Treatment Regimes from Medical Registry Data through Deep Q-network.
    Liu N; Liu Y; Logan B; Xu Z; Tang J; Wang Y
    Sci Rep; 2019 Feb; 9(1):1495. PubMed ID: 30728403

  • 17. Uncertainty propagation for dropout-based Bayesian neural networks.
    Mae Y; Kumagai W; Kanamori T
    Neural Netw; 2021 Dec; 144():394-406. PubMed ID: 34562813

  • 18. Bayesian Optimization for Design of Multiscale Biological Circuits.
    Merzbacher C; Mac Aodha O; Oyarzún DA
    ACS Synth Biol; 2023 Jul; 12(7):2073-2082. PubMed ID: 37339382

  • 19. Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses.
    Yuan M; Wu X; Yan R; Tang H
    Neural Comput; 2019 Dec; 31(12):2368-2389. PubMed ID: 31614099

  • 20. Towards more efficient and robust evaluation of sepsis treatment with deep reinforcement learning.
    Yu C; Huang Q
    BMC Med Inform Decis Mak; 2023 Mar; 23(1):43. PubMed ID: 36859257
