These tools are no longer maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

246 related articles for article (PubMed ID: 34659393)

  • 41. Mobility-Aware Resource Allocation in IoRT Network for Post-Disaster Communications with Parameterized Reinforcement Learning.
    Kabir H; Tham ML; Chang YC; Chow CO; Owada Y
    Sensors (Basel); 2023 Jul; 23(14):. PubMed ID: 37514742

  • 42. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling.
    Peng X; Li L; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442

  • 43. Dynamic sub-route-based self-adaptive beam search Q-learning algorithm for traveling salesman problem.
    Zhang J; Liu Q; Han X
    PLoS One; 2023; 18(3):e0283207. PubMed ID: 36943840

  • 44. A Reinforcement Learning Handover Parameter Adaptation Method Based on LSTM-Aided Digital Twin for UDN.
    He J; Xiang T; Wang Y; Ruan H; Zhang X
    Sensors (Basel); 2023 Feb; 23(4):. PubMed ID: 36850792

  • 45. A learning algorithm for adaptive canonical correlation analysis of several data sets.
    Vía J; Santamaría I; Pérez J
    Neural Netw; 2007 Jan; 20(1):139-52. PubMed ID: 17113263

  • 46. FSPBO-DQN: SeGAN based segmentation and Fractional Student Psychology Optimization enabled Deep Q Network for skin cancer detection in IoT applications.
    Kumar KS; Suganthi N; Muppidi S; Kumar BS
    Artif Intell Med; 2022 Jul; 129():102299. PubMed ID: 35659386

  • 47. Feedback stabilization of probabilistic finite state machines based on deep Q-network.
    Tian H; Su X; Hou Y
    Front Comput Neurosci; 2024; 18():1385047. PubMed ID: 38756915

  • 48. Joint Deep Reinforcement Learning and Unsupervised Learning for Channel Selection and Power Control in D2D Networks.
    Sun M; Jin Y; Wang S; Mei E
    Entropy (Basel); 2022 Nov; 24(12):. PubMed ID: 36554127

  • 49. Koopman Operator-Based Knowledge-Guided Reinforcement Learning for Safe Human-Robot Interaction.
    Sinha A; Wang Y
    Front Robot AI; 2022; 9():779194. PubMed ID: 35783024

  • 50. A Heuristically Accelerated Reinforcement Learning-Based Neurosurgical Path Planner.
    Ji G; Gao Q; Zhang T; Cao L; Sun Z
    Cyborg Bionic Syst; 2023; 4():0026. PubMed ID: 37229101

  • 51. A Deep Reinforcement Learning-Based MPPT Control for PV Systems under Partial Shading Condition.
    Phan BC; Lai YC; Lin CE
    Sensors (Basel); 2020 May; 20(11):. PubMed ID: 32471144

  • 52. A Hybrid Online Off-Policy Reinforcement Learning Agent Framework Supported by Transformers.
    Villarrubia-Martin EA; Rodriguez-Benitez L; Jimenez-Linares L; Muñoz-Valero D; Liu J
    Int J Neural Syst; 2023 Dec; 33(12):2350065. PubMed ID: 37857407

  • 53. A complementary learning approach for expertise transference of human-optimized controllers.
    Perrusquía A
    Neural Netw; 2022 Jan; 145():33-41. PubMed ID: 34715533

  • 54. State Abstraction via Deep Supervised Hash Learning.
    Yang G; Xu Z; Huo J; Yang S; Ding T; Chen X; Gao Y
    IEEE Trans Neural Netw Learn Syst; 2024 Oct; PP():. PubMed ID: 39423076

  • 55. Towards Deep Q-Network Based Resource Allocation in Industrial Internet of Things.
    Liang F; Yu W; Liu X; Griffith D; Golmie N
    IEEE Internet Things J; 2022 Jun; 9(12):. PubMed ID: 38486943

  • 56. Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments.
    Cross L; Cockburn J; Yue Y; O'Doherty JP
    Neuron; 2021 Feb; 109(4):724-738.e7. PubMed ID: 33326755

  • 57. A Reinforcement Learning Approach for Flexible Job Shop Scheduling Problem With Crane Transportation and Setup Times.
    Du Y; Li J; Li C; Duan P
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; 35(4):5695-5709. PubMed ID: 36215382

  • 58. Deep Reinforcement Learning-Empowered Resource Allocation for Mobile Edge Computing in Cellular V2X Networks.
    Li D; Xu S; Li P
    Sensors (Basel); 2021 Jan; 21(2):. PubMed ID: 33430386

  • 59. Self-Supervised Discovering of Interpretable Features for Reinforcement Learning.
    Shi W; Huang G; Song S; Wang Z; Lin T; Wu C
    IEEE Trans Pattern Anal Mach Intell; 2022 May; 44(5):2712-2724. PubMed ID: 33186101

  • 60. Deep Reinforcement Learning for Traffic Signal Control Model and Adaptation Study.
    Tan J; Yuan Q; Guo W; Xie N; Liu F; Wei J; Zhang X
    Sensors (Basel); 2022 Nov; 22(22):. PubMed ID: 36433328
