

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

346 related articles for article (PubMed ID: 33540868)

  • 1. Intelligent Decision-Making of Scheduling for Dynamic Permutation Flowshop via Deep Reinforcement Learning.
    Yang S; Xu Z; Wang J
    Sensors (Basel); 2021 Feb; 21(3):. PubMed ID: 33540868

  • 2. An actor-critic framework based on deep reinforcement learning for addressing flexible job shop scheduling problems.
    Zhao C; Deng N
    Math Biosci Eng; 2024 Jan; 21(1):1445-1471. PubMed ID: 38303472

  • 3. Dynamic Intelligent Scheduling in Low-Carbon Heterogeneous Distributed Flexible Job Shops with Job Insertions and Transfers.
    Chen Y; Liao X; Chen G; Hou Y
    Sensors (Basel); 2024 Mar; 24(7):. PubMed ID: 38610462

  • 4. Intelligent control of self-driving vehicles based on adaptive sampling supervised actor-critic and human driving experience.
    Zhang J; Ma N; Wu Z; Wang C; Yao Y
    Math Biosci Eng; 2024 May; 21(5):6077-6096. PubMed ID: 38872570

  • 5. Actor-critic learning-based energy optimization for UAV access and backhaul networks.
    Yuan Y; Lei L; Vu TX; Chatzinotas S; Sun S; Ottersten B
    EURASIP J Wirel Commun Netw; 2021; 2021(1):78. PubMed ID: 34777489

  • 6. Attention-Shared Multi-Agent Actor-Critic-Based Deep Reinforcement Learning Approach for Mobile Charging Dynamic Scheduling in Wireless Rechargeable Sensor Networks.
    Jiang C; Wang Z; Chen S; Li J; Wang H; Xiang J; Xiao W
    Entropy (Basel); 2022 Jul; 24(7):. PubMed ID: 35885188

  • 7. Granular Prediction and Dynamic Scheduling Based on Adaptive Dynamic Programming for the Blast Furnace Gas System.
    Zhao J; Wang T; Pedrycz W; Wang W
    IEEE Trans Cybern; 2021 Apr; 51(4):2201-2214. PubMed ID: 30951483

  • 8. A novel method-based reinforcement learning with deep temporal difference network for flexible double shop scheduling problem.
    Wang X; Zhong P; Liu M; Zhang C; Yang S
    Sci Rep; 2024 Apr; 14(1):9047. PubMed ID: 38641689

  • 9. Deep Reinforcement Learning-Based Task Scheduling in IoT Edge Computing.
    Sheng S; Chen P; Chen Z; Wu L; Yao Y
    Sensors (Basel); 2021 Feb; 21(5):. PubMed ID: 33671072

  • 10. Flexible Job Shop Scheduling via Dual Attention Network-Based Reinforcement Learning.
    Wang R; Wang G; Sun J; Deng F; Chen J
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3091-3102. PubMed ID: 37695952

  • 11. Teleconsultation dynamic scheduling with a deep reinforcement learning approach.
    Chen W; Li J
    Artif Intell Med; 2024 Mar; 149():102806. PubMed ID: 38462294

  • 12. A Reinforcement Learning Approach to Robust Scheduling of Permutation Flow Shop.
    Zhou T; Luo L; Ji S; He Y
    Biomimetics (Basel); 2023 Oct; 8(6):. PubMed ID: 37887609

  • 13. A priority experience replay actor-critic algorithm using self-attention mechanism for strategy optimization of discrete problems.
    Sun Y; Yang B
    PeerJ Comput Sci; 2024; 10():e2161. PubMed ID: 38983226

  • 14. Deep Reinforcement Learning Multi-Agent System for Resource Allocation in Industrial Internet of Things.
    Rosenberger J; Urlaub M; Rauterberg F; Lutz T; Selig A; Bühren M; Schramm D
    Sensors (Basel); 2022 May; 22(11):. PubMed ID: 35684720

  • 15. Deep reinforcement learning task scheduling method based on server real-time performance.
    Wang J; Li S; Zhang X; Wu F; Xie C
    PeerJ Comput Sci; 2024; 10():e2120. PubMed ID: 38983221

  • 16. End-to-End AUV Motion Planning Method Based on Soft Actor-Critic.
    Yu X; Sun Y; Wang X; Zhang G
    Sensors (Basel); 2021 Sep; 21(17):. PubMed ID: 34502781

  • 17. An Effective Evolutionary Hybrid for Solving the Permutation Flowshop Scheduling Problem.
    Amirghasemi M; Zamani R
    Evol Comput; 2017; 25(1):87-111. PubMed ID: 26223000

  • 18. Dynamic sparse coding-based value estimation network for deep reinforcement learning.
    Zhao H; Li Z; Su W; Xie S
    Neural Netw; 2023 Nov; 168():180-193. PubMed ID: 37757726

  • 19. Research on reinforcement learning-based safe decision-making methodology for multiple unmanned aerial vehicles.
    Yue L; Yang R; Zhang Y; Zuo J
    Front Neurorobot; 2022; 16():1105480. PubMed ID: 36704719

  • 20. An Improved Approach towards Multi-Agent Pursuit-Evasion Game Decision-Making Using Deep Reinforcement Learning.
    Wan K; Wu D; Zhai Y; Li B; Gao X; Hu Z
    Entropy (Basel); 2021 Oct; 23(11):. PubMed ID: 34828131
