Molecular Biopsy of Human Tumors: a resource for Precision Medicine

225 related articles for article (PubMed ID: 33774425)

  • 1. Diversity-driven knowledge distillation for financial trading using Deep Reinforcement Learning.
    Tsantekidis A; Passalis N; Tefas A
    Neural Netw; 2021 Aug; 140():193-202. PubMed ID: 33774425

  • 2. Price Trailing for Financial Trading Using Deep Reinforcement Learning.
    Tsantekidis A; Passalis N; Toufa AS; Saitas-Zarkias K; Chairistanidis S; Tefas A
    IEEE Trans Neural Netw Learn Syst; 2021 Jul; 32(7):2837-2846. PubMed ID: 32516114

  • 3. MSPM: A modularized and scalable multi-agent reinforcement learning-based system for financial portfolio management.
    Huang Z; Tanaka F
    PLoS One; 2022; 17(2):e0263689. PubMed ID: 35180235

  • 4. QF-TraderNet: Intraday Trading via Deep Reinforcement Learning With Quantum Price Levels Based Profit-And-Loss Control.
    Qiu Y; Qiu Y; Yuan Y; Chen Z; Lee R
    Front Artif Intell; 2021; 4():749878. PubMed ID: 34778753

  • 5. Deep Direct Reinforcement Learning for Financial Signal Representation and Trading.
    Deng Y; Bao F; Kong Y; Ren Z; Dai Q
    IEEE Trans Neural Netw Learn Syst; 2017 Mar; 28(3):653-664. PubMed ID: 26890927

  • 6. Modeling limit order trading with a continuous action policy for deep reinforcement learning.
    Tsantekidis A; Passalis N; Tefas A
    Neural Netw; 2023 Aug; 165():506-515. PubMed ID: 37348431

  • 7. Catastrophic Interference in Reinforcement Learning: A Solution Based on Context Division and Knowledge Distillation.
    Zhang T; Wang X; Liang B; Yuan B
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9925-9939. PubMed ID: 35439142

  • 8. KnowRU: Knowledge Reuse via Knowledge Distillation in Multi-Agent Reinforcement Learning.
    Gao Z; Xu K; Ding B; Wang H
    Entropy (Basel); 2021 Aug; 23(8):. PubMed ID: 34441184

  • 9. Achieving efficient interpretability of reinforcement learning via policy distillation and selective input gradient regularization.
    Xing J; Nagata T; Zou X; Neftci E; Krichmar JL
    Neural Netw; 2023 Apr; 161():228-241. PubMed ID: 36774862

  • 10. Spatiotemporal knowledge teacher-student reinforcement learning to detect liver tumors without contrast agents.
    Xu C; Song Y; Zhang D; Bittencourt LK; Tirumani SH; Li S
    Med Image Anal; 2023 Dec; 90():102980. PubMed ID: 37820417

  • 11. Asynchronous Deep Double Dueling Q-learning for trading-signal execution in limit order book markets.
    Nagy P; Calliess JP; Zohren S
    Front Artif Intell; 2023; 6():1151003. PubMed ID: 37818429

  • 12. Multi-view Teacher-Student Network.
    Tian Y; Sun S; Tang J
    Neural Netw; 2022 Feb; 146():69-84. PubMed ID: 34839092

  • 13. Adversarial Distillation for Learning with Privileged Provisions.
    Wang X; Zhang R; Sun Y; Qi J
    IEEE Trans Pattern Anal Mach Intell; 2021 Mar; 43(3):786-797. PubMed ID: 31545712

  • 14. Template-Driven Knowledge Distillation for Compact and Accurate Periocular Biometrics Deep-Learning Models.
    Boutros F; Damer N; Raja K; Kirchbuchner F; Kuijper A
    Sensors (Basel); 2022 Mar; 22(5):. PubMed ID: 35271074

  • 15. Memory-Replay Knowledge Distillation.
    Wang J; Zhang P; Li Y
    Sensors (Basel); 2021 Apr; 21(8):. PubMed ID: 33921068

  • 16. A General Dynamic Knowledge Distillation Method for Visual Analytics.
    Tu Z; Liu X; Xiao X
    IEEE Trans Image Process; 2022 Oct; PP():. PubMed ID: 36227819

  • 17. Feature Map Distillation of Thin Nets for Low-Resolution Object Recognition.
    Huang Z; Yang S; Zhou M; Li Z; Gong Z; Chen Y
    IEEE Trans Image Process; 2022; 31():1364-1379. PubMed ID: 35025743

  • 18. Knowledge Fusion Distillation: Improving Distillation with Multi-scale Attention Mechanisms.
    Li L; Su W; Liu F; He M; Liang X
    Neural Process Lett; 2023 Jan; ():1-16. PubMed ID: 36619739

  • 19. Meta-Reinforcement Learning With Dynamic Adaptiveness Distillation.
    Hu H; Huang G; Li X; Song S
    IEEE Trans Neural Netw Learn Syst; 2023 Mar; 34(3):1454-1464. PubMed ID: 34464267

  • 20. Eliminating Primacy Bias in Online Reinforcement Learning by Self-Distillation.
    Li J; Shi H; Wu H; Zhao C; Hwang KS
    IEEE Trans Neural Netw Learn Syst; 2024 May; PP():. PubMed ID: 38758623
