

BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

116 related articles for article (PubMed ID: 37818429)

  • 21. Reinforcement learning-guided control strategies for CAR T-cell activation and expansion.
    Ferdous S; Shihab IF; Chowdhury R; Reuel NF
    Biotechnol Bioeng; 2024 May; ():. PubMed ID: 38812405

  • 22. Ecology of trading strategies in a forex market for limit and market orders.
    Sueshige T; Kanazawa K; Takayasu H; Takayasu M
    PLoS One; 2018; 13(12):e0208332. PubMed ID: 30557323

  • 23. LSTM-DDPG for Trading with Variable Positions.
    Jia Z; Gao Q; Peng X
    Sensors (Basel); 2021 Sep; 21(19):. PubMed ID: 34640890

  • 24. Recognition of Hand Gestures Based on EMG Signals with Deep and Double-Deep Q-Networks.
    Valdivieso Caraguay ÁL; Vásconez JP; Barona López LI; Benalcázar ME
    Sensors (Basel); 2023 Apr; 23(8):. PubMed ID: 37112246

  • 25. A novel carbon price forecasting method based on model matching, adaptive decomposition, and reinforcement learning ensemble strategy.
    Cao Z; Liu H
    Environ Sci Pollut Res Int; 2023 Mar; 30(13):36044-36067. PubMed ID: 36539662

  • 26. From deterministic to stochastic: an interpretable stochastic model-free reinforcement learning framework for portfolio optimization.
    Song Z; Wang Y; Qian P; Song S; Coenen F; Jiang Z; Su J
    Appl Intell (Dordr); 2023; 53(12):15188-15203. PubMed ID: 36405345

  • 27. Structural break-aware pairs trading strategy using deep reinforcement learning.
    Lu JY; Lai HC; Shih WY; Chen YF; Huang SH; Chang HH; Wang JZ; Huang JL; Dai TS
    J Supercomput; 2022; 78(3):3843-3882. PubMed ID: 34421218

  • 28. Analyzing the Importance of Broker Identities in the Limit Order Book Through Deep Learning.
    Choi SP; Chan YH; Lam SS; Hung HY
    Big Data; 2021 Apr; 9(2):89-99. PubMed ID: 33202194

  • 29. A Study on the Impact of Integrating Reinforcement Learning for Channel Prediction and Power Allocation Scheme in MISO-NOMA System.
    Gaballa M; Abbod M; Aldallal A
    Sensors (Basel); 2023 Jan; 23(3):. PubMed ID: 36772422

  • 30. Extending the Omega model with momentum and reversal strategies to intraday trading.
    Yu JR; Wei CH; Lai CJ; Lee WY
    PLoS One; 2023; 18(9):e0291119. PubMed ID: 37682858

  • 31. An LSTM and GRU based trading strategy adapted to the Moroccan market.
    Touzani Y; Douzi K
    J Big Data; 2021; 8(1):126. PubMed ID: 34603936

  • 32. Molecular Autonomous Pathfinder Using Deep Reinforcement Learning.
    Nomura KI; Mishra A; Sang T; Kalia RK; Nakano A; Vashishta P
    J Phys Chem Lett; 2024 May; 15(19):5288-5294. PubMed ID: 38722699

  • 33. Deep Reinforcement Learning With Explicit Context Representation.
    Munguia-Galeano F; Tan AH; Ji Z
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; PP():. PubMed ID: 37906492

  • 34. Using algorithmic trading to analyze short term profitability of Bitcoin.
    Ahmad I; Ahmad MO; Alqarni MA; Almazroi AA; Khalil MIK
    PeerJ Comput Sci; 2021; 7():e337. PubMed ID: 33816988

  • 35. Path Planning Algorithm for Unmanned Surface Vessel Based on Multiobjective Reinforcement Learning.
    Yang C; Zhao Y; Cai X; Wei W; Feng X; Zhou K
    Comput Intell Neurosci; 2023; 2023():2146314. PubMed ID: 36844696

  • 36. A reinforcement learning approach to improve the performance of the Avellaneda-Stoikov market-making algorithm.
    Falces Marin J; Díaz Pardo de Vera D; Lopez Gonzalo E
    PLoS One; 2022; 17(12):e0277042. PubMed ID: 36538547

  • 37. Reactive Reinforcement Learning in Asynchronous Environments.
    Travnik JB; Mathewson KW; Sutton RS; Pilarski PM
    Front Robot AI; 2018; 5():79. PubMed ID: 33500958

  • 38. Biclustering Learning of Trading Rules.
    Huang Q; Wang T; Tao D; Li X
    IEEE Trans Cybern; 2015 Oct; 45(10):2287-98. PubMed ID: 25494520

  • 39. Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games.
    Zhu Y; Zhao D
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1228-1241. PubMed ID: 33306474

  • 40. Time-varying spillovers among pilot carbon emission trading markets in China.
    Xiao Z; Ma S; Sun H; Ren J; Feng C; Cui S
    Environ Sci Pollut Res Int; 2022 Aug; 29(38):57421-57436. PubMed ID: 35349066
