These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

112 related articles for article (PubMed ID: 34614862)

  • 1. Photonic decision-making for arbitrary-number-armed bandit problem utilizing parallel chaos generation.
    Peng J; Jiang N; Zhao A; Liu S; Zhang Y; Qiu K; Zhang Q
    Opt Express; 2021 Aug; 29(16):25290-25301. PubMed ID: 34614862

  • 2. Decision making for large-scale multi-armed bandit problems using bias control of chaotic temporal waveforms in semiconductor lasers.
    Morijiri K; Mihana T; Kanno K; Naruse M; Uchida A
    Sci Rep; 2022 May; 12(1):8073. PubMed ID: 35577847

  • 3. Decision making for the multi-armed bandit problem using lag synchronization of chaos in mutually coupled semiconductor lasers.
    Mihana T; Mitsui Y; Takabayashi M; Kanno K; Sunada S; Naruse M; Uchida A
    Opt Express; 2019 Sep; 27(19):26989-27008. PubMed ID: 31674568

  • 4. Scalable photonic reinforcement learning by time-division multiplexing of laser chaos.
    Naruse M; Mihana T; Hori H; Saigo H; Okamura K; Hasegawa M; Uchida A
    Sci Rep; 2018 Jul; 8(1):10890. PubMed ID: 30022085

  • 5. Time-delay signature concealment of chaos and ultrafast decision making in mutually coupled semiconductor lasers with a phase-modulated Sagnac loop.
    Ma Y; Xiang S; Guo X; Song Z; Wen A; Hao Y
    Opt Express; 2020 Jan; 28(2):1665-1678. PubMed ID: 32121874

  • 6. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 7. Ultrafast photonic reinforcement learning based on laser chaos.
    Naruse M; Terashima Y; Uchida A; Kim SJ
    Sci Rep; 2017 Aug; 7(1):8772. PubMed ID: 28821739

  • 8. Laser network decision making by lag synchronization of chaos in a ring configuration.
    Mihana T; Fujii K; Kanno K; Naruse M; Uchida A
    Opt Express; 2020 Dec; 28(26):40112-40130. PubMed ID: 33379544

  • 9. Conflict-free collective stochastic decision making by orbital angular momentum of photons through quantum interference.
    Amakasu T; Chauvet N; Bachelier G; Huant S; Horisaki R; Naruse M
    Sci Rep; 2021 Oct; 11(1):21117. PubMed ID: 34702905

  • 10. Altered Statistical Learning and Decision-Making in Methamphetamine Dependence: Evidence from a Two-Armed Bandit Task.
    Harlé KM; Zhang S; Schiff M; Mackey S; Paulus MP; Yu AJ
    Front Psychol; 2015; 6():1910. PubMed ID: 26733906

  • 11. Dynamic channel selection in wireless communications via a multi-armed bandit algorithm using laser chaos time series.
    Takeuchi S; Hasegawa M; Kanno K; Uchida A; Chauvet N; Naruse M
    Sci Rep; 2020 Jan; 10(1):1574. PubMed ID: 32005883

  • 12. Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.
    Wang A; Wang L; Li P; Wang Y
    Opt Express; 2017 Feb; 25(4):3153-3164. PubMed ID: 28241531

  • 13. Some performance considerations when using multi-armed bandit algorithms in the presence of missing data.
    Chen X; Lee KM; Villar SS; Robertson DS
    PLoS One; 2022; 17(9):e0274272. PubMed ID: 36094920

  • 14. Arm order recognition in multi-armed bandit problem with laser chaos time series.
    Narisawa N; Chauvet N; Hasegawa M; Naruse M
    Sci Rep; 2021 Feb; 11(1):4459. PubMed ID: 33627692

  • 15. Optimism in the face of uncertainty supported by a statistically-designed multi-armed bandit algorithm.
    Kamiura M; Sano K
    Biosystems; 2017 Oct; 160():25-32. PubMed ID: 28838871

  • 16. On-chip photonic decision maker using spontaneous mode switching in a ring laser.
    Homma R; Kochi S; Niiyama T; Mihana T; Mitsui Y; Kanno K; Uchida A; Naruse M; Sunada S
    Sci Rep; 2019 Jul; 9(1):9429. PubMed ID: 31263142

  • 17. Decision-making without a brain: how an amoeboid organism solves the two-armed bandit.
    Reid CR; MacDonald H; Mann RP; Marshall JA; Latty T; Garnier S
    J R Soc Interface; 2016 Jun; 13(119):. PubMed ID: 27278359

  • 18. Risk-aware multi-armed bandit problem with application to portfolio selection.
    Huo X; Fu F
    R Soc Open Sci; 2017 Nov; 4(11):171377. PubMed ID: 29291122

  • 19. Entangled and correlated photon mixed strategy for social decision making.
    Maeda S; Chauvet N; Saigo H; Hori H; Bachelier G; Huant S; Naruse M
    Sci Rep; 2021 Mar; 11(1):4832. PubMed ID: 33649385

  • 20. Multi-armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges.
    Villar SS; Bowden J; Wason J
    Stat Sci; 2015; 30(2):199-215. PubMed ID: 27158186

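The articles above all revolve around the multi-armed bandit problem: repeatedly choosing among arms with unknown reward probabilities while balancing exploration and exploitation. As a minimal illustration of the decision task these papers address, here is a sketch of the classic epsilon-greedy strategy on a Bernoulli bandit (this is a generic baseline, not the photonic or chaos-based methods of the cited works; all names and parameters are illustrative):

```python
import random

def epsilon_greedy_bandit(true_means, steps=10_000, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy player on a Bernoulli multi-armed bandit.

    true_means: per-arm reward probabilities (unknown to the player).
    Returns the player's empirical reward estimates and per-arm pull counts.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    estimates = [0.0] * n_arms   # running mean reward per arm
    counts = [0] * n_arms        # number of pulls per arm

    for _ in range(steps):
        if rng.random() < epsilon:                       # explore: random arm
            arm = rng.randrange(n_arms)
        else:                                            # exploit: current best estimate
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental update of the running mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return estimates, counts

est, cnt = epsilon_greedy_bandit([0.2, 0.5, 0.8])
# with enough steps, the best arm (index 2) accumulates most of the pulls
```

The photonic approaches cited above replace the pseudo-random exploration used here with physical entropy sources such as chaotic laser dynamics, aiming at far faster decision rates.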