119 related articles for article (PubMed ID: 34255637)
1. Reservoir Memory Machines as Neural Computers.
Paassen B; Schulz A; Stewart TC; Hammer B
IEEE Trans Neural Netw Learn Syst; 2022 Jun; 33(6):2575-2585. PubMed ID: 34255637
2. Multiresolution Reservoir Graph Neural Network.
Pasa L; Navarin N; Sperduti A
IEEE Trans Neural Netw Learn Syst; 2022 Jun; 33(6):2642-2653. PubMed ID: 34232893
3. Breaking Neural Reasoning Architectures With Metamorphic Relation-Based Adversarial Examples.
Chan A; Ma L; Juefei-Xu F; Ong YS; Xie X; Xue M; Liu Y
IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6976-6982. PubMed ID: 33886479
4. Hybrid computing using a neural network with dynamic external memory.
Graves A; Wayne G; Reynolds M; Harley T; Danihelka I; Grabska-Barwińska A; Colmenarejo SG; Grefenstette E; Ramalho T; Agapiou J; Badia AP; Hermann KM; Zwols Y; Ostrovski G; Cain A; King H; Summerfield C; Blunsom P; Kavukcuoglu K; Hassabis D
Nature; 2016 Oct; 538(7626):471-476. PubMed ID: 27732574
5. CSLM: Convertible Short-Term and Long-Term Memory in Differential Neural Computers.
Xiang S; Tang B
IEEE Trans Neural Netw Learn Syst; 2021 Sep; 32(9):4026-4038. PubMed ID: 32841126
6. Multifunctionality in a reservoir computer.
Flynn A; Tsachouridis VA; Amann A
Chaos; 2021 Jan; 31(1):013125. PubMed ID: 33754772
7. Experimentally validated memristive memory augmented neural network with efficient hashing and similarity search.
Mao R; Wen B; Kazemi A; Zhao Y; Laguna AF; Lin R; Wong N; Niemier M; Hu XS; Sheng X; Graves CE; Strachan JP; Li C
Nat Commun; 2022 Oct; 13(1):6284. PubMed ID: 36271072
8. Learning continuous chaotic attractors with a reservoir computer.
Smith LM; Kim JZ; Lu Z; Bassett DS
Chaos; 2022 Jan; 32(1):011101. PubMed ID: 35105129
9. Optimised weight programming for analogue memory-based deep neural networks.
Mackin C; Rasch MJ; Chen A; Timcheck J; Bruce RL; Li N; Narayanan P; Ambrogio S; Le Gallo M; Nandakumar SR; Fasoli A; Luquin J; Friz A; Sebastian A; Tsai H; Burr GW
Nat Commun; 2022 Jun; 13(1):3765. PubMed ID: 35773285
10. Noise-injected analog Ising machines enable ultrafast statistical sampling and machine learning.
Böhm F; Alonso-Urquijo D; Verschaffelt G; Van der Sande G
Nat Commun; 2022 Oct; 13(1):5847. PubMed ID: 36195589
11. Echo Memory-Augmented Network for time series classification.
Ma Q; Zheng Z; Zhuang W; Chen E; Wei J; Wang J
Neural Netw; 2021 Jan; 133():177-192. PubMed ID: 33220642
12. Transferring learning from external to internal weights in echo-state networks with sparse connectivity.
Sussillo D; Abbott LF
PLoS One; 2012; 7(5):e37372. PubMed ID: 22655041
13. Computational analysis of memory capacity in echo state networks.
Farkaš I; Bosák R; Gergeľ P
Neural Netw; 2016 Nov; 83():109-120. PubMed ID: 27599031
14. Time series classification with Echo Memory Networks.
Ma Q; Zhuang W; Shen L; Cottrell GW
Neural Netw; 2019 Sep; 117():225-239. PubMed ID: 31176962
15. Augmented Graph Neural Network with hierarchical global-based residual connections.
Rassil A; Chougrad H; Zouaki H
Neural Netw; 2022 Jun; 150():149-166. PubMed ID: 35313247
16. Sparsity-control ternary weight networks.
Deng X; Zhang Z
Neural Netw; 2022 Jan; 145():221-232. PubMed ID: 34773898
17. Neuromorphic Time-Multiplexed Reservoir Computing With On-the-Fly Weight Generation for Edge Devices.
Gupta S; Chakraborty S; Thakur CS
IEEE Trans Neural Netw Learn Syst; 2022 Jun; 33(6):2676-2685. PubMed ID: 34125686
18. Understanding and mitigating noise in trained deep neural networks.
Semenova N; Larger L; Brunner D
Neural Netw; 2022 Feb; 146():151-160. PubMed ID: 34864223
19. A hybrid quantum-classical neural network with deep residual learning.
Liang Y; Peng W; Zheng ZJ; Silvén O; Zhao G
Neural Netw; 2021 Nov; 143():133-147. PubMed ID: 34139629
20. DANTE: Deep alternations for training neural networks.
Sinha VB; Kudugunta S; Sankar AR; Chavali ST; Balasubramanian VN
Neural Netw; 2020 Nov; 131():127-143. PubMed ID: 32771843