115 related articles for the article with PubMed ID 35275829
1. Prototype-Based Interpretation of the Functionality of Neurons in Winner-Take-All Neural Networks. Zarei-Sabzevar R; Ghiasi-Shirazi K; Harati A. IEEE Trans Neural Netw Learn Syst; 2023 Nov; 34(11):9016-9028. PubMed ID: 35275829
2. A novel recurrent neural network with one neuron and finite-time convergence for k-winners-take-all operation. Liu Q; Dang C; Cao J. IEEE Trans Neural Netw; 2010 Jul; 21(7):1140-8. PubMed ID: 20659863
3. Dynamic analysis of a general class of winner-take-all competitive neural networks. Fang Y; Cohen MA; Kincaid TG. IEEE Trans Neural Netw; 2010 May; 21(5):771-83. PubMed ID: 20215068
4. Layer Winner-Take-All neural networks based on existing competitive structures. Chen CM; Yang JF. IEEE Trans Syst Man Cybern B Cybern; 2000; 30(1):25-30. PubMed ID: 18244726
5. Emergent Inference of Hidden Markov Models in Spiking Neural Networks Through Winner-Take-All. Yu Z; Guo S; Deng F; Yan Q; Huang K; Liu JK; Chen F. IEEE Trans Cybern; 2020 Mar; 50(3):1347-1354. PubMed ID: 30295641
6. Margined winner-take-all: New learning rule for pattern recognition. Fukushima K. Neural Netw; 2018 Jan; 97:152-161. PubMed ID: 29126068
7. Robust k-WTA Network Generation, Analysis, and Applications to Multiagent Coordination. Qi Y; Jin L; Luo X; Shi Y; Liu M. IEEE Trans Cybern; 2022 Aug; 52(8):8515-8527. PubMed ID: 34133299
8. Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity. Zhang C; Liu A; Liu X; Xu Y; Yu H; Ma Y; Li T. IEEE Trans Image Process; 2021; 30:1291-1304. PubMed ID: 33290221
9. A 4K-Input High-Speed Winner-Take-All (WTA) Circuit with Single-Winner Selection for Change-Driven Vision Sensors. Pardo F; Reig C; Boluda JA; Vegara F. Sensors (Basel); 2019 Jan; 19(2). PubMed ID: 30669700
10. Initialization-Based k-Winners-Take-All Neural Network Model Using Modified Gradient Descent. Zhang Y; Li S; Geng G. IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4130-4138. PubMed ID: 34752408
11. Training winner-take-all simultaneous recurrent neural networks. Cai X; Prokhorov DV; Wunsch DC. IEEE Trans Neural Netw; 2007 May; 18(3):674-84. PubMed ID: 17526335
12. Efficient IntVec: High recognition rate with reduced computational cost. Fukushima K. Neural Netw; 2019 Nov; 119:323-331. PubMed ID: 31499356
13. Computation with spikes in a winner-take-all network. Oster M; Douglas R; Liu SC. Neural Comput; 2009 Sep; 21(9):2437-65. PubMed ID: 19548795
14. Selective positive-negative feedback produces the winner-take-all competition in recurrent neural networks. Li S; Liu B; Li Y. IEEE Trans Neural Netw Learn Syst; 2013 Feb; 24(2):301-9. PubMed ID: 24808283
15. An Online Unsupervised Structural Plasticity Algorithm for Spiking Neural Networks. Roy S; Basu A. IEEE Trans Neural Netw Learn Syst; 2017 Apr; 28(4):900-910. PubMed ID: 27411229
17. A general mean-based iterative winner-take-all neural network. Yang JF; Chen CM; Wang WC; Lee JY. IEEE Trans Neural Netw; 1995; 6(1):14-24. PubMed ID: 18263281
18. Towards Adversarial Robustness for Multi-Mode Data through Metric Learning. Khan S; Chen JC; Liao WH; Chen CS. Sensors (Basel); 2023 Jul; 23(13). PubMed ID: 37448021
19. Dynamics of a Winner-Take-All Neural Network. Kincaid TG; Cohen MA; Fang Y. Neural Netw; 1996 Oct; 9(7):1141-1154. PubMed ID: 12662589
20. A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation. Vuković N; Miljković Z. Neural Netw; 2013 Oct; 46:210-26. PubMed ID: 23811384