These tools are no longer maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

133 related articles for article (PubMed ID: 35615470)

  • 1. Neural Network Training With Asymmetric Crosspoint Elements.
    Onen M; Gokmen T; Todorov TK; Nowicki T; Del Alamo JA; Rozen J; Haensch W; Kim S
    Front Artif Intell; 2022; 5():891624. PubMed ID: 35615470

  • 2. Impact of Asymmetric Weight Update on Neural Network Training With Tiki-Taka Algorithm.
    Lee C; Noh K; Ji W; Gokmen T; Kim S
    Front Neurosci; 2021; 15():767953. PubMed ID: 35069098

  • 3. Enabling Training of Neural Networks on Noisy Hardware.
    Gokmen T
    Front Artif Intell; 2021; 4():699148. PubMed ID: 34568813

  • 4. Algorithm for Training Neural Networks on Resistive Device Arrays.
    Gokmen T; Haensch W
    Front Neurosci; 2020; 14():103. PubMed ID: 32174807

  • 5. Design and characterization of superconducting nanowire-based processors for acceleration of deep neural network training.
    Onen M; Butters BA; Toomey E; Gokmen T; Berggren KK
    Nanotechnology; 2020 Jan; 31(2):025204. PubMed ID: 31553955

  • 6. Sign backpropagation: An on-chip learning algorithm for analog RRAM neuromorphic computing systems.
    Zhang Q; Wu H; Yao P; Zhang W; Gao B; Deng N; Qian H
    Neural Netw; 2018 Dec; 108():217-223. PubMed ID: 30216871

  • 7. A Low-Latency DNN Accelerator Enabled by DFT-Based Convolution Execution Within Crossbar Arrays.
    Veluri H; Chand U; Chen CK; Thean AV
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 38019632

  • 8. A Learning-Rate Modulable and Reliable TiOx Memristor Array for Robust, Fast, and Accurate Neuromorphic Computing.
    Jang J; Gi S; Yeo I; Choi S; Jang S; Ham S; Lee B; Wang G
    Adv Sci (Weinh); 2022 Aug; 9(22):e2201117. PubMed ID: 35666073

  • 9. Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations.
    Gokmen T; Vlasov Y
    Front Neurosci; 2016; 10():333. PubMed ID: 27493624

  • 10. Training fully connected networks with resistive memories: impact of device failures.
    Romero LP; Ambrogio S; Giordano M; Cristiano G; Bodini M; Narayanan P; Tsai H; Shelby RM; Burr GW
    Faraday Discuss; 2019 Feb; 213(0):371-391. PubMed ID: 30357183

  • 11. Memristors for Neuromorphic Circuits and Artificial Intelligence Applications.
    Miranda E; Suñé J
    Materials (Basel); 2020 Feb; 13(4):. PubMed ID: 32093164

  • 12. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices.
    Gokmen T; Onen M; Haensch W
    Front Neurosci; 2017; 11():538. PubMed ID: 29066942

  • 13. Impact of Synaptic Device Variations on Classification Accuracy in a Binarized Neural Network.
    Kim S; Kim HD; Choi SJ
    Sci Rep; 2019 Oct; 9(1):15237. PubMed ID: 31645636

  • 14. In situ Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory.
    Li Y; Xiao TP; Bennett CH; Isele E; Melianas A; Tao H; Marinella MJ; Salleo A; Fuller EJ; Talin AA
    Front Neurosci; 2021; 15():636127. PubMed ID: 33897351

  • 15. Signal and noise extraction from analog memory elements for neuromorphic computing.
    Gong N; Idé T; Kim S; Boybat I; Sebastian A; Narayanan V; Ando T
    Nat Commun; 2018 May; 9(1):2102. PubMed ID: 29844421

  • 16. Linear conductance update improvement of CMOS-compatible second-order memristors for fast and energy-efficient training of a neural network using a memristor crossbar array.
    Park SO; Park T; Jeong H; Hong S; Seo S; Kwon Y; Lee J; Choi S
    Nanoscale Horiz; 2023 Sep; 8(10):1366-1376. PubMed ID: 37403772

  • 17. On-Chip Training Spiking Neural Networks Using Approximated Backpropagation With Analog Synaptic Devices.
    Kwon D; Lim S; Bae JH; Lee ST; Kim H; Seo YT; Oh S; Kim J; Yeom K; Park BG; Lee JH
    Front Neurosci; 2020; 14():423. PubMed ID: 32733180

  • 18. Thousands of conductance levels in memristors integrated on CMOS.
    Rao M; Tang H; Wu J; Song W; Zhang M; Yin W; Zhuo Y; Kiani F; Chen B; Jiang X; Liu H; Chen HY; Midya R; Ye F; Jiang H; Wang Z; Wu M; Hu M; Wang H; Xia Q; Ge N; Li J; Yang JJ
    Nature; 2023 Mar; 615(7954):823-829. PubMed ID: 36991190

  • 19. A Low-Power DNN Accelerator Enabled by a Novel Staircase RRAM Array.
    Veluri H; Chand U; Li Y; Tang B; Thean AV
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4416-4427. PubMed ID: 34669580

  • 20. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
    Dorzhigulov A; Saxena V
    Front Neurosci; 2023; 17():1177592. PubMed ID: 37534034
