2. Global importance analysis: an interpretability method to quantify importance of genomic features in deep neural networks. Koo PK, Majdandzic A, Ploenzke M, Anand P, Paul SB. PLoS Comput Biol. 2021 May;17(5):e1008925. PubMed ID: 33983921.
3. Interpretation of deep learning in genomics and epigenomics. Talukder A, Barham C, Li X, Hu H. Brief Bioinform. 2021 May;22(3). PubMed ID: 34020542.
4. Explaining decisions of graph convolutional neural networks: patient-specific molecular subnetworks responsible for metastasis prediction in breast cancer. Chereda H, Bleckmann A, Menck K, Perera-Bel J, Stegmaier P, Auer F, Kramer F, Leha A, Beißbarth T. Genome Med. 2021 Mar;13(1):42. PubMed ID: 33706810.
9. ENNGene: an Easy Neural Network model building tool for Genomics. Chalupová E, Vaculík O, Poláček J, Jozefov F, Majtner T, Alexiou P. BMC Genomics. 2022 Mar;23(1):248. PubMed ID: 35361122.
10. Explainable deep transfer learning model for disease risk prediction using high-dimensional genomic data. Liu L, Meng Q, Weng C, Lu Q, Wang T, Wen Y. PLoS Comput Biol. 2022 Jul;18(7):e1010328. PubMed ID: 35839250.
11. Discovering epistatic feature interactions from neural network models of regulatory DNA sequences. Greenside P, Shimko T, Fordyce P, Kundaje A. Bioinformatics. 2018 Sep;34(17):i629-i637. PubMed ID: 30423062.
12. Interpretable Artificial Intelligence through Locality Guided Neural Networks. Tan R, Gao L, Khan N, Guan L. Neural Netw. 2022 Nov;155:58-73. PubMed ID: 36041281.
13. Interpretable single-cell transcription factor prediction based on deep learning with attention mechanism. Gong M, He Y, Wang M, Zhang Y, Ding C. Comput Biol Chem. 2023 Oct;106:107923. PubMed ID: 37598467.
14. Interpretable neural architecture search and transfer learning for understanding CRISPR-Cas9 off-target enzymatic reactions. Zhang Z, Lamson AR, Shelley M, Troyanskaya O. Nat Comput Sci. 2023 Dec;3(12):1056-1066. PubMed ID: 38177723.
16. Visual interpretability in 3D brain tumor segmentation network. Saleem H, Shahid AR, Raza B. Comput Biol Med. 2021 Jun;133:104410. PubMed ID: 33894501.
17. SpliceRover: interpretable convolutional neural networks for improved splice site prediction. Zuallaert J, Godin F, Kim M, Soete A, Saeys Y, De Neve W. Bioinformatics. 2018 Dec;34(24):4180-4188. PubMed ID: 29931149.
18. Knowledge-primed neural networks enable biologically interpretable deep learning on single-cell sequencing data. Fortelny N, Bock C. Genome Biol. 2020 Aug;21(1):190. PubMed ID: 32746932.
19. Correcting gradient-based interpretations of deep neural networks for genomics. Majdandzic A, Rajesh C, Koo PK. Genome Biol. 2023 May;24(1):109. PubMed ID: 37161475.
20. Multinomial Convolutions for Joint Modeling of Regulatory Motifs and Sequence Activity Readouts. Park M, Singh S, Khan SR, Abrar MA, Grisanti F, Rahman MS, Samee MAH. Genes (Basel). 2022 Sep;13(9). PubMed ID: 36140783.