These tools will no longer be maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.
126 related articles for article (PubMed ID: 38610256)
1. Efficient Speech Detection in Environmental Audio Using Acoustic Recognition and Knowledge Distillation. Priebe D; Ghani B; Stowell D. Sensors (Basel). 2024 Mar; 24(7). PubMed ID: 38610256.
2. Deep learning in automatic detection of dysphonia: Comparing acoustic features and developing a generalizable framework. Chen Z; Zhu P; Qiu W; Guo J; Li Y. Int J Lang Commun Disord. 2023 Mar; 58(2):279-294. PubMed ID: 36117378.
3. Computational bioacoustics with deep learning: a review and roadmap. Stowell D. PeerJ. 2022; 10:e13152. PubMed ID: 35341043.
4. Zero-shot test time adaptation via knowledge distillation for personalized speech denoising and dereverberation. Kim S; Athi M; Shi G; Kim M; Kristjansson T. J Acoust Soc Am. 2024 Feb; 155(2):1353-1367. PubMed ID: 38364043.
5. End-to-end emotional speech recognition using acoustic model adaptation based on knowledge distillation. Yun HI; Park JS. Multimed Tools Appl. 2023; 82(15):22759-22776. PubMed ID: 36817556.
6. A deep learning knowledge distillation framework using knee MRI and arthroscopy data for meniscus tear detection. Ying M; Wang Y; Yang K; Wang H; Liu X. Front Bioeng Biotechnol. 2023; 11:1326706. PubMed ID: 38292305.
7. MSKD: Structured knowledge distillation for efficient medical image segmentation. Zhao L; Qian X; Guo Y; Song J; Hou J; Gong J. Comput Biol Med. 2023 Sep; 164:107284. PubMed ID: 37572439.
8. Soundscape analysis using eco-acoustic indices for the birds biodiversity assessment in urban parks (case study: Isfahan City, Iran). Latifi M; Fakheran S; Moshtaghie M; Ranaie M; Tussi PM. Environ Monit Assess. 2023 May; 195(6):629. PubMed ID: 37127732.
9. Light-M: An efficient lightweight medical image segmentation framework for resource-constrained IoMT. Zhang Y; Chen Z; Yang X. Comput Biol Med. 2024 Mar; 170:108088. PubMed ID: 38320339.
10. A lightweight speech recognition method with target-swap knowledge distillation for Mandarin air traffic control communications. Ren J; Yang S; Shi Y; Yang J. PeerJ Comput Sci. 2023; 9:e1650. PubMed ID: 38077570.
11. Acoustic indices as proxies for biodiversity: a meta-analysis. Alcocer I; Lima H; Sugai LSM; Llusia D. Biol Rev Camb Philos Soc. 2022 Dec; 97(6):2209-2236. PubMed ID: 35978471.
12. Efficient image classification through collaborative knowledge distillation: A novel AlexNet modification approach. Kuldashboy A; Umirzakova S; Allaberdiev S; Nasimov R; Abdusalomov A; Cho YI. Heliyon. 2024 Jul; 10(14):e34376. PubMed ID: 39113984.
13. Teacher-student complementary sample contrastive distillation. Bao Z; Huang Z; Gou J; Du L; Liu K; Zhou J; Chen Y. Neural Netw. 2024 Feb; 170:176-189. PubMed ID: 37989039.
14. Research on Chinese Speech Emotion Recognition Based on Deep Neural Network and Acoustic Features. Lee MC; Yeh SC; Chang JW; Chen ZY. Sensors (Basel). 2022 Jun; 22(13). PubMed ID: 35808238.
15. Applications and advances in acoustic monitoring for infectious disease epidemiology. Johnson E; Campos-Cerqueira M; Jumail A; Yusni ASA; Salgado-Lynn M; Fornace K. Trends Parasitol. 2023 May; 39(5):386-399. PubMed ID: 36842917.
16. Joint learning method with teacher-student knowledge distillation for on-device breast cancer image classification. Sepahvand M; Abdali-Mohammadi F. Comput Biol Med. 2023 Mar; 155:106476. PubMed ID: 36841060.
17. Acoustic compression in Zoom audio does not compromise voice recognition performance. Perepelytsia V; Dellwo V. Sci Rep. 2023 Oct; 13(1):18742. PubMed ID: 37907749.
18. Knowledge distillation under ideal joint classifier assumption. Li H; Chen X; Ditzler G; Roveda J; Li A. Neural Netw. 2024 May; 173:106160. PubMed ID: 38330746.
19. Adversarial learning-based multi-level dense-transmission knowledge distillation for AP-ROP detection. Xie H; Liu Y; Lei H; Song T; Yue G; Du Y; Wang T; Zhang G; Lei B. Med Image Anal. 2023 Feb; 84:102725. PubMed ID: 36527770.
20. Mitigating carbon footprint for knowledge distillation based deep learning model compression. Rafat K; Islam S; Mahfug AA; Hossain MI; Rahman F; Momen S; Rahman S; Mohammed N. PLoS One. 2023; 18(5):e0285668. PubMed ID: 37186614.
19. Adversarial learning-based multi-level dense-transmission knowledge distillation for AP-ROP detection. Xie H; Liu Y; Lei H; Song T; Yue G; Du Y; Wang T; Zhang G; Lei B Med Image Anal; 2023 Feb; 84():102725. PubMed ID: 36527770 [TBL] [Abstract][Full Text] [Related]
20. Mitigating carbon footprint for knowledge distillation based deep learning model compression. Rafat K; Islam S; Mahfug AA; Hossain MI; Rahman F; Momen S; Rahman S; Mohammed N PLoS One; 2023; 18(5):e0285668. PubMed ID: 37186614 [TBL] [Abstract][Full Text] [Related] [Next] [New Search]