134 related articles for the article with PubMed ID 33003849

  • 1. A two-stage deep learning algorithm for talker-independent speaker separation in reverberant conditions.
    Delfarah M; Liu Y; Wang D
    J Acoust Soc Am; 2020 Sep; 148(3):1157. PubMed ID: 33003849

  • 2. Deep Learning for Talker-dependent Reverberant Speaker Separation: An Empirical Study.
    Delfarah M; Wang D
    IEEE/ACM Trans Audio Speech Lang Process; 2019 Nov; 27(11):1839-1848. PubMed ID: 33748321

  • 3. Causal Deep CASA for Monaural Talker-Independent Speaker Separation.
    Liu Y; Wang D
    IEEE/ACM Trans Audio Speech Lang Process; 2020; 28():2109-2118. PubMed ID: 33178880

  • 4. A talker-independent deep learning algorithm to increase intelligibility for hearing-impaired listeners in reverberant competing talker conditions.
    Healy EW; Johnson EM; Delfarah M; Wang D
    J Acoust Soc Am; 2020 Jun; 147(6):4106. PubMed ID: 32611178

  • 5. Divide and Conquer: A Deep CASA Approach to Talker-independent Monaural Speaker Separation.
    Liu Y; Wang D
    IEEE/ACM Trans Audio Speech Lang Process; 2019; 27(12):2092-2102. PubMed ID: 33748322

  • 6. A causal and talker-independent speaker separation/dereverberation deep learning algorithm: Cost associated with conversion to real-time capable operation.
    Healy EW; Taherian H; Johnson EM; Wang D
    J Acoust Soc Am; 2021 Nov; 150(5):3976. PubMed ID: 34852625

  • 7. A dual-stream deep attractor network with multi-domain learning for speech dereverberation and separation.
    Chen H; Zhang P
    Neural Netw; 2021 Sep; 141():238-248. PubMed ID: 33930565

  • 8. Multi-microphone Complex Spectral Mapping for Utterance-wise and Continuous Speech Separation.
    Wang ZQ; Wang P; Wang D
    IEEE/ACM Trans Audio Speech Lang Process; 2021; 29():2001-2014. PubMed ID: 34212067

  • 9. Deep Learning Based Binaural Speech Separation in Reverberant Environments.
    Zhang X; Wang D
    IEEE/ACM Trans Audio Speech Lang Process; 2017 May; 25(5):1075-1084. PubMed ID: 29057291

  • 10. Deep learning based speaker separation and dereverberation can generalize across different languages to improve intelligibility.
    Healy EW; Johnson EM; Delfarah M; Krishnagiri DS; Sevich VA; Taherian H; Wang D
    J Acoust Soc Am; 2021 Oct; 150(4):2526. PubMed ID: 34717521

  • 11. Triple-0: Zero-shot denoising and dereverberation on an end-to-end frozen anechoic speech separation network.
    Gul S; Khan MS; Ur-Rehman A
    PLoS One; 2024; 19(7):e0301692. PubMed ID: 39012881

  • 12. Long short-term memory for speaker generalization in supervised speech separation.
    Chen J; Wang D
    J Acoust Soc Am; 2017 Jun; 141(6):4705. PubMed ID: 28679261

  • 13. Online Binaural Speech Separation of Moving Speakers with a Wavesplit Network.
    Han C; Mesgarani N
    Proc IEEE Int Conf Acoust Speech Signal Process; 2023 Jun; 2023():. PubMed ID: 37577180

  • 14. Brain-informed speech separation (BISS) for enhancement of target speaker in multitalker speech perception.
    Ceolini E; Hjortkjær J; Wong DDE; O'Sullivan J; Raghavan VS; Herrero J; Mehta AD; Liu SC; Mesgarani N
    Neuroimage; 2020 Dec; 223():117282. PubMed ID: 32828921

  • 15. Impact of Different Acoustic Components on EEG-Based Auditory Attention Decoding in Noisy and Reverberant Conditions.
    Aroudi A; Mirkovic B; De Vos M; Doclo S
    IEEE Trans Neural Syst Rehabil Eng; 2019 Apr; 27(4):652-663. PubMed ID: 30843845

  • 16. Speaker separation in realistic noise environments with applications to a cognitively-controlled hearing aid.
    Borgström BJ; Brandstein MS; Ciccarelli GA; Quatieri TF; Smalt CJ
    Neural Netw; 2021 Aug; 140():136-147. PubMed ID: 33765529

  • 17. Noise-robust cortical tracking of attended speech in real-world acoustic scenes.
    Fuglsang SA; Dau T; Hjortkjær J
    Neuroimage; 2017 Aug; 156():435-444. PubMed ID: 28412441

  • 18. Two-stage Deep Learning for Noisy-reverberant Speech Enhancement.
    Zhao Y; Wang ZQ; Wang D
    IEEE/ACM Trans Audio Speech Lang Process; 2019 Jan; 27(1):53-62. PubMed ID: 31106230

  • 19. Supervised Speech Separation Based on Deep Learning: An Overview.
    Wang D; Chen J
    IEEE/ACM Trans Audio Speech Lang Process; 2018 Oct; 26(10):1702-1726. PubMed ID: 31223631

  • 20. EEG-based auditory attention detection: boundary conditions for background noise and speaker positions.
    Das N; Bertrand A; Francart T
    J Neural Eng; 2018 Dec; 15(6):066017. PubMed ID: 30207293
