These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

141 related articles for article (PubMed ID: 34717460)

  • 1. Spatial alignment between faces and voices improves selective attention to audio-visual speech.
    Fleming JT; Maddox RK; Shinn-Cunningham BG
    J Acoust Soc Am; 2021 Oct; 150(4):3085. PubMed ID: 34717460

  • 2. Gaze patterns and audiovisual speech enhancement.
    Yi A; Wong W; Eizenman M
    J Speech Lang Hear Res; 2013 Apr; 56(2):471-80. PubMed ID: 23275394

  • 3. Interaction of bottom-up and top-down neural mechanisms in spatial multi-talker speech perception.
    Patel P; van der Heijden K; Bickel S; Herrero JL; Mehta AD; Mesgarani N
    Curr Biol; 2022 Sep; 32(18):3971-3986.e4. PubMed ID: 35973430

  • 4. The role of visual speech cues in reducing energetic and informational masking.
    Helfer KS; Freyman RL
    J Acoust Soc Am; 2005 Feb; 117(2):842-9. PubMed ID: 15759704

  • 5. Integrating speech information across talkers, gender, and sensory modality: female faces and male voices in the McGurk effect.
    Green KP; Kuhl PK; Meltzoff AN; Stevens EB
    Percept Psychophys; 1991 Dec; 50(6):524-36. PubMed ID: 1780200

  • 6. Talker-specific learning in speech perception.
    Nygaard LC; Pisoni DB
    Percept Psychophys; 1998 Apr; 60(3):355-76. PubMed ID: 9599989

  • 7. Lexical and indexical cues in masking by competing speech.
    Helfer KS; Freyman RL
    J Acoust Soc Am; 2009 Jan; 125(1):447-56. PubMed ID: 19173430

  • 8. Simple displays of talker location improve voice identification performance in multitalker, spatialized audio environments.
    Kilgore RM
    Hum Factors; 2009 Apr; 51(2):224-39. PubMed ID: 19653485

  • 9. Audio and visual cues in a two-talker divided attention speech-monitoring task.
    Brungart DS; Kordik AJ; Simpson BD
    Hum Factors; 2005; 47(3):562-73. PubMed ID: 16435697

  • 10. The effect of audiovisual and binaural listening on the acceptable noise level (ANL): establishing an ANL conceptual model.
    Wu YH; Stangl E; Pang C; Zhang X
    J Am Acad Audiol; 2014 Feb; 25(2):141-53. PubMed ID: 24828215

  • 11. Development and evaluation of the listening in spatialized noise test.
    Cameron S; Dillon H; Newall P
    Ear Hear; 2006 Feb; 27(1):30-42. PubMed ID: 16446563

  • 12. Visual Speech Benefit in Clear and Degraded Speech Depends on the Auditory Intelligibility of the Talker and the Number of Background Talkers.
    Blackburn CL; Kitterick PT; Jones G; Sumner CJ; Stacey PC
    Trends Hear; 2019; 23():2331216519837866. PubMed ID: 30909814

  • 13. Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies.
    van Hoesel RJ
    J Assoc Res Otolaryngol; 2015 Apr; 16(2):309-15. PubMed ID: 25582430

  • 14. Implicit and explicit learning in talker identification.
    Lee JJ; Perrachione TK
    Atten Percept Psychophys; 2022 Aug; 84(6):2002-2015. PubMed ID: 35534783

  • 15. Validating a Method to Assess Lipreading, Audiovisual Gain, and Integration During Speech Reception With Cochlear-Implanted and Normal-Hearing Subjects Using a Talking Head.
    Schreitmüller S; Frenken M; Bentz L; Ortmann M; Walger M; Meister H
    Ear Hear; 2018; 39(3):503-516. PubMed ID: 29068860

  • 16. Speech-material and talker effects in speech band importance.
    Yoho SE; Healy EW; Youngdahl CL; Barrett TS; Apoux F
    J Acoust Soc Am; 2018 Mar; 143(3):1417. PubMed ID: 29604719

  • 17. EEG activity evoked in preparation for multi-talker listening by adults and children.
    Holmes E; Kitterick PT; Summerfield AQ
    Hear Res; 2016 Jun; 336():83-100. PubMed ID: 27178442

  • 18. Clearly, fame isn't everything: Talker familiarity does not augment talker adaptation.
    Hatter ER; King CJ; Shorey AE; Stilp CE
    Atten Percept Psychophys; 2024 Apr; 86(3):962-975. PubMed ID: 36417128

  • 19. Psychobiological Responses Reveal Audiovisual Noise Differentially Challenges Speech Recognition.
    Bidelman GM; Brown B; Mankel K; Nelms Price C
    Ear Hear; 2020; 41(2):268-277. PubMed ID: 31283529

  • 20. Speech-In-Noise Comprehension is Improved When Viewing a Deep-Neural-Network-Generated Talking Face.
    Shan T; Wenner CE; Xu C; Duan Z; Maddox RK
    Trends Hear; 2022; 26():23312165221136934. PubMed ID: 36384325
