


118 related articles for article (PubMed ID: 36850648)

  • 1. Improving Speech Recognition Performance in Noisy Environments by Enhancing Lip Reading Accuracy.
    Li D; Gao Y; Zhu C; Wang Q; Wang R
    Sensors (Basel); 2023 Feb; 23(4):. PubMed ID: 36850648

  • 2. Lip-Reading Enables the Brain to Synthesize Auditory Features of Unknown Silent Speech.
    Bourguignon M; Baart M; Kapnoula EC; Molinaro N
    J Neurosci; 2020 Jan; 40(5):1053-1065. PubMed ID: 31889007

  • 3. Deep Audio-Visual Speech Recognition.
    Afouras T; Chung JS; Senior A; Vinyals O; Zisserman A
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):8717-8727. PubMed ID: 30582526

  • 4. MEG Activity in Visual and Auditory Cortices Represents Acoustic Speech-Related Information during Silent Lip Reading.
    Bröhl F; Keitel A; Kayser C
    eNeuro; 2022; 9(3):. PubMed ID: 35728955

  • 5. [Intermodal timing cues for audio-visual speech recognition].
    Hashimoto M; Kumashiro M
    J UOEH; 2004 Jun; 26(2):215-25. PubMed ID: 15244074

  • 6. Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild.
    He Y; Seng KP; Ang LM
    Sensors (Basel); 2023 Feb; 23(4):. PubMed ID: 36850432

  • 7. Visual Enhancement of Relevant Speech in a 'Cocktail Party'.
    Jaha N; Shen S; Kerlin JR; Shahin AJ
    Multisens Res; 2020 Feb; 33(3):277-294. PubMed ID: 32508080

  • 8. Seeing to hear better: evidence for early audio-visual interactions in speech identification.
    Schwartz JL; Berthommier F; Savariaux C
    Cognition; 2004 Sep; 93(2):B69-78. PubMed ID: 15147940

  • 9. Multimodal Sparse Transformer Network for Audio-Visual Speech Recognition.
    Song Q; Sun B; Li S
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):10028-10038. PubMed ID: 35412992

  • 10. Audio-visual matching of speech and non-speech oral gestures in patients with aphasia and apraxia of speech.
    Schmid G; Ziegler W
    Neuropsychologia; 2006; 44(4):546-55. PubMed ID: 16129459

  • 11. During Lipreading Training With Sentence Stimuli, Feedback Controls Learning and Generalization to Audiovisual Speech in Noise.
    Bernstein LE; Auer ET; Eberhardt SP
    Am J Audiol; 2022 Mar; 31(1):57-77. PubMed ID: 34965362

  • 12. A novel approach to study audiovisual integration in speech perception: localizer fMRI and sparse sampling.
    Szycik GR; Tausche P; Münte TF
    Brain Res; 2008 Jul; 1220():142-9. PubMed ID: 17880929

  • 13. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.
    Ma WJ; Zhou X; Ross LA; Foxe JJ; Parra LC
    PLoS One; 2009; 4(3):e4638. PubMed ID: 19259259

  • 14. Lip-read me now, hear me better later: cross-modal transfer of talker-familiarity effects.
    Rosenblum LD; Miller RM; Sanchez K
    Psychol Sci; 2007 May; 18(5):392-6. PubMed ID: 17576277

  • 15. The self-advantage in visual speech processing enhances audiovisual speech recognition in noise.
    Tye-Murray N; Spehar BP; Myerson J; Hale S; Sommers MS
    Psychon Bull Rev; 2015 Aug; 22(4):1048-53. PubMed ID: 25421408

  • 16. Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech.
    Yu C; Yu J; Qian Z; Tan Y
    Sensors (Basel); 2023 Feb; 23(4):. PubMed ID: 36850669

  • 17. Correlating subword articulation with lip shapes for embedding aware audio-visual speech enhancement.
    Chen H; Du J; Hu Y; Dai LR; Yin BC; Lee CH
    Neural Netw; 2021 Nov; 143():171-182. PubMed ID: 34157642

  • 18. A study of lip movements during spontaneous dialog and its application to voice activity detection.
    Sodoyer D; Rivet B; Girin L; Savariaux C; Schwartz JL; Jutten C
    J Acoust Soc Am; 2009 Feb; 125(2):1184-96. PubMed ID: 19206891

  • 19. Electrophysiological evidence for Audio-visuo-lingual speech integration.
    Treille A; Vilain C; Schwartz JL; Hueber T; Sato M
    Neuropsychologia; 2018 Jan; 109():126-133. PubMed ID: 29248497

  • 20. Visual abilities are important for auditory-only speech recognition: evidence from autism spectrum disorder.
    Schelinski S; Riedel P; von Kriegstein K
    Neuropsychologia; 2014 Dec; 65():1-11. PubMed ID: 25283605
