

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

135 related articles for the article with PubMed ID 32678878

  • 1. Computational framework for fusing eye movements and spoken narratives for image annotation.
    Vaidyanathan P; Prud'hommeaux E; Alm CO; Pelz JB
    J Vis; 2020 Jul; 20(7):13. PubMed ID: 32678878

  • 2. Visual context constrains language-mediated anticipatory eye movements.
    Hintz F; Meyer AS; Huettig F
    Q J Exp Psychol (Hove); 2020 Mar; 73(3):458-467. PubMed ID: 31552807

  • 3. Literacy effects on language and vision: emergent effects from an amodal shared resource (ASR) computational model.
    Smith AC; Monaghan P; Huettig F
    Cogn Psychol; 2014 Dec; 75():28-54. PubMed ID: 25171049

  • 4. Eye movements and lexical access in spoken-language comprehension: evaluating a linking hypothesis between fixations and linguistic processing.
    Tanenhaus MK; Magnuson JS; Dahan D; Chambers C
    J Psycholinguist Res; 2000 Nov; 29(6):557-80. PubMed ID: 11196063

  • 5. Gaze patterns and audiovisual speech enhancement.
    Yi A; Wong W; Eizenman M
    J Speech Lang Hear Res; 2013 Apr; 56(2):471-80. PubMed ID: 23275394

  • 6. Predictive Brain Mechanisms in Sound-to-Meaning Mapping during Speech Processing.
    Lyu B; Ge J; Niu Z; Tan LH; Gao JH
    J Neurosci; 2016 Oct; 36(42):10813-10822. PubMed ID: 27798136

  • 7. Using Eye Movements Recorded in the Visual World Paradigm to Explore the Online Processing of Spoken Language.
    Zhan L
    J Vis Exp; 2018 Oct; (140):. PubMed ID: 30371678

  • 8. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.
    Shen W; Qu Q; Li X
    Atten Percept Psychophys; 2016 Jul; 78(5):1267-84. PubMed ID: 26993126

  • 9. Patterns of saliency and semantic features distinguish gaze of expert and novice viewers of surveillance footage.
    Peng Y; Burling JM; Todorova GK; Neary C; Pollick FE; Lu H
    Psychon Bull Rev; 2024 Aug; 31(4):1745-1758. PubMed ID: 38273144

  • 10. Differences in Working Memory Capacity Affect Online Spoken Word Recognition: Evidence From Eye Movements.
    Nitsan G; Wingfield A; Lavie L; Ben-David BM
    Trends Hear; 2019; 23():2331216519839624. PubMed ID: 31010398

  • 11. Probing the link between vision and language in material perception using psychophysics and unsupervised learning.
    Liao C; Sawayama M; Xiao B
    PLoS Comput Biol; 2024 Oct; 20(10):e1012481. PubMed ID: 39361707

  • 12. Meaning maps and saliency models based on deep convolutional neural networks are insensitive to image meaning when predicting human fixations.
    Pedziwiatr MA; Kümmerer M; Wallis TSA; Bethge M; Teufel C
    Cognition; 2021 Jan; 206():104465. PubMed ID: 33096374

  • 13. From spoken narratives to domain knowledge: mining linguistic data for medical image understanding.
    Guo X; Yu Q; Alm CO; Calvelli C; Pelz JB; Shi P; Haake AR
    Artif Intell Med; 2014 Oct; 62(2):79-90. PubMed ID: 25174882

  • 14. Cortical encoding of acoustic and linguistic rhythms in spoken narratives.
    Luo C; Ding N
    Elife; 2020 Dec; 9():. PubMed ID: 33345775

  • 15. Cross-modal representation of spoken and written word meaning in left pars triangularis.
    Liuzzi AG; Bruffaerts R; Peeters R; Adamczuk K; Keuleers E; De Deyne S; Storms G; Dupont P; Vandenberghe R
    Neuroimage; 2017 Apr; 150():292-307. PubMed ID: 28213115

  • 16. Low-frequency neural activity reflects rule-based chunking during speech listening.
    Jin P; Lu Y; Ding N
    Elife; 2020 Apr; 9():. PubMed ID: 32310082

  • 17. Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for Visual Recognition.
    Mathe S; Sminchisescu C
    IEEE Trans Pattern Anal Mach Intell; 2015 Jul; 37(7):1408-24. PubMed ID: 26352449

  • 18. When meaning matters: The temporal dynamics of semantic influences on visual attention.
    de Groot F; Huettig F; Olivers CN
    J Exp Psychol Hum Percept Perform; 2016 Feb; 42(2):180-96. PubMed ID: 26322686

  • 19. A unified computational framework for visual attention dynamics.
    Zanca D; Gori M; Rufa A
    Prog Brain Res; 2019; 249():183-188. PubMed ID: 31325977

  • 20. A naturalistic viewing paradigm using 360° panoramic video clips and real-time field-of-view changes with eye-gaze tracking.
    Kim HC; Jin S; Jo S; Lee JH
    Neuroimage; 2020 Aug; 216():116617. PubMed ID: 32057996
