202 related articles for article (PubMed ID: 29474378)
41. Kano F; Tomonaga M. Perceptual mechanism underlying gaze guidance in chimpanzees and humans. Anim Cogn. 2011 May;14(3):377-86. PubMed ID: 21305329.
42. Le Meur O; Coutrot A. Introducing context-dependent and spatially-variant viewing biases in saccadic models. Vision Res. 2016 Apr;121:72-84. PubMed ID: 26898752.
43. Le Meur O; Coutrot A; Liu Z; Rama P; Le Roch A; Helo A. Visual Attention Saccadic Models Learn to Emulate Gaze Patterns From Childhood to Adulthood. IEEE Trans Image Process. 2017 Oct;26(10):4777-4789. PubMed ID: 28682255.
45. Lin RJ; Lin WS. A computational visual saliency model based on statistics and machine learning. J Vis. 2014 Aug;14(9). PubMed ID: 25084782.
46. Borji A; Sihite DN; Itti L. Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study. IEEE Trans Image Process. 2013 Jan;22(1):55-69. PubMed ID: 22868572.
47. Le Meur O; Le Pen T; Cozot R. Can we accurately predict where we look at paintings? PLoS One. 2020;15(10):e0239980. PubMed ID: 33035250.
48. Hu B; Kane-Jackson R; Niebur E. A proto-object based saliency model in three-dimensional space. Vision Res. 2016 Feb;119:42-9. PubMed ID: 26739278.
49. Zou X; Zhao X; Wang J; Yang Y. Learning to Model Task-Oriented Attention. Comput Intell Neurosci. 2016;2016:2381451. PubMed ID: 27247561.
50. Bindemann M. Scene and screen center bias early eye movements in scene viewing. Vision Res. 2010 Nov;50(23):2577-87. PubMed ID: 20732344.
52. Ma CY; Hang HM. Learning-based saliency model with depth information. J Vis. 2015;15(6):19. PubMed ID: 26024466.
53. Xu J; Jiang M; Wang S; Kankanhalli MS; Zhao Q. Predicting human gaze beyond pixels. J Vis. 2014 Jan;14(1). PubMed ID: 24474825.
54. Borji A; Itti L. State-of-the-art in visual attention modeling. IEEE Trans Pattern Anal Mach Intell. 2013 Jan;35(1):185-207. PubMed ID: 22487985.
55. Lv Y; Zhou W; Lei J; Ye L; Luo T. Attention-based fusion network for human eye-fixation prediction in 3D images. Opt Express. 2019 Nov;27(23):34056-34066. PubMed ID: 31878462.
56. Sun X; Hu Y; Zhang L; Chen Y; Li P; Xie Z; Liu Z. Camera-Assisted Video Saliency Prediction and Its Applications. IEEE Trans Cybern. 2018 Sep;48(9):2520-2530. PubMed ID: 29990269.
57. R-Tavakoli H; Atyabi A; Rantanen A; Laukka SJ; Nefti-Meziani S; Heikkilä J. Predicting the Valence of a Scene from Observers' Eye Movements. PLoS One. 2015;10(9):e0138198. PubMed ID: 26407322.
58. Underwood G; Humphrey K; van Loon E. Decisions about objects in real-world scenes are influenced by visual saliency before and during their inspection. Vision Res. 2011 Sep;51(18):2031-8. PubMed ID: 21820003.