
PUBMED FOR HANDHELDS


  • Title: Comparison of speech and music input in North American infants' home environment over the first 2 years of life.
    Author: Hippe L, Hennessy V, Ramirez NF, Zhao TC.
    Journal: Dev Sci; 2024 Sep; 27(5):e13528. PubMed ID: 38770599.
    Abstract:
    Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. Across cultures, speech and music are two dominant auditory signals in infants' daily lives. Decades of research have repeatedly shown that both the quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants' home environments at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s) randomly sampled from the recordings using Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input, and that this gap widens as infants get older. At every age point, infants were exposed to more music from an electronic device than from an in-person source; this pattern was reversed for speech. The percentage of input intended for infants remained the same over time for music, while that percentage significantly increased for speech. We propose possible explanations for the limited music input compared to speech input observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats of using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj_sEaBMN4
    RESEARCH HIGHLIGHTS:
      • This study is the first to compare music input to speech input in infants' natural home environment across infancy.
      • We utilized a crowdsourcing approach to annotate a longitudinal dataset of daylong audio recordings collected in North American home environments.
      • Our main results show that infants overall receive significantly more speech input than music input, and this gap widens as infants get older.
      • Our results also showed that the music input was largely from electronic devices and not intended for the infants, a pattern opposite to speech input.