Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Speech comprehension is the mental process by which a listener receives external speech input and derives meaning from it. In everyday communication, auditory speech comprehension is influenced by rhythmic information at multiple timescales; three kinds of external rhythm are common: the rhythm of prosodic structure, contextual rhythm, and the rhythm of the speaker's body language. These alter processes in speech comprehension such as phoneme discrimination, word perception, and speech intelligibility. Internal rhythm manifests as neural oscillations in the brain, which can represent the hierarchical features of external speech input at different timescales. Neural entrainment between external rhythmic stimuli and internal neural activity can optimize the brain's processing of speech stimuli and, modulated by the listener's top-down cognitive processes, further strengthen the internal representation of target speech. We argue that this entrainment may be the key mechanism by which internal and external rhythms are linked and jointly influence speech comprehension. Elucidating internal and external rhythms and the mechanism linking them offers a window onto speech, a complex sequence with structural regularities across multiple hierarchical timescales.

2.
Brain and Cognition, 2013, 81(3): 329-336
Humans perceive a wide range of temporal patterns, including those rhythms that occur in music, speech, and movement; however, there are constraints on the rhythmic patterns that we can represent. Past research has shown that sequences in which sounds occur regularly at non-metrical locations in a repeating beat period (non-integer ratio subdivisions of the beat, e.g. sounds at 430 ms in a 1000 ms beat) are represented less accurately than sequences with metrical relationships, where events occur at even subdivisions of the beat (integer ratios, e.g. sounds at 500 ms in a 1000 ms beat). Why do non-integer ratio rhythms present cognitive challenges? An emerging theory is that non-integer ratio sequences are represented incorrectly, “regularized” in the direction of the nearest metrical pattern, and the present study sought evidence of such perceptual regularization toward integer ratio relationships. Participants listened to metrical and non-metrical rhythmic auditory sequences during electroencephalogram recording, and sounds were pseudorandomly omitted from the stimulus sequence. Cortical responses to these omissions (omission elicited potentials; OEPs) were used to estimate the timing of expectations for omitted sounds in integer ratio and non-integer ratio locations. OEP amplitude and onset latency measures indicated that expectations for non-integer ratio sequences are distorted toward the nearest metrical location in the rhythmic period. These top-down effects demonstrate metrical regularization in a purely perceptual context, and provide support for dynamical accounts of rhythm perception.  相似文献   
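The stimulus construction described above can be sketched as follows. This is an illustrative sketch under assumed names (the function `sequence_onsets` is hypothetical, not the study's code): onset times for a repeating 1000 ms beat whose subdivision falls at an integer ratio (500 ms, metrical) versus a non-integer ratio (430 ms, non-metrical).

```python
# Illustrative stimulus sketch (hypothetical helper, not the study's code):
# sound onsets for a repeating beat with one subdivision per beat cycle.

def sequence_onsets(beat_ms, subdivision_ms, n_beats):
    """Return onset times (ms): a sound on each beat plus one at the subdivision."""
    onsets = []
    for beat in range(n_beats):
        start = beat * beat_ms
        onsets.append(start)                   # on-beat sound
        onsets.append(start + subdivision_ms)  # subdivision sound
    return onsets

# Integer ratio (metrical): subdivision at exactly half the 1000 ms beat.
metrical = sequence_onsets(1000, 500, 4)
# Non-integer ratio (non-metrical): subdivision at 430 ms, as in the abstract.
non_metrical = sequence_onsets(1000, 430, 4)

print(metrical)      # [0, 500, 1000, 1500, 2000, 2500, 3000, 3500]
print(non_metrical)  # [0, 430, 1000, 1430, 2000, 2430, 3000, 3430]
```

The regularization claim is that listeners' expectations for the 430 ms onsets drift toward the nearest metrical location (500 ms), which the omission-elicited potentials were used to probe.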

3.
Numerous music cultures use nonsense syllables to represent percussive sounds. Covert reciting of these syllable sequences along with percussion music aids active listeners in keeping track of music. Owing to the acoustic dissimilarity between the representative syllables and the referent percussive sounds, associative learning is necessary for the oral representation of percussion music. We used functional magnetic resonance imaging (fMRI) to explore the neural processes underlying oral rehearsals of music. There were four music conditions in the experiment: (1) passive listening to unlearned percussion music, (2) active listening to learned percussion music, (3) active listening to the syllable representation of (2), and (4) active listening to learned melodic music. Our results specified two neural substrates of the association mechanisms involved in the oral representation of percussion music. First, information integration of heard sounds and the auditory consequences of subvocal rehearsals may engage the right planum temporale during active listening to percussion music. Second, mapping heard sounds to articulatory and laryngeal gestures may engage the left middle premotor cortex.  相似文献   

4.
Walking to a pacing stimulus has proven useful in motor rehabilitation, and it has been suggested that spontaneous synchronization could be preferable to intentional synchronization. But it is still unclear if the paced walking effect can occur spontaneously, or if intentionality plays a role. The aim of this work is to analyze the effect of sound pacing on gait with and without instruction to synchronize, and with different rhythmic auditory cues, while walking on a treadmill.

Firstly, the baseline step frequency while walking on a treadmill was determined for all participants, followed by experimental sessions with both music and footstep sound cues. Participants were split into two groups, with one being instructed to synchronize their gait to the auditory stimuli, and the other being simply told to walk. Individual auditory cues were generated for each participant: for each trial, cues were provided at the participant’s baseline walking frequency, at 5% and 10% above baseline, and at 5% and 10% below baseline.

This study’s major finding was the role of intention in synchronization, given that only the instructed group synchronized their gait with the auditory cues. No differences were found between the effects of step or music stimuli on step frequency.

In conclusion, without intention or cues that direct the individual’s attention, spontaneous gait synchronization does not occur during treadmill walking.
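The five cue-rate conditions described above can be sketched as a simple computation; the function name is a hypothetical illustration, not the authors' code.

```python
# Hypothetical sketch (not the authors' code) of the five cue-rate conditions:
# cues at the participant's baseline step frequency and at +/-5% and +/-10%.

def cue_frequencies(baseline_hz):
    """Cue rates (steps per second) for the five trial conditions."""
    offsets = (-0.10, -0.05, 0.0, 0.05, 0.10)
    return [round(baseline_hz * (1 + p), 4) for p in offsets]

# A participant with a baseline cadence of 2 steps/s would receive cues at:
print(cue_frequencies(2.0))  # [1.8, 1.9, 2.0, 2.1, 2.2]
```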

5.
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non‐speech sounds. In this study, we investigated rhythmic perception of non‐linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants’ biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non‐linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.  相似文献   

6.
In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded ‘Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual ‘Morse-code’ sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities.  相似文献   

7.
Entrainment of walking to rhythmic auditory cues (e.g., metronome and/or music) improves gait in people with Parkinson's disease (PD). Studies on healthy individuals indicate that entrainment to pleasant musical rhythm can be more beneficial for gait facilitation than entrainment to isochronous rhythm, potentially as a function of emotional/motivational responses to music and their associated influence on motor function. Here, we sought to investigate how emotional attributes of music and isochronous cues influence stride and arm swing amplitude in people with PD. A within-subjects experimental trial was completed with persons with PD serving as their own controls. Twenty-three individuals with PD walked to cues of self-chosen pleasant music, pitch-distorted unpleasant music, and an emotionally neutral isochronous drumbeat. All music cues were tempo-matched to individual walking pace at baseline. Greater gait velocity, stride length, arm swing peak velocity and arm swing range of motion (RoM) were found when patients walked to pleasant music cues compared to baseline, walking to unpleasant music, and walking to isochronous cues. Cued walking in general marginally increased variability of stride-to-stride time and length compared with uncued walking. Enhanced stride and arm swing amplitude were most strongly associated with increases in perceived enjoyment and pleasant musical emotions such as power, tenderness, and joyful activation. Musical pleasure contributes to improvement of stride and arm swing amplitude in people with PD, independent of perceived familiarity with music, cognitive demands of music listening, and beat salience. Our findings aid in understanding the role of musical pleasure in invigorating gait in PD, and inform novel approaches for restoring or compensating for impaired motor circuits.

8.
The study assessed whether the auditory reference provided by a music scale could improve spatial exploration of a standard musical instrument keyboard in right‐brain‐damaged patients with left spatial neglect. As performing music scales involves the production of predictable successive pitches, the expectation of the subsequent note may help patients explore a larger extent of space on the affected left side during the production of music scales from right to left. Eleven right‐brain‐damaged stroke patients with left spatial neglect, 12 patients without neglect, and 12 age‐matched healthy participants played descending scales on a music keyboard. In a counterbalanced design, the participants' exploratory performance was assessed while producing scales in three feedback conditions: with congruent sound, no sound, or random sound feedback provided by the keyboard. The number of keys played and the timing of key press were recorded. Spatial exploration by patients with left neglect was superior with congruent sound feedback, compared to both the silence and random sound conditions. Both the congruent and incongruent sound conditions were associated with a greater deceleration in all groups. The frame provided by the music scale improves exploration of the left side of space, contralateral to the damaged right hemisphere, in patients with left neglect. Performing a scale with congruent sounds may, to some extent, trigger preserved auditory and spatial multisensory representations of successive sounds, thus influencing the time course of space scanning and ultimately resulting in more extensive spatial exploration. These findings also offer new perspectives for the rehabilitation of the disorder.

9.
People often move in synchrony with auditory rhythms (e.g., music), whereas synchronization of movement with purely visual rhythms is rare. In two experiments, this apparent attraction of movement to auditory rhythms was investigated by requiring participants to tap their index finger in synchrony with an isochronous auditory (tone) or visual (flashing light) target sequence while a distractor sequence was presented in the other modality at one of various phase relationships. The obtained asynchronies and their variability showed that auditory distractors strongly attracted participants' taps, whereas visual distractors had much weaker effects, if any. This asymmetry held regardless of the spatial congruence or relative salience of the stimuli in the two modalities. When different irregular timing patterns were imposed on target and distractor sequences, participants' taps tended to track the timing pattern of auditory distractor sequences when they were approximately in phase with visual target sequences, but not the reverse. These results confirm that rhythmic movement is more strongly attracted to auditory than to visual rhythms. To the extent that this is an innate proclivity, it may have been an important factor in the evolution of music.  相似文献   

10.
The human central auditory system has a remarkable ability to establish memory traces for invariant features in the acoustic environment despite continual acoustic variations in the sounds heard. By recording the memory-related mismatch negativity (MMN) component of the auditory electric and magnetic brain responses as well as behavioral performance, we investigated how subjects learn to discriminate changes in a melodic pattern presented at several frequency levels. In addition, we explored whether musical expertise facilitates this learning. Our data show that especially musicians who perform music primarily without a score learn easily to detect contour changes in a melodic pattern presented at variable frequency levels. After learning, their auditory cortex detects these changes even when their attention is directed away from the sounds. The present results thus show that, after perceptual learning during attentive listening has taken place, changes in a highly complex auditory pattern can be detected automatically by the human auditory cortex and, further, that this process is facilitated by musical expertise.  相似文献   

11.
Grammatical-specific language impairment (G-SLI) in children, arguably, provides evidence for the existence of a specialised grammatical sub-system in the brain, necessary for normal language development. Some researchers challenge this, claiming that domain-general, low-level auditory deficits, particular to rapid processing, cause phonological deficits and thereby SLI. We investigate this possibility by testing the auditory discrimination abilities of G-SLI children for speech and non-speech sounds, at varying presentation rates, and controlling for the effects of age and language on performance. For non-speech formant transitions, 69% of the G-SLI children showed normal auditory processing, whereas for the same acoustic information in speech, only 31% did so. For rapidly presented tones, 46% of the G-SLI children performed normally. Auditory performance with speech and non-speech sounds differentiated the G-SLI children from their age-matched controls, whereas speed of processing did not. The G-SLI children evinced no relationship between their auditory and phonological/grammatical abilities. We found no consistent evidence that a deficit in processing rapid acoustic information causes or maintains G-SLI. The findings, from at least those G-SLI children who do not exhibit any auditory deficits, provide further evidence supporting the existence of a primary domain-specific deficit underlying G-SLI.  相似文献   

12.
Auditory imagery has begun to attract attention in recent years; the relevant research covers three categories: auditory imagery of speech sounds, of musical sounds, and of environmental sounds. This article reviews cognitive-neuroscience research on the brain regions activated by these three types of auditory imagery, compares the similarities and differences between the regions involved in auditory imagery and in auditory perception, and outlines directions for future research on auditory imagery.

13.
Using appropriate stimuli to evoke emotions is especially important for researching emotion. Psychologists have provided several standardized affective stimulus databases for emotional experiments, such as the International Affective Picture System (IAPS) and the Nencki Affective Picture System (NAPS) for visual stimuli, and the International Affective Digitized Sounds (IADS) and the Montreal Affective Voices for auditory stimuli. However, given the limitations of existing auditory stimulus databases, research using auditory stimuli remains limited compared with research using visual stimuli. First, the number of sample sounds is limited, making it difficult to equate across emotional conditions and semantic categories. Second, some artificially created materials (music or human voice) may fail to accurately drive the intended emotional processes. Our principal aim was to expand the existing auditory affective sample databases to cover natural sounds more fully. We asked 207 participants to rate 935 sounds (including the sounds from the IADS-2) using the Self-Assessment Manikin (SAM) and three basic-emotion rating scales. The results showed that emotions in sounds can be distinguished on the affective rating scales, and the stability of the evaluations of sounds revealed that we have successfully provided a larger corpus of natural, emotionally evocative auditory stimuli, covering a wide range of semantic categories. Our expanded, standardized sound sample database may promote a wide range of research in auditory systems and the possible interactions with other sensory modalities, encouraging direct reliable comparisons of outcomes from different researchers in the field of psychology.

14.
Vocal learning is the modification of vocal output by reference to auditory information. It allows for the imitation and improvisation of sounds that otherwise would not occur. The emergence of this skill may have been a primary step in the evolution of human language, but vocal learning is not unique to humans. It also occurs in songbirds, where its biology can be studied with greater ease. What follows is a review of some of the salient anatomical, developmental, and behavioral features of vocal learning, alongside parallels and differences between vocal learning in songbirds and humans.  相似文献   

15.
Statistical analysis of timing errors.   (Cited by 8: 0 self-citations, 8 citations by others)
Human rhythmic activities are variable. Cycle-to-cycle fluctuations form the behavioral observable. Traditional analysis focuses on statistical measures such as mean and variance. In this article we show that, by treating the fluctuations as a time series, one can apply techniques such as power spectra and rescaled range analysis to gain insight into the mechanisms underlying the remarkable abilities of humans to perform a variety of rhythmic movements, from maintaining memorized temporal patterns to anticipating and timing their movements to predictable sensory stimuli.
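The rescaled range technique mentioned above can be sketched as follows. This is a generic illustration of the R/S statistic applied to a simulated series of inter-tap intervals; the function and data are assumptions for illustration, not the article's own analysis code.

```python
# Generic rescaled range (R/S) analysis on simulated timing fluctuations.
# Illustrative only; names and data are assumptions, not the article's code.
import math
import random

def rescaled_range(x):
    """R/S statistic: range of the mean-adjusted cumulative sum of the
    series, divided by its standard deviation."""
    n = len(x)
    mean = sum(x) / n
    cumulative, running = [], 0.0
    for value in x:
        running += value - mean      # accumulate deviations from the mean
        cumulative.append(running)
    spread = max(cumulative) - min(cumulative)
    std = math.sqrt(sum((value - mean) ** 2 for value in x) / n)
    return spread / std

random.seed(1)
# Simulated cycle-to-cycle fluctuations: inter-tap intervals around 500 ms.
taps = [500 + random.gauss(0, 10) for _ in range(256)]
print(rescaled_range(taps))
```

Estimating long-range structure then amounts to computing R/S over windows of increasing length n and fitting the slope of log(R/S) against log(n) (the Hurst exponent); a slope near 0.5 indicates uncorrelated fluctuations, while values above 0.5 indicate persistent correlations.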

16.
Prior studies have observed selective neural responses in the adult human auditory cortex to music and speech that cannot be explained by the differing lower-level acoustic properties of these stimuli. Does infant cortex exhibit similarly selective responses to music and speech shortly after birth? To answer this question, we attempted to collect functional magnetic resonance imaging (fMRI) data from 45 sleeping infants (2.0- to 11.9-weeks-old) while they listened to monophonic instrumental lullabies and infant-directed speech produced by a mother. To match acoustic variation between music and speech sounds we (1) recorded music from instruments that had a similar spectral range as female infant-directed speech, (2) used a novel excitation-matching algorithm to match the cochleagrams of music and speech stimuli, and (3) synthesized “model-matched” stimuli that were matched in spectrotemporal modulation statistics to (yet perceptually distinct from) music or speech. Of the 36 infants we collected usable data from, 19 had significant activations to sounds overall compared to scanner noise. From these infants, we observed a set of voxels in non-primary auditory cortex (NPAC) but not in Heschl's Gyrus that responded significantly more to music than to each of the other three stimulus types (but not significantly more strongly than to the background scanner noise). In contrast, our planned analyses did not reveal voxels in NPAC that responded more to speech than to model-matched speech, although other unplanned analyses did. These preliminary findings suggest that music selectivity arises within the first month of life. A video abstract of this article can be viewed at https://youtu.be/c8IGFvzxudk .

Research Highlights

  • Responses to music, speech, and control sounds matched for the spectrotemporal modulation-statistics of each sound were measured from 2- to 11-week-old sleeping infants using fMRI.
  • Auditory cortex was significantly activated by these stimuli in 19 out of 36 sleeping infants.
  • Selective responses to music compared to the three other stimulus classes were found in non-primary auditory cortex but not in nearby Heschl's Gyrus.
  • Selective responses to speech were not observed in planned analyses but were observed in unplanned, exploratory analyses.

17.
McDermott J, Hauser M. Cognition, 2004, 94(2): B11-B21
Humans find some sounds more pleasing than others; such preferences may underlie our enjoyment of music. To gain insight into the evolutionary origins of these preferences, we explored whether they are present in other animals. We designed a novel method to measure the spontaneous sound preferences of cotton-top tamarins, a species that has been extensively tested for other perceptual abilities. Animals were placed in a V-shaped maze, and their position within the maze controlled their auditory environment. One sound was played when they were in one branch of the maze, and a different sound for the opposite branch; no food was delivered during testing. We used the proportion of time spent in each branch as a measure of preference. The first two experiments were designed as tests of our method. In Experiment 1, we used loud and soft white noise as stimuli; all animals spent most of their time on the side with soft noise. In Experiment 2, tamarins spent more time on the side playing species-specific feeding chirps than on the side playing species-specific distress calls. Together, these two experiments suggest that the method is effective, providing a spontaneous measure of preference. In Experiment 3, however, subjects showed no preference for consonant over dissonant intervals. Finally, tamarins showed no preference in Experiment 4 for a screeching sound (comparable to fingernails on a blackboard) over amplitude-matched white noise. In contrast, humans showed clear preferences for the consonant intervals of Experiment 3 and the white noise of Experiment 4 using the same stimuli and a similar method. We conclude that tamarins' preferences differ qualitatively from those of humans. The preferences that support our capacity for music may, therefore, be unique among the primates, and could be music-specific adaptations.  相似文献   

18.
Previous research regarding the beneficial effects of auditory stimuli on learning and memory in humans has been inconsistent. In the current study, day-old chicks were used to reduce the impact of individual differences on responses. Chicks were trained on a passive avoidance task and exposed to various auditory stimuli. Exposure to a complex rhythmic sequence for 1 min strongly facilitated chicks' long-term memory. The optimal time of presentation of the stimulus was between 10 min before and 20 min after training. Moreover, the enhancing effect was not generalized to the other auditory stimuli tested. It is suggested that this effect may be due to arousal because arousal hormones are critical to long-term memory formation. This study indicates that the temporal characteristics and type of stimulus may be important considerations when investigating the effects of auditory stimuli on cognitive functioning.  相似文献   

19.
Psychophysiological studies with music have not examined what exactly in the music might be responsible for the observed physiological phenomena. The authors explored the relationships between 11 structural features of 16 musical excerpts and both self-reports of felt pleasantness and arousal and different physiological measures (respiration, skin conductance, heart rate). Overall, the relationships between musical features and experienced emotions corresponded well with those known between musical structure and perceived emotions. This suggests that the internal structure of the music played a primary role in the induction of the emotions in comparison to extramusical factors. Mode, harmonic complexity, and rhythmic articulation best differentiated between negative and positive valence, whereas tempo, accentuation, and rhythmic articulation best discriminated high arousal from low arousal. Tempo, accentuation, and rhythmic articulation were the features that most strongly correlated with physiological measures. Music that induced faster breathing and higher minute ventilation, skin conductance, and heart rate was fast, accentuated, and staccato. This finding corroborates the contention that rhythmic aspects are the major determinants of physiological responses to music.  相似文献   

20.
Acta Psychologica, 2013, 142(2): 238-244
Here we present two experiments investigating the implicit orienting of attention over time by entrainment to an auditory rhythmic stimulus. In the first experiment, participants carried out detection and discrimination tasks with auditory and visual targets while listening to an isochronous auditory sequence, which acted as the entraining stimulus. For the second experiment, we used musical extracts as entraining stimuli, and tested the resulting strength of entrainment with a visual discrimination task. Both experiments used reaction times as a dependent variable. By manipulating the appearance of targets across four selected metrical positions of the auditory entraining stimulus we were able to observe how entraining to a rhythm modulates behavioural responses. That our results were independent of modality gives new insight into cross-modal interactions between auditory and visual modalities in the context of dynamic attending to auditory temporal structure.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号