1.
Erin E. Hannon 《Cognition》2009,111(3):403-409
Recent evidence suggests that the musical rhythm of a particular culture may parallel the speech rhythm of that culture’s language (Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87, B35-B45). The present experiments aimed to determine whether listeners actually perceive such rhythmic differences in a purely musical context (i.e., in instrumental music without words). In Experiment 1a, listeners successfully classified instrumental renditions of French and English songs having highly contrastive rhythmic differences. Experiment 1b replicated this result with the same songs containing rhythmic information only. In Experiments 2a and 2b, listeners successfully classified original and rhythm-only stimuli when language-specific rhythmic differences were less contrastive but more representative of differences found in actual music and speech. These findings indicate that listeners can use rhythmic similarities and differences to classify songs originally composed in two languages having contrasting rhythmic prosody.
2.
Over the past years, functional near-infrared spectroscopy (fNIRS) has substantially contributed to the understanding of language and its neural correlates. In contrast to other imaging techniques, fNIRS is well suited to studying language function in healthy and psychiatric populations because of its inexpensive and easy application in a quiet, natural measurement setting. Its relative insensitivity to motion artifacts allows the use of overt speech tasks and the investigation of verbal conversation. The present review focuses on the numerous contributions of fNIRS to the field of language, its development, and related psychiatric disorders, but also on its limitations and prospects for the future.
3.
Young and old adults’ ability to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of (a) acted speech (anger, disgust, fear, happiness, and sadness; each posed with both weak and strong emotion intensity), (b) synthesized speech (anger, fear, happiness, and sadness), and (c) short melodies played on the electric guitar (anger, fear, happiness, and sadness; each played with both weak and strong emotion intensity). The listeners’ recognition of discrete emotions and emotion intensity was assessed, and the recognition rates were controlled for various response biases. Results showed emotion-specific age-related differences in recognition accuracy. Old adults consistently showed significantly lower recognition rates for negative, but not for positive, emotions for both speech and music stimuli. Some age-related differences were also evident in the listeners’ ratings of emotion intensity. The results show the importance of considering individual emotions in studies on age-related differences in emotion recognition.
4.
The aim of the present mixed cross-sectional and longitudinal study was to observe and describe some aspects of vocal imitation in natural mother-infant interaction. Specifically, maternal imitation of infant utterances was observed in relation to the imitative modeling, mirrored equivalence, and social guided learning models of infant speech development. Nine mother-infant dyads were audio-video recorded. Infants were recruited at different ages between 6 and 11 months and followed for 3 months, providing a quasi-longitudinal series of data from 6 through 14 months of age. It was observed that maternal imitation was more frequent than infant imitation, even though vocal imitation was a rare maternal response. Importantly, mothers used a range of contingent and noncontingent vocal responses in interaction with their infants. Mothers responded to three-quarters of their infants’ vocalizations, including speech-like and less mature vocalization types. The infants’ phonetic repertoire expanded with age. Overall, the findings are most consistent with the social guided learning approach. That infants rarely imitated their mothers suggests a creative, self-motivated learning mechanism that requires further investigation.
5.
Perceptual reorganisation of infants' speech perception has been found from 6 months for consonants and earlier for vowels. Recently, similar reorganisation has been found for lexical tone between 6 and 9 months of age. Given that there is a close relationship between vowels and tones, this study investigates whether the perceptual reorganisation for tone begins earlier than 6 months. Non-tone language English and French infants were tested with the Thai low vs. rising lexical tone contrast, using the stimulus alternating preference procedure. Four- and 6-month-old infants discriminated the lexical tones, and there was no decline in discrimination performance across these ages. However, 9-month-olds failed to discriminate the lexical tones. This particular pattern of decline in nonnative tone discrimination over age indicates that perceptual reorganisation for tone does not parallel the developmentally prior decline observed in vowel perception. The findings converge with previous developmental cross-language findings on tone perception in English-language infants [Mattock, K., & Burnham, D. (2006). Chinese and English infants' tone perception: Evidence for perceptual reorganization. Infancy, 10(3)], and extend them by showing similar perceptual reorganisation for non-tone language infants learning rhythmically different non-tone languages (English and French).
6.
Batchelder EO 《Cognition》2002,83(2):167-206
Prelinguistic infants must find a way to isolate meaningful chunks from the continuous streams of speech that they hear. BootLex, a new model which uses distributional cues to build a lexicon, demonstrates how much can be accomplished using this single source of information. This conceptually simple probabilistic algorithm achieves significant segmentation results on various kinds of language corpora (English, Japanese, and Spanish; child- and adult-directed speech as well as written texts; and several variations in coding structure) and reveals which statistical characteristics of the input influence segmentation performance. BootLex is then compared, quantitatively and qualitatively, with three other groups of computational models of the same infant segmentation process, paying particular attention to functional characteristics of the models and their similarity to human cognition. Commonalities and contrasts among the models are discussed, as well as their implications both for theories of the cognitive problem of segmentation itself and for the general enterprise of computational cognitive modeling.
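The abstract does not spell out BootLex's internals, but the general idea of segmenting speech from distributional statistics alone can be illustrated with a minimal transitional-probability sketch: posit a word boundary wherever the probability of the next syllable given the current one dips below a threshold. This is a toy illustration of distributional segmentation, not the BootLex algorithm itself; the corpus, threshold, and function names are invented for the example.

```python
from collections import defaultdict

def transitional_probabilities(utterances):
    """Estimate P(next | current) for adjacent syllable pairs.
    `utterances` is a list of syllable lists, e.g. [["pre", "ty"], ...]."""
    pair_counts = defaultdict(int)
    unit_counts = defaultdict(int)
    for utt in utterances:
        for syllable in utt:
            unit_counts[syllable] += 1
        for a, b in zip(utt, utt[1:]):
            pair_counts[(a, b)] += 1
    return {(a, b): n / unit_counts[a] for (a, b), n in pair_counts.items()}

def segment(utterance, tps, threshold=0.6):
    """Insert a boundary wherever the transitional probability dips below
    `threshold` (a free parameter in this toy sketch)."""
    words, current = [], [utterance[0]]
    for a, b in zip(utterance, utterance[1:]):
        if tps.get((a, b), 0.0) < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words

# Toy corpus: the "words" pre-ty, ba-by, do-gy recombine freely, so
# within-word transitions are frequent and cross-word ones are rare.
corpus = [["pre", "ty", "ba", "by"], ["ba", "by", "pre", "ty"],
          ["pre", "ty", "do", "gy"], ["do", "gy", "ba", "by"]] * 5
tps = transitional_probabilities(corpus)
print(segment(["pre", "ty", "ba", "by"], tps))  # [['pre', 'ty'], ['ba', 'by']]
```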
7.
Two experiments investigated whether novel phonotactic regularities, not present in English, could be acquired by 16.5-month-old infants from brief auditory experience. Subjects listened to consonant-vowel-consonant syllables in which particular consonants were artificially restricted to either initial or final position (e.g. /baep/ not /paeb/). In a later head-turn preference test, infants listened longer to new syllables that violated the experimental phonotactic constraints than to new syllables that honored them. Thus, infants rapidly learned phonotactic regularities from brief auditory experience and extended them to unstudied syllables, documenting the sensitivity of the infant's language processing system to abstractions over linguistic experience.
8.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music that closely match the responses of human subjects. We show that a significant part of the listeners’ second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
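The abstract names the seven features but not the implementation. As a rough sketch of such a pipeline, the fragment below extracts the subset of those features that have off-the-shelf estimators in librosa (RMS energy as a loudness proxy, spectral centroid, and onset strength as a crude stand-in for spectral flux; sharpness and roughness have no standard librosa estimator and are omitted) and maps them onto continuous ratings with a simple linear model. The synthetic audio and random ratings are placeholders; this is not the authors' model.

```python
import numpy as np
import librosa
from sklearn.linear_model import Ridge

sr = 22050
y = librosa.chirp(fmin=220, fmax=880, sr=sr, duration=10.0)  # placeholder audio

# Frame-level features (one value per analysis frame).
rms = librosa.feature.rms(y=y)[0]                             # loudness proxy
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]   # spectral centroid
flux = librosa.onset.onset_strength(y=y, sr=sr)               # spectral-flux proxy
n = min(len(rms), len(centroid), len(flux))
X = np.column_stack([rms[:n], centroid[:n], flux[:n]])

# Placeholder continuous ratings, one per frame; in the study these were
# second-by-second emotion ratings collected from human listeners.
ratings = np.random.default_rng(0).normal(size=n)

model = Ridge(alpha=1.0).fit(X, ratings)
print("R^2 on training data:", model.score(X, ratings))
```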
9.
Traditional conceptions of spoken language assume that speech recognition and talker identification are computed separately. Neuropsychological and neuroimaging studies imply some separation between the two faculties, but recent perceptual studies suggest better talker recognition in familiar languages than unfamiliar languages. A familiar-language benefit in talker recognition potentially implies strong ties between the two domains. However, little is known about the nature of this language familiarity effect. The current study investigated the relationship between speech and talker processing by assessing bilingual and monolingual listeners’ ability to learn voices as a function of language familiarity and age of acquisition. Two effects emerged. First, bilinguals learned to recognize talkers in their first language (Korean) more rapidly than they learned to recognize talkers in their second language (English), while English-speaking participants showed the opposite pattern (learning English talkers faster than Korean talkers). Second, bilinguals’ learning rate for talkers in their second language (English) correlated with age of English acquisition. Taken together, these results suggest that language background materially affects talker encoding, implying a tight relationship between speech and talker representations.
10.
Empirical studies of creativity have focused on the importance of divergent thinking, which supports generating novel solutions to loosely defined problems. The present study examined creativity and frontal cortical activity in an externally validated group of creative individuals (trained musicians) and demographically matched control participants, using behavioral tasks and near-infrared spectroscopy (NIRS). Experiment 1 examined convergent and divergent thinking with respect to intelligence and personality. Experiment 2 investigated frontal oxygenated and deoxygenated hemoglobin concentration changes during divergent thinking with NIRS. Results of Experiment 1 indicated enhanced creativity in musicians, who also showed increased verbal ability and schizotypal personality; their enhanced divergent thinking remained robust after covarying out these two factors. In Experiment 2, NIRS showed greater bilateral frontal activity in musicians during divergent thinking compared with nonmusicians. Overall, these results suggest that creative individuals are characterized by enhanced divergent thinking, which is supported by increased frontal cortical activity.
11.
Upon stimulation, real-time maps of cortical hemodynamic responses can be obtained by non-invasive functional near-infrared spectroscopy (fNIRS), which measures changes in oxygenated and deoxygenated hemoglobin after positioning multiple sources and detectors over the human scalp. Current commercially available transportable fNIRS systems have a time resolution of 1-10 Hz, a depth sensitivity of about 1.5 cm, and a spatial resolution of about 1 cm. The goal of this brief review is to survey fNIRS language studies in infants, children, and adults. Since 1998, 60 studies have been published on cortical activation in the brain’s classic language areas in children/adults as well as newborns, using fNIRS instrumentation of differing complexity. In addition, the basic principles of fNIRS, including its features, strengths, advantages, and limitations, are summarized in terms that can be understood even by non-specialists. Future prospects of fNIRS in the field of language processing imaging are highlighted.
12.
A common assumption is that phonetic sounds initiate unique processing in the superior temporal gyri and sulci (STG/STS). The anatomical areas subserving these processes are also implicated in the processing of non-phonetic stimuli such as musical instrument sounds. The differential processing of phonetic and non-phonetic sounds was investigated in this study by applying a “sound-morphing” paradigm, in which the presence of phonetic features was parametrically varied, creating a step-wise transition from a non-phonetic sound into a phonetic sound. The stimuli were presented in an event-related fMRI design, and the fMRI-BOLD data were analysed using parametric contrasts. The results showed a higher sensitivity for sounds containing phonetic features compared to non-phonetic sounds in the middle part of the STG and in the anterior part of the planum temporale (PT) bilaterally. Although the same areas were involved in the processing of non-phonetic sounds, a difference in activation was evident in the STG, with activation increasing as the number of phonetic features in the sounds increased. The results indicate a stimulus-driven, bottom-up process that utilizes general auditory resources in the secondary auditory cortex, depending on specific phonetic features in the sounds.
13.
A critical issue in perception is the manner in which top-down expectancies guide lower level perceptual processes. In speech, a common paradigm is to construct continua ranging between two phonetic endpoints and to determine how higher level lexical context influences the perceived boundary. We applied this approach to music, presenting participants with major/minor triad continua after brief musical contexts. Two experiments yielded results that differed from classic results in speech perception. In speech, context generally expands the category of the expected stimuli. We found the opposite in music: The major/minor boundary shifted toward the expected category, contracting it. Together, these experiments support the hypothesis that musical expectancy can feed back to affect lower-level perceptual processes. However, it may do so in a way that differs fundamentally from what has been seen in other domains.
14.
Investigating the neuronal network underlying language processing may contribute to a better understanding of how the brain masters this complex cognitive function with surprising ease and how language is acquired at a fast pace in infancy. Modern neuroimaging methods make it possible to visualize the development and the function of the language network. The present paper focuses on a specific methodology, functional near-infrared spectroscopy (fNIRS), providing an overview of studies on auditory language processing and acquisition. The methodology detects oxygenation changes elicited by functional activation of the cerebral cortex. Its main advantages for research on auditory language processing and its development during infancy are an undemanding application, the lack of instrumental noise, and its potential to simultaneously register electrophysiological responses. It also constitutes an innovative approach for studying developmental issues in infants and children. The review focuses on studies of word and sentence processing, including research in infants and adults.
15.
Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (e.g. “bat”) than the opposite CL pattern (e.g. “tap”). This bias was initially interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese infant- and adult-directed speech revealed an overall LC bias, except for plosive sequences in adult-directed speech, which showed a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants tested with the exact same stimuli. These cross-linguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to the linguistic input.
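The corpus analysis described here amounts to counting labial-then-coronal versus coronal-then-labial sequences at word onsets. A minimal sketch of such a count over an orthographic word list follows; the consonant classes and the use of spelling rather than phonemic transcription are simplifications for illustration, and a real analysis (especially for Japanese) would work on transcribed phoneme sequences.

```python
LABIALS = set("pbmfvw")
CORONALS = set("tdnszlr")
CODED = LABIALS | CORONALS

def lc_cl_counts(words):
    """Count words whose first two coded consonants form a labial->coronal
    (LC) vs. coronal->labial (CL) sequence."""
    lc = cl = 0
    for w in words:
        consonants = [c for c in w.lower() if c in CODED]
        if len(consonants) < 2:
            continue
        c1, c2 = consonants[0], consonants[1]
        if c1 in LABIALS and c2 in CORONALS:
            lc += 1
        elif c1 in CORONALS and c2 in LABIALS:
            cl += 1
    return lc, cl

print(lc_cl_counts(["bat", "mad", "pot", "tap", "dab"]))  # (3, 2)
```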
16.
This study reports a shadowing experiment, in which participants repeat a speech stimulus as quickly as possible. We tested claims about a direct link between perception and production based on speech gestures, and obtained two types of counterevidence. First, shadowing is not slowed down by a gestural mismatch between stimulus and response. Second, phonetic detail is more likely to be imitated in a shadowing task if it is phonologically relevant. This is consistent with the idea that speech perception and speech production are only loosely coupled, on an abstract phonological level.
17.
Based on their specialized processing abilities, the left and right hemispheres of the brain may not contribute equally to recall of general world knowledge. US college students recalled the verbal names and spatial locations of the 50 US states while sustaining leftward or rightward unilateral gaze, a procedure that selectively activates the contralateral hemisphere. Compared to a no-gaze control condition, right gaze/left hemisphere activation resulted in better recall, demonstrating left hemisphere superiority in recall of general world knowledge and offering equivocal support for the hemispheric encoding asymmetry model of memory. Unilateral gaze, regardless of direction, improved recall of spatial, but not verbal, information. Future research could investigate the conditions under which unilateral gaze increases recall. Sustained unilateral gaze can be used as a simple, inexpensive means for testing theories of hemispheric specialization of cognitive functions. Results support an overall deficit in US geographical knowledge in undergraduate college students.
18.
Hunter PG, Schellenberg EG, Stalinski SM 《Journal of experimental child psychology》2011,110(1):80-93
Adults and children 5, 8, and 11 years of age listened to short excerpts of unfamiliar music that sounded happy, scary, peaceful, or sad. Listeners initially rated how much they liked each excerpt. They subsequently made a forced-choice judgment about the emotion that each excerpt conveyed. Identification accuracy was higher for young girls than for young boys, but both genders reached adult-like levels by age 11. High-arousal emotions (happiness and fear) were better identified than low-arousal emotions (peacefulness and sadness), and this advantage was exaggerated among younger children. Whereas children of all ages preferred excerpts depicting high-arousal emotions, adults favored excerpts depicting positive emotions (happiness and peacefulness). A preference for positive emotions over negative emotions was also evident among females of all ages. As identification accuracy improved, liking for positively valenced music increased among 5- and 8-year-olds but decreased among 11-year-olds.
19.
Hroar Klempe 《Integrative psychological & behavioral science》2009,43(3):260-266
Music is to a large extent understood as if it were a language. This is also true of the recently published book Communicative Musicality, edited by Stephen Malloch and Colwyn Trevarthen (2009a). In this essay it is demonstrated that a lingocentric understanding of music is strongly connected to modernity, but also that early experimental psychology presupposed a distinction between music and language. Polyphony, therefore, is here presented as a characteristic of the musical system.
20.
Nettle D 《Brain and cognition》2003,52(3):390-398
Several different associations between hand laterality and cognitive ability have been proposed. Studies reporting different conclusions vary in their procedures for defining laterality, and several of them rely on measures which are statistically problematic. Previous methods for measuring relative hand skill have not satisfactorily separated the overall level of hand skill, which is a known correlate of cognitive ability, from the asymmetry of its distribution. This paper uses a multiple regression paradigm that separates these two components. Support is found for Leask and Crow's [Trends in Cognitive Sciences, 5 (2001) 513] proposal that average cognitive ability increases monotonically with increasing strength of laterality, regardless of its direction. The small average advantage to dextrals stems from them being more strongly lateralised than sinistrals. The paucity of strong dextrals amongst the very gifted is due to a smaller variance in cognitive ability in this group.
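The key statistical move described here is to enter overall hand skill and strength of laterality as separate predictors of cognitive ability in a single regression. A minimal sketch of that design with statsmodels on simulated data follows; the variable names, the data-generating process, and the coefficients are invented for illustration and are not Nettle's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000

# Simulated peg-moving-style skill scores for each hand (arbitrary units).
right = rng.normal(100, 10, n)
left = right - rng.normal(5, 8, n)    # most simulated people are right-biased

overall = (right + left) / 2           # overall level of hand skill
strength = np.abs(right - left)        # strength of laterality, direction ignored

# Toy ability score constructed so that both predictors carry signal.
ability = 0.3 * overall + 0.5 * strength + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([overall, strength]))
fit = sm.OLS(ability, X).fit()
print(fit.summary())  # separate coefficients for skill level vs. asymmetry
```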