Similar Articles
20 similar articles found (search time: 0 ms)
1.
Erin E. Hannon 《Cognition》2009,111(3):403-409
Recent evidence suggests that the musical rhythm of a particular culture may parallel the speech rhythm of that culture’s language (Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87, B35-B45). The present experiments aimed to determine whether listeners actually perceive such rhythmic differences in a purely musical context (i.e., in instrumental music without words). In Experiment 1a, listeners successfully classified instrumental renditions of French and English songs having highly contrastive rhythmic differences. Experiment 1b replicated this result with the same songs containing rhythmic information only. In Experiments 2a and 2b, listeners successfully classified original and rhythm-only stimuli when language-specific rhythmic differences were less contrastive but more representative of differences found in actual music and speech. These findings indicate that listeners can use rhythmic similarities and differences to classify songs originally composed in two languages having contrasting rhythmic prosody.

2.
ABSTRACT

Listeners made same/different evaluations of a pair of musical presentations separated by a broadband noise. In the first experiment, the two presentations had either the same or a different singer's voice, language (Spanish or English), and reverberation (dry or very high reverb). A second experiment used the same vocal melodies played on guitars to emphasize non-linguistic content. In both Experiments 1 and 2, large-scale changes to reverb were either completely undetected or ignored. A change of language in Experiment 1 supported only minimal sensitivity to the change. Changes to multiple variables tended to increase listener sensitivity to a stimulus change. The results suggest that the semantic coherence created by a musical background may limit attention to linguistic changes and voicing; changes that more directly influence musical quality may instead be more salient in a musical context.

3.
Tilsen S 《Cognitive Science》2009,33(5):839-879
Temporal patterns in human movement, and in speech in particular, occur on multiple timescales. Regularities in such patterns have been observed between speech gestures, which are relatively quick movements of articulators (e.g., tongue fronting and lip protrusion), and also between rhythmic units (e.g., syllables and metrical feet), which occur more slowly. Previous work has shown that patterns in both domains can be usefully modeled with oscillatory dynamical systems. To investigate how rhythmic and gestural domains interact, an experiment was conducted in which speakers performed a phrase repetition task, and gestural kinematics were recorded using electromagnetic articulometry. Variance in relative timing of gestural movements was correlated with variance in rhythmic timing, indicating that gestural and rhythmic systems interact in the process of planning and producing speech. A model of rhythmic and gestural planning oscillators with multifrequency coupling is presented, which can simulate the observed covariability between rhythmic and gestural timing.
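The multifrequency-coupling idea behind such planning-oscillator models can be illustrated with a minimal sketch. This is not Tilsen's actual model; the frequencies, coupling strength, and the simple Euler integration below are all illustrative assumptions: a faster "gestural" phase oscillator is pulled toward an n:m phase-locked relation with a slower "rhythmic" one.

```python
import math

def simulate(freqs, coupling, k=0.5, dt=0.001, steps=20000):
    """Euler-integrate phase oscillators with sinusoidal n:m coupling.

    freqs    -- intrinsic frequencies in Hz (e.g., a slow "rhythmic" and a
                fast "gestural" planning oscillator; illustrative values)
    coupling -- dict mapping (i, j) -> (n, m): the term -k*sin(n*phi_i - m*phi_j)
                is added to oscillator i's phase velocity
    """
    phases = [0.0] * len(freqs)
    for _ in range(steps):
        new = []
        for i, (phi, f) in enumerate(zip(phases, freqs)):
            dphi = 2 * math.pi * f  # intrinsic angular velocity
            for (a, b), (n, m) in coupling.items():
                if a == i:
                    dphi -= k * math.sin(n * phi - m * phases[b])
            new.append(phi + dphi * dt)
        phases = new
    return phases

# A slightly detuned fast oscillator (4.05 Hz) 1:2-couples to a 2 Hz one.
# Because the detuning (2*pi*0.05 rad/s) is below the coupling strength k,
# the relative phase psi = phi_fast - 2*phi_slow locks near a fixed point
# where sin(psi) = detuning / k.
phases = simulate([2.0, 4.05], {(1, 0): (1, 2)})
rel = phases[1] - 2 * phases[0]
```

The sketch shows why covariability is expected: timing noise in one oscillator perturbs the locked relative phase, and hence the timing of events governed by the other.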

4.
Lee CS  Todd NP 《Cognition》2004,93(3):225-254
The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language acquisition and processing. Several questions remain, however, as to what exactly characterizes the rhythmic differences, how they are manifested at an auditory/acoustic level and how listeners, whether adult native speakers or young infants, process rhythmic information. In this paper it is proposed that the crucial determinant of rhythmic organization is the variability in the auditory prominence of phonetic events. In order to test this auditory prominence hypothesis, an auditory model is run on two multi-language data-sets, the first consisting of matched pairs of English and French sentences, and the second consisting of French, Italian, English and Dutch sentences. The model is based on a theory of the auditory primal sketch, and generates a primitive representation of an acoustic signal (the rhythmogram) which yields a crude segmentation of the speech signal and assigns prominence values to the obtained sequence of events. Its performance is compared with that of several recently proposed phonetic measures of vocalic and consonantal variability.
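One widely used example of the "phonetic measures of vocalic and consonantal variability" such models are compared against is the normalized Pairwise Variability Index (nPVI). A minimal sketch (the interval durations in the usage example are invented for illustration):

```python
def npvi(durations):
    """Normalized Pairwise Variability Index over successive interval
    durations (e.g., vocalic intervals, in ms): the mean absolute
    difference of adjacent pairs, normalized by each pair's mean, x 100."""
    pairs = list(zip(durations, durations[1:]))
    return 100 * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

# Perfectly regular intervals score 0; strongly alternating long-short
# intervals (often reported for stress-timed languages such as English,
# relative to French) score high.
regular = npvi([100, 100, 100, 100])      # -> 0.0
alternating = npvi([60, 140, 60, 140])
```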

5.
Vocal babbling involves production of rhythmic sequences of a mouth close–open alternation giving the perceptual impression of a sequence of consonant–vowel syllables. Petitto and co-workers have argued vocal babbling rhythm is the same as manual syllabic babbling rhythm, in that it has a frequency of 1 cycle per second. They also assert that adult speech and sign language display the same frequency. However, available evidence suggests that the vocal babbling frequency approximates 3 cycles per second. Both adult spoken language and sign language show higher frequencies than babbling in their respective modalities. No information is currently available on the basic rhythmic parameter of intercyclical variability in either modality. A study of reduplicative babbling by 4 infants and 4 adults producing reduplicated syllables confirms the 3 per second vocal babbling rate, as well as a faster rate in adults, and provides new information on intercyclical variability.

6.
It has been shown that harmonic structure may influence the processing of phonemes whatever the extent of participants' musical expertise [Bigand, E., Tillmann, B., Poulin, B., D'Adamo, D. A., & Madurell, F. (2001). The effect of harmonic context on phoneme monitoring in vocal music. Cognition, 81, B11-B20]. The present study goes a step further by investigating how musical harmony may potentially interfere with the processing of words in vocal music. Eight-chord sung sentences were presented, their last word being either semantically related (La girafe a un très grand cou, The giraffe has a very long neck) or unrelated to the previous linguistic context (La girafe a un très grand pied, The giraffe has a very long foot). The target word was sung on a chord that acted either as a referential tonic chord or as a congruent but less referential subdominant chord. Participants performed a lexical decision task on the target word. A significant interaction was observed between semantic and harmonic relatedness, suggesting that music modulates semantic priming in vocal music. Following Jones' dynamic attention theory, we argue that music modulates semantic priming in vocal music by modifying the allocation of the attentional resources necessary for linguistic computation.

7.
8.
The style of a set of Swedish nursery tunes is described in terms of a generative rule system. A generative rule system producing melodically similar versions of an old Swedish folk song is also presented. Examples of melodies generated by these two rule systems are given. The two rule systems are similar in several respects; in particular, the marking of the hierarchical constituent structure appears to be one of the important principles in composing simple melodies. The rule systems also show a number of similarities with the Chomsky & Halle (1968) generative phonology of English. For instance, the procedures used for deriving a stress contour from a tree diagram are almost identical. Moreover, in sentences as in melodies, this stress or prominence contour is of decisive importance to the generation of the surface structure, such as meter, harmony, and sequences of pitches. Such parallels between language and music are believed to reflect characteristics of human perceptual and cognitive capacities.
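The idea of deriving a stress or prominence contour from a hierarchical constituent tree can be sketched in a toy form. This is not the paper's actual rule system; the strong/weak labeling convention and the counting rule below are simplifying assumptions: each node splits into a strong and a weak branch, and a terminal's prominence is the number of strong branches on its path from the root.

```python
def prominence_contour(tree, score=0):
    """tree: a terminal (any non-tuple) or a pair (strong_subtree, weak_subtree).
    Returns (terminal, prominence) pairs in left-to-right order, where
    prominence counts the strong branches on the path from the root."""
    if not isinstance(tree, tuple):
        return [(tree, score)]
    strong, weak = tree
    # The strong branch inherits one extra prominence point.
    return prominence_contour(strong, score + 1) + prominence_contour(weak, score)

# A two-level binary constituent structure over four notes:
contour = prominence_contour((("C", "D"), ("E", "F")))
# -> [("C", 2), ("D", 1), ("E", 1), ("F", 0)]
```

The resulting contour could then drive surface decisions (e.g., placing metrically strong notes on high-prominence terminals), which is the role the abstract assigns to the stress contour.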

9.
Cassidy, G.G. & MacDonald, R.A.R. (2010). The effects of music on time perception and performance of a driving game. Scandinavian Journal of Psychology 51, 455–464. There is an established and growing body of evidence highlighting that music can influence behavior across a range of diverse domains (Miell, MacDonald & Hargreaves, 2005). One area of interest is the monitoring of “internal timing mechanisms”, with features such as tempo, liking, perceived affective nature and everyday listening contexts implicated as important (North & Hargreaves, 2008). The current study addresses these issues by comparing the effects of self‐selected and experimenter‐selected music (fast and slow) on actual and perceived performance of a driving game activity. Seventy participants completed three laps of a driving game in seven sound conditions: (1) silence; (2) car sounds; (3) car sounds with self‐selected music; and car sounds with experimenter‐selected music that was (4) high‐arousal at 70 bpm, (5) high‐arousal at 130 bpm, (6) low‐arousal at 70 bpm, or (7) low‐arousal at 130 bpm. Six performance measures (time, accuracy, speed, and retrospective perception of these), and four experience measures (perceived distraction, liking, appropriateness and enjoyment) were taken. Exposure to self‐selected music resulted in overestimation of elapsed time and inaccuracy, while benefiting accuracy and experience. In contrast, exposure to experimenter‐selected music resulted in poorest performance and experience. Increasing the tempo of experimenter‐selected music resulted in faster performance and increased inaccuracy for high‐arousal music, but did not impact experience. It is suggested that personal meaning and subjective associations connected to self‐selected music promoted increased engagement with the activity, overriding detrimental effects attributed to unfamiliar, less liked and less appropriate experimenter‐selected music.

10.
The Beta (15–30 Hz) and Mu (8–14 Hz) rhythms of the human motor cortex share common activity characteristics, and some studies have suggested that the two may be a joint EEG component. However, growing evidence indicates that the motor-cortex Beta rhythm may be independent of the Mu rhythm and may carry distinct functional significance. Cue-induced Beta decreases, pre-movement Beta decreases, and post-movement Beta rebound are all activity patterns that distinguish Beta from Mu, indicating that the motor-cortex Beta rhythm has its own psychological significance. This paper reviews and analyzes the activity characteristics that distinguish the motor-cortex Beta rhythm from the Mu rhythm and the existing theoretical accounts, evaluates theoretical hypotheses about post-movement Beta rebound from a developmental perspective drawing on data from studies of children, and on this basis proposes directions for further research into the functional significance of the motor-cortex Beta rhythm.

11.
张晶晶  杨玉芳 《心理科学进展》2019,27(12):2043-2051
As language and music unfold, small units combine into larger ones, ultimately forming hierarchical structures. Previous research has shown that listeners can segment continuous speech and music into hierarchical structures and form hierarchical representations in the brain. Building on this perception, listeners can also integrate newly arriving linguistic and musical events into the hierarchical structure, achieve a coherent understanding, and thereby successfully complete the communication process. Future research should dissect the role of boundary cues in the perception of hierarchical structure, examine the factors influencing integration at different hierarchical levels, and further explore the relationship between the processing of hierarchical structure in language and in music.

12.
13.
Numerous studies have provided clues about the ontogeny of lateralization of auditory processing in humans, but most have employed specific subtypes of stimuli and/or have assessed responses in discrete temporal windows. The present study used near-infrared spectroscopy (NIRS) to establish changes in hemodynamic activity in the neocortex of preverbal infants (aged 4–11 months) while they were exposed to two distinct types of complex auditory stimuli (full sentences and musical phrases). Measurements were taken from bilateral temporal regions, including both anterior and posterior superior temporal gyri. When the infant sample was treated as a homogenous group, no significant effects emerged for stimulus type. However, when infants’ hemodynamic responses were categorized according to their overall changes in volume, two very clear neurophysiological patterns emerged. A high-responder group showed a pattern of early and increasing activation, primarily in the left hemisphere, similar to that observed in comparable studies with adults. In contrast, a low-responder group showed a pattern of gradual decreases in activation over time. Although age did track with responder type, no significant differences between these groups emerged for stimulus type, suggesting that the high- versus low-responder characterization generalizes across classes of auditory stimuli. These results highlight a new way to conceptualize the variable cortical blood flow patterns that are frequently observed across infants and stimuli, with hemodynamic response volumes potentially serving as an early indicator of developmental changes in auditory-processing sensitivity.

14.
Summary: The Work Motivation Inventory (WMI), a measure of Maslow's hierarchy of needs, and the Edwards Personal Preference Schedule (EPPS), a measure of Murray's manifest needs, were administered to 372 undergraduates. The two instruments were compared using canonical analysis. The analysis revealed three significant relationships between components of the two instruments. The first relationship supported Maslow's need hierarchy in general and its measurement by the WMI. The second suggested a fluctuating relationship between giving and receiving help and the levels of Maslow's hierarchy. The third relationship suggested that need for Achievement is associated with the intermediate levels of Maslow's hierarchy.

15.
This paper shows the formulation of nine methods of estimating the unknown communalities. Each of these methods has been used on experimental data and the results tabulated for comparison. The results show that the most accurate approximations are obtained from the Centroid No. 1 and the Graphical methods. The author wishes to express his appreciation to Professor L. L. Thurstone for his advice and for providing the facilities of the Psychometric Laboratory for this investigation.
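For context, two classical initial estimates of a variable's communality can be computed directly from the correlation matrix; this is a sketch of those standard textbook estimators, not a reproduction of any of the paper's nine methods (the Centroid No. 1 and Graphical methods are procedural and not shown here).

```python
import numpy as np

def communality_estimates(R):
    """Initial communality estimates from a correlation matrix R.

    highest_r -- the largest absolute off-diagonal correlation in each row
    smc       -- the squared multiple correlation, 1 - 1/diag(R^-1),
                 a classical lower bound on the communality
    """
    R = np.asarray(R, dtype=float)
    off = np.abs(R - np.eye(len(R)))   # zero out the unit diagonal
    highest_r = off.max(axis=1)
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    return highest_r, smc

# For two variables the SMC reduces to r^2, so with r = 0.6 the
# estimates are 0.6 (highest correlation) and 0.36 (SMC):
hr, smc = communality_estimates([[1.0, 0.6], [0.6, 1.0]])
```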

16.
Emotional inferences from speech require the integration of verbal and vocal emotional expressions. We asked whether this integration is comparable when listeners are exposed to their native language and when they listen to a language learned later in life. To this end, we presented native and non-native listeners with positive, neutral and negative words that were spoken with a happy, neutral or sad tone of voice. In two separate tasks, participants judged word valence and ignored tone of voice or judged emotional tone of voice and ignored word valence. While native listeners outperformed non-native listeners in the word valence task, performance was comparable in the voice task. More importantly, both native and non-native listeners responded faster and more accurately when verbal and vocal emotional expressions were congruent as compared to when they were incongruent. Given that the size of the latter effect did not differ as a function of language proficiency, one can conclude that the integration of verbal and vocal emotional expressions occurs as readily in one's second language as it does in one's native language.

17.
Latent distance analysis provides a probability model for the non-perfect Guttman scale; the restricted latent distance structure is simpler to compute than the general structure. Since no sampling theory for latent structure analysis is available, the advantages of the general structure cannot be expressed formally. The two structures are compared in terms of their fit to fifteen sets of empirical data. The computation schemes used are summarized.
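The "non-perfect Guttman scale" mentioned above is conventionally quantified by a coefficient of reproducibility. A minimal sketch follows; the easiest-first item ordering and the simple error-counting convention are simplifying assumptions, and the function is not part of the paper's latent distance machinery:

```python
def reproducibility(responses):
    """Coefficient of reproducibility for a Guttman scale.

    responses: list of 0/1 response vectors with items ordered easiest first.
    Each response that deviates from the ideal pattern implied by the
    respondent's total score (1s on the easiest items) counts as one error.
    """
    errors = total = 0
    for row in responses:
        score = sum(row)
        ideal = [1] * score + [0] * (len(row) - score)
        errors += sum(a != b for a, b in zip(row, ideal))
        total += len(row)
    return 1 - errors / total

# A perfectly scalable response set reproduces exactly:
cr = reproducibility([[1, 0, 0], [1, 1, 0], [1, 1, 1]])  # -> 1.0
```

A latent distance model replaces this deterministic ideal-pattern assumption with item-specific response probabilities, which is what lets it fit the error patterns that a perfect Guttman scale cannot.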

18.
This study examined the family system differences between 40 volunteer natural-father and stepfather families. Family triads consisting of the husband, the wife, and a child whose age ranged from 12 to 15 years were studied. Four instruments were used: (a) the Family Concept Q-Sort; (b) a Semantic Differential; (c) a demographic questionnaire; and (d) an interaction-reaction questionnaire. Analyses of variance on the data obtained from the Q-sorts and the Semantic Differentials indicated that stepfather family systems are different from natural-father family systems along several salient dimensions including psychological adjustment, satisfaction with family, reciprocal understanding, and perceived goodness and potency. It was concluded that the differences between the family systems in terms of their interpersonal relations and perceptions affect the entire stepparent family system and its ability to function adequately.

19.
Peretz I 《Cognition》2006,100(1):1-32
Music, like language, is a universal human trait. Throughout human history and across all cultures, people have produced and enjoyed music. Despite its ubiquity, the musical capacity is rarely studied as a biological function. Music is typically viewed as a cultural invention. In this paper, the evidence bearing on the biological perspective of the musical capacity is reviewed. Related issues, such as domain-specificity, innateness, and brain localization, are addressed in an attempt to offer a unified conceptual basis for the study of music processing. This scheme should facilitate the study of the biological foundations of music by bringing together the fields of genetics, developmental and comparative research, neurosciences, and musicology.

20.
Young and old adults’ ability to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of (a) acted speech (anger, disgust, fear, happiness, and sadness; each posed with both weak and strong emotion intensity), (b) synthesized speech (anger, fear, happiness, and sadness), and (c) short melodies played on the electric guitar (anger, fear, happiness, and sadness; each played with both weak and strong emotion intensity). The listeners’ recognition of discrete emotions and emotion intensity was assessed and the recognition rates were controlled for various response biases. Results showed emotion-specific age-related differences in recognition accuracy. Old adults consistently received significantly lower recognition rates for negative, but not for positive, emotions for both speech and music stimuli. Some age-related differences were also evident in the listeners’ ratings of emotion intensity. The results show the importance of considering individual emotions in studies on age-related differences in emotion recognition.

