Similar Documents
20 similar documents found.
1.
To inform how emotions in speech are implicitly processed and registered in memory, we compared how emotional prosody, emotional semantics, and both cues in tandem prime decisions about conjoined emotional faces. Fifty-two participants rendered facial affect decisions (Pell, 2005a), indicating whether a target face represented an emotion (happiness or sadness) or not (a facial grimace), after passively listening to happy, sad, or neutral prime utterances. Emotional information from primes was conveyed by: (1) prosody only; (2) semantic cues only; or (3) combined prosody and semantic cues. Results indicated that prosody, semantics, and combined prosody–semantic cues facilitate emotional decisions about target faces in an emotion-congruent manner. However, the magnitude of priming did not vary across tasks. Our findings highlight that emotional meanings of prosody and semantic cues are systematically registered during speech processing, but with similar effects on associative knowledge about emotions, which is presumably shared by prosody, semantics, and faces.

2.
3.
Due to extensive variability in the phonetic realizations of words, there may be few or no proximal spectro-temporal cues that identify a word's onset or even its presence. Dilley and Pitt (2010) showed that the rate of context speech, distal from a to-be-recognized word, can have a sizeable effect on whether or not a word is perceived. This investigation considered whether there is a distinct role for distal rhythm in the disappearing word effect. Listeners heard sentences that had a grammatical interpretation with or without a critical function word (FW) and transcribed what they heard (e.g., "are" in "Jill got quite mad when she heard there are birds" can be removed, and "Jill got quite mad when she heard their birds" is still grammatical). Consistent with a perceptual grouping hypothesis, participants were more likely to report critical FWs when distal rhythm (repeating ternary or binary pitch patterns) matched the rhythm in the FW-containing region than when it did not. Notably, effects of distal rhythm and distal rate were additive. Results demonstrate a novel effect of distal rhythm on the amount of lexical material listeners hear, highlighting the importance of distal timing information and providing new constraints for models of spoken word recognition.

4.
Discriminating temporal relationships in speech is crucial for speech and language development. However, temporal variation of vowels is difficult to perceive for young infants when it is determined by surrounding speech sounds. Using a familiarization-discrimination paradigm, we show that English-learning 6- to 9-month-olds are capable of discriminating non-native acoustic vowel duration differences that systematically vary with subsequent consonantal durations. Furthermore, temporal regularity of stimulus presentation potentially makes the task easier for infants. These findings show that young infants can process fine-grained temporal aspects of speech sounds, a capacity that lays the foundation for building a phonological system of their ambient language(s).

5.
The present study investigated whether lexical processes that occur when we name objects can also be observed when an interaction partner is naming objects. We compared the behavioral and electrophysiological responses of participants performing a conditional go/no-go picture naming task in two different conditions: individually and jointly with a confederate participant. To obtain an index of lexical processing, we manipulated lexical frequency, so that half of the pictures had high-frequency names and the remaining half had low-frequency names. Color cues determined whether participants should respond, whether their task-partner should respond, or whether nobody should respond. Behavioral and ERP results showed that participants engaged in lexical processing when it was their turn to respond. Crucially, ERP results on no-go trials revealed that participants also engaged in lexical processing when it was their partner's turn to act. In addition, ERP results showed increased response inhibition selectively when it was the partner's turn to act. These findings provide evidence for the claim that listeners generate predictions about speakers' utterances by relying on their own action production system.

6.
Building on previous research examining the implications for self-regulation and decision making of construing action at varying levels of abstraction, the authors proposed that construing action in terms of its abstract purposes facilitates orienting one's decisions toward the standards, characteristics, and goals that define one's desired self-concept. Consistent with this proposal, desiring for oneself a political candidate's personal qualities predicted evaluating favorably (in Study 1) and voting for (in Study 2) that candidate to a greater extent among participants focused on the distal future (and presumably construing action at a relatively high level of abstraction) than the proximal future (and presumably construing action at a relatively low level of abstraction). Moreover, individuals chronically construing action in high-level terms responded more favorably to advertisements appealing to their desired self-concept (in Study 3) than to product quality. These findings' implications for decision making are discussed.

7.
The pronunciation of the same word may vary considerably as a consequence of its context. The Dutch word tuin (English, garden) may be pronounced tuim if followed by bank (English, bench), but not if followed by stoel (English, chair). In a series of four experiments, we examined how Dutch listeners cope with this context sensitivity in their native language. A first word identification experiment showed that the perception of a word-final nasal depends on the subsequent context. Viable assimilations, but not unviable assimilations, were often confused perceptually with canonical word forms in a word identification task. Two control experiments ruled out the possibility that this effect was caused by perceptual masking or was influenced by lexical top-down effects. A passive-listening study in which electrophysiological measurements were used showed that only unviable, but not viable, phonological changes elicited a significant mismatch negativity. The results indicate that phonological assimilations are dealt with by an early prelexical mechanism.

8.
Motor theories of speech perception have been revitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d′, and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory–visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory–visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory–visual matching of phonological forms.
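As a hedged illustration of the d′ (sensitivity) index cited above, and not the authors' own analysis code, a minimal sketch of how d′ is computed from hit and false-alarm rates in a same-different task; the rates used in the example are hypothetical:

```python
# Minimal sketch of the d-prime (d') sensitivity index used to summarize
# same-different discrimination performance. Illustrative only; the rates
# below are hypothetical, not the patient data reported in the study.
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate), where z is the inverse
    of the standard normal CDF (the probit function)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

if __name__ == "__main__":
    # Hypothetical example: 95% hits on "different" pairs and 10% false alarms
    # on "same" pairs give d' of roughly 2.9; chance performance gives 0.
    print(round(d_prime(0.95, 0.10), 2))
```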

9.
Erin E. Hannon (2009). Cognition, 111(3), 403–409.
Recent evidence suggests that the musical rhythm of a particular culture may parallel the speech rhythm of that culture's language (Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87, B35–B45). The present experiments aimed to determine whether listeners actually perceive such rhythmic differences in a purely musical context (i.e., in instrumental music without words). In Experiment 1a, listeners successfully classified instrumental renditions of French and English songs having highly contrastive rhythmic differences. Experiment 1b replicated this result with the same songs containing rhythmic information only. In Experiments 2a and 2b, listeners successfully classified original and rhythm-only stimuli when language-specific rhythmic differences were less contrastive but more representative of differences found in actual music and speech. These findings indicate that listeners can use rhythmic similarities and differences to classify songs originally composed in two languages having contrasting rhythmic prosody.

10.
The present study used an interpersonal theoretical perspective to examine the interactions between Dutch teachers and kindergartners. Interpersonal theory provides explanations for dyadic interaction behaviors by stating that complementary behaviors (dissimilar in terms of control, and similar in terms of affiliation) elicit and sustain each other. We observed 69 kindergarten children (mean age = 5.79 years) and their 37 regular teachers during a dyadic interaction task. Every 5 s, independent observers rated teachers' and children's behaviors along the interpersonal dimensions of control and affiliation. Teachers reported on children's shyness and the quality of the teacher-child relationship. Multilevel modeling provided correlational evidence for complementarity within and between dyads. Cross-lagged analyses revealed that teachers showed complementarity for control and that children showed complementarity for affiliation. Children also reacted complementarily with respect to control, but only if they were shy or shared positive relationships with their teachers. Implications for theory and practice are discussed.

11.
12.
13.
The nature and origin of the human capacity for acquiring language is not yet fully understood. Here we uncover early roots of this capacity by demonstrating that humans are born with a preference for listening to speech. Human neonates adjusted their high amplitude sucking to preferentially listen to speech, compared with complex non-speech analogues that controlled for critical spectral and temporal parameters of speech. These results support the hypothesis that human infants begin language acquisition with a bias for listening to speech. The implications of these results for language and communication development are discussed. For a commentary on this article see Rosen and Iverson (2007).

14.
January, D., & Kako, E. (2007). Cognition, 104(2), 417–426.
Six unsuccessful attempts at replicating a key finding in the linguistic relativity literature [Boroditsky, L. (2001). Does language shape thought?: Mandarin and English speakers' conceptions of time. Cognitive Psychology, 43, 1–22] are reported. In addition to these empirical issues in replicating the original finding, theoretical issues present in the original report are discussed. In sum, we conclude that Boroditsky (2001) provides no support for the Whorfian hypothesis.

15.
Correspondence between child and maternal perceptions of sibling relationship quality (standards, actual ratings, problems) and children's reports of daily interactions were assessed in 40 early adolescent children (mean age = 11.5 years) and their mothers (n = 32). Children completed the Sibling Relationship Questionnaire (Furman & Buhrmester, 1985. Child Development, 56, 448–461) and Daily Checklist ratings of sibling interactions for 14 days. Mothers completed the Parental Expectations and Perceptions of Children's Sibling Relationship Questionnaire (Kramer & Baron, 1995. Family Relations, 44, 95–103). Overall, findings revealed correspondence between child perceptions of sibling warmth and maternal ratings of standards, actual ratings, and problems in sibling warmth, but not conflict and rivalry. Maternal and child perceptions of sibling relationship qualities were positively associated with children's reports of ongoing interactions. Finally, regression analyses identified unique maternal and child correlates for both happy and prosocial daily interactions. Findings are discussed in light of recent research and theory on family dynamics. Copyright © 2010 John Wiley & Sons, Ltd.

16.
The aim of the present research was to investigate the relationship between oxytocin and maternal affect attunement, as well as the role of affect attunement in the relationship between oxytocin and infant social engagement during early mother-infant interactions. Forty-three mother-infant dyads participated in the present study when the infants were 4 months old. They were observed during (1) a situation where no communication took place and (2) a natural interaction between mother and infant. During this procedure, three saliva samples from mothers and their infants were collected to determine their levels of oxytocin at different time points. Maternal affect attunement (maintaining attention, warm sensitivity) and infant interactive behaviors (gaze, positive, and negative affect) were coded during the natural interaction. Results indicated that overall maternal oxytocin functioning was negatively related to her warm sensitivity, while infant oxytocin reactivity together with maternal affect attunement were associated with infant positive social engagement with their mothers. Specifically, infant oxytocin reactivity was significantly related to their gazes at mother, but only for infants of highly attuned mothers. These results point to the complex role oxytocin plays in parent-infant interactions while emphasizing the need to analyze both overall oxytocin functioning as well as reactivity as different indices of human affiliative behavior.

17.
It has been a matter of debate whether the specifically human capacity to process syntactic information draws on attentional resources or is automatic. To address this issue, we recorded neurophysiological indicators of syntactic processing to spoken sentences while subjects were distracted to different degrees from language processing. Subjects were either passively distracted, by watching a silent video film, or their attention was actively streamed away from the language input by performing a demanding acoustic signal detection task. An early index of syntactic violations, the syntactic Mismatch Negativity (sMMN), distinguished between grammatical and ungrammatical speech even under the strongest distraction. The magnitude of the early sMMN (at <150 ms) was unaffected by the attention load of the distraction task. The independence of the early syntactic brain response from attentional distraction provides neurophysiological evidence for the automaticity of syntax and for its autonomy from other attention-demanding processes, including acoustic stimulus discrimination. The first attentional modulation of syntactic brain responses became manifest at a later stage, at approximately 200 ms, thus demonstrating the narrowness of the early time window of syntactic autonomy. We discuss these results in the light of modular and interactive theories of cognitive processing and draw inferences on the automaticity of both the cognitive MMN response and certain grammar processes in general.

18.
Language selection in bilingual speech: Evidence for inhibitory processes
Kroll, J. F., Bobb, S. C., Misra, M., & Guo, T. (2008). Acta Psychologica, 128(3), 416–430.
Although bilinguals rarely make random errors of language when they speak, research on spoken production provides compelling evidence to suggest that both languages are active when only one language is spoken (e.g., [Poulisse, N. (1999). Slips of the tongue: Speech errors in first and second language production. Amsterdam/Philadelphia: John Benjamins]). Moreover, the parallel activation of the two languages appears to characterize the planning of speech for highly proficient bilinguals as well as second language learners. In this paper, we first review the evidence for cross-language activity during single word production and then consider the two major alternative models of how the intended language is eventually selected. According to language-specific selection models, both languages may be active but bilinguals develop the ability to selectively attend to candidates in the intended language. The alternative model, that candidates from both languages compete for selection, requires that cross-language activity be modulated to allow selection to occur. On the latter view, the selection mechanism may require that candidates in the nontarget language be inhibited. We consider the evidence for such an inhibitory mechanism in a series of recent behavioral and neuroimaging studies.

19.
The development of smart-device interfaces has led to voice-interactive systems. An additional step in this direction is to enable the devices to recognize the speaker. This is a challenging task, however, because the interaction involves short-duration speech utterances. Traditional Gaussian mixture model (GMM) based systems have achieved satisfactory results for speaker recognition only when the speech samples are sufficiently long. The current state-of-the-art method uses an i-vector approach built on a GMM-based universal background model (GMM-UBM): it derives an i-vector speaker model from a speaker's enrollment data and uses it to recognize any new test speech. In this work, we propose a multi-model i-vector system for short speech lengths. We use the open THUYG-20 database for the analysis and development of a short-speech speaker verification and identification system. By using an optimal set of mel-frequency cepstral coefficient (MFCC) features, we achieve an equal error rate (EER) of 3.21%, compared with the previous benchmark EER of 4.01% on the THUYG-20 database. Experiments are conducted for speech lengths as short as 0.25 s and the results are presented. The proposed method improves on the current i-vector approach for shorter speech lengths, with gains of around 28% even for 0.25 s speech samples. We also prepared and tested the proposed approach on our own database of 2500 English speech recordings consisting of actual short speech commands used in a typical voice-interactive system.
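As a hedged illustration of the general pipeline this abstract builds on (an MFCC front end with a GMM-UBM back end), and not the authors' multi-model i-vector system or their THUYG-20 setup, a minimal enrollment-and-scoring sketch; the file names, component counts, and threshold are assumptions for illustration only:

```python
# Minimal sketch of a GMM-UBM style speaker-verification pipeline over MFCC
# features. Illustrative only: the paper's actual system is a multi-model
# i-vector approach; paths, MFCC settings, and the threshold are hypothetical.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path: str, sr: int = 16000, n_mfcc: int = 20) -> np.ndarray:
    """Load an utterance and return frame-level MFCCs (frames x coefficients)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T

# 1) Train a universal background model (UBM) on speech from many speakers.
ubm_frames = np.vstack([mfcc_features(p) for p in ["bg_spk1.wav", "bg_spk2.wav"]])
ubm = GaussianMixture(n_components=64, covariance_type="diag").fit(ubm_frames)

# 2) Enroll the target speaker. (A full system would MAP-adapt the UBM to the
#    enrollment data rather than train a separate GMM from scratch.)
enroll_frames = mfcc_features("target_enroll.wav")
speaker_gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(enroll_frames)

# 3) Score a short test utterance: average log-likelihood ratio between the
#    speaker model and the UBM; accept if it exceeds a tuned threshold.
test_frames = mfcc_features("test_command.wav")
llr = speaker_gmm.score(test_frames) - ubm.score(test_frames)
print("accept" if llr > 0.5 else "reject")  # threshold is a placeholder
```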

20.
We report an experiment testing the possible influence of verb tense and explicit negatives on reasoning with indicative conditionals. We tested the effects of systematically negating the constituents of four fundamental inferences based on conditionals in three different tenses (present tense, past tense, future tense): Modus Ponens (i.e., inferences of the form: if p then q; p; therefore q), Modus Tollens (if p then q; not-q; therefore not-p), Affirmation of the Consequent (if p then q; q; therefore p), and Denial of the Antecedent (if p then q; not-p; therefore not-q). The latter two inferences are invalid for true conditionals, but are valid for biconditionals (if, and only if, p then q). The participants drew their own conclusions from premises about letters and numbers on cards. We discuss the results in relation to an affirmation premise bias, a negative conclusion bias, and a double negation effect. We outline the importance of our findings for theories about conditional and counterfactual thinking.
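As a hedged illustration of why Affirmation of the Consequent and Denial of the Antecedent are invalid for the conditional but valid for the biconditional, a short truth-table check (a sketch of classical propositional semantics, not the authors' card materials or analysis):

```python
# Truth-table check of the four inference forms over the material conditional
# ("if p then q") and the biconditional ("if, and only if, p then q").
# Illustrative sketch of classical propositional logic, not the study's stimuli.
from itertools import product

def conditional(p: bool, q: bool) -> bool:
    return (not p) or q

def biconditional(p: bool, q: bool) -> bool:
    return p == q

def valid(premise2, conclusion, connective) -> bool:
    """An inference is valid if the conclusion holds in every row where both
    the conditional premise and the second (categorical) premise are true."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if connective(p, q) and premise2(p, q))

forms = {
    "Modus Ponens (p; therefore q)":                    (lambda p, q: p,     lambda p, q: q),
    "Modus Tollens (not-q; therefore not-p)":           (lambda p, q: not q, lambda p, q: not p),
    "Affirmation of the Consequent (q; therefore p)":   (lambda p, q: q,     lambda p, q: p),
    "Denial of the Antecedent (not-p; therefore not-q)": (lambda p, q: not p, lambda p, q: not q),
}

for name, (premise2, conclusion) in forms.items():
    cond = valid(premise2, conclusion, conditional)
    bicond = valid(premise2, conclusion, biconditional)
    print(f"{name}: conditional valid={cond}, biconditional valid={bicond}")
```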

