101.
Rockwell P. Journal of Psycholinguistic Research, 2007, 36(5): 361-369
This study investigated vocal cues that differentiate sarcastic utterances from non-sarcastic utterances. Utterances were drawn from videotapes of participant interviews and arranged on a master tape for analysis. Utterances that were identified as sarcastic by speakers and recognized as sarcastic by listeners were randomly interspersed with utterances identified and recognized as non-sarcastic by the same participants. Both sarcastic and non-sarcastic utterances were analyzed by two methods: acoustic analysis and perceptual coding. The acoustic analysis proved slightly more successful than the perceptual coding in discriminating between sarcastic and non-sarcastic utterances. The acoustic analysis indicated that fundamental frequency, frequency range, length of utterance, and total amount of sound significantly discriminated sarcastic from non-sarcastic utterances. The perceptual coding method revealed that pitch range, length of utterance, and total amount of sound significantly discriminated sarcastic from non-sarcastic utterances. Moderate correlations were found between the acoustic and perceptual variables.
102.
Weis T, Estner B, van Leeuwen C. Quarterly Journal of Experimental Psychology, 2016, 69(7): 1366-1383
Concepts, including the mental number line, or addressing pitch as high and low, suggest that the spatial–numerical and spatial–pitch association of response codes (SNARC and SPARC) effects are domain-specific and thus independent. Alternatively, there may be dependencies between these effects, because they share common automatic or controlled decision mechanisms. In two experiments, participants were presented with spoken numbers in different pitches; their numerical value, pitch, and response compatibility were varied systematically. This allowed us to study SNARC and SPARC effects in a factorial design (see also Fischer, Riello, Giordano, & Rusconi, 2013). Participants judged the stimuli on numerical magnitude, pitch, or parity (odd–even). In all tasks, the SNARC and SPARC effects had superadditive interactions. These were interpreted as both effects sharing a common mechanism. The task variation probes the mechanism: In the magnitude judgement task, numerical magnitude was explicit, whereas pitch was implicit; in the pitch judgement task, it was vice versa. In the parity judgement task, both dimensions were implicit. Regardless of whether they were implicit or explicit, both SNARC and SPARC effects occurred in all tasks. We concluded that by not requiring focal attention the common mechanism operates automatically.
103.
Emotional prosody recognition is the process of extracting emotional information from variations in acoustic cues and thereby inferring others' emotional states. Deficits in emotional prosody recognition are a common feature of autism spectrum disorder and are influenced by the intensity of the emotional prosody, literal semantics, situational context, psychoacoustic abilities, and comorbid conditions. Current explanations for these deficits center on hypotheses of impaired mentalizing, diminished social motivation, and lack of experience. Research on the neural mechanisms of emotional prosody recognition in this population has focused mainly on comparisons with healthy individuals; related findings include a right-hemisphere advantage for emotional prosody recognition, increased activation in local brain regions, insufficient connectivity across brain networks, and atypical early attention patterns. Future work should improve the ecological validity of experimental paradigms, attend to individual differences among individuals with autism, integrate the relevant theoretical accounts, and develop effective assessment tools and intervention strategies.
104.
Journal of Cognitive Psychology, 2013, 25(7): 788-798
Contrasting results in visual and auditory working memory studies suggest that the mechanisms of association between location and identity of stimuli depend on the sensory modality of the input. In this auditory study, we tested whether the association of two features both encoded in the “what” stream differs from the association between a “what” and a “where” feature. In an old–new recognition task, blindfolded participants were presented with sequences of sounds varying in timbre, pitch, and location. They were required to judge whether the timbre, pitch, or location of a single-probe stimulus was identical to, or different from, the timbre, pitch, or location of one of the sounds in the previous sequence. Only variations in one of the three features were relevant for the task, whereas the other two features could vary, producing task-irrelevant changes. Results showed that task-irrelevant variations in the “what” features (either timbre or pitch) impaired recognition both of sound location and of the other, task-relevant “what” feature, whereas changes in sound location did not affect recognition of either of the “what” features. We conclude that the identity of sounds is incidentally processed even when not required by the task, whereas sound location is not maintained when task irrelevant.
105.
Research on emotion processing in the visual modality suggests a processing advantage for emotionally salient stimuli, even at early sensory stages; however, results concerning the auditory correlates are inconsistent. We present two experiments that employed a gating paradigm to investigate emotional prosody. In Experiment 1, participants heard successively building segments of Jabberwocky “sentences” spoken with happy, angry, or neutral intonation. After each segment, participants indicated the emotion conveyed and rated their confidence in their decision. Participants in Experiment 2 also heard Jabberwocky “sentences” in successive increments, with half discriminating happy from neutral prosody, and half discriminating angry from neutral prosody. Participants in both experiments identified neutral prosody more rapidly and accurately than happy or angry prosody. Confidence ratings were greater for neutral sentences, and error patterns also indicated a bias for recognising neutral prosody. Taken together, results suggest that enhanced processing of emotional content may be constrained by stimulus modality. 相似文献
106.
Accurately recognizing the emotional prosody conveyed in speech is essential to social interaction. Using functional near-infrared spectroscopy, this study examined cortical activity during the processing of angry, fearful, and happy prosody under explicit and implicit emotion-processing conditions. The results showed that angry, fearful, and happy prosody were specifically processed in the left frontal pole/orbitofrontal cortex, the right supramarginal gyrus, and the left inferior frontal gyrus, respectively, with the right supramarginal gyrus modulated by both emotion and task. In addition, the right middle temporal gyrus, inferior temporal gyrus, and temporal pole were activated significantly more strongly in the explicit emotion task than in the implicit task. These findings partially support the hierarchical model of emotional prosody, while calling into question its third level, namely the claim that fine-grained frontal processing of vocal emotional information requires an explicit emotion-processing task.
107.
Sekiguchi T. Journal of Psycholinguistic Research, 2006, 35(4): 369-384
Lexical prosody (e.g., stress and pitch accent) has been shown to constrain lexical activation of spoken words in various languages. The present study used a cross-modal priming task to examine whether the constraint of lexical prosody on lexical access of Japanese words is affected by word familiarity. The stimuli were pairs of prosodically different homophones (minimal accent pairs). When the targets were the more familiar members of minimal accent pairs, responses were facilitated by prior presentation of primes that were prosodically different homophones of the targets, suggesting that lexical prosody did not constrain lexical activation. In contrast, when the less familiar members of minimal accent pairs were used as targets, the prosodically different homophones did not facilitate responses to the targets. These results suggest that the constraint of lexical prosody is not absolute but is modulated by the relative familiarity of the words.
108.
Pitch perception is fundamental to melody in music and prosody in speech. Unlike many animals, the vast majority of human adults store melodic information primarily in terms of relative not absolute pitch, and readily recognize a melody whether rendered in a high or a low pitch range. We show that at 6 months infants are also primarily relative pitch processors. Infants familiarized with a melody for 7 days preferred, on the eighth day, to listen to a novel melody in comparison to the familiarized one, regardless of whether the melodies at test were presented at the same pitch as during familiarization or transposed up or down by a perfect fifth (7/12th of an octave) or a tritone (1/2 octave). On the other hand, infants showed no preference for a transposed over original-pitch version of the familiarized melody, indicating that either they did not remember the absolute pitch, or it was not as salient to them as the relative pitch. 相似文献
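The interval arithmetic in this abstract (a perfect fifth as 7/12 of an octave, a tritone as half an octave) can be checked with a short equal-temperament sketch; the melody frequencies below are illustrative, not from the study. Transposing by n semitones multiplies every frequency by 2**(n/12), which changes absolute pitch while leaving all within-melody frequency ratios, i.e. the relative pitch, unchanged:

```python
# Equal-temperament transposition: shifting by n semitones multiplies
# every frequency by 2**(n/12), so frequency ratios within a melody
# (relative pitch) are preserved while absolute pitch changes.

def transpose(freqs_hz, semitones):
    """Transpose a melody, given as frequencies in Hz, by n semitones."""
    factor = 2 ** (semitones / 12)
    return [f * factor for f in freqs_hz]

melody = [440.0, 493.9, 523.3]      # illustrative A4-B4-C5 melody
fifth_up = transpose(melody, 7)     # perfect fifth = 7/12 of an octave
tritone_up = transpose(melody, 6)   # tritone = 6/12 = half an octave

print(round(2 ** (7 / 12), 3))      # fifth factor, close to the just ratio 3/2
print(round(2 ** (6 / 12), 3))      # tritone factor, the square root of 2

# Within-melody ratios are identical before and after transposition:
print(abs(melody[1] / melody[0] - fifth_up[1] / fifth_up[0]) < 1e-9)
```

The last check is the point of the study's design: a transposed melody shares no absolute frequencies with the original, so recognizing it requires storing the ratio pattern rather than the pitches themselves.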
109.
This study examines the influence of wh-gaps on the prosodic contour of spoken utterances. A previous study (Nagel, Shapiro, & Nawy, 1994) claimed that the phonological representation of a sentence containing a filler-gap dependency explicitly encodes the location of the syntactic gap. In support of this hypothesis, Nagel et al. presented evidence that the word immediately preceding a gap is lengthened and that there is a reliable increase in pitch excursion across the gap location. Our study challenges Nagel et al.'s claim. We argue that their materials confounded the presence/absence of a gap with other factors that are known to affect intonational phrasing independently. We show that, when these factors are separated, the evidence that syntactic gaps are explicitly encoded in the phonological representation of a sentence disappears. 相似文献
110.
The present study examined the effect of visual feedback on the ability to recognise and consolidate pitch information. We trained two groups of nonmusicians to play a piano piece by ear: one group received uninterrupted audiovisual feedback, while the other could hear, but not see, their hand on the keyboard. Results indicate that subjects deprived of visual information showed significantly poorer ability to recognise pitches from the musical piece they had learned. These results are notable because pitch recognition would not intuitively seem to rely on visual feedback. In addition, we show that subjects with previous experience in computer touch-typing made fewer errors during training when trained without visual feedback, but did not show improved pitch recognition ability post-training. Our results demonstrate how sensory redundancy increases the robustness of learning, and further encourage the use of audiovisual training procedures to facilitate the learning of new skills.