Similar Documents
1.
How can we explain children's understanding of the unseen world? Young children are generally able to distinguish between real unobservable entities and fantastical ones, but they attribute different characteristics to and show less confidence in their decisions about fantastical entities generally endorsed by adults, such as Santa Claus. One explanation for these conceptual differences is that the testimony children hear from others about unobservable entities varies in meaningful ways. Although this theory has some experimental support, its viability in actual conversation has yet to be investigated. Study 1 sought to examine this question in parent–child conversation and showed that parents provide similar types of content information when talking to children about both real entities and entities that they generally endorse. However, parents use different pragmatic cues when they communicate about endorsed entities than they do when talking about real ones. Study 2 showed that older siblings used discourse strategies similar to those used by parents when talking to young children about unobservable entities. These studies indicate that the types of cues children use to form their conceptions of unobservable entities are present in naturalistic conversations with others, supporting a role for testimony in children's early beliefs.

2.
During a conversation, we hear the sound of the talker as well as the intended message. Traditional models of speech perception posit that acoustic details of a talker's voice are not encoded with the message whereas more recent models propose that talker identity is automatically encoded. When shadowing speech, listeners often fail to detect a change in talker identity. The present study was designed to investigate whether talker changes would be detected when listeners are actively engaged in a normal conversation, and visual information about the speaker is absent. Participants were called on the phone, and during the conversation the experimenter was surreptitiously replaced by another talker. Participants rarely noticed the change. However, when explicitly monitoring for a change, detection increased. Voice memory tests suggested that participants remembered only coarse information about both voices, rather than fine details. This suggests that although listeners are capable of change detection, voice information is not continuously monitored at a fine-grain level of acoustic representation during natural conversation and is not automatically encoded. Conversational expectations may shape the way we direct attention to voice characteristics and perceive differences in voice.

3.
Communication is aided greatly when speakers and listeners take advantage of mutually shared knowledge (i.e., common ground). How such information is represented in memory is not well known. Using a neuropsychological-psycholinguistic approach to real-time language understanding, we investigated the ability to form and use common ground during conversation in memory-impaired participants with hippocampal amnesia. Analyses of amnesics' eye fixations as they interpreted their partner's utterances about a set of objects demonstrated successful use of common ground when the amnesics had immediate access to common-ground information, but dramatic failures when they did not. These findings indicate a clear role for declarative memory in maintenance of common-ground representations. Even when amnesics were successful, however, the eye movement record revealed subtle deficits in resolving potential ambiguity among competing intended referents; this finding suggests that declarative memory may be critical to more basic aspects of the on-line resolution of linguistic ambiguity.

4.
Previous research has largely focused on the influence of experienced affect on decision making; however, other sources of affective information may also shape decisions. In two studies, we examine the interacting influences of affective information, state affect, and personality on temporal discounting rates (i.e., the tendency to choose small rewards today rather than larger rewards in the future). In Study 1, participants were primed with either positive or negative affect adjectives before making reward choices. In Study 2, participants underwent either a positive or negative affect induction before making reward choices. Results in both studies indicate that neuroticism interacts with state unpleasant affect and condition (i.e., positive or negative primes or induction) to predict discounting rates. Moreover, the nature of the interactions depends on the regulatory cues of the affective information available. These results suggest that irrelevant (i.e., primes) and stable (i.e., personality traits) sources of affective information also shape judgments and decision making. Thus, current affect levels are not the only source of affective information that guides individuals when making decisions.
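For readers unfamiliar with how temporal discounting is usually quantified, the sketch below illustrates the common hyperbolic model, in which the present value of a delayed reward is V = A / (1 + kD) and a larger discount rate k means a stronger preference for smaller, immediate rewards. This is a generic Python illustration of the construct, not the estimation procedure used in these studies; the amounts, delays, and k values are hypothetical.

def hyperbolic_value(amount, delay_days, k):
    # Subjective present value of a delayed reward under hyperbolic discounting.
    return amount / (1.0 + k * delay_days)

def choice(immediate_amount, delayed_amount, delay_days, k):
    # A chooser with discount rate k takes whichever option has the higher subjective value.
    delayed_value = hyperbolic_value(delayed_amount, delay_days, k)
    return "immediate" if immediate_amount > delayed_value else "delayed"

# Hypothetical choice: $40 now versus $100 in 60 days, at increasingly steep discount rates.
for k in (0.005, 0.05, 0.5):
    print(f"k = {k}: chooses the {choice(40, 100, 60, k)} reward")

With a shallow rate (k = 0.005) the delayed reward keeps most of its value and is chosen; at steeper rates its subjective value collapses and the immediate reward wins, which is exactly the preference pattern the discounting measure summarizes.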

5.
Based on feminist social constructionist theory, it was proposed that the sexual language women and men used would reflect male sexual power over women through degradation and objectification. In the first study, 79 women and 88 men (36 of whom were fraternity members) reported anonymously on the sexual language they used. The strongest effects found were that men (particularly those in a fraternity) were likely to use sexually degrading terms to refer to female genitals. Men were more likely than women to use aggressive terms to refer to copulation. In a second study, 56 female and 47 male college students listened to a conversation between either two women or two men in which they were talking about having sex with someone they just met the night before. The speaker either used more degrading or less degrading language. In general, people judged anyone who used degrading language negatively. The person who was the object of the more degrading conversation was judged as less intelligent and less moral than the object of the less degrading conversation. The results suggest that gender is associated with the sexual language people use, and that the degradation and objectification present in the sexual language men sometimes use might have harmful consequences for the person being objectified.

6.
We examined whether facial expressions of performers influence the emotional connotations of sung materials, and whether attention is implicated in audio-visual integration of affective cues. In Experiment 1, participants judged the emotional valence of audio-visual presentations of sung intervals. Performances were edited such that auditory and visual information conveyed congruent or incongruent affective connotations. In the single-task condition, participants judged the emotional connotation of sung intervals. In the dual-task condition, participants judged the emotional connotation of intervals while performing a secondary task. Judgements were influenced by melodic cues and facial expressions and the effects were undiminished by the secondary task. Experiment 2 involved identical conditions but participants were instructed to base judgements on auditory information alone. Again, facial expressions influenced judgements and the effect was undiminished by the secondary task. The results suggest that visual aspects of music performance are automatically and preattentively registered and integrated with auditory cues.

7.
The way we refer to things in the world is shaped by the immediate physical context as well as the discourse history. But what part of the discourse history is relevant to language use in the present? In four experiments, we combine the study of task-based conversation with measures of recognition memory to examine the role of physical contextual cues that shape what speakers perceive to be a part of the relevant discourse history. Our studies leverage the differentiation effect, a phenomenon in which speakers are more likely to use a modified expression to refer to an object (e.g., dotted sock) if they had previously described a similar object (e.g., striped sock) than when they had not described a similar object. Two physical cues—the background that framed the to-be-described pictures and the position of the pictures in the display—were manipulated to alter perceptions about the relevant discourse context. We measured the rate with which speakers modify referring expressions to differentiate current from past referents. Recognition memory measures following the conversation probed what was and was not remembered about past discourse referents and contexts. Analysis of modification rates indicated that these contextual grouping cues shaped perceptions about the relevant discourse context. The contextual cues did not affect memory for the referents, but the memory for past referents was better for speakers than for listeners. Our findings show that perceptions about the relevant discourse history are a key determinant of how language is used in the moment but also that conversational partners form asymmetric representations of the discourse history.

8.
How is conceptual knowledge transmitted during conversation? When a speaker refers to an object, the name that the speaker chooses conveys information about category identity. In addition, I propose that a speaker’s confidence in a classification can convey information about category structure. Because atypical instances of a category are more difficult to classify than typical instances, when speakers refer to these instances their lack of confidence will manifest itself “paralinguistically”—that is, in the form of hesitations, filled pauses, or rising prosody. These features can help listeners learn by enabling them to differentiate good from bad examples of a category. To evaluate this hypothesis, participants in a category learning experiment learned a set of novel colors from a speaker. When the speaker’s paralinguistically expressed confidence was consistent with the underlying category structure, learners acquired the categories more rapidly and showed better category differentiation from the earliest moments of learning. These findings have important implications for theories of conversational coordination and language learning.

9.
During conversation, it is necessary to keep track of what others can and cannot understand. Previous research has focused largely on understanding the time course along which knowledge about interlocutors influences language comprehension/production rather than the cognitive process by which interlocutors take each other’s perspective. In addition, most work has looked at the effects of knowledge about a speaker on a listener's comprehension, and not on the possible effects of other listeners on a participant's comprehension process. In the current study, we introduce a novel joint comprehension paradigm that addresses the cognitive processes underlying perspective taking during language comprehension. Specifically, we show that participants who understand a language stimulus, but are simultaneously aware that someone sitting next to them does not understand the same stimulus, show an electrophysiological marker of semantic integration difficulty (i.e., an N400-effect). Crucially, in a second group of participants, we demonstrate that presenting exactly the same sentences to the participant alone (i.e., without a co-listener) results in no N400-effect. Our results suggest that (1) information about co-listeners as well as the speaker affects language comprehension, and (2) the cognitive process by which we understand what others comprehend mirrors our own language comprehension processes.
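For background on the dependent measure: an N400 effect is conventionally quantified as the difference in mean ERP amplitude between conditions in roughly the 300–500 ms window after word onset, typically over centro-parietal electrodes, with more negative values indicating greater semantic integration difficulty. The Python sketch below shows that arithmetic on simulated data; the sampling rate, time window, and waveforms are illustrative assumptions, not this study's recording or analysis parameters.

import numpy as np

fs = 500                           # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)   # epoch from -200 ms to 800 ms around word onset

def mean_amplitude(erp, window=(0.3, 0.5)):
    # Mean amplitude (in microvolts) within the conventional N400 time window.
    mask = (t >= window[0]) & (t < window[1])
    return erp[mask].mean()

# Simulated trial-averaged ERPs for one centro-parietal channel (e.g., Pz).
rng = np.random.default_rng(0)
erp_control = rng.normal(0, 0.5, t.size)                                   # no integration difficulty
erp_difficult = erp_control - 3.0 * np.exp(-((t - 0.4) ** 2) / 0.01)       # extra negativity near 400 ms

n400_effect = mean_amplitude(erp_difficult) - mean_amplitude(erp_control)
print(f"N400 effect (difficult minus control): {n400_effect:.2f} microvolts")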

10.
Affect misattribution occurs when affective cues color subsequent unrelated evaluations. Research suggests that affect misattribution decreases when one is aware that affective cues are unrelated to the evaluation at hand. We propose that affect misattribution may even occur when one is aware that affective cues are irrelevant, as long as the source of these cues seems ambiguous. When source ambiguity exists, affective cues may freely influence upcoming unrelated evaluations. We examined this using an adapted affect misattribution procedure where pleasant and unpleasant responses served as affective cues that could influence later evaluations of unrelated targets. These affective cues were either perceived as reflecting a single source (i.e., a subliminal affective picture in Experiment 1; one's internal affective state in Experiment 2), or as reflecting two sources (i.e., both) suggesting source ambiguity. Results show that misattribution of affect decreased when participants perceived affective cues as representing one source rather than two.

11.
陈璟, 孙昕怡, 李红, 李秀丽. 《心理学报》 (Acta Psychologica Sinica), 2009, 41(10): 958-966
A total of 148 four-year-old children took part in an experiment examining how the developmental level of young children's desire perspective-taking affects their affective decision making. The results showed that: (1) children's level of desire perspective-taking has a significant effect on their affective decision making; when information about another person's desires is plentiful in the situation, children take the other's desire into account and use it as a cue when deciding on that person's behalf, although how fully they exploit the cue is constrained by their level of desire perspective-taking; (2) when no cue about the other person's desires is provided, four-year-olds' affective decisions for another person do not differ significantly from those they make for themselves; and (3) four-year-olds can take another person's single desire into account, but their ability to take conflicting desires into account is still immature.

12.
The purpose of this article is to examine the communication behaviors of online leaders, or those who influence other members of online communities in triggering message replies, sparking conversation, and diffusing language. It relies on 632,622 messages from 33,450 participants across 16 discussion groups from Google Groups that took place over a 2‐year period. It utilizes automated text analysis, social network analysis, and hierarchical linear modeling to uncover the language and social behavior of online leaders. The findings show that online leaders influence others through high communication activity, credibility, network centrality, and the use of affective, assertive, and linguistic diversity in their online messages.
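To make the social network analysis component concrete, the sketch below builds a directed reply graph from (replier, replied-to) pairs and computes in-degree centrality, one standard way to operationalize network centrality in discussion groups. The message records and member names are hypothetical, and this shows only one of the several methods the article combines; the text-analysis and hierarchical-linear-modeling steps are not represented.

import networkx as nx

# Hypothetical reply records: (author_of_reply, author_being_replied_to).
replies = [
    ("ann", "bob"), ("cal", "bob"), ("dee", "bob"),
    ("bob", "ann"), ("cal", "ann"), ("dee", "cal"),
]

G = nx.DiGraph()
for replier, original_poster in replies:
    G.add_edge(replier, original_poster)   # edge points at the member who drew the reply

# In-degree centrality: how often a member's messages trigger replies,
# normalized by the number of other members in the group.
centrality = nx.in_degree_centrality(G)
for member, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{member}: {score:.2f}")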

13.
Successful social interactions rely on the ability to make accurate judgments based on social cues as well as the ability to control the influence of internal or external affective information on those judgments. Prior research suggests that individuals with schizophrenia misinterpret social stimuli and this misinterpretation contributes to impaired social functioning. We tested the hypothesis that for people with schizophrenia, social judgments are abnormally influenced by affective information. Twenty-three patients with schizophrenia and 35 healthy control participants rated the trustworthiness of faces following the presentation of neutral, negative (threat-related), or positive affective primes. Results showed that all participants rated faces following negative affective primes as less trustworthy than faces following neutral or positive primes. Importantly, this effect was significantly more pronounced for participants with schizophrenia, suggesting that schizophrenia may be characterized by an exaggerated influence of negative affective information on social judgment. Furthermore, the extent to which the negative affective prime influenced trustworthiness judgments was significantly associated with patients' severity of positive symptoms, particularly feelings of persecution. These findings suggest that for people with schizophrenia, negative affective information contributes to an interpretive bias, consistent with paranoid ideation, when judging the trustworthiness of others. This bias may contribute to social impairments in schizophrenia.

14.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.
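As background on the degradation technique, noise vocoding splits speech into a small number of frequency bands, extracts each band's amplitude envelope, and uses those envelopes to modulate band-limited noise, preserving coarse temporal cues while discarding fine spectral detail. The sketch below is a minimal 6-band vocoder in that spirit; the band edges, filter orders, and envelope smoothing cutoff are assumptions for illustration, not the stimulus-preparation settings used in this study.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_bands=6, f_lo=100.0, f_hi=7000.0):
    # Logarithmically spaced band edges between f_lo and f_hi.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    # Low-pass filter used to smooth each band's amplitude envelope (30 Hz cutoff).
    env_sos = butter(2, 30.0, btype="lowpass", fs=fs, output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)            # band-limited speech
        envelope = sosfiltfilt(env_sos, np.abs(band))   # smoothed amplitude envelope
        noise = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        out += envelope * noise                         # envelope-modulated noise carrier
    return out / (np.max(np.abs(out)) + 1e-12)          # simple peak normalization

# Example with a synthetic amplitude-modulated tone; replace with a loaded speech waveform.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech_like = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speech_like, fs)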

15.
We investigated whether the previously established effect of mood on episodic memory generalizes to semantic memory and whether mood affects metacognitive judgments associated with the retrieval of semantic information. Sixty-eight participants were induced into a happy or sad mood by viewing and describing IAPS images. Following mood induction, participants saw a total of 200 general knowledge trivia items (50 open-ended and 50 multiple-choice after each of two mood inductions) and were asked to provide a metacognitive judgment about their knowledge for each item before providing a response. A sample trivia item is: Author – – To Kill a Mockingbird. Results indicate that mood affects the retrieval of semantic information, but only when the participant believes they possess the requested semantic information; furthermore, this effect depends upon the presence of retrieval cues. In addition, we found that mood does not affect the likelihood of different metacognitive judgments associated with the retrieval of semantic information, but that, in some cases, having retrieval cues increases the accuracy of these metacognitive judgments. Our results suggest that semantic retrieval processes are minimally susceptible to the influence of affective state, but this does not preclude the possibility that affective state may influence the encoding of semantic information.

16.
Speech contains both explicit social information in semantic content and implicit cues to social behaviour and mate quality in voice pitch. Voice pitch has been demonstrated to have pervasive effects on social perceptions, but few studies have examined these perceptions in the context of meaningful speech. Here, we examined whether male voice pitch interacted with socially relevant cues in speech to influence listeners’ perceptions of trustworthiness and attractiveness. We artificially manipulated men's voices to be higher and lower in pitch when speaking words that were either prosocial or antisocial in nature. In Study 1, we found that listeners perceived lower-pitched voices as more trustworthy and attractive in the context of prosocial words than in the context of antisocial words. In Study 2, we found evidence that suggests this effect was driven by stronger preferences for higher-pitched voices in the context of antisocial cues, as voice pitch preferences were not significantly different in the context of prosocial cues. These findings suggest that higher male voice pitch may ameliorate the negative effects of antisocial speech content and that listeners may be particularly avoidant of those who express multiple cues to antisociality across modalities.
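As an illustration of the kind of manipulation involved, the sketch below raises or lowers the pitch of a recording by one semitone while keeping its duration, using librosa's pitch-shifting utility. The file names and shift sizes are hypothetical; studies of this kind more often resynthesize voices with PSOLA (e.g., in Praat) and express shifts in hertz or ERBs, so treat this only as a rough stand-in for the manipulation described, not the authors' procedure.

import librosa
import soundfile as sf

def shift_voice(in_path, out_path, semitones):
    # Load at the file's native sampling rate and shift pitch without changing duration.
    y, sr = librosa.load(in_path, sr=None)
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)
    sf.write(out_path, shifted, sr)

# Hypothetical stimuli: raised and lowered versions of the same recorded sentence.
shift_voice("speaker_prosocial.wav", "speaker_prosocial_raised.wav", +1.0)
shift_voice("speaker_prosocial.wav", "speaker_prosocial_lowered.wav", -1.0)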

17.
The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease, with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical ‘pseudo‐utterances’ were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo‐utterances. The results showed that PD patients were significantly less able than healthy control participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).

18.
A crucial step for acquiring a native language vocabulary is the ability to segment words from fluent speech. English-learning infants first display some ability to segment words at about 7.5 months of age. However, their initial attempts at segmenting words only approximate those of fluent speakers of the language. In particular, 7.5-month-old infants are able to segment words that conform to the predominant stress pattern of English words. The ability to segment words with other stress patterns appears to require the use of other sources of information about word boundaries. By 10.5 months, English learners display sensitivity to additional cues to word boundaries such as statistical regularities, allophonic cues and phonotactic patterns. Infants’ word segmentation abilities undergo further development during their second year when they begin to link sound patterns with particular meanings. By 24 months, the speed and accuracy with which infants recognize words in fluent speech is similar to that of native adult listeners. This review describes how infants use multiple sources of information to locate word boundaries in fluent speech, thereby laying the foundations for language understanding.
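To make the "statistical regularities" cue concrete: the forward transitional probability P(B | A) = count(A followed by B) / count(A) tends to be high between syllables within a word and to dip at word boundaries, so a learner tracking these statistics can posit boundaries where the probability drops. The Python sketch below computes transitional probabilities over a toy syllable stream in the spirit of Saffran-style segmentation studies; the syllable inventory and the 0.5 threshold are hypothetical, and this is not a model of the infant findings reviewed here.

from collections import Counter

# Toy familiarization stream: the made-up "words" bidaku, padoti, golabu in varied order.
stream = "bidaku padoti golabu bidaku golabu padoti bidaku padoti".split()
syllables = [word[i:i + 2] for word in stream for i in range(0, len(word), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def transitional_probability(a, b):
    # P(b | a) = count(a followed by b) / count(a)
    return pair_counts[(a, b)] / first_counts[a] if first_counts[a] else 0.0

# Posit a word boundary wherever the forward transitional probability dips below 0.5.
for a, b in zip(syllables, syllables[1:]):
    tp = transitional_probability(a, b)
    marker = "  <- possible boundary" if tp < 0.5 else ""
    print(f"{a} -> {b}: {tp:.2f}{marker}")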

19.
Two experiments investigated the effects of mood on the use of global trait information in impression formation tasks. Participants in both experiments formed an impression of a target based on traits and a series of behaviors that were both consistent and inconsistent with the traits. In Experiment 1, participants in happy moods, relative to those in unhappy moods, made impression judgments that reflected the evaluative implications of the trait information to a greater extent than the behaviors, regardless of the order in which they received the information. In Experiment 2, both happy and sad participants engaged in systematic processing, as reflected by the recall data, but only happy participants’ recall of target information was significantly biased by the global trait information they received. These findings are consistent with the affect-as-information model in which affective cues influence the extent to which individuals rely on general knowledge and, importantly, are inconsistent with models that posit that happiness results in reduced motivation or ability to process information carefully.
