Similar Literature
20 similar records retrieved (search time: 8 ms)
1.
2.
The role of extrastriate cortical areas in selective attention was studied in 12 rhesus monkeys. Animals learned a series of color-form pattern discrimination problems, with either color or form cues relevant. After each problem was mastered, correct behavior required a shift in attention, i.e., that responses be made to the previously irrelevant dimension. On some problems shifting attention required that the animal maintain the same fixation; on other problems the color and form cues were separated in space, and the attention shift presumably required a shift in gaze. Matched groups of animals with inferotemporal, prestriate, or superior temporal sulcus lesions, and normal controls, differed significantly in their ability to shift attention. Analyses of inferred stages in attention shift showed that different processes were disturbed in the three lesion groups. Results are discussed in terms of cortical substrates for "looking" and "seeing".

3.
When viewing a portrait, we are often captured by its expressivity, even if the emotion depicted is not immediately identifiable. While the neural mechanisms underlying emotion processing of real faces have been largely clarified, we still know little about the neural basis of the evaluation of (emotional) expressivity in portraits. In this study, we aimed at assessing, by means of transcranial magnetic stimulation (TMS), whether the right superior temporal sulcus (STS) and the right somatosensory cortex (SC), which are important in discriminating facial emotion expressions, are also causally involved in the evaluation of expressivity of portraits. We found that interfering via TMS with activity in (the face region of) right STS significantly reduced the extent to which portraits (but not other paintings depicting human figures with faces only in the background) were perceived as expressive, without, though, affecting their liking. In turn, interfering with activity of the right SC had no impact on evaluating either expressivity or liking of either paintings’ category. Our findings suggest that the evaluation of emotional cues in artworks recruits (at least partially) the same neural mechanisms involved in processing genuine biological others. Moreover, they shed light on the neural basis of liking decisions in art by art-naïve people, supporting the view that aesthetic appreciation relies on a multitude of factors beyond emotional evaluation.

4.
Three experiments are reported that investigate the hypothesis that head orientation and gaze direction interact in the processing of another individual's direction of social attention. A Stroop-type interference paradigm was adopted, in which gaze and head cues were placed into conflict. In separate blocks of trials, participants were asked to make speeded keypress responses contingent on either the direction of gaze, or the orientation of the head displayed in a digitized photograph of a male face. In Experiments 1 and 2, head and gaze cues showed symmetrical interference effects. Compared with congruent arrangements, incongruent head cues slowed responses to gaze cues, and incongruent gaze cues slowed responses to head cues, suggesting that head and gaze are mutually influential in the analysis of social attention direction. This mutuality was also evident in a cross-modal version of the task (Experiment 3) where participants responded to spoken directional words whilst ignoring the head/gaze images. It is argued that these interference effects arise from the independent influences of gaze and head orientation on decisions concerning social attention direction.
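As a rough illustration of how such a congruency (interference) effect is typically quantified, the sketch below computes mean response times for congruent and incongruent trials separately for the gaze and head tasks; the data frame, column names, and values are invented for illustration and are not taken from the original experiments.

```python
# Hypothetical sketch: quantifying a Stroop-type interference effect from
# trial-level reaction times. All data and column names are assumptions.
import pandas as pd

trials = pd.DataFrame({
    "task":       ["gaze", "gaze", "head", "head"] * 2,
    "congruency": ["congruent", "incongruent"] * 4,
    "rt_ms":      [412, 455, 398, 440, 420, 462, 405, 447],
})

# Mean reaction time per task and congruency condition.
mean_rt = trials.groupby(["task", "congruency"])["rt_ms"].mean().unstack()

# Interference = incongruent minus congruent RT, computed separately for
# gaze judgements and head-orientation judgements.
interference = mean_rt["incongruent"] - mean_rt["congruent"]
print(interference)  # symmetrical interference would show similar costs for both tasks
```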

5.
Perceived gaze contact in seen faces may convey important social signals. We examined whether gaze perception affects face processing during two tasks: Online gender judgement, and later incidental recognition memory. Individual faces were presented with eyes directed either straight towards the viewer or away, while these faces were seen in either frontal or three-quarters view. Participants were slower to make gender judgements for faces with direct versus averted eye gaze, but this effect was particularly pronounced for faces with opposite gender to the observer, and seen in three-quarters view. During subsequent surprise recognition-memory testing, recognition was better for faces previously seen with direct than averted gaze, again especially for the opposite gender to the observer. The effect of direct gaze was stronger in both tasks when the head was seen in three-quarters rather than in frontal view, consistent with the greater salience of perceived eye contact for deviated faces. However, in the memory test, face recognition was also relatively enhanced for faces of opposite gender in front views when their gaze was averted rather than direct. Together, these results indicate that perceived eye contact can interact with facial processing during gender judgements and recognition memory, even when gaze direction is task-irrelevant, and particularly for faces of opposite gender to the observer (an influence which controls for stimulus factors when considering observers of both genders). These findings appear consistent with recent neuroimaging evidence that social facial cues can modulate visual processing in cortical regions involved in face processing and memory, presumably via interconnections with brain systems specialized for gaze perception and social monitoring.

6.
The saccadic latency to visual targets is susceptible to the properties of the currently fixated objects. For example, the disappearance of a fixation stimulus prior to presentation of a peripheral target shortens saccadic latencies (the gap effect). In the present study, we investigated the influences of a social signal from a facial fixation stimulus (i.e., gaze direction) on subsequent saccadic responses in the gap paradigm. In Experiment 1, a cartoon face with a direct or averted gaze was used as a fixation stimulus. The pupils of the face were unchanged (overlap), disappeared (gap), or were translated vertically to make or break eye contact (gaze shift). Participants were required to make a saccade toward a target to the left or the right of the fixation stimulus as quickly as possible. The results showed that the gaze direction influenced saccadic latencies only in the gaze shift condition, but not in the gap or overlap condition; the direct-to-averted gaze shift (i.e., breaking eye contact) yielded shorter saccadic latencies than did the averted-to-direct gaze shift (i.e., making eye contact). Further experiments revealed that this effect was eye contact specific (Exp. 2) and that the appearance of an eye gaze immediately before the saccade initiation also influenced the saccadic latency, depending on the gaze direction (Exp. 3). These results suggest that the latency of target-elicited saccades can be modulated not only by physical changes of the fixation stimulus, as has been seen in the conventional gap effect, but also by a social signal from the attended fixation stimulus.

7.
Gaze direction plays a central role in face recognition. Previous research suggests that faces with direct gaze are better remembered than faces with averted gaze. We compared recognition of faces with direct versus averted gaze in male versus female participants. A total of 52 adults (23 females, 29 males) and 46 children (25 females, 21 males) completed a computerised task that assessed their recognition of faces with direct gaze and faces with averted gaze. Adult male participants showed superior recognition of faces with direct gaze compared to faces with averted gaze. There was no difference between recognition of direct and averted gaze faces for the adult female participants. Children did not demonstrate this sex difference; rather, both male and female youth participants showed better recognition of faces with direct gaze compared to averted gaze. A large body of previous research has revealed superior recognition of faces with direct, compared to averted gaze. However, relatively few studies have examined sex differences. Our findings suggest that gaze direction has differential effects on face recognition for adult males and females, but not for children. These findings have implications for previous explanations of better recognition for direct versus averted gaze.

8.
Research has largely neglected the effects of gaze direction cues on the perception of facial expressions of emotion. It was hypothesized that when gaze direction matches the underlying behavioral intent (approach-avoidance) communicated by an emotional expression, the perception of that emotion would be enhanced (i.e., shared signal hypothesis). Specifically, the authors expected that (a) direct gaze would enhance the perception of approach-oriented emotions (anger and joy) and (b) averted eye gaze would enhance the perception of avoidance-oriented emotions (fear and sadness). Three studies supported this hypothesis. Study 1 examined emotional trait attributions made to neutral faces. Study 2 examined ratings of ambiguous facial blends of anger and fear. Study 3 examined the influence of gaze on the perception of highly prototypical expressions.

9.
Associating crossmodal auditory and visual stimuli is an important component of perception, with the posterior superior temporal sulcus (pSTS) hypothesized to support this. However, recent evidence has argued that the pSTS serves to associate two stimuli irrespective of modality. To examine the contribution of pSTS to crossmodal recognition, participants (N = 13) learned 12 abstract, non-linguistic pairs of stimuli over 3 weeks. These paired associates comprised four types: auditory–visual (AV), auditory–auditory (AA), visual–auditory (VA), and visual–visual (VV). At week four, participants were scanned using magnetoencephalography (MEG) while performing a correct/incorrect judgment on pairs of items. Using an implementation of synthetic aperture magnetometry that computes real statistics across trials (SAMspm), we directly contrasted crossmodal (AV and VA) with unimodal (AA and VV) pairs from stimulus-onset to 2 s in theta (4–8 Hz), alpha (9–15 Hz), beta (16–30 Hz), and gamma (31–50 Hz) frequencies. We found pSTS showed greater desynchronization in the beta frequency for crossmodal compared with unimodal trials, suggesting greater activity during the crossmodal pairs, which was not influenced by congruency of the paired stimuli. Using a sliding window SAM analysis, we found the timing of this difference began in a window from 250 to 750 ms after stimulus-onset. Further, when we directly contrasted all sub-types of paired associates from stimulus-onset to 2 s, we found that pSTS seemed to respond to dynamic, auditory stimuli, rather than crossmodal stimuli per se. These findings support an early role for pSTS in the processing of dynamic, auditory stimuli, and do not support claims that pSTS is responsible for associating two stimuli irrespective of their modality.
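The beta-band desynchronization contrast described above can be illustrated, in simplified form, with a band-limited power computation. The sketch below is only a schematic stand-in: the study used a SAM beamformer (SAMspm) on source-level MEG data, whereas here the simulated trials, the sampling rate, and the sensor-level Hilbert-envelope approach are all assumptions made for illustration.

```python
# Illustrative sketch of a band-limited power contrast (beta, 16-30 Hz)
# between crossmodal and unimodal trials. All data are simulated; this is
# not the SAM beamformer analysis used in the study.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600                      # sampling rate in Hz (assumption)
t = np.arange(0, 2, 1 / fs)   # 0-2 s after stimulus onset
rng = np.random.default_rng(0)

def beta_power(trials):
    """Band-pass each trial at 16-30 Hz and return the trial-averaged power envelope."""
    b, a = butter(4, [16, 30], btype="band", fs=fs)
    filtered = filtfilt(b, a, trials, axis=-1)
    envelope = np.abs(hilbert(filtered, axis=-1)) ** 2
    return envelope.mean(axis=0)

crossmodal = rng.standard_normal((60, t.size))   # simulated AV + VA trials
unimodal = rng.standard_normal((60, t.size))     # simulated AA + VV trials

# Greater desynchronization means lower beta power for crossmodal trials.
contrast = beta_power(crossmodal) - beta_power(unimodal)

# Restrict to the 250-750 ms window highlighted by the sliding-window analysis.
window = (t >= 0.25) & (t <= 0.75)
print(contrast[window].mean())
```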

10.
This study examined children's ability to use mutual eye gaze as a cue to friendships in others. In Experiment 1, following a discussion about friendship, 4-, 5-, and 6-year-olds were shown animations in which three cartoon children looked at one another, and were told that one target character had a best friend. Although all age groups accurately detected the mutual gaze between the target and another character, only 5- and 6-year-olds used this cue to infer friendship. Experiment 2 replicated the effect with 5- and 6-year-olds when the target character was not explicitly identified. Finally, in Experiment 3, where the attribution of friendship could only be based on synchronized mutual gaze, 6-year-olds made this attribution, while 4- and 5-year-olds did not. Children occasionally referred to mutual eye gaze when asked to justify their responses in Experiments 2 and 3, but it was only by the age of 6 that reference to these cues correlated with the use of mutual gaze in judgements of affiliation. Although younger children detected mutual gaze, it was not until 6 years of age that children reliably detected and justified mutual gaze as a cue to friendship.

11.
The second year of life is a time when social communication skills typically develop, but this growth may be slower in toddlers with language delay. In the current study, we examined how brain functional connectivity is related to social communication abilities in a sample of 12- to 24-month-old toddlers, including those with typical development (TD) and those with language delays (LD). We used an a priori, seed-based approach to identify regions forming a functional network with the left posterior superior temporal cortex (LpSTC), a region associated with language and social communication in older children and adults. Social communication and language abilities were assessed using the Communication and Symbolic Behavior Scales (CSBS) and Mullen Scales of Early Learning. We found a significant association between concurrent CSBS scores and functional connectivity between the LpSTC and the right posterior superior temporal cortex (RpSTC), with greater connectivity between these regions associated with better social communication abilities. However, functional connectivity was not related to rate of change or language outcomes at 36 months of age. These data suggest that an early marker of low communication abilities may be decreased connectivity between the left and right pSTC. Future longitudinal studies should test whether this neurobiological feature is predictive of later social or communication impairments.
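A minimal sketch of the seed-based connectivity logic described above is given below, assuming simulated ROI time series and behavioral scores; it is not the authors' pipeline, only an illustration of correlating a seed (LpSTC) time series with a target (RpSTC) time series and relating the resulting connectivity values to CSBS scores across subjects.

```python
# Minimal sketch of seed-based functional connectivity plus a brain-behaviour
# correlation. ROI time series and CSBS scores are simulated, not real data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects, n_timepoints = 40, 150

csbs_scores = rng.normal(100, 15, n_subjects)        # simulated behaviour
connectivity = np.empty(n_subjects)

for s in range(n_subjects):
    lpstc = rng.standard_normal(n_timepoints)        # seed: left pSTC time series
    rpstc = 0.3 * lpstc + rng.standard_normal(n_timepoints)  # target: right pSTC
    # Seed-based connectivity = Pearson correlation between the two ROI time
    # series, Fisher z-transformed so values can be compared across subjects.
    r, _ = pearsonr(lpstc, rpstc)
    connectivity[s] = np.arctanh(r)

# Does LpSTC-RpSTC connectivity track social communication ability?
r_brain_behaviour, p = pearsonr(connectivity, csbs_scores)
print(f"r = {r_brain_behaviour:.2f}, p = {p:.3f}")
```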

12.
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

13.
A comparative developmental framework was used to determine whether mutual gaze is unique to humans and, if not, whether common mechanisms support the development of mutual gaze in chimpanzees and humans. Mother-infant chimpanzees engaged in approximately 17 instances of mutual gaze per hour. Mutual gaze occurred in positive, nonagonistic contexts. Mother-infant chimpanzees at a Japanese center exhibited significantly more mutual gaze than those at a center in the United States. Cradling and motor stimulation varied across groups. Time spent cradling infants was inversely related to mutual gaze. It is suggested that in primates, mutual engagement is supported via an interchangeability of tactile and visual modalities. The importance of mutual gaze is best understood within a perspective that embraces both cross-species and cross-cultural data.

14.
Models of both speech perception and speech production typically postulate a processing level that involves some form of phonological processing. There is disagreement, however, on the question of whether there are separate phonological systems for speech input versus speech output. We review a range of neuroscientific data that indicate that input and output phonological systems partially overlap. An important anatomical site of overlap appears to be the left posterior superior temporal gyrus. We then present the results of a new event-related functional magnetic resonance imaging (fMRI) experiment in which participants were asked to listen to and then (covertly) produce speech. In each participant, we found two regions in the left posterior superior temporal gyrus that responded both to the perception and production components of the task, suggesting that there is overlap in the neural systems that participate in phonological aspects of speech perception and speech production. The implications for neural models of verbal working memory are also discussed in connection with our findings.

15.
An aspect of gaze processing, which so far has been given little attention, is the influence that intentional gaze processing can have on object processing. Converging evidence from behavioural neuroscience and developmental psychology strongly suggests that objects falling under the gaze of others acquire properties that they would not display if not looked at. Specifically, observing another person gazing at an object enriches that object with motor, affective, and status properties that go beyond its chemical or physical structure. A conceptual analysis of available evidence leads to the conclusion that gaze has the power to transfer to the object the intentionality of the person looking at it.

16.
Adults use gaze and voice signals as cues to the mental and emotional states of others. We examined the influence of voice cues on children’s judgments of gaze. In Experiment 1, 6-year-olds, 8-year-olds, and adults viewed photographs of faces fixating the center of the camera lens and a series of positions to the left and right and judged whether gaze was direct or averted. On each trial, participants heard a participant-directed voice cue (e.g., “I see you”), an object-directed voice cue (e.g., “I see that”), or no voice. In 6-year-olds, the range of directions of gaze leading to the perception of eye contact (the cone of gaze) was narrower for trials with object-directed voice cues than for trials with participant-directed voice cues or no voice. This effect was absent in 8-year-olds and adults, both of whom had a narrower cone of gaze than 6-year-olds. In Experiment 2, we investigated whether voice cues would influence adults’ judgments of gaze when the task was made more difficult by limiting the duration of exposure to the face. Adults’ cone of gaze was wider than in Experiment 1, and the effect of voice cues was similar to that observed in 6-year-olds in Experiment 1. Together, the results indicate that object-directed voice cues can decrease the width of the cone of gaze, allowing more adult-like judgments of gaze in young children, and that voice cues may be especially effective when the cone of gaze is wider because of immaturity (Experiment 1) or limited exposure (Experiment 2).
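The "cone of gaze" measure can be made concrete with a small sketch: given the proportion of "direct" judgments at each gaze deviation, the cone width is the span of deviations judged as eye contact on at least half of the trials. The deviation values, response proportions, and the 50% criterion below are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch of estimating the width of the "cone of gaze": the range
# of gaze deviations (in degrees) that observers judge as direct eye contact.
import numpy as np

# Gaze deviations shown (negative = left, positive = right, 0 = at the lens).
deviations_deg = np.array([-10, -7.5, -5, -2.5, 0, 2.5, 5, 7.5, 10])

# Proportion of "looking at me" responses at each deviation (made-up values).
p_direct = np.array([0.05, 0.15, 0.45, 0.85, 0.97, 0.88, 0.50, 0.12, 0.04])

# Define the cone as the span of deviations judged "direct" on at least half
# of the trials; a narrower span means more adult-like judgments.
direct = deviations_deg[p_direct >= 0.5]
cone_width = direct.max() - direct.min()
print(f"cone of gaze is about {cone_width} degrees wide")
```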

17.
Two- to 8-month-old infants interacted with their mother or a stranger in a prospective longitudinal gaze following study. Gaze following, as assessed by eye tracking, emerged between 2 and 4 months and stabilized between 6 and 8 months of age. Overall, infants followed the gaze of a stranger more than they followed the gaze of their mothers, demonstrating a stranger preference that emerged between 4 and 6 months of age. These findings do not support the notion that infants acquire gaze following through reinforcement learning. Instead, the findings are discussed with respect to the social cognitive framework, suggesting that young infants are driven by social cognitive motives in their interactions with others.

18.
We demonstrate that a person's eye gaze and his/her competitiveness are closely intertwined in social decision making. In an exploratory examination of this relationship, Study 1 uses field data from a high-stakes TV game show to demonstrate that the frequency with which contestants gaze at their opponent's eyes predicts their defection in a variant on the prisoner's dilemma. Studies 2 and 3 use experiments to examine the underlying causality and demonstrate that the relationship between gazing and competitive behavior is bi-directional. In Study 2, fixation on the eyes, compared to the face, increases competitive behavior toward the target in an ultimatum game. In Study 3, we manipulate the framing of a negotiation (cooperative vs. competitive) and use an eye tracker to measure fixation number and time spent fixating on the counterpart's eyes. We find that a competitive negotiation elicits more gazing, which in turn leads to more competitive behavior.
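As an illustration of the eye-tracking measures mentioned above (fixation number and time spent fixating on the counterpart's eyes), the sketch below counts fixations falling inside a rectangular "eyes" area of interest and sums their durations; the fixation data and AOI bounds are hypothetical.

```python
# Illustrative sketch of AOI-based eye-tracking measures: number of fixations
# inside an "eyes" area of interest and total dwell time there.
# Fixation coordinates, durations, and AOI bounds are assumptions.
import pandas as pd

fixations = pd.DataFrame({
    "x": [512, 530, 300, 525, 410],          # fixation coordinates in pixels
    "y": [380, 372, 600, 385, 500],
    "duration_ms": [220, 310, 180, 250, 200],
})

# Rectangular AOI around the counterpart's eye region (hypothetical bounds).
eye_aoi = {"x_min": 480, "x_max": 560, "y_min": 350, "y_max": 410}

in_aoi = (
    fixations["x"].between(eye_aoi["x_min"], eye_aoi["x_max"])
    & fixations["y"].between(eye_aoi["y_min"], eye_aoi["y_max"])
)

fixation_count = int(in_aoi.sum())                               # eye fixations
dwell_time_ms = int(fixations.loc[in_aoi, "duration_ms"].sum())  # total gaze time
print(fixation_count, dwell_time_ms)
```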

19.
The current study aims to separate conscious and unconscious behaviors by employing both online and offline measures while the participants were consciously performing a task. Using an eye-movement tracking paradigm, we observed participants' response patterns for distinguishing within-word-boundary and across-word-boundary reverse errors while reading Chinese sentences (also known as the "word inferiority effect"). The results showed that when the participants consciously detected errors, their gaze time for target words associated with across-word-boundary reverse errors was significantly longer than that for target words associated with within-word-boundary reverse errors. Surprisingly, the same gaze time pattern was found even when the readers were not consciously aware of the reverse errors. The results were statistically robust, providing converging evidence for the feasibility of our experimental paradigm in decoupling offline behaviors and the online, automatic, and unconscious aspects of cognitive processing in reading.

20.
This study examined whether 1-month-old infants are sensitive to the social contingency of their mothers and of strangers, using a Double Video live-replay paradigm. Eight infants were tested (M age = 45.4 days, SD = 7.4) to compare behavioral changes across three conditions: a first contingent interaction (Live 1), a noncontingent interaction (Replay), and a second contingent interaction (Live 2). Infants showed an increase in gaze during Replay, counter to expectation. Also, these infants could detect mothers' noncontingent responses but not those of strangers. The results suggest that detection and expectancy may be subcomponents of sensitivity to social contingency. Detection appeared first and seems basic, while expectancy in social contingency appeared later.
