Similar Literature
Found 20 similar articles (search time: 31 ms)
1.
The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape, and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search.
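To make the Old-vs-New comparison concrete, here is a minimal analysis sketch with simulated data; the sample size and reaction-time values are assumptions for illustration, not the study's numbers:

```python
import numpy as np
from scipy import stats

# Hypothetical sketch of a contextual-cueing comparison: per-infant mean
# search times for repeated (Old) vs. newly generated (New) contexts,
# compared with a paired t-test. All data here are simulated.

rng = np.random.default_rng(0)
n_infants = 24
old_rt = rng.normal(900, 150, n_infants)           # ms, faster in Old contexts
new_rt = old_rt + rng.normal(120, 80, n_infants)   # slower in New contexts

t, p = stats.ttest_rel(old_rt, new_rt)
print(f"Old vs New search time: t({n_infants - 1}) = {t:.2f}, p = {p:.4f}")
```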

2.
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.
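The shroud competition can be pictured with a toy sketch. This is not the authors' ARTSCAN implementation; the update rule, parameters, and the `shroud_competition` helper are simplified assumptions meant only to illustrate winner-take-all competition among surface representations:

```python
import numpy as np

# Toy winner-take-all dynamics over candidate object surfaces: each
# surface has a salience value, surfaces mutually inhibit one another,
# and the winning "shroud" persists while the losers decay to zero.
# Shroud collapse (activity falling below threshold) would release the
# reset signal described in the abstract.

def shroud_competition(salience, decay=0.05, steps=200, collapse_thresh=0.1):
    act = salience.astype(float).copy()
    for _ in range(steps):
        inhibition = act.sum() - act            # each surface inhibits the others
        act += 0.1 * (salience - inhibition) - decay * act
        act = np.clip(act, 0.0, None)           # activities stay non-negative
    winner = int(np.argmax(act))
    collapsed = act[winner] < collapse_thresh   # collapse would trigger a reset
    return winner, act, collapsed

salience = np.array([0.9, 0.5, 0.3])            # three candidate surfaces
winner, act, collapsed = shroud_competition(salience)
print(f"attended surface: {winner}, activities: {act.round(3)}")
```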

3.
Recognition of emotional facial expressions is a crucial skill for adaptive behavior. Past research suggests that at 5 to 7 months of age, infants look longer at an unfamiliar dynamic angry/happy face that emotionally matches a vocal expression, suggesting that they can match stimuli from distinct modalities on the basis of their emotional content. In the present study, olfaction-vision matching abilities were assessed across different age groups (3, 5, and 7 months) using dynamic expressive faces (happy vs. disgusted) and distinct hedonic odor contexts (pleasant, unpleasant, and control) in a visual-preference paradigm. At all ages the infants were biased toward the disgust faces. In 3-month-old infants, this visual bias reversed into a bias for smiling faces in the pleasant odor context. In infants aged 5 and 7 months, no effect of the odor context appeared under the present conditions. This study highlights the role of the olfactory context in modulating visual behavior toward expressive faces in infants. The influence of olfaction took the form of a contingency effect in 3-month-old infants, but later this influence either vanished or took another form that could not be detected in the present study.

4.
Event-related potentials were used to determine whether infants, like adults, show differences in the spatial and temporal characteristics of brain activation during face and object recognition. Three aspects of visual processing were identified: (a) differentiation of faces vs. objects (the P400 at the occipital electrode showed shorter latency for faces), (b) recognition of familiar identity (the Nc, or negative component, at fronto-temporal electrodes [FTEs] was of larger amplitude for familiar stimuli), and (c) encoding of novelty (the slow wave at FTEs was larger for unfamiliar stimuli). The topography of the Nc was influenced by category type: effects of familiarity were limited to the midline and right anterior temporal electrodes for faces but extended to all temporal electrodes for objects. These results show that infants' experience with specific examples within categories and their general category knowledge influence the neural correlates of visual processing.

5.
The extent to which infants combine visual (i.e., retinal position) and nonvisual (eye or head position) spatial information in planning saccades bears on the question of which spatial frame or frames of reference influence early visually guided action. We explored this question by testing infants from 4 to 6 months of age on the double-step saccade paradigm, which has shown that adults combine visual and eye-position information into an egocentric (head- or trunk-centered) representation of saccade target locations. In contrast, our results imply that infants depend on a simple retinocentric representation at age 4 months, but by 6 months use egocentric representations more often to control saccade planning. Shifts in the representation of visual space for this simple sensorimotor behavior may index maturation in cortical circuitry devoted to visual spatial processing in general.
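The retinocentric/egocentric distinction can be made concrete with a small worked example. The coordinates and the two planner variants below are illustrative assumptions, not the study's stimuli:

```python
import numpy as np

# Worked example of double-step saccade logic: two targets T1, T2 are
# flashed before any eye movement. After the saccade to T1, a purely
# retinocentric planner reuses T2's original retinal vector, while an
# egocentric planner updates it by the eye displacement.

fixation = np.array([0.0, 0.0])
t1 = np.array([10.0, 0.0])        # first target (degrees)
t2 = np.array([10.0, 10.0])       # second target

retinal_t2 = t2 - fixation        # T2's retinal vector at flash time

# After the saccade to T1, the eye sits at t1.
retinocentric_plan = retinal_t2                     # ignores eye displacement
egocentric_plan = retinal_t2 - (t1 - fixation)      # corrected for eye position

print("retinocentric plan lands at", t1 + retinocentric_plan)  # (20, 10): misses T2
print("egocentric plan lands at   ", t1 + egocentric_plan)     # (10, 10): hits T2
```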

6.
Human faces are among the most important visual stimuli that we encounter at all ages. This importance partly stems from the face as a conveyer of information on the emotional state of other individuals. Previous research has demonstrated specific scanning patterns in response to threat-related compared to non-threat-related emotional expressions. This study investigated how visual scanning patterns toward faces displaying different emotional expressions develop during infancy. The visual scanning patterns of 4-month-old and 7-month-old infants and adults when looking at threat-related (i.e., angry and fearful) versus non-threat-related (i.e., happy, sad, and neutral) emotional faces were examined. We found that infants as well as adults displayed an avoidant looking pattern in response to threat-related emotional expressions, with reduced dwell times and relatively fewer fixations to the inner features of the face. In addition, adults showed a pattern of eye contact avoidance when looking at threat-related emotional expressions that was not yet present in infants. Thus, whereas a general avoidant reaction to threat-related facial expressions appears to be present from very early in life, the avoidance of eye contact might be a learned response toward others' anger and fear that emerges later during development.

7.
Face recognition is a computationally challenging classification task. Deep convolutional neural networks (DCNNs) are brain-inspired algorithms that have recently reached human-level performance in face and object recognition. However, it is not clear to what extent DCNNs generate a human-like representation of face identity. We have recently revealed a subset of facial features that are used by humans for face recognition. This enables us to ask whether DCNNs rely on the same facial information and whether this human-like representation depends on a system that is optimized for face identification. In the current study, we examined DCNN representations of faces that differ in features that are either critical or non-critical for human face recognition. Our findings show that DCNNs optimized for face identification are tuned to the same facial features used by humans for face recognition. Sensitivity to these features was highly correlated with the DCNN's performance on a benchmark face recognition task. Moreover, sensitivity to these features and a view-invariant face representation emerged at higher layers of a DCNN optimized for face recognition but not for object recognition. This finding parallels the division into face and object systems in high-level visual cortex. Taken together, these findings validate human perceptual models of face recognition, enable us to use DCNNs to test predictions about human face and object recognition, and contribute to the interpretability of DCNNs.
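A rough sketch of the kind of sensitivity analysis described here, under the assumption of a generic embedding function `embed` standing in for a DCNN layer readout; the helper names and the commented usage are hypothetical, not the authors' pipeline:

```python
import numpy as np

# Measure a network's sensitivity to a facial feature as the mean
# representational distance between face pairs that differ only in that
# feature. A face-optimized DCNN that is tuned like humans should show
# larger distances for critical than for non-critical feature changes.

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def feature_sensitivity(embed, pairs):
    """Mean distance between layer embeddings of image pairs that
    differ only in the feature of interest."""
    return float(np.mean([cosine_distance(embed(x), embed(y)) for x, y in pairs]))

# Hypothetical usage with stimulus sets of critical vs. non-critical pairs:
# s_crit = feature_sensitivity(embed, critical_pairs)
# s_noncrit = feature_sensitivity(embed, noncritical_pairs)
# A human-like, face-optimized network is expected to yield s_crit > s_noncrit.
```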

8.
We examined 5-month-olds’ responses to adult facial versus vocal displays of happy and sad expressions during face-to-face social interactions in three experiments. Infants interacted with adults in either happy-sad-happy or happy-happy-happy sequences. Across experiments, either facial expressions were present while presence/absence of vocal expressions was manipulated or visual access to facial expressions was blocked but vocal expressions were present throughout. Both visual attention and infant affect were recorded. Although infants looked more when vocal expressions were present, they smiled significantly more to happy than to sad facial expressions regardless of presence or absence of the voice. In contrast, infants showed no evidence of differential responding to voices when faces were obscured; their smiling and visual attention simply declined over time. These results extend findings from non-social contexts to social interactions and also indicate that infants may require facial expressions to be present to discriminate among adult vocal expressions of affect.

9.
The ability to detect and prefer a face when embedded in complex visual displays was investigated in 3- and 6-month-old infants, as well as in adults, through a modified version of the visual search paradigm and the recording of eye movements. Participants (N=43) were shown 32 visual displays that comprised a target face among 3 or 5 heterogeneous objects as distractors. Results demonstrated that faces captured and maintained adults' and 6-month-olds' attention, but not 3-month-olds' attention. Overall, the current study contributes to knowledge of the capacity of social stimuli to attract and maintain visual attention over other complex objects in young infants as well as in adults.

10.
Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct gaze overcame such suppression more rapidly than faces with averted gaze. Control experiments ruled out the influence of low-level stimulus differences and differential response criteria. These results indicate an enhanced unconscious representation of direct gaze, enabling the automatic and rapid detection of other individuals making eye contact with the observer.

11.
Three experiments investigated the perception of collisions involving bouncing balls by 7- and 10-month-old infants and adults. In previous research, 10-month-old infants perceived the causality of launching collisions (events in which one object moves along a smooth horizontal trajectory toward a second object, apparently launching it into motion) in relatively simple event contexts. In more complex event contexts, infants failed to discriminate among the events or respond to changes in individual features. Experiments 1 and 2 of the present investigation revealed that 7- and 10-month-old infants attended to spatial and temporal contiguity, but not causality, in collisions involving the movement of bouncing balls. In Experiment 3, both spatiotemporal contiguity and general knowledge about movement trajectories influenced adults’ judgments of causality for these collisions. The present results add to a growing understanding of infants’ event perception as constructive and a function, in part, of the complexity of the event context.

12.
Research on facial expression recognition has long centered on the structural features of the face itself, but recent studies have found that the recognition of facial expressions is also influenced by the context in which the face appears (e.g., language, body posture, and natural and social scenes), and this contextual influence is especially strong when the expressions to be recognized are similar. This paper first reviews and analyzes recent research on how contexts such as language, body movements, natural scenes, and social scenes influence individuals' recognition of facial expressions; it then analyzes how factors such as cultural background, age, and anxiety level modulate these contextual effects on facial expression recognition; finally, it emphasizes that future research should study child participants, extend the range of emotion categories investigated, and attend to the perception of facial emotion in real-life settings.

13.
This study examined changes in readers' eye movements across repeated encounters with novel words to reveal differences between children's and adults' novel-word learning ability during natural reading. Two-character pseudowords served as novel words and were embedded in five sentence contexts, and the eye movements of children and adults were recorded as they read. The results showed that first-fixation durations on the novel words changed in the same way for children and adults as exposures accumulated, whereas gaze durations and refixation probabilities on the novel words dropped sharply for adults by the second reading but did not begin to decline for primary-school children until the fourth reading. These findings indicate that adults' advantage over children in learning novel words emerges at a relatively late stage of lexical processing.

14.
Wu YC, Coulson S. Brain and Language, 2011, 119(3): 184-195
Conversation is multi-modal, involving both talk and gesture. Does understanding depictive gestures engage processes similar to those recruited in the comprehension of drawings or photographs? Event-related brain potentials (ERPs) were recorded from neurotypical adults as they viewed spontaneously produced depictive gestures preceded by congruent and incongruent contexts. Gestures were presented either dynamically in short, soundless video-clips, or statically as freeze frames extracted from gesture videos. In a separate ERP experiment, the same participants viewed related or unrelated pairs of photographs depicting common real-world objects. Both object photos and gesture stimuli elicited less negative ERPs from 400 to 600 ms post-stimulus when preceded by matching versus mismatching contexts (dN450). Object photos and static gesture stills also elicited less negative ERPs between 300 and 400 ms post-stimulus (dN300). Findings demonstrate commonalities between the conceptual integration processes underlying the interpretation of iconic gestures and other types of image-based representations of the visual world.
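One way to picture the dN450 measure is as a mean-amplitude difference in the 400-600 ms window. The sketch below uses hypothetical epoch arrays and sampling parameters, not the study's recording setup:

```python
import numpy as np

# Illustrative computation of a dN450-style effect: less negative ERPs
# from 400-600 ms post-stimulus for congruent vs. incongruent contexts.

fs = 250      # sampling rate in Hz (assumed)
t0 = -0.2     # epoch start relative to stimulus onset, in seconds (assumed)

def mean_amplitude(epochs, start, end):
    """Average voltage over trials and a post-stimulus time window.
    epochs: array of shape (n_trials, n_samples) from one electrode."""
    i0 = int((start - t0) * fs)
    i1 = int((end - t0) * fs)
    return epochs[:, i0:i1].mean()

# With hypothetical (n_trials, n_samples) arrays for each condition:
# dN450 = mean_amplitude(incongruent, 0.4, 0.6) - mean_amplitude(congruent, 0.4, 0.6)
# A negative dN450 means mismatching contexts elicited more negative ERPs.
```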

15.
What role does experience play in the development of face recognition? A growing body of evidence indicates that newborn brains need slowly changing visual experiences to develop accurate visual recognition abilities. All of the work supporting this “slowness constraint” on visual development comes from studies testing basic-level object recognition. Here, we present the results of controlled-rearing experiments that provide evidence for a slowness constraint on the development of face recognition, a prototypical subordinate-level object recognition task. We found that (1) newborn chicks can rapidly develop view-invariant face recognition and (2) the development of this ability relies on experience with slowly moving faces. When chicks were reared with quickly moving faces, they built distorted face representations that largely lacked invariance to viewpoint changes, effectively “breaking” their face recognition abilities. These results provide causal evidence that slowly changing visual experiences play a critical role in the development of face recognition, akin to basic-level object recognition. Thus, face recognition is not a hardwired property of vision but is learned rapidly as the visual system adapts to the temporal structure of the animal's visual environment.

16.
Altmann GT, Kamide Y. Cognition, 1999, 73(3): 247-264
Participants' eye movements were recorded as they inspected a semi-realistic visual scene showing a boy, a cake, and various distractor objects. Whilst viewing this scene, they heard sentences such as 'the boy will move the cake' or 'the boy will eat the cake'. The cake was the only edible object portrayed in the scene. In each of two experiments, the onset of saccadic eye movements to the target object (the cake) was significantly later in the move condition than in the eat condition; saccades to the target were launched after the onset of the spoken word cake in the move condition, but before its onset in the eat condition. The results suggest that information at the verb can be used to restrict the domain within the context to which subsequent reference will be made by the (as yet unencountered) post-verbal grammatical object. The data support a hypothesis in which sentence processing is driven by the predictive relationships between verbs, their syntactic arguments, and the real-world contexts in which they occur.

17.
Corrow S, Granrud CE, Mathison J, Yonas A. Perception, 2011, 40(11): 1376-1383
In this study we investigated infants' perception of the hollow-face illusion. Six-month-old infants were shown a concave mask under monocular and binocular viewing conditions, and the direction of their reaches toward the mask was recorded. Adults typically perceive a concave mask as convex under monocular conditions but as concave under binocular conditions, depending on viewing distance. Based on previous findings that infants reach preferentially toward the parts of a display that are closest to them, we expected that, if infants perceive the hollow-face illusion as adults do, they would reach to the center of the mask when viewing it monocularly and to the edges when viewing it binocularly. The results were consistent with these predictions. Our findings indicated that the infants perceived the mask as convex when viewing it with one eye and concave when viewing it with two eyes. The results show that 6-month-old infants respond to the hollow-face illusion. This finding suggests that, early in life, the visual system uses the constraint, or assumption, that faces are convex when interpreting visual input.

18.
Viewpoint-dependent recognition of familiar faces
Troje NF, Kersten D. Perception, 1999, 28(4): 483-487
The question whether object representations in the human brain are object-centered or viewer-centered has motivated a variety of experiments with divergent results. A key issue concerns the visual recognition of objects seen from novel views. If recognition performance depends on whether a particular view has been seen before, it can be interpreted as evidence for a viewer-centered representation. Earlier experiments used unfamiliar objects to provide the experimenter with complete control over the observer's previous experience with the object. In this study, we tested whether human recognition shows viewpoint dependence for the highly familiar faces of well-known colleagues and for the observer's own face. We found that observers are poorer at recognizing their own profile, whereas there is no difference in response time between frontal and profile views of other faces. This result shows that extensive experience and familiarity with one's own face is not sufficient to produce viewpoint invariance. Our result provides strong evidence for viewer-centered representations in human visual recognition even for highly familiar objects.

19.
Emotion influences memory in many ways. For example, when a mood-dependent processing shift is operative, happy moods promote global processing and sad moods direct attention to local features of complex visual stimuli. We hypothesized that an emotional context associated with to-be-learned facial stimuli could preferentially promote global or local processing. At learning, faces with neutral expressions were paired with a narrative providing either a happy or a sad context. At test, faces were presented in an upright or inverted orientation, emphasizing configural or analytical processing, respectively. A recognition advantage was found for upright faces learned in happy contexts relative to those in sad contexts, whereas recognition was better for inverted faces learned in sad contexts than for those in happy contexts. We thus infer that a positive emotional context prompted more effective storage of holistic, configural, or global facial information, whereas a negative emotional context prompted relatively more effective storage of local or feature-based facial information.

20.
From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigated whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month-old infants were shown faces with both direct and averted gaze and were subsequently given a preference test involving the same face and a novel one. A novelty preference during test was found only following initial exposure to a face with direct gaze. Furthermore, face recognition was generally enhanced for faces with both direct and averted gaze when infants started the task with the direct-gaze condition. Together, these results indicate that the direction of gaze modulates face recognition in early infancy.
