Similar Literature
20 similar documents found (search time: 0 ms)
1.
Faces and bodies are more difficult to perceive when presented inverted than when presented upright (i.e., the stimulus inversion effect), an effect that has been attributed to the disruption of holistic processing. The features that trigger holistic processing in faces and bodies, however, remain elusive. In this study, using a sequential matching task, we tested whether stimulus inversion affects various categories of visual stimuli: faces, faceless heads, faceless heads in body context, headless bodies naked, whole bodies naked, headless bodies clothed, and whole bodies clothed. Both accuracy and inversion efficiency scores show inversion effects for all categories except clothed bodies (with and without heads). In addition, the magnitude of the inversion effect was similar for faces, naked bodies, and faceless heads. Our findings demonstrate that the perception of faces, faceless heads, and naked bodies relies on holistic processing. Clothed bodies (with and without heads), by contrast, may trigger clothes-sensitive rather than body-sensitive perceptual mechanisms.
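The "inversion efficiency score" above combines accuracy and response time into one measure. A minimal sketch of one common operationalization (an assumption — efficiency as accuracy divided by mean correct RT, with the inversion effect as the upright-minus-inverted difference; all numbers and category labels are hypothetical, not the study's data):

```python
def efficiency(accuracy, mean_rt_s):
    """Efficiency score: proportion correct divided by mean correct RT in seconds.
    (Assumed definition; the study may operationalize it differently.)"""
    return accuracy / mean_rt_s

def inversion_effect(up_acc, up_rt, inv_acc, inv_rt):
    """Inversion effect as the drop in efficiency from upright to inverted."""
    return efficiency(up_acc, up_rt) - efficiency(inv_acc, inv_rt)

# Hypothetical data: (upright accuracy, upright RT, inverted accuracy, inverted RT)
categories = {
    "face":         (0.92, 1.10, 0.78, 1.35),
    "naked body":   (0.90, 1.15, 0.77, 1.40),
    "clothed body": (0.88, 1.20, 0.86, 1.22),  # little inversion cost, as reported
}
for name, (ua, ur, ia, ir) in categories.items():
    print(f"{name}: inversion effect = {inversion_effect(ua, ur, ia, ir):.3f}")
```

On such a measure, a near-zero score for clothed bodies alongside sizeable scores for faces and naked bodies would correspond to the pattern the abstract reports.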

2.
Contemporary research literature indicates that eye movements during the learning and testing phases can predict and affect future recognition processes. Nevertheless, only partial information exists regarding eye movements in the various components of recognition: Hits, Correct Rejections, Misses, and False Alarms (FA). To address this issue, participants in this study viewed human faces in a yes/no recognition memory paradigm. They were divided into two groups: one that carried out the testing phase immediately after the learning phase (n = 30) and another with a 15-minute delay between phases (n = 28). The results showed that the Immediate group had a lower FA rate than the Delay group, and that no Hit rate differences were observed between the two groups. Eye movements differed between the recognition processes in the learning and testing phases, and this pattern interacted with group type. Hence, eye movement measures seem to track memory accuracy during both learning and testing, and this pattern also interacts with the length of delay between learning and testing. These results suggest that eye movements are indicative of present and future recognition processes.

3.
Past research has established that listeners can accommodate a wide range of talkers in understanding language. How this adjustment operates, however, is a matter of debate. Here, listeners were exposed to spoken words from a speaker of an American English dialect in which the vowel /æ/ is raised before /g/, but not before /k/. Results from two experiments showed that listeners' identification of /k/-final words like back (which are unaffected by the dialect) was facilitated by prior exposure to their dialect-affected /g/-final counterparts, e.g., bag. This facilitation occurred because the competition between interpretations, e.g., bag or back, while hearing the initial portion of the input [bæ], was mitigated by the reduced probability that the input corresponded to bag as produced by this talker. Thus, adaptation to an accent is not just a matter of adjusting the speech signal as it is being heard; adaptation involves dynamic adjustment of the representations stored in the lexicon, according to the characteristics of the speaker or the context.

4.
Humans often look at other people in natural scenes, and previous research has shown that these looks follow the conversation and that they are sensitive to sound in audiovisual speech perception. In the present experiment, participants viewed video clips of four people involved in a discussion. By removing the sound, we asked whether auditory information would affect when speakers were fixated, how fixations between different observers were synchronized, and whether the eyes or mouth were looked at most often. The results showed that sound changed the timing of looks—by alerting observers to changes in conversation and attracting attention to the speaker. Clips with sound also led to greater attentional synchrony, with more observers fixating the same regions at the same time. However, looks towards the eyes of the people continued to dominate and were unaffected by removing the sound. These findings provide a rich example of multimodal social attention.
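"Attentional synchrony" as described above can be quantified in several ways. A minimal sketch of one simple index (an assumption, not necessarily the study's measure: per time bin, the proportion of observers whose gaze falls in the modal region of interest; the `sound`/`silent` ROI sequences are hypothetical):

```python
from collections import Counter

def attentional_synchrony(roi_by_observer):
    """roi_by_observer: one ROI-label sequence per observer, one label per
    time bin. Returns the mean proportion of observers in the modal ROI
    across bins (1.0 = everyone always looks at the same region)."""
    n_obs = len(roi_by_observer)
    n_bins = len(roi_by_observer[0])
    props = []
    for t in range(n_bins):
        counts = Counter(seq[t] for seq in roi_by_observer)
        props.append(counts.most_common(1)[0][1] / n_obs)
    return sum(props) / n_bins

# Hypothetical data: 4 observers x 3 time bins, labels = which speaker is fixated
sound  = [["spkr1", "spkr1", "spkr2"],
          ["spkr1", "spkr1", "spkr2"],
          ["spkr1", "spkr1", "spkr2"],
          ["spkr1", "spkr3", "spkr2"]]
silent = [["spkr1", "spkr2", "spkr2"],
          ["spkr2", "spkr1", "spkr3"],
          ["spkr1", "spkr3", "spkr2"],
          ["spkr3", "spkr1", "spkr1"]]
print(attentional_synchrony(sound), attentional_synchrony(silent))
```

A higher value for the sound clips, as in this toy example, would correspond to the greater synchrony the abstract reports.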

5.
Visual scanpath recording was used to investigate the information processing strategies used by a prosopagnosic patient, SC, when viewing faces. Compared to controls, SC showed an aberrant pattern of scanning, directing attention away from the internal configuration of facial features (eyes, nose) towards peripheral regions (hair, forehead) of the face. The results suggest that SC's face recognition deficit can be linked to an inability to assemble an accurate and unified face percept due to an abnormal allocation of attention away from the internal face region. Extraction of stimulus attributes necessary for face identity recognition is compromised by an aberrant face scanning pattern.

6.
Viewing position effects are commonly observed in reading, but they have only rarely been investigated in object perception or in the realistic context of a natural scene. In two experiments, we explored where people fixate within photorealistic objects and the effects of this landing position on recognition and subsequent eye movements. The results demonstrate an optimal viewing position—objects are processed more quickly when fixation is in the centre of the object. Viewers also prefer to saccade to the centre of objects within a natural scene, even when making a large saccade. A central landing position is associated with an increased likelihood of making a refixation, a result that differs from previous reports and suggests that multiple fixations within objects in scenes occur for a range of reasons. These results suggest that eye movements within scenes are systematic and are made with reference to an early parsing of the scene into constituent objects.

7.
The present study examined whether cinematographic editing density affects viewers’ perception of time. As a second aim, based on embodied models that conceive time perception as strictly connected to movement, we tested the hypothesis that the editing density of moving images also affects viewers’ eye movements and that the latter mediate the effect of editing density on viewers’ temporal judgments. Seventy participants watched nine video clips edited by manipulating the number of cuts (slow- and fast-paced editing against a master shot, unedited condition). For each editing density, multiple video clips were created, representing three different kinds of routine actions. Participants’ eye movements were recorded while they watched the videos, and participants reported duration judgments and subjective passage-of-time judgments after each clip. The results showed that participants subjectively perceived time passing more quickly while watching fast-paced edited videos than slow-paced or unedited videos; by contrast, concerning duration judgments, participants overestimated the duration of fast-paced videos compared to the master-shot videos. Both slow- and fast-paced editing generated shorter fixations than the master shot, and fast-paced editing led to shorter fixations than slow-paced editing. Finally, compared to the unedited condition, editing led to an overestimation of durations through increased eye mobility. These findings suggest that increasing editing density by adding cuts effectively alters viewers’ experience of time, and they add further evidence to prior research showing that eye movements are associated with temporal judgments.

8.

Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle. Participants were also slower to detect the hazards in the driving task. Our results suggest that the interactivity of simulated driving places greater demand upon the visual and attentional systems than does simply viewing driving movies. We offer insights into why these differences occur and explore the possible implications of such findings within the wider context of driver training and assessment.

9.
Infants respond categorically to color. However, the nature of infants' categorical responding to color is unclear. The current study investigated two issues. First, is infants' categorical responding more absolute than adults' categorical responding? That is, can infants discriminate two stimuli from the same color category? Second, is color categorization in infants truly perceptual? Color categorization was tested by recording adults' and infants' eye movements on a target detection task. In Experiment 1, adults were faster at fixating a colored target when it was presented on a colored background from a different color category (between-category) than when it was presented on a colored background from the same color category (within-category), even when within- and between-category chromatic differences were equated in CIE (Commission Internationale de l'Éclairage) color space. This category effect was found for two chromatic separation sizes. In Experiment 2, 4-month-olds also responded categorically on the task. Infants were able to fixate the target when the background color was from the same category. However, as with adults, infants were faster at fixating the target when the target background chromatic difference was between-category than when it was within-category. This implies that infant color categorization, like adult color categorization, is truly perceptual.
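Equating within- and between-category chromatic differences "in CIE color space" amounts to making the distances between stimulus pairs equal under some chromaticity metric. A minimal sketch using Euclidean distance in CIE 1976 u′v′ coordinates (an assumption — the abstract does not name the exact space or metric, and the XYZ triples below are hypothetical):

```python
import math

def uv_prime(X, Y, Z):
    """CIE 1976 u'v' chromaticity coordinates from XYZ tristimulus values."""
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

def chromatic_separation(xyz1, xyz2):
    """Euclidean distance between two stimuli in u'v' chromaticity space --
    one plausible way to equate within- and between-category separations."""
    u1, v1 = uv_prime(*xyz1)
    u2, v2 = uv_prime(*xyz2)
    return math.hypot(u1 - u2, v1 - v2)

# Hypothetical target/background pairs: check their separations before
# comparing within- vs between-category conditions.
within_pair = ((20.0, 21.0, 18.0), (21.5, 21.0, 16.0))
between_pair = ((20.0, 21.0, 18.0), (18.5, 21.0, 20.0))
print(chromatic_separation(*within_pair), chromatic_separation(*between_pair))
```

With separations equated this way, any remaining between-category advantage in fixation latency cannot be attributed to raw chromatic distance.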

10.
What controls how long the eyes remain fixated during scene perception? We investigated whether fixation durations are under the immediate control of the quality of the current scene image. Subjects freely viewed photographs of scenes in preparation for a later memory test while their eye movements were recorded. Using the saccade-contingent display change method, scenes were degraded (Experiment 1) or enhanced (Experiment 2) via blurring (low-pass filtering) during predefined saccades. Results showed that fixation durations immediately after a display change were influenced by the degree of blur, with a monotonic relationship between degree of blur and fixation duration. The results also demonstrated that fixation durations can be both increased and decreased by changes in the degree of blur. The results suggest that fixation durations in scene viewing are influenced by the ease of processing of the image currently in view. The results are consistent with models of saccade generation in scenes in which moment-to-moment difficulty in visual and cognitive processing modulates fixation durations.
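The saccade-contingent display change method above swaps the stimulus while the eyes are in flight, so the change itself is masked by saccadic suppression. A simplified sketch of the logic (a velocity-threshold saccade detector driving an image swap; the threshold, sample rate, and image labels are hypothetical illustrations, not the authors' implementation):

```python
import math

def detect_saccade(gaze, sample_rate_hz=1000, velocity_thresh_deg_s=30.0):
    """Indices of samples whose point-to-point velocity exceeds the threshold.
    gaze: list of (x_deg, y_deg) positions. A minimal velocity-based detector;
    real gaze-contingent systems are considerably more elaborate."""
    dt = 1.0 / sample_rate_hz
    return [i for i in range(1, len(gaze))
            if math.hypot(gaze[i][0] - gaze[i - 1][0],
                          gaze[i][1] - gaze[i - 1][1]) / dt > velocity_thresh_deg_s]

def display_change_schedule(gaze, images=("sharp", "blurred")):
    """Swap the displayed image at the first detected saccade sample, so the
    change happens mid-saccade. Returns the image shown at each sample."""
    saccade_samples = set(detect_saccade(gaze))
    shown, current = [], 0
    for i in range(len(gaze)):
        if current == 0 and i in saccade_samples:
            current = 1  # display change occurs during the saccade
        shown.append(images[current])
    return shown

# Hypothetical 10 ms of gaze at 1000 Hz: fixation, then a large saccadic jump.
gaze = [(0.0, 0.0)] * 5 + [(5.0, 0.0)] * 5
print(display_change_schedule(gaze))
```

The fixation immediately following the swap is then the dependent measure: its duration can be compared across degrees of blur.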

11.
12.
Gradient effects of within-category phonetic variation on lexical access   (Cited by: 7 in total; 0 self-citations, 7 by others)
In order to determine whether small within-category differences in voice onset time (VOT) affect lexical access, eye movements were monitored as participants indicated which of four pictures was named by spoken stimuli that varied along a 0-40 ms VOT continuum. Within-category differences in VOT resulted in gradient increases in fixations to cross-boundary lexical competitors as VOT approached the category boundary. Thus, fine-grained acoustic/phonetic differences are preserved in patterns of lexical activation for competing lexical candidates and could be used to maximize the efficiency of on-line word recognition.
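The gradient relationship the abstract describes — more competitor fixations as VOT approaches the category boundary — can be illustrated with a simple logistic mapping (purely illustrative: the boundary location, slope, and other parameters below are hypothetical, not the paper's model or data):

```python
import math

def competitor_fixation_prop(vot_ms, boundary_ms=20.0, slope=0.25,
                             baseline=0.05, gain=0.45, offset_ms=10.0):
    """Illustrative logistic mapping from a token's VOT to the proportion of
    fixations on the cross-boundary lexical competitor (e.g., hearing a
    /b/-side token, looking at the /p/-named picture)."""
    distance = abs(vot_ms - boundary_ms)  # distance from the category boundary
    # Closer to the boundary -> more competitor activation -> more looks.
    return baseline + gain / (1.0 + math.exp(slope * (distance - offset_ms)))

# /b/-side tokens approaching a (hypothetical) 20 ms category boundary:
for vot in (0, 5, 10, 15):
    print(vot, round(competitor_fixation_prop(vot), 3))
```

Under a strictly categorical account, these within-category tokens would all yield the same competitor-fixation rate; the monotonic increase here is what a gradient account predicts.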

13.
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with “real-world” tasks and research utilizing the visual-world paradigm are also briefly discussed.

14.
Visual input is frequently disrupted by eye movements, blinks, and occlusion. The visual system must be able to establish correspondence between objects visible before and after a disruption. Current theories hold that correspondence is established solely on the basis of spatiotemporal information, with no contribution from surface features. In five experiments, we tested the relative contributions of spatiotemporal and surface feature information in establishing object correspondence across saccades. Participants generated a saccade to one of two objects, and the objects were shifted during the saccade so that the eyes landed between them, requiring a corrective saccade to fixate the target. To correct gaze to the appropriate object, correspondence must be established between the remembered saccade target and the target visible after the saccade. Target position and surface feature consistency were manipulated. Contrary to existing theories, surface features and spatiotemporal information both contributed to object correspondence, and the relative weighting of the two sources of information was governed by the demands of the task. These data argue against a special role for spatiotemporal information in object correspondence, indicating instead that the visual system can flexibly use multiple sources of relevant information.

15.
Eye movements of 30 4-month-olds were tracked as infants viewed animals and vehicles in “natural” scenes and, for comparison, in homogeneous “experimental” scenes. Infants showed equivalent looking time preferences for natural and experimental scenes overall, but fixated natural scenes and objects in natural scenes more than experimental scenes and objects in experimental scenes and shifted fixations between objects and contexts more in natural than in experimental scenes. The findings show how infants treat objects and contexts in natural scenes and suggest that they treat more commonly used experimental scenes differently.

16.
The present research explored the effect of social empathy on processing emotional facial expressions. Previous evidence suggested a close relationship between emotional empathy and both the ability to detect facial emotions and the attentional mechanisms involved. A multi-measure approach was adopted: we investigated the association between trait empathy (Balanced Emotional Empathy Scale) and individuals' performance (response times; RTs), attentional mechanisms (eye movements; number and duration of fixations), correlates of cortical activation (event-related potential (ERP) N200 component), and facial responsiveness (facial zygomatic and corrugator activity). Trait empathy was found to affect face detection performance (reduced RTs), attentional processes (more scanning eye movements in specific areas of interest), the ERP salience effect (increased N200 amplitude), and electromyographic activity (more facial responses). A second important result was the demonstration of strong, direct correlations among these measures. We suggest that empathy may function as a social facilitator of the processes underlying the detection of facial emotion, and a general “facial response effect” is proposed to explain these results. We propose that empathy influences cognitive processing and facial responsiveness, such that empathic individuals are more skilful in processing facial emotion.

17.
This study examined the perception of emotional expressions, focusing on the face and the body. Photographs of four actors expressing happiness, sadness, anger, and fear were presented in congruent (e.g., happy face with happy body) and incongruent (e.g., happy face with fearful body) combinations. Participants selected an emotional label using a four-option categorisation task. Reaction times and accuracy for the categorisation judgement, and eye movements were the dependent variables. Two regions of interest were examined: face and body. Results showed better accuracy and faster reaction times for congruent images compared to incongruent images. Eye movements showed an interaction in which there were more fixations and longer dwell times to the face and fewer fixations and shorter dwell times to the body with incongruent images. Thus, conflicting information produced a marked effect on information processing in which participants focused to a greater extent on the face compared to the body.

18.
Purpose: We examined links between the kinematics of an opponent’s actions and the visual search behaviors of badminton players responding to those actions. Method: A kinematic analysis of international standard badminton players (n = 4) was undertaken as they completed a range of serves. Video of these players serving was used to create a life-size temporal occlusion test to measure anticipation responses. Expert (n = 8) and novice (n = 8) badminton players anticipated serve location while wearing an eye movement registration system. Results: During the execution phase of the opponent’s movement, the kinematic analysis showed between-shot differences in distance traveled and peak acceleration at the shoulder, elbow, wrist and racket. Experts were more accurate at responding to the serves compared to novice players. Expert players fixated on the kinematic locations that were most discriminating between serve types more frequently and for a longer duration compared to novice players. Moreover, players were generally more accurate at responding to serves when they fixated vision upon the discriminating arm and racket kinematics. Conclusions: Findings extend previous literature by providing empirical evidence that expert athletes’ visual search behaviors and anticipatory responses are inextricably linked to the opponent action being observed.

19.
The most widely used measurement of holistic face perception, the composite face effect (CFE), is challenged by two apparently contradictory goals: having a defined face part (i.e., the top half), and yet perceiving the face as an integrated unit (i.e., holistically). Here, we investigated the impact of a small gap between top and bottom face halves in the standard composite face paradigm, requiring matching of sequentially presented top face halves. In Experiment 1, the CFE was larger for no-gap than gap stimuli overall, but not for participants who were presented with gap stimuli first, suggesting that the area of the top face half was unknown without a gap. This was confirmed in Experiment 2, in which these two stimulus sets were mixed up: the gap stimuli thus provided information about the area of a top face half and the magnitude of the CFE did not differ between stimulus sets. These observations indicate that the CFE might be artificially inflated in the absence of a stimulus cue that objectively defines a border between the face halves. Finally, in Experiment 3, observers were asked to determine which of two simultaneously presented faces was the composite face. Perceptual judgements for no-gap stimuli approached ceiling; however, with a gap, participants were almost unable to distinguish the composite face from a veridical face. This effect was not only due to low-level segmentation cues at the border of no-gap face halves, because stimulus inversion decreased performance in both conditions. This result indicates that the two halves of different faces may be integrated more naturally with a small gap that eliminates an enhanced contrast border. Collectively, these observations suggest that a small gap between face halves provides an objective definition of the face half to match and is beneficial for valid measurement of the behavioural CFE.

20.
Observers frequently remember seeing more of a scene than was shown (boundary extension). Does this reflect a lack of eye fixations to the boundary region? Single-object photographs were presented for 14–15 s each. Main objects were either whole or slightly cropped by one boundary, creating a salient marker of boundary placement. All participants expected a memory test, but only half were informed that boundary memory would be tested. Participants in both conditions made multiple fixations to the boundary region and the cropped region during study. Demonstrating the importance of these regions, test-informed participants fixated them sooner, longer, and more frequently. Boundary ratings (Experiment 1) and border adjustment tasks (Experiments 2–4) revealed boundary extension in both conditions. The error was reduced, but not eliminated, in the test-informed condition. Surprisingly, test knowledge and multiple fixations to the salient cropped region, during study and at test, were insufficient to overcome boundary extension on the cropped side. Results are discussed within a traditional visual-centric framework versus a multisource model of scene perception.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号