Similar Documents
20 similar documents found (search time: 31 ms)
1.
Loucks J. Perception, 2011, 40(9): 1047-1062
Recent evidence indicates that observers' sensitivity to configural information in dynamic human action is disrupted when action is inverted, whereas sensitivity to featural action information is not. The current research involved two experiments that expand upon this basic finding. Experiment 1 revealed that featural and configural action information are processed similarly in static representations of action as in dynamic action. Experiment 2 indicated that configural processing is uniquely sensitive to orientation only in human action as compared to a similar control stimulus. These findings further support the idea that the perception of action recruits specialized orientation-specific configural processing, and parallel similar findings in face perception and visual expertise.

2.
Faces are perceived holistically, a phenomenon best illustrated when the processing of a face feature is affected by the other features. Here, the authors tested the hypothesis that the holistic perception of a face mainly relies on its low spatial frequencies. Holistic face perception was tested in two classical paradigms: the whole-part advantage (Experiment 1) and the composite face effect (Experiments 2-4). Holistic effects were equally large or larger for low-pass filtered faces as compared to full-spectrum faces and significantly larger than for high-pass filtered faces. The disproportionate composite effect found for low-pass filtered faces was not observed when holistic perception was disrupted by inversion (Experiment 3). Experiment 4 showed that the composite face effect was enhanced only for low spatial frequencies, but not for intermediate spatial frequencies known to be critical for face recognition. These findings indicate that holistic face perception is largely supported by low spatial frequencies. They also suggest that holistic processing precedes the analysis of local features during face perception.

3.
An attractor network was trained to compute from word form to semantic representations that were based on subject-generated features. The model was driven largely by higher-order semantic structure. The network simulated two recent experiments that employed items included in its training set (McRae and Boisvert, 1998). In Simulation 1, short stimulus onset asynchrony priming was demonstrated for semantically similar items. Simulation 2 reproduced subtle effects obtained by varying degree of similarity. Two predictions from the model were then tested on human subjects. In Simulation 3 and Experiment 1, the items from Simulation 1 were reversed, and both the network and subjects showed minimally different priming effects in the two directions. In Experiment 2, consistent with attractor networks but contrary to a key aspect of hierarchical spreading activation accounts, priming was determined by featural similarity rather than shared superordinate category. It is concluded that semantic-similarity priming is due to featural overlap that is a natural consequence of distributed representations of word meaning.
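The featural-overlap account in entry 3 can be illustrated with a toy calculation (a sketch only, not the trained network from the article; the feature sets below are invented for illustration): if word meanings are distributed bundles of features, predicted priming tracks shared features rather than shared superordinate category.

```python
# Toy illustration of featural-overlap priming. All feature sets are
# invented; this is not the authors' attractor network.

def overlap(a, b):
    """Proportion of shared features (Jaccard similarity)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

features = {
    "eagle":   {"flies", "hunts", "feathers", "beak", "talons"},
    "hawk":    {"flies", "hunts", "feathers", "beak", "talons"},
    "penguin": {"swims", "feathers", "beak", "waddles"},
    "shark":   {"swims", "hunts", "fins", "teeth"},
}

# Same category, high featural overlap -> strong predicted priming.
print(overlap(features["eagle"], features["hawk"]))
# Same category (birds) but low overlap -> weak predicted priming.
print(overlap(features["eagle"], features["penguin"]))
# Different category but some shared features -> some predicted priming.
print(overlap(features["penguin"], features["shark"]))
```

On this toy measure, eagle-hawk overlap exceeds eagle-penguin overlap despite both pairs sharing the superordinate "bird", which is the pattern the experiment reports.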

4.
The perception of tactile stimuli on the face is modulated if subjects concurrently observe a face being touched; this effect is termed "visual remapping of touch" or the VRT effect. Given the high social value of this mechanism, we investigated whether it might be modulated by specific key information processed in face-to-face interactions: facial emotional expression. In two separate experiments, participants received tactile stimuli, near the perceptual threshold, either on their right, left, or both cheeks. Concurrently, they watched several blocks of movies depicting a face with a neutral, happy, or fearful expression that was touched or just approached by human fingers (Experiment 1). Participants were asked to distinguish between unilateral and bilateral felt tactile stimulation. Tactile perception was enhanced when viewing touch toward a fearful face compared with viewing touch toward the other two expressions. In order to test whether this result can be generalized to other negative emotions or whether it is a fear-specific effect, we ran a second experiment, where participants watched movies of faces (touched or approached by fingers) with either a fearful or an angry expression (Experiment 2). In line with the first experiment, tactile perception was enhanced when subjects viewed touch toward a fearful face and not toward an angry face. Results of the present experiments are interpreted in light of the different mechanisms underlying the recognition of different emotions, with a specific involvement of the somatosensory system when viewing a fearful expression and a resulting fear-specific modulation of the VRT effect.

5.
One critical component of understanding another’s mind is the perception of “life” in a face. However, little is known about the cognitive and neural mechanisms underlying this perception of animacy. Here, using a visual adaptation paradigm, we ask whether face animacy is (1) a basic dimension of face perception and (2) supported by a common neural mechanism across distinct face categories defined by age and species. Observers rated the perceived animacy of adult human faces before and after adaptation to (1) adult faces, (2) child faces, and (3) dog faces. When testing the perception of animacy in human faces, we found significant adaptation to both adult and child faces, but not dog faces. We did, however, find significant adaptation when morphed dog images and dog adaptors were used. Thus, animacy perception in faces appears to be a basic dimension of face perception that is species specific but not constrained by age categories.

6.
We examined the ability of young infants (3- and 4-month-olds) to detect faces in the two-tone images often referred to as Mooney faces. In Experiment 1, this performance was examined in conditions of high and low visibility of local features and with either the presence or absence of the outer head contour. We found that regardless of the presence of the outer head contour, infants preferred upright over inverted two-tone face images only when local features were highly visible (Experiment 1a). We showed that this upright preference disappeared when the contrast polarity of two-tone images was reversed (Experiment 1b), reflecting operation of face-specific mechanisms. In Experiment 2, we investigated whether motion affects infants' perception of faces in Mooney faces. We found that when the faces appeared to be rigidly moving, infants did show an upright preference in conditions of low visibility of local features (Experiment 2a). Again the preference disappeared when the contrast polarity of the image was reversed (Experiment 2b). Together, these results suggest that young infants have the ability to integrate fragmented image features to perceive faces from two-tone face images, especially if they are moving. This suggests that an interaction between motion and form rather than a purely motion-based process (e.g., structure from motion) facilitates infants' perception of faces in ambiguous two-tone images.

7.
We report seven experiments that investigate the influence that head orientation exerts on the perception of eye-gaze direction. In each of these experiments, participants were asked to decide whether the eyes in a brief and masked presentation were looking directly at them or were averted. In each case, the eyes could be presented alone, or in the context of congruent or incongruent stimuli. In Experiment 1A, the congruent and incongruent stimuli were provided by the orientation of face features and head outline. Discrimination of gaze direction was found to be better when face and gaze were congruent than in both of the other conditions, an effect that was not eliminated by inversion of the stimuli (Experiment 1B). In Experiment 2A, the internal face features were removed, but the outline of the head profile was found to produce an identical pattern of effects on gaze discrimination, effects that were again insensitive to inversion (Experiment 2B) and which persisted when lateral displacement of the eyes was controlled (Experiment 2C). Finally, in Experiment 3A, nose angle was also found to influence participants' ability to discriminate direct gaze from averted gaze, but here the effect was eliminated by inversion of the stimuli (Experiment 3B). We concluded that an image-based mechanism is responsible for the influence of head profile on gaze perception, whereas the analysis of nose angle involves the configural processing of face features.

8.
In the present study we considered the two factors that have been advocated for playing a role in emotional attention: perception of gaze direction and facial expression of emotions. Participants performed an oculomotor task in which they had to make a saccade towards one of the two lateral targets, depending on the colour of the fixation dot which appeared at the centre of the computer screen. At different time intervals (stimulus onset asynchronies, SOAs: 50, 100, 150 ms) following the onset of the dot, a picture of a human face (gazing either to the right or to the left) was presented at the centre of the screen. The gaze direction of the face could be congruent or incongruent with respect to the location of the target, and the expression could be neutral or angry. In Experiment 1 the facial expressions were presented randomly in a single block, whereas in Experiment 2 they were shown in separate blocks. Latencies for correct saccades and percentage of errors (saccade direction errors) were considered in the analyses. Results showed that incongruent trials determined a significantly higher percentage of saccade direction errors with respect to congruent trials, thus confirming that gaze direction, even when task-irrelevant, interferes with the accuracy of the observer’s oculomotor behaviour. The angry expression was found to hold attention for a longer time with respect to the neutral one, producing delayed saccade latencies. This was particularly evident at 100 ms SOA and for incongruent trials. Emotional faces may then exert a modulatory effect on overt attention mechanisms.

9.
In recent years, researchers in computer science and human-computer interaction have become increasingly interested in characterizing perception of facial affect. Ironically, this applied interest comes at a time when the classic findings on perception of human facial affect are being challenged in the psychological research literature, largely on methodological grounds. This paper first describes two experiments that empirically address Russell’s methodological criticisms of the classic work on measuring “basic emotions,” as well as his alternative approach toward modeling “facial affect space.” Finally, a user study on affect in a prototype model of a robot face is reported; these results are compared with the human findings from Experiment 1. This work provides new data on measuring facial affect, while also demonstrating how basic and more applied research can mutually inform one another.

10.
Four experiments are described which investigated the role of the mother's voice in facilitating recognition of the mother's face at birth. Experiment 1 replicated our previous findings (Br. J. Dev. Psychol. 1989; 7: 3–15; The origins of human face perception by very young infants. Ph.D. Thesis, University of Glasgow, Scotland, UK, 1990) indicating a preference for the mother's face when a control for the mother's voice and odours was used only during the testing. A second experiment adopted the same procedures, but controlled for the mother's voice from birth through testing. The neonates were at no time exposed to their mother's voice. Under these conditions, no preference was found. Further, neonates showed only few head turns towards both the mother and the stranger during the testing. Experiment 3 looked at the number of head turns under conditions where the newborn infants were exposed to both the mother's voice and face from birth until 5–15 min prior to testing. Again, a strong preference for the mother's face was demonstrated. Such preference, however, vanished in Experiment 4, when neonates had no previous exposure to the mother's voice–face combination. The conclusion drawn is that a prior experience with both the mother's voice and face is necessary for the development of face recognition, and that intermodal perception is evident at birth. The neonates' ability to recognize the face of the mother is most likely to be rooted in prenatal learning of the mother's voice. Copyright © 2004 John Wiley & Sons, Ltd.

11.
Context affects multiple cognitive and perceptual processes. In the present study, we asked how the context of a set of faces would affect the perception of a target face's race in two distinct tasks. In Experiments 1 and 2, participants categorized target faces according to perceived racial category (Black or White). In Experiment 1, the target face was presented alone or with Black or White flanker faces. The orientation of flanker faces was also manipulated to investigate how face inversion effect would interact with the influences of flanker faces on the target face. The results showed that participants were more likely to categorize the target face as White when it was surrounded by inverted White faces (an assimilation effect). Experiment 2 further examined how different aspects of the visual context would affect the perception of the target face by manipulating flanker faces' shape and pigmentation, as well as their orientation. The results showed that flanker faces' shape and pigmentation affected the perception of the target face differently. While shape elicited a contrast effect, pigmentation appeared to be assimilative. These novel findings suggest that the perceived race of a face is modulated by the appearance of other faces and their distinct shape and pigmentation properties. However, the contrast and assimilation effects elicited by flanker faces' shape and pigmentation may be specific to race categorization, since the same stimuli used in a delayed matching task (Experiment 3) revealed that flanker pigmentation induced a contrast effect on the perception of target pigmentation.

12.
The structure of people's conceptual knowledge of concrete nouns has traditionally been viewed as hierarchical (Collins & Quillian, 1969). For example, superordinate concepts (vegetable) are assumed to reside at a higher level than basic-level concepts (carrot). A feature-based attractor network with a single layer of semantic features developed representations of both basic-level and superordinate concepts. No hierarchical structure was built into the network. In Experiment and Simulation 1, the graded structure of categories (typicality ratings) is accounted for by the flat attractor network. Experiment and Simulation 2 show that, as with basic-level concepts, such a network predicts feature verification latencies for superordinate concepts (vegetable). In Experiment and Simulation 3, counterintuitive results regarding the temporal dynamics of similarity in semantic priming are explained by the model. By treating both types of concepts the same in terms of representation, learning, and computations, the model provides new insights into semantic memory.
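The "flat" representation described in entry 12 can be sketched in a few lines (an illustration under invented features, not the article's attractor network): with no hierarchy built in, graded typicality falls out of how widely an exemplar's features are shared across the category.

```python
# Toy sketch: graded typicality from a single flat layer of features,
# with no category hierarchy. Feature lists are invented.
from collections import Counter

exemplars = {
    "carrot":  {"edible", "plant", "root", "orange", "crunchy"},
    "pea":     {"edible", "plant", "green", "round"},
    "potato":  {"edible", "plant", "root", "starchy"},
    "pumpkin": {"edible", "large", "orange", "round"},
}

# Stand-in for the superordinate: how often each feature occurs
# across the category's exemplars.
counts = Counter(f for feats in exemplars.values() for f in feats)

def typicality(word):
    """Sum of category-wide frequencies of the word's features."""
    return sum(counts[f] for f in exemplars[word])

# Exemplars whose features are widely shared come out as more typical.
ranked = sorted(exemplars, key=typicality, reverse=True)
print(ranked)
```

Nothing labels "vegetable" as a higher level; the graded ordering emerges purely from featural overlap, which is the spirit of the flat-network result.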

13.
The present study investigates the human-specificity of the orienting system that allows neonates to look preferentially at faces. Three experiments were carried out to determine whether the face-perception system that is present at birth is broad enough to include both human and nonhuman primate faces. The results demonstrate that the newborns did not show any spontaneous visual preference for the human face when presented simultaneously with a monkey face that shared the same features, configuration, and low-level perceptual properties (Experiment 1). The newborns were, however, able to discriminate between the 2 faces belonging to the 2 different species (Experiment 2). In Experiment 3, the newborns were found to prefer looking at an upright, compared with an inverted, monkey face, as they do for human faces. Overall, the results demonstrate that newborns perceive monkey and human faces in a similar way. These findings are consistent with the hypothesis that the system underlying face preference at birth is broad enough to bias newborns' attention toward both human and nonhuman primate faces.

14.
Research into the visual perception of goal-directed human action indicates that human action perception makes use of specialized processing systems, similar to those that operate in visual expertise. Against this background, the current research investigated whether perception of temporal information in goal-directed human action is enhanced relative to similar motion stimuli. Experiment 1 compared observers’ sensitivity to speed changes in upright human action to a kinematic control (an animation yoked to the motion of the human hand), and also to inverted human action. Experiment 2 compared human action to a non-human motion control (a tool moved the object). In both experiments observers’ sensitivity to detecting the speed changes was higher for the human stimuli relative to the control stimuli, and inversion in Experiment 1 did not alter observers’ sensitivity. Experiment 3 compared observers’ sensitivity to speed changes in goal-directed human and dog actions, in order to determine if enhanced temporal perception is unique to human actions. Results revealed no difference between human and dog stimuli, indicating that enhanced speed perception may exist for any biological motion. Results are discussed with reference to theories of biological motion perception and perception in visual expertise.

15.
In a previous study, it was shown that a 50/50 morph of a typical and an atypical parent face was perceived to be more similar to the atypical parent face than to the typical parent face (Tanaka, Giles, Kremen, & Simon, 1998). Experiments 1 and 2 examine face typicality effects in a same/different discrimination task in which typical or atypical faces and their 80%, 70%, 60%, and 50% morphs were presented sequentially (Experiment 1) or simultaneously (Experiment 2). The main finding was that in both modes of presentation, atypical morphs were more poorly discriminated than their corresponding typical morphs. In Experiment 3, typicality effects were extended to the perception of nonface objects; in this instance, it was found that 50/50 morphs of birds and cars were judged to be more similar to their atypical parents than to their typical parents. These results are consistent with an attractor field model, in which it is proposed that the perception of a face or object stimulus depends not only on its fit to an underlying representation, but also on the representation's location in the similarity space.
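The attractor field idea in entry 15 can be caricatured in one dimension (a toy sketch with assumed Gaussian attraction fields and invented widths, not the authors' model): if atypical faces sit in sparse regions of similarity space and are therefore given broader attraction fields, a physical 50/50 morph is pulled more strongly toward the atypical parent.

```python
# Toy 1-D attractor field sketch. The Gaussian form and the field
# widths are assumptions for illustration only.
import math

def attraction(percept, parent, width):
    """Gaussian pull of a stored face on a perceived position."""
    return math.exp(-((percept - parent) ** 2) / (2 * width ** 2))

typical, atypical = 0.0, 1.0   # positions in a 1-D similarity space
morph = 0.5                    # physical 50/50 morph

# Typical faces sit in dense regions -> narrow field; atypical faces
# sit in sparse regions -> broad field (assumed widths).
pull_typical = attraction(morph, typical, width=0.3)
pull_atypical = attraction(morph, atypical, width=0.6)

print(pull_typical, pull_atypical)
```

Even though the morph is physically equidistant from both parents, the broader field wins, matching the reported bias toward the atypical parent.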

16.
Two experiments investigated whether 7-month-old infants attend to the spatial distance measurements relating internal features of the human face. A visual preference paradigm was used, in which two versions of the same female face (one either lengthened or shortened, and one nonmodified) were presented simultaneously. In Experiment 1, infants looked longer at the nonmodified faces, which were determined to match the average distance relationships found in a sample of faces drawn from the same population. Longer looking times for modified faces were found in Experiment 2, in which the nonmodified faces were unusually long and the modified faces conformed to average distance measurements. It is proposed that infants’ attention to the spatial relations of internal face features is an optimal tool for lifelong face recognition.

17.
Anticipating another’s actions is an important ability in social animals. Recent research suggests that in human adults and infants one’s own action experience facilitates understanding and anticipation of others’ actions. We investigated the link between first-person experience and perception of another’s action in adult tufted capuchin monkeys (Sapajus apella spp., formerly Cebus apella spp.). In Experiment 1, the monkeys observed a familiar human (actor) trying to open a container using either a familiar or an unfamiliar action. They looked for longer when the actor tried to open the container using a familiar action. In Experiment 2, the actor performed two novel actions on a new container. The monkeys looked equally at the two actions. In Experiment 3, the monkeys were trained to open the container using one of the novel actions in Experiment 2. After training, we repeated the same procedure as in Experiment 2. The monkeys looked for longer when the actor manipulated the container using the action they had practiced than when she used the unfamiliar action. These results show that knowledge derived from one’s own experience impacts perception of another’s action in these New World monkeys.

18.
The visible movement of a talker's face is an influential component of speech perception. However, the ability of this influence to function when large areas of the face (~50%) are covered by simple substantial occlusions, and so are not visible to the observer, has yet to be fully determined. In Experiment 1, both visual speech identification and the influence of visual speech on identifying congruent and incongruent auditory speech were investigated using displays of a whole (unoccluded) talking face and of the same face occluded vertically so that the entire left or right hemiface was covered. Both the identification of visual speech and its influence on auditory speech perception were identical across all three face displays. Experiment 2 replicated and extended these results, showing that visual and audiovisual speech perception also functioned well with other simple substantial occlusions (horizontal and diagonal). Indeed, displays in which entire upper facial areas were occluded produced performance levels equal to those obtained with unoccluded displays. Occluding entire lower facial areas elicited some impairments in performance, but visual speech perception and visual speech influences on auditory speech perception were still apparent. Finally, implications of these findings for understanding the processes supporting visual and audiovisual speech perception are discussed.

19.
Pairs of similar faces were created from photographs of different people using morphing software. The ability of participants to discriminate between novel pairs of faces and between those to which they had received brief, unsupervised, exposure (5×2 s each) was assessed. In all experiments exposure improved discrimination performance. Overall, discrimination was better when the faces were upright, but exposure produced improved discrimination for both upright and inverted faces (Experiment 1). The improvement produced by exposure was selective to internal face features (Experiment 2) and was evident when there was a change in orientation (three-quarter to full face or vice versa) between exposure and test (Experiment 3). These findings indicate that perceptual learning observed following brief exposure to faces exhibits well-established hallmarks of familiar face processing (i.e., internal feature advantage and insensitivity to a change of viewpoint). Considered in combination with previous studies using the same type of stimuli (Mundy, Honey, & Dwyer, 2007), the current results imply that general perceptual learning mechanisms contribute to the acquisition of face familiarity.

20.
In three experiments with infants and one with adults we explored the generality, limitations, and informational bases of early form perception. In the infant studies we used a habituation-of-looking-time procedure and the method of Kellman (1984), in which responses to three-dimensional (3-D) form were isolated by habituating 16-week-old subjects to a single object in two different axes of rotation in depth, and testing afterward for dishabituation to the same object and to a different object in a novel axis of rotation. In Experiment 1, continuous optical transformations given by moving 16-week-old observers around a stationary 3-D object specified 3-D form to infants. In Experiment 2 we found no evidence of 3-D form perception from multiple, stationary, binocular views of objects by 16- and 24-week-olds. Experiment 3A indicated that perspective transformations of the bounding contours of an object, apart from surface information, can specify form at 16 weeks. Experiment 3B provided a methodological check, showing that adult subjects could neither perceive 3-D forms from the static views of the objects in Experiment 3A nor match views of either object across different rotations by proximal stimulus similarities. The results identify continuous perspective transformations, given by object or observer movement, as the informational bases of early 3-D form perception. Detecting form in stationary views appears to be a later developmental acquisition.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号