A highly familiar type of movement occurs whenever a person walks towards you. In the present study, we investigated whether this type of motion has an effect on face processing. We took a range of different 3D head models and placed them on a single, identical 3D body model. The resulting figures were animated to approach the observer. In a first series of experiments, we used a sequential matching task to investigate how the motion of an approaching person affects immediate responses to faces. We compared observers’ responses following approach sequences to their performance with figures walking backwards (receding motion) or remaining still. Observers were significantly faster in responding to a target face that followed an approach sequence, compared to both receding and static primes. In a second series of experiments, we investigated long-term effects of motion using a delayed visual search paradigm. After studying moving or static avatars, observers searched for target faces in static arrays of varying set sizes. Again, observers were faster at responding to faces that had been learned in the context of an approach sequence. Together, these results suggest that the context of a moving body influences face processing, and they support the hypothesis that our visual system has mechanisms that aid the encoding of behaviourally relevant and familiar dynamic events.
We investigated how the difficulty of detecting a shape change influenced the achievement of object constancy across depth rotations for object identification and categorization tasks. In three sequential matching experiments, people saw pictures of morphs between two everyday, nameable objects (e.g., bath-sink morphs, along a continuum between "bath" and "sink" end-point shapes). In each experiment, both view changes and shape changes influenced performance. Furthermore, the deleterious effects of view changes were strongest when shape discrimination was hardest. In our earlier research, using morphs of novel objects, we found a similar interaction between view sensitivity and shape sensitivity (Lawson, 2004b; Lawson & Bülthoff, 2006; Lawson, Bülthoff, & Dumbell, 2003). The present results extend these findings to familiar-object morphs. They suggest that recognition remains view-sensitive at the basic level of identification for everyday, nameable objects, and that the difficulty of shape discrimination plays a critical role in determining the degree of this view sensitivity.
The causal-locus hypothesis (CLH) asserts that persons making internal attributions for failure and external attributions for success experience more negative postoutcome moods than persons making the opposite attributions. Three experiments assessed the CLH. Although outcomes consistently affected moods and attributions, attributions did not affect moods. Significant correlations consistent with the CLH were also infrequently obtained. Another theory, the sanctioned-object hypothesis (SOH), was proposed for understanding how causal attributions lead to mood changes. This hypothesis asserts that the application of positive or negative sanctions to objects in the perceptual field is a central determinant of mood, and that attributions affect mood when their content and salience activate sanctioning processes. A fourth experiment evaluated the competing theories. The results supported the SOH but not the CLH. The findings are discussed in terms of their implications for understanding mood variations and the effects that moods have on the construction of attributions, and in terms of methodological alternatives that may be valuable for future laboratory research on mood variations.
Shielding visual search against interference from salient distractors becomes more efficient over time for display regions where distractors appear frequently rather than only rarely (Goschy, Bakos, Müller, & Zehetleitner, Frontiers in Psychology, 5: 1195, 2014). We hypothesized that the locus of this learned distractor probability-cueing effect depends on the dimensional relationship of the to-be-inhibited distractor to the to-be-attended target. If the distractor and target are defined in different visual dimensions (e.g., a color-defined distractor and an orientation-defined target, as in Goschy et al., 2014), distractors may be efficiently suppressed by down-weighting the feature contrast signals in the distractor-defining dimension (Zehetleitner, Goschy, & Müller, Journal of Experimental Psychology: Human Perception and Performance, 38: 941–957, 2012), with stronger down-weighting applied to the frequent- than to the rare-distractor region. However, given dimensionally coupled feature contrast signal weighting (cf. Müller, Heller, & Ziegler, Perception & Psychophysics, 57: 1–17, 1995), this dimension-(down-)weighting strategy would not be effective when the target and the distractors are defined within the same dimension. In this case, suppression may operate differently: by inhibiting the entire frequent-distractor region on the search-guiding master saliency map. The downside of inhibition at this level is that, although it reduces distractor interference in the inhibited (frequent-distractor) region, it also impairs target processing in that region, even when no distractor is actually present in the display.
This predicted qualitative difference between same- and different-dimension distractors was confirmed in the present study (with 184 participants), thus furthering our understanding of the functional architecture of search guidance, especially regarding the mechanisms involved in shielding search from the interference of distractors that consistently occur in certain display regions.