Similar Documents
20 similar documents found (search time: 15 ms).
1.
Previous research has found that perception of eye-gaze direction is influenced by facial expression: angry faces are more likely than fearful faces to be judged as looking at the observer. Although researchers have proposed different explanations for this, it remains unclear whether the differential influence of angry and fearful expressions on gaze-direction perception arises from the face's configural information or from its physical featural information. Using a gaze-direction discrimination task with the cone of direct gaze (CoDG) as the dependent variable, the present study addressed this question by separating configural from featural information, using upright, inverted, and blurred face images as stimuli. When all facial information was preserved (Experiment 1), the CoDG was wider for angry than for fearful faces. When configural processing was disrupted and only featural processing remained (Experiment 2), the difference between angry and fearful expressions in the CoDG disappeared. When featural processing was weakened while configural processing was preserved (Experiment 3), the difference in the CoDG reappeared. These results indicate that the influence of different threatening facial expressions on eye-gaze perception stems mainly from differences in the processing of emotion-relevant configural information rather than from differences in low-level physical information, supporting the theoretical basis of the shared-signal hypothesis and the emotional-appraisal hypothesis for the integrated processing of threatening facial expressions and gaze direction.

2.
To examine the development of visual short-term memory (VSTM) for location, we presented 6- to 12-month-old infants (N = 199) with two side-by-side stimulus streams. In each stream, arrays of colored circles continually appeared, disappeared, and reappeared. In the changing stream, the location of one or more items changed in each cycle; in the non-changing stream the locations did not change. Eight- and 12.5-month-old infants showed evidence of memory for multiple locations, whereas 6.5-month-old infants showed evidence of memory only for a single location, and only when that location was easily identified by salient landmarks. In the absence of such landmarks, 6.5-month-old infants showed evidence of memory for the overall configuration or shape. This developmental trajectory for spatial VSTM is similar to that previously observed for color VSTM. These results additionally show that infants’ ability to detect changes in location is dependent on their developing sensitivity to spatial reference frames.

3.
ABSTRACT

Whether visual short-term memory can be lost over an unfilled delay, in line with time-dependent forgetting, is controversial and prior work has yielded mixed results. The present study explored time-dependent forgetting in visual short-term memory in relation to other factors. In three experiments, participants compared single target and probe objects over a 2 s or 10 s retention interval. The objects across trials were either similar or dissimilar (Experiment 1) and had to be remembered in the presence of an additional distractor (Experiment 2) or under conditions where the amount of time separating trials varied (Experiment 3). In all experiments, the retention interval manipulation made the biggest contribution to performance, with accuracy decreasing as the retention interval was lengthened from 2 s to 10 s. These results pose problems for interference and temporal distinctiveness models of memory but are compatible with temporal forgetting mechanisms such as decay.

4.
Delvenne, J. F. (2005). Cognition, 96(3), B79–B88.
Visual short-term memory (VSTM) and attention are both thought to have a capacity limit of four items [e.g. Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279-281; Pylyshyn, Z. W., & Storm, R. W. (1988). Tracking multiple independent targets: evidence for a parallel tracking mechanism. Spatial Vision, 3, 179-197.]. Using the multiple object visual tracking paradigm (MOT), it has recently been shown that twice as many items can be simultaneously attended when they are separated between two visual fields compared to when they are all presented within the same hemifield [Alvarez, G. A., & Cavanagh, P. (2004). Independent attention resources for the left and right visual hemifields (Abstract). Journal of Vision, 4(8), 29a.]. Does VSTM capacity also increase when the items to be remembered are distributed between the two visual fields? The current paper investigated this central issue in two different tasks, namely a color and spatial location change detection task, in which the items were displayed either in the two visual fields or in the same hemifield. The data revealed that only memory capacity for spatial locations and not colors increased when the items were separated between the two visual fields. These findings support the view of VSTM as a chain of capacity limited operations where the spatial selection of stimuli, which dominates in both spatial location VSTM and MOT, occupies the first place and shows independence between the two fields.

5.
ABSTRACT

When participants search the same letter display repeatedly for different targets we might expect performance to improve on each subsequent search as they memorize characteristics of the display. However, here we find that search performance improved from a first search to a second search but not for a third search of the same display. This is predicted by a simple model that supports search with only a limited capacity short-term memory for items in the display. To support this model we show that a short-term memory recency effect is present in both the second and the third search. The magnitude of these effects is the same in both searches and as a result there is no additional benefit from the second to the third search.

6.
Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals.

7.
Visual perception of shape is affected by coding of local convexities and concavities. For instance, a recent study reported that deviations from symmetry carried by convexities were easier to detect than deviations carried by concavities. We removed some confounds and extended this work from a detection of reflection of a contour (i.e., bilateral symmetry), to a detection of repetition of a contour (i.e., translational symmetry). We tested whether any convexity advantage is specific to bilateral symmetry in a two-interval (Experiment 1) and a single-interval (Experiment 2) detection task. In both, we found a convexity advantage only for repetition. When we removed the need to choose which region of the contour to monitor (Experiment 3) the effect disappeared. In a second series of studies, we again used shapes with multiple convex or concave features. Participants performed a change detection task in which only one of the features could change. We did not find any evidence that convexities are special in visual short-term memory, when the to-be-remembered features only changed shape (Experiment 4), when they changed shape and changed from concave to convex and vice versa (Experiment 5), or when these conditions were mixed (Experiment 6). We did find a small advantage for coding convexity as well as concavity over an isolated (and thus ambiguous) contour. The latter is consistent with the known effect of closure on processing of shape. We conclude that convexity plays a role in many perceptual tasks but that it does not have a basic encoding advantage over concavity.

8.
Most previous studies investigating children’s ability to recognize facial expressions used only intense exemplars. Here we compared the sensitivity of 5-, 7-, and 10-year-olds with that of adults (n = 24 per age group) for less intense expressions of happiness, sadness, and fear. The developmental patterns differed across expressions. For happiness, by 5 years of age, children were as sensitive as adults even to low intensities. For sadness, by 5 years of age, children were as accurate as adults in judging that the face was expressive (i.e., not neutral), but even at 10 years of age, children were more likely to misjudge it as fearful. For fear, children’s thresholds were not adult-like until 10 years of age, and children often confused it with sadness at 5 years of age. For all expressions, including even happy expressions, 5- and 7-year-olds were less accurate than adults in judging which of two expressions was more intense. Together, the results indicate that there is slow development of accurate decoding of subtle facial expressions.

9.
Sato, W., & Yoshikawa, S. (2007). Cognition, 104(1), 1–18.
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing (Experiment 1) and videos (Experiment 2). The subjects' facial actions were unobtrusively videotaped and blindly coded using the Facial Action Coding System [FACS; Ekman, P., & Friesen, W. V. (1978). Facial action coding system. Palo Alto, CA: Consulting Psychologists Press]. In the dynamic presentations common to both experiments, brow lowering, a prototypical action in angry expressions, occurred more frequently in response to angry expressions than to happy expressions. The pulling of lip corners, a prototypical action in happy expressions, occurred more frequently in response to happy expressions than to angry expressions in dynamic presentations. Additionally, the mean latency of these actions was less than 900 ms after the onset of dynamic changes in facial expression. Naive raters recognized the subjects' facial reactions as emotional expressions, with the valence corresponding to the dynamic facial expressions that the subjects were viewing. These results indicate that dynamic facial expressions elicit spontaneous and rapid facial mimicry, which functions both as a form of intra-individual processing and as inter-individual communication.

10.
Is visual representation of an object affected by whether surrounding objects are identical to it, different from it, or absent? To address this question, we tested perceptual priming, visual short-term, and long-term memory for objects presented in isolation or with other objects. Experiment 1 used a priming procedure, where the prime display contained a single face, four identical faces, or four different faces. Subjects identified the gender of a subsequent probe face that either matched or mismatched with one of the prime faces. Priming was stronger when the prime was four identical faces than when it was a single face or four different faces. Experiments 2 and 3 asked subjects to encode four different objects presented on four displays. Holding memory load constant, visual memory was better when each of the four displays contained four duplicates of a single object, than when each display contained a single object. These results suggest that an object's perceptual and memory representations are enhanced when presented with identical objects, revealing redundancy effects in visual processing.

11.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.

12.
The role of extrafoveal information in visual short-term memory has been investigated relatively little, and, in most existing studies, using verbalisable stimuli susceptible to the recruitment of long-term memory (LTM). In addition, little is known about the impact of extrafoveal information available pre- and posttarget foveation, as it is typical to provide extrafoveal information prior to the foveation of memory targets. In this study, two object-position recognition experiments were conducted (each with two conditions) to establish the impact of extrafoveal information provided before and after the foveation of memory targets. Stimuli comprised 1/f noise discs that minimised the recruitment of LTM by eliminating verbal and semantic cues. Overall, a greater hit rate was found where extrafoveal information was available; however, performance analyses in which extrafoveal information was considered relative to the temporal lag at which target stimuli were foveated reveal both costs and benefits. A beneficial effect arose only where extrafoveal information was provided after the target had been foveated, but not prior to target foveation. Findings are discussed in terms of recency and extrafoveal perception effects, incorporating a postfoveation object-file refresh mechanism.

13.
We investigated the nature of the bandwidth limit in the consolidation of visual information into visual short-term memory. In the first two experiments, we examined whether previous results showing differential consolidation bandwidth for colour and orientation resulted from methodological differences by testing the consolidation of colour information with methods used in prior orientation experiments. We briefly presented two colour patches with masks, either sequentially or simultaneously, followed by a location cue indicating the target. Participants identified the target colour via buttonpress (Experiment 1) or by clicking a location on a colour wheel (Experiment 2). Although these methods have previously demonstrated that two orientations are consolidated in a strictly serial fashion, here we found equivalent performance in the sequential and simultaneous conditions, suggesting that two colours can be consolidated in parallel. To investigate whether this difference resulted from different consolidation mechanisms or a common mechanism with different features consuming different amounts of bandwidth, Experiment 3 presented a colour patch and an oriented grating either sequentially or simultaneously. We found a lower performance in the simultaneous than the sequential condition, with orientation showing a larger impairment than colour. These results suggest that consolidation of both features share common mechanisms. However, it seems that colour requires less information to be encoded than orientation. As a result, two colours can be consolidated in parallel without exceeding the bandwidth limit, whereas two orientations or an orientation and a colour exceed the bandwidth and appear to be consolidated serially.

14.
Recognising identity and emotion conveyed by the face is important for successful social interactions and has thus been the focus of considerable research. Debate has surrounded the extent to which the mechanisms underpinning face emotion and face identity recognition are distinct or share common processes. Here we use an individual differences approach to address this issue. In a well-powered (N = 605) and age-diverse sample we used structural equation modelling to assess the association between face emotion recognition and face identity recognition ability. We also sought to assess whether this association (if present) reflected visual short-term memory and/or general intelligence (g). We observed a strong positive correlation (r = .52) between face emotion recognition ability and face identity recognition ability. This association was reduced in magnitude but still moderate in size (r = .28) and highly significant when controlling for measures of g and visual short-term memory. These results indicate that face emotion and face identity recognition abilities in part share a common processing mechanism. We suggest that face processing ability involves multiple functional components and that modelling the sources of individual differences can offer an important perspective on the relationship between these components.

15.
The anger superiority effect shows that an angry face is detected more efficiently than a happy face. However, it is still controversial whether attentional allocation to angry faces is a bottom-up process or not. We investigated whether the anger superiority effect is influenced by top-down control, especially working memory (WM). Participants remembered a colour and then searched for differently coloured facial expressions. Just holding the colour information in WM did not modulate the anger superiority effect. However, when increasing the probabilities of trials in which the colour of a target face matched the colour held in WM, participants were inclined to direct attention to the target face regardless of the facial expression. Moreover, the knowledge of high probability of valid trials eliminated the anger superiority effect. These results suggest that the anger superiority effect is modulated by top-down effects of WM, the probability of events and expectancy about these probabilities.

16.
Emerging evidence suggests that age-related declines in memory may reflect a failure in pattern separation, a process that is believed to reduce the encoding overlap between similar stimulus representations during memory encoding. Indeed, behavioural pattern separation may be indexed by a visual continuous recognition task in which items are presented in sequence and observers report for each whether it is novel, previously viewed (old), or whether it shares features with a previously viewed item (similar). In comparison to young adults, older adults show decreased pattern separation when the number of items between “old” and “similar” items is increased. Yet the mechanisms of forgetting underpinning this type of recognition task have yet to be explored in a cognitively homogeneous group, with careful control over the parameters of the task, including elapsing time (a critical variable in models of forgetting). By extending the inter-item intervals, number of intervening items and overall decay interval, we observed in a young adult sample (N = 35, mean age = 19.56 years) that the critical factor governing performance was inter-item interval. We argue that tasks using behavioural continuous recognition to index pattern separation in immediate memory will benefit from generous inter-item spacing, offering protection from inter-item interference.

17.
The present study investigated whether dysphoric individuals have a difficulty in disengaging attention from negative stimuli and/or reduced attention to positive information. Sad, neutral and happy facial stimuli were presented in an attention-shifting task to 18 dysphoric and 18 control participants. Reaction times to neutral shapes (squares and diamonds) and the event-related potentials to emotional faces were recorded. Dysphoric individuals did not show impaired attentional disengagement from sad faces or facilitated disengagement from happy faces. Right occipital lateralisation of P100 was absent in dysphoric individuals, possibly indicating reduced attention-related sensory facilitation for faces. Frontal P200 was largest for sad faces among dysphoric individuals, whereas controls showed larger amplitude to both sad and happy as compared with neutral expressions, suggesting that dysphoric individuals deployed early attention to sad, but not happy, expressions. Importantly, the results were obtained controlling for the participants' trait anxiety. We conclude that at least under some circumstances the presence of depressive symptoms can modulate early, automatic stages of emotional processing.

18.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face was naturally moving. Findings showed that dynamic information in the eyes and the mouth was important and the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.

19.
We show that perceived size of visual stimuli can be altered by matches between the contents of visual short-term memory and stimuli in the scene. Observers were presented with a colour cue (to hold in working memory or to merely identify) and subsequently had to indicate which of the two different-coloured objects presented simultaneously on the screen appeared bigger (or smaller). One of the two objects for size judgements had the same colour as the cue (matching stimulus) and the other did not (mismatching stimulus). Perceived object size was decreased by the reappearance of the recently seen cue, as there were more size judgement errors on trials where the matching stimulus was physically bigger (relative to the mismatching stimulus) than on trials where the matching stimulus was physically smaller. The effect occurred regardless of whether the visual cue was actively maintained in working memory or was merely identified. The effect was unlikely to have been generated by the allocation of attention, because shifting attention to a visual stimulus actually increased its perceived size. The findings suggest that visual short-term memory, whether explicit or implicit, can decrease the perceived size of subsequent visual stimuli.

20.
The current study aimed to investigate the effect of action on the preservation of stored feature bindings. Prior research suggests that stimuli presented after a memory array can disrupt the feature bindings of memory array items. Here, we conducted three experiments to examine whether response to targets disrupts feature bindings. Two of four letters (A, B, C, D) were presented in a memory array, and were followed by a second array containing a single target letter. After either identifying or localizing the target letter, participants were required to report the identity or location of the memory array items. There was a deficit in memory performance involving spatial repetition when participants were required to localize targets, and involving identity repetition when participants were required to identify targets. We conclude that response codes are fundamentally linked to stimulus representations, and can affect retrieval from visual working memory.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号