Similar Articles
20 similar articles found (search time: 31 ms)
1.
Dance-like actions are complex visual stimuli involving multiple changes in body posture across time and space. Visual perception research has demonstrated a difference between the processing of dynamic body movement and the processing of static body posture. Yet, it is unclear whether this processing dissociation continues during the retention of body movement and body form in visual working memory (VWM). When observing a dance-like action, it is likely that static snapshot images of body posture will be retained alongside dynamic images of the complete motion. Therefore, we hypothesized that, as in perception, posture and movement would differ in VWM. Additionally, if body posture and body movement are separable in VWM, as form- and motion-based items, respectively, then differential interference from intervening form and motion tasks should occur during recognition. In two experiments, we examined these hypotheses. In Experiment 1, the recognition of postures and movements was tested in conditions in which the formats of the study and test stimuli matched (movement–study to movement–test, posture–study to posture–test) or mismatched (movement–study to posture–test, posture–study to movement–test). In Experiment 2, the recognition of postures and movements was compared after intervening form and motion tasks. The results indicated that (1) the recognition of body movement based only on posture is possible, but it is significantly poorer than recognition based on the entire movement stimulus, and (2) form-based interference does not impair memory for movements, although motion-based interference does. We concluded that, whereas static posture information is encoded during the observation of dance-like actions, body movement and body posture differ in VWM.

2.
A matching advantage for dynamic human faces (total citations: 3; self-citations: 0; cited by others: 3)
Thornton, I. M., & Kourtzi, Z. (2002). Perception, 31(1), 113–132.
In a series of three experiments, we used a sequential matching task to explore the impact of non-rigid facial motion on the perception of human faces. Dynamic prime images, in the form of short video sequences, facilitated matching responses relative to a single static prime image. This advantage was observed whenever the prime and target showed the same face but an identity match was required across expression (experiment 1) or view (experiment 2). No facilitation was observed for identical dynamic prime sequences when the matching dimension was shifted from identity to expression (experiment 3). We suggest that the observed dynamic advantage, the first reported for non-degraded facial images, arises because the matching task places more emphasis on visual working memory than typical face recognition tasks. More specifically, we believe that representational mechanisms optimised for the processing of motion and/or change-over-time are established and maintained in working memory and that such 'dynamic representations' (Freyd, 1987, Psychological Review, 94, 427-438) capitalise on the increased information content of the dynamic primes to enhance performance.

3.
In three experiments, we investigated whether the emotional valence of a photograph influenced the amount of time required to initially identify the contents of the image. In Experiment 1, participants saw a slideshow consisting of positive, neutral, and negative photographs that were balanced for arousal. During the slideshow, presentation time was substantially limited (60 ms), and the images were followed by masks. Immediately following the slideshows, participants were given a recognition memory test. Memory performance was best for positive images and worst for negative images (Experiment 1). In Experiment 2, two simultaneous photographs were briefly presented and masked. On a trial-by-trial basis, participants indicated whether the two images were identical or not, thus removing the need for memory storage and retrieval. Again, performance was worst for negative images. The results of Experiment 3 suggested that these valence-based differences were not related to attentional effects. We argue that the valence of an image is detected rapidly and, in the case of negative images, interferes with processing the identity of the scene.

4.
Sit-and-wait strategies in dynamic visual search (total citations: 1; self-citations: 0; cited by others: 1)
The role of memory in visual search has lately become a controversial issue. Horowitz and Wolfe (1998) observed that performance in a visual search task was little affected by whether the stimuli were static or randomly relocated every 111 ms. Because a memory-based mechanism, such as inhibition of return, would be of no use in the dynamic condition, Horowitz and Wolfe concluded that memory is likewise not involved in the static condition. However, Horowitz and Wolfe could not effectively rule out the possibility that observers adopted a different strategy in the dynamic condition than in the static condition. That is, in the dynamic condition observers may have attended to a subregion of the display and waited for the target to appear there (sit-and-wait strategy). This hypothesis is supported by experimental data showing that performance in their dynamic condition does not differ from performance in another dynamic condition in which observers are forced to adopt a sit-and-wait strategy by being presented with a limited region of the display only.

5.
How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

6.
Despite the substantial interest in memory for complex pictorial stimuli, there has been virtually no research comparing memory for static scenes with that for their moving counterparts. We report that both monochrome and color moving images are better remembered than static versions of the same stimuli at retention intervals up to one month. When participants studied a sequence of still images, recognition performance was the same as that for single static images. These results are discussed within a theoretical framework which draws upon previous studies of scene memory, face recognition, and representational momentum.

7.
Evidence from a number of sources now suggests that the visuo-spatial sketchpad (VSSP) of working memory may be composed of two subsystems: one for maintaining visual information and the other for spatial information. In this paper we present three experiments that examine this fractionation using a developmental approach. In Experiment 1, 5-, 8-, and 10-year-old children were presented with a visuo-spatial working memory task (the matrices task) with two presentation formats (static and dynamic). A developmental dissociation in performance was found for the static and dynamic conditions of both tasks, suggesting that the activation of separable subsystems of the VSSP is dependent upon a static/dynamic distinction in information content rather than a visual/spatial one. A highly similar pattern of performance was found for a mazes task with static and dynamic formats. However, one strategic activity, the use of simple verbal recoding, may also have been responsible for the observed pattern of performance in the matrices task. In Experiments 2 and 3 this was investigated using concurrent articulatory suppression. No evidence to support this notion was found, and it is therefore proposed that static and dynamic visuo-spatial information is maintained in working memory by separable subcomponents of the VSSP.

8.
Long-term recognition memory for some pictures is consistently better than for others (Isola, Xiao, Parikh, Torralba, & Oliva, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 36(7), 1469–1482, 2014). Here, we investigated whether pictures found to be memorable in a long-term memory test are also perceived more easily when presented in ultra-rapid RSVP. Participants viewed 6 pictures they had never seen before that were presented for 13 to 360 ms per picture in a rapid serial visual presentation (RSVP) sequence. In half the trials, one of the pictures was a memorable or a nonmemorable picture, and perception of this picture was probed by a visual recognition test at the end of the sequence. Recognition for pictures from the memorable set was higher than for those from the nonmemorable set, and this difference increased with increasing duration. Nonmemorable picture recognition was low initially, did not increase until 120 ms, and never caught up with memorable picture recognition performance. Thus, the long-term memorability of an image is associated with initial perceptibility: A picture that is hard to grasp quickly is hard to remember later.

9.
The present study examines the idea that time-based forgetting of outdated information can lead to better memory of currently relevant information. This was done using the visual arrays task, along with a between-subjects manipulation of both the retention interval (1 s vs. 4 s) and the time between two trials (1 s vs. 4 s). Consistent with prior work [Shipstead, Z., & Engle, R. W. (2013). Interference within the focus of attention: Working memory tasks reflect more than temporary maintenance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 277–289; Experiment 1], longer retention intervals did not lead to diminished memory of currently relevant information. However, we did find that longer periods of time between two trials improved memory for currently relevant information. This replicates findings that indicate proactive interference affects visual arrays performance and extends previous findings to show that reduction of proactive interference can occur in a time-dependent manner.

10.
Abstract

We asked whether body sway would be influenced by visual information about motion of the ground surface. On a ship at sea, standing participants performed a demanding visual search task or a simple visual inspection task. Display content was stationary relative to the ship or relative to the Earth. Participants faced the ship’s bow or its port side. Performance on the visual search task was representative of terrestrial studies. Body sway was greater during viewing of the Earth Stationary displays than during viewing of the Ship Stationary displays. We discuss possible implications of these results for theoretical and applied issues.

11.
The aim of this study was to compare the effect on interval discrimination of the presentation of disgusting mutilation images and the presentation of faces expressing disgust. In Experiments 1 and 2, participants had to say whether the second of two images was presented for a shorter or a longer duration than the first (intervals = 400 ms vs. 482 ms). Although the overall probability of responding “long” was not exactly the same in these two experiments, participants reported that duration was longer more often when disgusting mutilation images were presented than when neutral or disgusted faces were presented. In Experiment 3, in which a single-stimulus method was employed, mutilation images were once again reported to be presented for a longer duration than neutral or disgusted faces. The investigation also revealed that discrimination levels were not higher when mutilation images were presented. It is argued that the effect of mutilation images on perceived duration is not due to attention; rather, it is attributed to the increased arousal caused by these images.

12.
Time perception performance was systematically investigated in adolescents with and without attention-deficit/hyperactivity disorder (ADHD). Specifically, the effects of manipulating modality (auditory and visual) and length of duration (200 and 1000 ms) were examined. Forty-six adolescents with ADHD and 44 controls were administered four duration discrimination tasks, two control tasks, and a set of standardized measures. Participants with ADHD had higher thresholds than controls on all of the duration discrimination tasks, with the largest effect size obtained on the visual 1000 ms duration discrimination task. No group differences were observed on the control tasks. Visual–spatial memory was found to be a significant predictor of visual and auditory duration discrimination at longer intervals (1000 ms) in the ADHD sample, whereas auditory verbal working memory predicted auditory discrimination at longer intervals (1000 ms) in the control sample. These group differences suggest impairments in basic timing mechanisms in ADHD.

13.
ABSTRACT

Processing latencies for coherent, high-level percepts in vision are at least 100 ms and possibly as much as 500 ms. Processing latencies are shorter in other modalities, but still significant. This seems to imply that perception lags behind reality by an amount equal to the processing latency. It has been proposed that the brain can compensate for perceptual processing latencies by using the most recent available information to extrapolate forward, thereby constructing a model of what the world beyond the senses is like now. The present paper reviews several lines of evidence relating to this hypothesis, including the flash-lag effect, motion-induced position shifts, representational momentum, static visual illusions, and motion extrapolation at the retina. There are alternative explanations for most of the results, but there are some findings for which no competing explanation has yet been proposed. Collectively, the evidence for extrapolation to the present is suggestive but not yet conclusive. An alternative account of compensation for processing latencies, based on the hypothesis of rapid emergence of percepts, is proposed.

14.
Attentional dwell time (AD) refers to our inability to perceive spatially separate events when they occur in rapid succession. In the standard AD paradigm, subjects must identify two target stimuli presented briefly at different peripheral locations with a varied stimulus onset asynchrony (SOA). The AD effect is seen as a long-lasting impediment in reporting the second target, culminating at SOAs of 200–500 ms. Here, we present the first quantitative computational model of the effect: a theory of temporal visual attention. The model is based on the neural theory of visual attention (Bundesen, Habekost, & Kyllingsbæk, Psychological Review, 112, 291–328, 2005) and introduces the novel assumption that a stimulus retained in visual short-term memory takes up the visual processing resources used to encode stimuli into memory. Resources are thus locked and cannot process subsequent stimuli until the stimulus in memory has been recoded, which explains the long-lasting AD effect. The model is used to explain results from two experiments providing detailed individual data from both a standard AD paradigm and an extension with varied exposure duration of the target stimuli. Finally, we discuss new predictions made by the model.
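The resource-locking idea described in this abstract lends itself to a small illustrative simulation. The sketch below is not the authors' model (which is formulated within Bundesen's neural theory of visual attention); it is only a minimal toy version of the assumption that encoding resources stay locked while the first target is recoded, so a second, masked target presented at a short SOA often decays before it can be encoded. All parameter names and values (ENCODE_MEAN, RECODE_MEAN, TRACE_LIFETIME) are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical parameters -- illustrative values only, not fitted to any data.
ENCODE_MEAN = 50      # mean time (ms) to encode a target into visual short-term memory
RECODE_MEAN = 300     # mean time (ms) resources stay locked while the first target is recoded
TRACE_LIFETIME = 150  # time (ms) a masked target's sensory trace remains usable after onset

def exp_sample(mean_ms: float) -> float:
    """Draw a duration from an exponential distribution with the given mean."""
    return random.expovariate(1.0 / mean_ms)

def trial(soa_ms: float) -> bool:
    """Simulate one two-target trial; return True if the second target gets encoded."""
    t1_encoded_at = exp_sample(ENCODE_MEAN)                       # T1 enters memory
    resources_free_at = t1_encoded_at + exp_sample(RECODE_MEAN)   # locked until T1 is recoded
    # Encoding of T2 can only start once it is on screen AND resources are free again.
    t2_start = max(soa_ms, resources_free_at)
    t2_encoded_at = t2_start + exp_sample(ENCODE_MEAN)
    # The masked T2 trace is only usable for a limited time after T2 onset.
    return t2_encoded_at <= soa_ms + TRACE_LIFETIME

def t2_accuracy(soa_ms: float, n_trials: int = 20000) -> float:
    """Estimate the probability of reporting the second target at a given SOA."""
    return sum(trial(soa_ms) for _ in range(n_trials)) / n_trials

if __name__ == "__main__":
    for soa in (100, 200, 300, 500, 800):
        print(f"SOA {soa:4d} ms: P(encode T2) = {t2_accuracy(soa):.2f}")
```

Run as a script, the sketch prints second-target accuracy rising with SOA, which is the qualitative signature of the attentional dwell time effect: poor report of the second target at SOAs of a few hundred milliseconds, recovering once the first target has been recoded and the resources are released.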

15.
Dynamic visual processing and Chinese character reading in children (total citations: 14; self-citations: 0; cited by others: 14)
Using a visual threshold test, picture naming, an orthographic similarity judgment experiment, and phonological awareness measures, this study examined the relationship between visual processing skills and Chinese character reading in fifth-grade children. The results showed that dynamic visual processing was significantly correlated with picture-naming error rate, orthographic judgment reaction time and error rate, and phonological awareness, whereas static visual processing was significantly correlated only with picture-naming error rate. Partial correlation analyses controlling for the number of characters known showed that the correlations between dynamic visual processing and the other variables were unchanged, while the correlation between static visual processing and picture-naming error rate was no longer significant. Regression analyses found that, after controlling for the effects of character knowledge and phonological awareness, dynamic visual processing explained 7%, 25%, and 56% of the variance in reading fluency, orthographic judgment reaction time, and picture-naming error rate, respectively; phonological awareness explained 9% and 10% of the variance in character knowledge and reading fluency. Analyses of the dynamic visual processing and phonological awareness of poor readers revealed large individual differences on these two tests. These results indicate that reading is influenced by basic perceptual skills and that dynamic visual processing contributes to specific processes of Chinese character reading.

16.
The middle temporal and medial superior temporal cortex (MT/MST) is involved in the processing of visual motion, and fMRI experiments indicate that there is greater activation when subjects view static images that imply motion than when they view images that do not imply motion at all. We applied transcranial magnetic stimulation (TMS) to MT/MST in order to assess the functional necessity of this region for the processing of implied motion represented in static images. Area MT/MST was localized by the use of a TMS-induced misperception of visual motion, and its location was verified through the monitored completion of a motion discrimination task. We controlled for possible impairments in general visual processing by having subjects perform an object categorization task with and without TMS. Although MT/MST stimulation impaired performance in a motion discrimination task (and vertex stimulation did not), there was no difference in performance between the two forms of stimulation in the implied motion discrimination task. MT/MST stimulation did, however, improve subjects’ performance in the object categorization task. These results indicate that, within 150 msec of stimulus presentation, MT/MST is not directly involved in the visual processing of static images in which motion is implied. The results do, however, confirm previous findings that disruption of MT/MST may improve efficiency in more ventral visual processing streams.

17.
The present study was designed to investigate the influences of type of psychophysical task (two-alternative forced-choice [2AFC] and reminder tasks), type of interval (filled vs. empty), sensory modality (auditory vs. visual), and base duration (ranging from 100 through 1,000 ms) on performance on duration discrimination. All of these factors were systematically varied in an experiment comprising 192 participants. This approach allowed for obtaining information not only on the general (main) effect of each factor alone, but also on the functional interplay and mutual interactions of some or all of these factors combined. Temporal sensitivity was markedly higher for auditory than for visual intervals, as well as for the reminder relative to the 2AFC task. With regard to base duration, discrimination performance deteriorated with decreasing base durations for intervals below 400 ms, whereas longer intervals were not affected. No indication emerged that overall performance on duration discrimination was influenced by the type of interval, and only two significant interactions were apparent: Base Duration × Type of Interval and Base Duration × Sensory Modality. With filled intervals, the deteriorating effect of base duration was limited to very brief base durations, not exceeding 100 ms, whereas with empty intervals, temporal discriminability was also affected for the 200-ms base duration. Similarly, the performance decrement observed with visual relative to auditory intervals increased with decreasing base durations. These findings suggest that type of task, sensory modality, and base duration represent largely independent sources of variance for performance on duration discrimination that can be accounted for by distinct nontemporal mechanisms.

18.
Horowitz and Wolfe (1998, 2003) have challenged the view that serial visual search involves memory processes that keep track of already inspected locations. The present study used a search paradigm similar to Horowitz and Wolfe's (1998), comparing a standard static search condition with a dynamic condition in which display elements changed locations randomly every 111 ms. In addition to measuring search reaction times, observers' eye movements were recorded. For target-present trials, the search rates were near-identical in the two search conditions, replicating Horowitz and Wolfe's findings. However, the number of fixations and saccade amplitude were larger in the static than in the dynamic condition, whereas fixation duration and the latency of the first saccade were longer in the dynamic condition. These results indicate that an active, memory-guided search strategy was adopted in the static condition, and a passive “sit-and-wait” strategy in the dynamic condition.

19.
The present studies aimed to extend Regulatory Fit Theory in the domain of persuasive communication by (a) using printed advertisement images without any verbal claim, instead of purely or mostly verbal messages; (b) selecting the images to fit the distinct orientations of regulatory mode rather than regulatory focus; and (c) priming regulatory mode orientation instead of relying on chronic prevalence of either locomotion or assessment orientation. We found that recipients primed with a locomotion orientation experienced fit, and were more persuaded, when exposed to “dynamic” versus “static” visual images; conversely, recipients primed with an assessment orientation experienced fit and were more persuaded when exposed to “static” versus “dynamic” images. Our findings show that the experience of fit can be induced by visual messages, resulting in positive effects in terms of attitude toward product advertisement and estimated price of advertised products. Copyright © 2010 John Wiley & Sons, Ltd.

