Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
The objective of the present experiment was to examine the nature of the coding process in a letter-matching task. Letter pairs that were either visually confusable or acoustically confusable or both visually and acoustically confusable were presented tachistoscopically with a variable interval between the first letter and the comparison letter. The dependent measure was RT for the “different” responses to the three types of confusable items which were each assessed at four interstimulus intervals ranging from 0 to 2 sec. The results indicate that a visual code appears to be emphasized for approximately 1 sec, after which an acoustic code seems to be dominant. There is also evidence which indicates that the acoustic code does not immediately replace the visual code and that both may coexist for a brief period of time.

2.
Visual and acoustic confusability between a target item and background items was varied in a visual search task. Visual confusability was a highly significant source of difficulty while acoustic confusability had no effect. The results do not seem to be interpretable within a theory which assumes compulsory auditory encoding of visual information.


4.
Studies in change blindness reinforce the suggestion that veridical, pictorial representations that survive multiple relocations of gaze are unlikely to be generated in the visual system. However, more abstract information may well be extracted and represented by the visual system. In this paper we study the types of information that are retained and the time courses over which these representations are constructed when participants view complex natural scenes. We find that such information is retained and that the resultant abstract representations encode a range of information. Different types of information are extracted and represented over different time courses. After several seconds of viewing natural scenes, our visual system is able to construct a complex, information-rich representation.

5.
In visual search tasks, observers can guide their attention towards items in the visual field that share features with the target item. In this series of studies, we examined the time course of guidance toward a subset of items that have the same color as the target item. Landolt Cs were placed on 16 colored disks. Fifteen distractor Cs had gaps facing up or down while one target C had a gap facing left or right. Observers searched for the target C and reported which side contained the gap as quickly as possible. In the absence of other information, observers must search at random through the Cs. However, during the trial, the disks changed colors. Twelve disks were now of one color and four disks were of another color. Observers knew that the target C would always be in the smaller color set. The experimental question was how quickly observers could guide their attention to the smaller color set. Results indicate that observers could not make instantaneous use of color information to guide the search, even when they knew which two colors would be appearing on every trial. In each study, it took participants 200–300 ms to fully utilize the color information once presented. Control studies replicated the finding with more saturated colors and with colored C stimuli (rather than Cs on colored disks). We conclude that segregation of a display by color for the purposes of guidance takes 200–300 ms to fully develop.

6.
Studies on iconic memory demonstrate that rich information from a visual scene quickly becomes unavailable with the passage of time. The decay rate of iconic memory refers to the dynamics of memory availability. The present study investigated the iconic memory decay of different stimulus attributes that comprised an object. Specifically, in Experiment 1, participants were presented with eight coloured numbers (e.g., red 4) and required to remember only one attribute, either colour or number, over different blocks of trials. The participants then reported the cued attribute in which the cue Stimulus Onset Asynchrony (SOA) from the memory array onset was varied (0, 100, 200, 300, 500, and 1000 ms). We found that numerical information became unavailable more quickly than colour information, despite the fact that the memory accuracies at 0 and 1000 ms SOAs were comparable between the two attributes. In Experiment 2, we replicated the finding that a numerical representation was lost more quickly than a colour representation when visual masks followed the target stimulus. These results suggest that the various visual attributes comprising an object are lost over time at different rates in iconic memory. We discuss this finding in relation to how perceptual representation is transferred to the capacity-limited visual working memory.

7.
Given human capacity limitations, to behave adaptively we need to prioritise the order of visual processing to ensure that the most relevant information is available to control action. One way to do this is to prioritise processing at a particular location in space. However, there are many situations where this strategy is not possible and recent studies have shown that, in such circumstances, observers can use time as well as space to prioritise selection. We propose that selection by time can be influenced by a process of visual marking, involving an active bias applied in parallel against old items in the field. Here we describe the properties of visual marking in relation to other mechanisms of visual selection.

8.
Wede, J., & Francis, G. (2006). Perception, 35(9), 1155-1170.
Sequential viewing of two orthogonally related patterns produces an afterimage of the first pattern (Vidyasagar et al., 1999, Nature, 399, 422-423; Francis & Rothmayer, 2003, Perception & Psychophysics, 65, 508-522). We investigated how the timing between the first stimulus (a vertical bar grating) and the second stimulus (a horizontal bar grating) affected the visibility of the afterimage (a perceived vertical grating). As the duration from offset of the first stimulus increased, reports of afterimages decreased. Holding the total time from offset of the first stimulus fixed while increasing the duration from offset of the second stimulus (and thereby decreasing the time between the first and second stimuli) caused a decrease in afterimage reports. We interpret this finding in terms of Grossberg's BCS–FCS (boundary contour system, feature contour system) theory. In this theory, the afterimage percept is the result of color complement after-responses in the FCS interacting with orientation after-responses in the BCS. The two types of after-responses interact at a stage of neural filling-in to produce the afterimage percept. As the duration between the stimuli increases, the color after-responses weaken so that visible filling-in is less likely to occur. A similar effect occurs for the orientation after-responses, but at a faster time scale. Simulations of the model match the experimental data.

9.
Response selection takes time. Hick's law (Hick, 1952) predicts that the time course of response selection is a logarithmic function of the number of equally likely response alternatives. However, recent work has shown that oculomotor responses constitute noteworthy exceptions in that the latencies of saccades (Kveraga, Boucher, & Hughes, 2002) and smooth pursuit movements (Berryhill, Kveraga, Boucher, & Hughes, 2004) are completely independent of response uncertainty. This finding extends to the case in which the required response was known in advance (i.e., simple reaction times [RTs] were equivalent to choice RTs). In view of these results, we reevaluated reports that latencies to name visually presented digits (Experiment 1) and/or repeat aurally presented digits (Experiment 2) are similarly independent of the size of the response set. We found that naming latencies were equivalent for response set sizes from one to eight, but simple RTs (response set of one) were faster. Thus, the overlearned task of digit naming is indeed highly automatic but has not reached the level of automaticity characteristic of the oculomotor system.
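The logarithmic relation referenced in the abstract above is commonly written RT = a + b · log2(n + 1), where n is the number of equally likely alternatives. A minimal sketch of the prediction (the function name and parameter values are illustrative, not fitted to any of the cited experiments):

```python
import math

def hick_rt(n_alternatives, a=0.2, b=0.15):
    """Predicted reaction time (s) under Hick's law.

    a: base (residual) time in seconds; b: rate in seconds per bit.
    Both values are illustrative placeholders. The log2(n + 1) form
    includes the uncertainty about whether any response is required.
    """
    return a + b * math.log2(n_alternatives + 1)

# RT grows logarithmically, not linearly, with the number of alternatives.
for n in (1, 2, 4, 8):
    print(f"{n} alternatives: {hick_rt(n):.3f} s")
```

On this sketch, doubling the number of alternatives adds a roughly constant increment to RT, which is the signature the cited oculomotor studies fail to find for saccades and smooth pursuit.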

10.
How long does it take to form a durable representation in visual working memory? Several theorists have proposed that this consolidation process is very slow. Here, we measured the time course of consolidation. Observers performed a change-detection task for colored squares, and shortly after the presentation of the first array, pattern masks were presented at the locations of each of the colored squares to disrupt representations that had not yet been consolidated. Performance on the memory task was impaired when the delay between the colored squares and the masks was short, and this effect became larger when the number of colored squares was increased. The rate of consolidation was approximately 50 ms per item, which is considerably faster than previous proposals.
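At the roughly 50 ms/item rate reported above, the time needed to consolidate an array scales linearly with set size, so a mask arriving earlier should leave some items unconsolidated. A toy calculation under that linear assumption (the function names are ours; only the ~50 ms/item figure comes from the abstract):

```python
def consolidation_time_ms(n_items, rate_ms_per_item=50):
    """Time (ms) to consolidate n items into visual working memory,
    assuming the linear ~50 ms/item rate reported above."""
    return n_items * rate_ms_per_item

def items_consolidated(mask_delay_ms, rate_ms_per_item=50):
    """Number of items consolidated before a mask arriving at
    mask_delay_ms, under the same linear-rate assumption."""
    return mask_delay_ms // rate_ms_per_item

# A 4-item array needs ~200 ms; a mask at 100 ms leaves ~2 items consolidated.
print(consolidation_time_ms(4), items_consolidated(100))
```

This is only an idealization of the reported rate; the actual data are change-detection accuracies as a function of mask delay and set size, not discrete item counts.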

11.
It is well established that visual search becomes harder when the similarity between target and distractors is increased and the similarity between distractors is decreased. However, in models of visual search, similarity is typically treated as a static, time-invariant property of the relation between objects. Data from other perceptual tasks (e.g., categorization) demonstrate that similarity is dynamic and changes as perceptual information is accumulated (Lamberts, 1998). In three visual search experiments, the time course of target-distractor similarity effects and distractor-distractor similarity effects was examined. A version of the extended generalized context model (EGCM; Lamberts, 1998) provided a good account of the time course of the observed similarity effects, supporting the notion that similarity in search is dynamic. Modeling also indicated that increasing distractor homogeneity influences both perceptual and decision processes by (respectively) increasing the rate at which stimulus features are processed and enabling strategic weighting of stimulus information.

12.
This study builds on a specific characteristic of letters of the Roman alphabet—namely, that each letter name is associated with two visual formats, corresponding to their uppercase and lowercase versions. Participants had to read aloud the names of single letters, and event-related potentials (ERPs) for six pairs of visually dissimilar upper- and lowercase letters were recorded. Assuming that the end product of processing is the same for upper- and lowercase letters sharing the same vocal response, ERPs were compared backward, starting from the onset of articulatory responses, and the first significant divergence was observed 120 ms before response onset. Given that naming responses were produced at around 414 ms, on average, these results suggest that letter processing is influenced by visual information until 294 ms after stimulus onset. This therefore provides new empirical evidence regarding the time course and interactive nature of visual letter perception processes.

13.
Previous work has generated inconsistent results with regard to the extent to which working memory (WM) content guides visual attention. Some studies found effects of easy-to-verbalize stimuli, whereas others only found an influence of visual memory content. To resolve this, we compared the time courses of memory-based attentional guidance for different memory types. Participants first memorized a colour, which was either easy or difficult to verbalize. They then looked for an unrelated target in a visual search display and finally completed a memory test. One of the distractors in the search display could have the memorized colour. We varied the time between the to-be-remembered colour and the search display, as well as the ease with which the colours could be verbalized. We found that the influence of easy-to-verbalize WM content on visual search decreased with increasing time, whereas the influence of visual WM content was sustained. However, visual working memory effects on attention also decreased when the duration of visual encoding was limited by an additional task or when the memory item was presented only briefly. We propose that for working memory effects on visual attention to be sustained, a sufficiently strong visual representation is necessary.

14.
In three experiments, we used eyetracking to investigate the time course of biases in looking behaviour during visual decision making. Our study replicated and extended prior research by Shimojo, Simion, Shimojo, and Scheier (2003), and Simion and Shimojo (2006). Three groups of participants performed forced-choice decisions in a two-alternative free-viewing condition (Experiment 1a), a two-alternative gaze-contingent window condition (Experiment 1b), and an eight-alternative free-viewing condition (Experiment 1c). Participants viewed photographic art images and were instructed to select the one that they preferred (preference task), or the one that they judged to be photographed most recently (recency task). Across experiments and tasks, we demonstrated a robust bias towards the chosen item in gaze duration, gaze frequency, or both. The present gaze bias effect was less task specific than those reported previously. Importantly, in the eight-alternative condition we demonstrated a very early gaze bias effect, which rules out a postdecision response-related explanation.

15.
16.
Koring, L., Mak, P., & Reuland, E. (2012). Cognition, 123(3), 361-379.
Previous research has found that the single argument of unaccusative verbs (such as fall) is reactivated during sentence processing, but the argument of agentive verbs (such as jump) is not (Bever & Sanz, 1997; Friedmann, Taranto, Shapiro, & Swinney, 2008). An open question was whether this difference in processing is caused by a difference in the thematic roles the verbs assign or a difference in the underlying syntactic structure. In the present study we tease apart these two potential sources. To achieve this, we included a set of verbs (like sparkle) which are equal to unaccusative verbs in the thematic role they assign to their argument, but equal to agentive verbs in the syntactic status of their argument (henceforth, mixed verbs). We used the visual world paradigm, as it enables us to measure processing of the sentences continuously. This method also allowed us to test another hypothesis, namely that not only the argument of unaccusative verbs is reactivated during processing, but also the argument of agentive verbs. This reactivation is expected as the result of integrating the verb and its argument into one representation. In our experiment, participants listened to sentences including one of the three types of verbs. While listening, they viewed a visual display in which one of four objects was strongly related to the argument of the verb (wood-saw). The gaze record showed that the eyes moved to the related object (saw) upon presentation of the argument (wood). More interestingly, the eyes moved back to the related object upon presentation of the verb (fell). We found that looks to the related object increased only late after verb offset for unaccusative verbs, replicating the findings of previous research. We also found a rise in looks to the related object for agentive verbs, but this rise took place much earlier, starting slightly after verb onset. Finally, we found that mixed verbs pattern in processing with agentive verbs. We conclude that the argument of the verb is always reactivated, independent of verb type. In addition, the timing of integration differs per verb type and depends on the syntactic status of the argument and not on the thematic role that is assigned to the argument.

17.
18.
Capacity limits are a hallmark of visual cognition. The upper boundary of our ability to individuate and remember objects is well known but—despite its central role in visual information processing—not well understood. Here, we investigated the role of temporal limits in the perceptual processes of forming “object files.” Specifically, we examined the two fundamental mechanisms of object file formation—individuation and identification—by selectively interfering with visual processing by using forward and backward masking with variable stimulus onset asynchronies. While target detection was almost unaffected by these two types of masking, they showed distinct effects on the two different stages of object formation. Forward “integration” masking selectively impaired object individuation, whereas backward “interruption” masking only affected identification and the consolidation of information into visual working memory. We therefore conclude that the inherent temporal dynamics of visual information processing are an essential component in creating the capacity limits in object individuation and visual working memory.

19.
Long-lasting interference from an initial visual target on a subsequent one has been measured in two paradigms: rapid serial presentation of targets and nontargets at a single location, and simple presentation of two spatially separated targets. We note that comparisons between these paradigms might be invalid, since interference in each paradigm can be attributed to a different source: demands on selective attention, or demands to switch locations. We use a novel target presentation that both minimizes selection demands and eliminates location switching, yet we still find long-lasting interference. We suggest that all three paradigms discussed tap a common attentional limit. We also examine effects of similarity between targets, and effects of discrimination difficulty on the initial target. We find that similarity effects are more pronounced when nontargets are present, and we find no effect of discrimination difficulty on subsequent interference.

20.
Four experiments examined the Connolly & Jones (1970) model, which postulates that translation between modalities in the cross-modal paradigm occurs before storage in short-term memory. In general, the results provided no support for the translation notion. Delaying knowledge of the reproduction mode until the end of the retention interval failed to produce a matching performance decrement. Subjects were able to maintain the code of original presentation through the retention interval even when they did not expect reproduction to be in this mode. In addition, the asymmetry in the cross-modal matching of visual (V) and kinaesthetic (K) information, whereby K-V performance is more accurate than V-K performance, was found to occur only under certain visual display conditions. The implications of these findings for general models of cross-modal translation were discussed.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号