Found 20 similar records; search took 15 ms
1.
Previous research indicates that visual attention can be automatically captured by sensory inputs that match the contents of visual working memory. However, Woodman and Luck (2007) showed that information in working memory can be used flexibly as a template for either selection or rejection according to task demands. We report two experiments that extend their work. Participants performed a visual search task while maintaining items in visual working memory. Memory items were presented for either a short or long exposure duration immediately prior to the search task. Memory was tested by a change-detection task immediately afterwards. On a random half of trials, items in memory matched either one distractor in the search task (Experiment 1) or three (Experiment 2). The main result was that matching distractors speeded or slowed target detection depending on whether memory items were presented for a long or short duration. These effects were more in evidence with three matching distractors than with one. We conclude that the influence of visual working memory on visual search is indeed flexible but is not solely a function of task demands. Our results suggest that attentional capture by perceptual inputs matching information in visual working memory involves a fast automatic process that can be overridden by a slower top-down process of attentional avoidance.
2.
Jessica L. Bean Jaworski, Child Neuropsychology, 2017, 23(3), 316-331
Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.
3.
Research on aging and visual search often requires older people to search computer screens for target letters or numbers. The aim of this experiment was to investigate age-related differences using an everyday-based visual search task in a large participant sample (n = 261) aged 20-88 years. Our results show that: (1) old-old adults have more difficulty with triple conjunction searches with one highly distinctive feature compared to young-old and younger adults; (2) age-related declines in conjunction searches emerge in middle age and then progress throughout older age; (3) age-related declines are evident in feature searches on target-absent trials, as older people seem to search the whole display exhaustively and serially to determine a target's absence. Together, these findings suggest that declines in feature integration, guided search, perceptual grouping, and/or spreading suppression processes emerge in middle age and then progress throughout older age. Implications for enhancing everyday functioning throughout adulthood are discussed.
4.
Visual working memory and threat monitoring: Spider fearfuls show disorder-specific change detection
Previous studies of biased information processing in anxiety addressed biases of attention and memory, but little is known about the processes taking place between them: visual working memory (VWM) and monitoring of threat. We investigated these processes with a change detection paradigm. In Experiment 1, spider fearfuls (SF) and non-anxious controls (NAC) judged two subsequently presented displays as same or different. The displays consisted of several pictures, one of which could depict a spider. In Experiment 2, SF and NAC, both without snake fear, were tested with displays including either a spider or a snake image to determine the material-specificity of biased VWM. Both groups showed increased change detection for threat images. This effect was significantly stronger in SF, for spider images only, indicating a threat-specific VWM bias. Thus, contrary to the assumptions made by most cognitive models of anxiety, an explicit memory bias was found.
5.
Quarterly Journal of Experimental Psychology (2006), 2013, 66(4), 793-808
Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., ˈca-vi from cavia "guinea pig" vs. ˌka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-ˈjec from projector "projector" vs. ˌpro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.
6.
To assess the role of priming in conjunctive visual search tasks, we systematically varied the consistency of the target and distractor identity between different conditions. Search was fastest in the standard conjunctive search paradigm where identities remained constant. Search was slowest when potential target identity varied predictably for each successive trial (the 'switch' condition). The role of priming was also demonstrated on a trial-by-trial basis in a 'streak' condition where target and distractor identity was unpredictable yet was consistent within streaks. When the target to be found was the same for a few trials in a row, search performance became similar to that when the potential target was the same on all trials. A similar pattern was found for the target absent trials, suggesting that priming is based on the whole search array rather than just the target in each case. Further analysis indicated that the effects of priming are sufficiently strong to account for the advantage seen for the conjunctive search task. We conclude that the role of priming in visual search is underestimated in current theories of visual search and that differences in search times often attributed to top-down guidance may instead reflect the benefits of priming.
7.
Oliver J. Mason, Helen Booth, Christian Olivers, Personality and Individual Differences, 2004, 36(8), 123
Deficits in early visual attention and perceptual organisation have frequently been shown to be associated both with poor pre-morbid functioning in schizophrenia and with a greater putative risk of psychosis. The nature of the deficit is unclear. The present study investigated the relationship between speed of visual marking and proneness to psychosis. Twenty males and 20 females completed several tasks assessing speed of selection and de-selection of visual objects. As predicted, negative schizotypy was associated with poorer marking in males, but socially desirable responding potentially confounded this result. In addition, impulsive non-conformity was associated with poorer visual marking, more prominently in females. These results are discussed in relation to possible mechanisms by which psychosis-proneness and impulsivity may restrict the top-down influences operating on early visual attention.
8.
Quarterly Journal of Experimental Psychology (2006), 2013, 66(6), 1053-1073
Intermixing trials of a visual search task with trials of a modified flanker task, the authors investigated whether the presentation of conflicting distractors at only one side (left or right) of a target stimulus triggers shifts of visual attention towards the contralateral side. Search time patterns provided evidence for lateral attention shifts only when participants performed the flanker task under an instruction assumed to widen the focus of attention, demonstrating that instruction-based control settings of an otherwise identical task can impact performance in an unrelated task. Contrasting conditions with response-related and response-unrelated distractors showed that shifting attention does not depend on response conflict and may be explained as stimulus-conflict-related withdrawal or target-related deployment of attention.
9.
Delvenne, J.-F., Cognition, 2005, 96(3), B79-B88
Visual short-term memory (VSTM) and attention are both thought to have a capacity limit of four items (e.g., Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279-281; Pylyshyn, Z. W., & Storm, R. W. (1988). Tracking multiple independent targets: Evidence for a parallel tracking mechanism. Spatial Vision, 3, 179-197). Using the multiple object tracking paradigm (MOT), it has recently been shown that twice as many items can be simultaneously attended when they are separated between the two visual fields compared to when they are all presented within the same hemifield (Alvarez, G. A., & Cavanagh, P. (2004). Independent attention resources for the left and right visual hemifields [Abstract]. Journal of Vision, 4(8), 29a). Does VSTM capacity also increase when the items to be remembered are distributed between the two visual fields? The current paper investigated this central issue in two different tasks, namely a color and a spatial location change detection task, in which the items were displayed either in the two visual fields or in the same hemifield. The data revealed that only memory capacity for spatial locations, and not colors, increased when the items were separated between the two visual fields. These findings support the view of VSTM as a chain of capacity-limited operations in which the spatial selection of stimuli, which dominates in both spatial location VSTM and MOT, occupies the first place and shows independence between the two fields.
10.
We investigated the nature of the bandwidth limit in the consolidation of visual information into visual short-term memory. In the first two experiments, we examined whether previous results showing differential consolidation bandwidth for colour and orientation resulted from methodological differences by testing the consolidation of colour information with methods used in prior orientation experiments. We briefly presented two colour patches with masks, either sequentially or simultaneously, followed by a location cue indicating the target. Participants identified the target colour via buttonpress (Experiment 1) or by clicking a location on a colour wheel (Experiment 2). Although these methods have previously demonstrated that two orientations are consolidated in a strictly serial fashion, here we found equivalent performance in the sequential and simultaneous conditions, suggesting that two colours can be consolidated in parallel. To investigate whether this difference resulted from different consolidation mechanisms or a common mechanism with different features consuming different amounts of bandwidth, Experiment 3 presented a colour patch and an oriented grating either sequentially or simultaneously. We found a lower performance in the simultaneous than the sequential condition, with orientation showing a larger impairment than colour. These results suggest that consolidation of both features shares common mechanisms. However, it seems that colour requires less information to be encoded than orientation. As a result, two colours can be consolidated in parallel without exceeding the bandwidth limit, whereas two orientations or an orientation and a colour exceed the bandwidth and appear to be consolidated serially.
11.
It has recently been argued that some machine learning techniques known as kernel methods could be relevant for capturing cognitive and neural mechanisms (Jäkel, Schölkopf, & Wichmann, 2009). We point out that "string kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal for how the brain encodes orthographic information during reading. We suggest some reasons for this connection, and we derive new ideas for visual word recognition that are successfully put to the test. We argue that the versatility and performance of string kernels make a compelling case for their implementation in the brain.
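To illustrate the kind of scheme the abstract refers to, a gappy string kernel over ordered letter pairs is closely related to "open bigram" proposals for orthographic coding. The following minimal sketch is not from the paper; the `max_gap` parameter, the function names, and the Jaccard normalization are illustrative assumptions:

```python
def open_bigrams(word, max_gap=2):
    """All ordered letter pairs separated by at most max_gap intervening letters."""
    pairs = set()
    for i in range(len(word)):
        for j in range(i + 1, min(len(word), i + max_gap + 2)):
            pairs.add(word[i] + word[j])
    return pairs

def similarity(w1, w2):
    """Jaccard overlap of open-bigram sets: a simple gappy string-kernel score."""
    a, b = open_bigrams(w1), open_bigrams(w2)
    return len(a & b) / len(a | b) if a | b else 0.0

# Transposed words keep most of their ordered letter pairs, so they score high,
# while a full reversal shares no ordered pairs at all:
print(round(similarity("form", "from"), 3))  # → 0.714
print(similarity("form", "mrof"))            # → 0.0
```

This tolerance for letter transpositions is the property such codes share with human reading, where "from" and "form" are far more confusable than "form" and "mrof".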
12.
Journal of Cognitive Psychology, 2013, 25(5), 531-542
Studies of memory load in visual search (VS) have produced a diversity of results, ranging from no effect of a concurrent memory load on VS performance through beneficial to detrimental effects. One hypothesis intended to explain this heterogeneity follows the idea, proposed by certain models in the context of VS, that the contents of working memory (WM) can modulate the attentional processes involved in VS (Desimone & Duncan, 1995; Duncan & Humphreys, 1989). In four experiments, we manipulated the similarity between the information maintained in WM and the materials playing the role of target and distractors in the VS task. The results showed a beneficial effect in the first two experiments, where the materials in WM matched the target in VS. However, when they matched the distractors in the attentional task, there was no effect on the slope of the search function. The present results strengthen theories proposing that visual working memory is fractionated to allow for maintenance of items not essential to the attentional task (Downing & Dodds, 2004).
13.
Social communication in anuran amphibians (frogs and toads) is mediated predominantly by acoustic signals. Unlike most anurans, the Panamanian golden frog, Atelopus zeteki, lacks a standard tympanic middle ear and appears to have augmented its communicatory repertoire to include rotational limb motions as visual signals, referred to here as semaphores. The communicatory nature of semaphoring was inferred from experimental manipulations using mirrored self-image presentations and nonresident introductions. Male frogs semaphored significantly more when presented with a mirrored self-image than with a nonreflective control. Novel encounters between resident males and nonresident frogs demonstrated that semaphores were used directionally and were displayed toward target individuals. Females semaphored frequently, and this observation represents a rare case of signaling by females in a typically male-biased communicatory regime. Semaphore actions were clearly linked to a locomotory gait pattern and appear to have originated as an elaboration of a standard stepping motion.
Received: 19 March 1998 / Accepted after revision: 25 July 1998
14.
Quarterly Journal of Experimental Psychology (2006), 2013, 66(12), 1813-1826
The ability of a perceiver–actor to perform a particular behaviour depends on their ability to generate and control the muscular forces required to perform that behaviour. If an intended behaviour is to be successful, perception must be relative to this ability. We investigated whether perceiver–actors were sensitive to how changes in their mass distribution influenced their ability to stand on an inclined surface. Participants reported whether they would be able to stand on an inclined surface while wearing a weighted backpack on their back, while wearing a weighted backpack on their front, and while not wearing a weighted backpack. In addition, participants performed this task by either viewing the surface or exploring it with a hand-held rod (while blindfolded). The results showed that perception of affordances for standing on the inclined surface depended on how the backpack influenced the ability of the participant to stand on the surface. Specifically, perceptual boundaries occurred at steeper inclinations when participants wore the backpack on their front than when they wore it on their back. Moreover, this pattern occurred regardless of the perceptual modality by which the participants explored the inclined surface.
15.
Quarterly Journal of Experimental Psychology (2006), 2013, 66(1), 151-164
A longstanding issue is whether perception and mental imagery share similar cognitive and neural mechanisms. To cast further light on this problem, we compared the effects of real and mentally generated visual stimuli on simple reaction time (RT). In five experiments, we tested the effects of difference in luminance, contrast, spatial frequency, motion, and orientation. With the intriguing exception of spatial frequency, in all other tasks perception and imagery showed qualitatively similar effects. An increase in luminance, contrast, and visual motion yielded a decrease in RT for both visually presented and imagined stimuli. In contrast, gratings of low spatial frequency were responded to more quickly than those of higher spatial frequency only for visually presented stimuli. Thus, the present study shows that basic dependent variables exert similar effects on visual RT either when retinally presented or when imagined. Of course, this evidence does not necessarily imply analogous mechanisms for perception and imagery, and a note of caution in such respect is suggested by the large difference in RT between the two operations. However, the present results undoubtedly provide support for some overlap between the structural representation of perception and imagery.
16.
Recent neuroimaging research suggests that early visual processing circuits are activated similarly during visualization and perception but has not demonstrated that the cortical activity is similar in character. We found functional equivalency in cortical activity by recording evoked potentials while color and luminance patterns were viewed and while they were visualized with the eyes closed. Cortical responses were found to be different when imagining a color pattern vs. imagining a checkerboard luminance pattern, but the same when imagining a color pattern (or checkerboard pattern) vs. seeing the same pattern. This suggests that early visual processing stages may play a dynamic role in internal image generation, and further implies that visual imagery may modulate perception.
17.
Recent research [e.g., Carrozzo, M., Stratta, F., McIntyre, J., & Lacquaniti, F. (2002). Cognitive allocentric representations of visual space shape pointing errors. Experimental Brain Research, 147, 426-436; Lemay, M., Bertrand, C. P., & Stelmach, G. E. (2004). Pointing to an allocentric and egocentric remembered target. Motor Control, 8, 16-32] reported that egocentric and allocentric visual frames of reference can be integrated to facilitate the accuracy of goal-directed reaching movements. In the present investigation, we sought to examine whether or not a visual background can facilitate the online, feedback-based control of visually guided (VG), open-loop (OL), and memory-guided (i.e., 0 and 1000 ms of delay: D0 and D1000) reaches. Two background conditions were examined. In the first, four illuminated LEDs positioned in a square surrounding the target location provided a context for allocentric comparisons (visual background: VB). In the second, the target object was presented singularly against an empty visual field (no visual background: NVB). Participants (N = 14) completed reaching movements to three midline targets in each background (VB, NVB) and visual condition (VG, OL, D0, D1000) for a total of 240 trials. VB reaches were more accurate and less variable than NVB reaches in each visual condition. Moreover, VB reaches elicited longer movement times and spent a greater proportion of the reaching trajectory in the deceleration phase of the movement. Supporting the benefit of a VB for online control, the proportion of endpoint variability explained by the spatial location of the limb at peak deceleration was less for VB than for NVB reaches. These findings suggest that participants are able to make allocentric comparisons between a VB and a target (visible or remembered), in addition to egocentric limb and VB comparisons, to facilitate online reaching control.
18.
Attention capacity and task difficulty in visual search
When a visual search task is very difficult (as when a small feature difference defines the target), even detection of a unique element may be substantially slowed by increases in display set size. This has been attributed to the influence of attentional capacity limits. We examined the influence of attentional capacity limits on three kinds of search task: difficult feature search (with a subtle featural difference), difficult conjunction search, and spatial-configuration search. In all three tasks, each trial contained sixteen items, divided into two eight-item sets. The two sets were presented either successively or simultaneously. Comparison of accuracy in successive versus simultaneous presentations revealed that attentional capacity limitations are present only in the case of spatial-configuration search. While the other two types of task were inefficient (as reflected in steep search slopes), no capacity limitations were evident. We conclude that the difficulty of a visual search task affects search efficiency but does not necessarily introduce attentional capacity limits.
19.
This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks.
20.
Processes for perspective-taking can be differentiated by whether or not they require us to mentally rotate ourselves into the position of the other person (Michelon & Zacks, 2006). Until now, only two perspective-taking tasks have been differentiated in this way, showing that judging whether something is to someone's left or right does require mental rotation, but judging whether someone can see something does not. These tasks differ firstly on whether the content of the perspective is visual or spatial, and secondly on whether the type of judgement is early-developing (level-1 type) or later-developing (level-2 type). Across two experiments, we tested which of these factors was likely to be most important by using four different perspective-taking tasks that crossed orthogonally the content of judgement (visual vs. spatial) and the type of judgement (level-1 type vs. level-2 type). We found that the level-2 type judgements, of how something looks to someone else and whether it is to their left or right, required egocentric mental rotation. On the other hand, level-1 type judgements, of whether something was in front of or behind someone and of whether someone could see something or not, did not involve mental rotation. We suggest from this that the initial processing strategies employed for perspective-taking are largely independent of whether judgements are visual or spatial in nature. Furthermore, early-developing abilities have features that make mental rotation unnecessary.