Similar Articles
A total of 20 similar articles were retrieved.
1.
There is an emerging literature on visual search in natural tasks suggesting that task-relevant goals account for a remarkably high proportion of saccades, including anticipatory eye movements. Moreover, factors such as “visual saliency” that otherwise affect fixations become less important when they are bound to objects that are not relevant to the task at hand. We briefly review this literature and discuss the implications for task-based variants of the visual world paradigm. We argue that the results and their likely interpretation may profoundly affect the “linking hypothesis” between language processing and the location and timing of fixations in task-based visual world studies. We outline a goal-based linking hypothesis and discuss some of the implications for how we conduct visual world studies, including how we interpret and analyze the data. Finally, we outline some avenues of research, including examples of some classes of experiments that might prove fruitful for evaluating the effects of goals in visual world experiments and the viability of a goal-based linking hypothesis.

2.
In the visual world paradigm as used in psycholinguistics, eye gaze (i.e. visual orienting) is measured in order to draw conclusions about linguistic processing. However, current theories are underspecified with respect to how visual attention is guided on the basis of linguistic representations. In the visual search paradigm as used within the area of visual attention research, investigators have become more and more interested in how visual orienting is affected by higher order representations, such as those involved in memory and language. Within this area more specific models of orienting on the basis of visual information exist, but they need to be extended with mechanisms that allow for language-mediated orienting. In the present paper we review the evidence from these two different – but highly related – research areas. We arrive at a model in which working memory serves as the nexus in which long-term visual as well as linguistic representations (i.e. types) are bound to specific locations (i.e. tokens or indices). The model predicts that the interaction between language and visual attention is subject to a number of conditions, such as the presence of the guiding representation in working memory, capacity limitations, and cognitive control mechanisms.
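To make the type-token binding idea above concrete, here is a toy sketch of a capacity-limited working memory that binds long-term "type" representations to spatial "tokens", so that hearing a word can re-orient attention to a bound location. This is purely illustrative and is not the authors' formal model; the class names, the capacity value, and the displacement rule are all invented for the example.

```python
# Toy illustration (not the authors' formal model): a capacity-limited working
# memory that binds long-term "type" representations to spatial "tokens",
# so that activating a type (e.g., by a spoken word) can guide orienting.
from dataclasses import dataclass, field

@dataclass
class Binding:
    type_code: str     # long-term representation, e.g. "cake" (linguistic or visual)
    location: tuple    # spatial token/index, e.g. (x, y) on the display

@dataclass
class WorkingMemory:
    capacity: int = 4                              # assumed capacity limit
    bindings: list = field(default_factory=list)

    def bind(self, type_code, location):
        if len(self.bindings) >= self.capacity:
            self.bindings.pop(0)                   # oldest binding is displaced
        self.bindings.append(Binding(type_code, location))

    def orient_to(self, heard_word):
        # Language-mediated orienting: if the heard word matches a bound type,
        # return the associated location as the target of an attention shift.
        for b in reversed(self.bindings):
            if b.type_code == heard_word:
                return b.location
        return None                                # no guiding representation in WM

wm = WorkingMemory()
wm.bind("cake", (120, 300))
wm.bind("candle", (400, 80))
print(wm.orient_to("cake"))                        # -> (120, 300)
```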

3.
Models of attention and context effects in naming performance should be able to account for the time course of color-word Stroop interference revealed by manipulations of the stimulus onset asynchrony (SOA) between color and word. Prominent models of Stroop task performance (Cohen et al., 1990; Cohen & Huston, 1994; Phaf et al., 1990) fail to account for the fact that response time (RT) and Stroop interference peak at zero SOA and diminish with word preexposure. The models may be saved by assuming that the time course of interference is determined by a strategic orienting of attention to color onsets when SOA is predictable. To test this temporal predictability hypothesis, SOA was blocked or randomly mixed in Experiment 1. In addition, the time interval between color onsets was randomly variable in Experiment 2. Although RTs were affected, none of the randomization manipulations influenced the typical shape of the time course of Stroop effects. These findings provide evidence against the temporal predictability hypothesis and thereby against prominent models of the Stroop task.

4.
Four experiments examined how age of acquisition (AoA) and word frequency (WF) interact with manipulations of image quality in a picture-naming task. Experiments 1 and 2 examined the effect of overlaying the to-be-named picture with irrelevant contours. The magnitude of the AoA effect increased when the contours were added (Experiment 1), but the effect of WF remained constant (Experiment 2). Experiments 3 and 4 examined the effects of reducing the contrast of the contours defining the to-be-named picture. Both the effects of AoA (Experiment 3) and WF (Experiment 4) remained constant in the face of contrast reduction. These results provide an empirical dissociation of the effects of AoA and WF. The results are consistent with the idea that both AoA and the addition of irrelevant contours affect the efficiency of object recognition, but WF affects later processes involved in retrieval of object names. The theoretical implications of these findings in relation to accounts of AoA and frequency and their functional localisation in the lexical system are discussed.

5.
The ability to efficiently search the visual environment is a critical function of the visual system, and recent research has shown that experience playing action video games can influence visual selective attention. The present research examined the similarities and differences between video game players (VGPs) and non-video game players (NVGPs) in terms of the ability to inhibit attention from returning to previously attended locations, and the efficiency of visual search in easy and more demanding search environments. Both groups were equally good at inhibiting the return of attention to previously cued locations, although VGPs displayed overall faster reaction times to detect targets. VGPs also showed overall faster response time for easy and difficult visual search tasks compared to NVGPs, largely attributed to faster stimulus-response mapping. The findings suggest that relative to NVGPs, VGPs rely on similar types of visual processing strategies but possess faster stimulus-response mappings in visual attention tasks.

6.
Four experiments tested whether and how initially planned but then abandoned speech can influence the production of a subsequent resumption. Participants named initial pictures, which were sometimes suddenly replaced by target pictures that were related in meaning or word form or were unrelated. They then had to stop and resume with the name of the target picture. Target picture naming latencies were measured separately for trials in which the initial speech was skipped, interrupted, or completed. Semantically related initial pictures helped the production of the target word, although the effect dissipated once the utterance of the initial picture name had been completed. In contrast, phonologically related initial pictures hindered the production of the target word, but only for trials in which the name of the initial picture had at least partly been uttered. This semantic facilitation and phonological interference did not depend on the time interval between the initial and target picture, which was either varied between 200 ms and 400 ms (Experiments 1-2) or was kept constant at 300 ms (Experiments 3-4). We discuss the implications of these results for models of speech self-monitoring and for models of problem-free word production.

7.
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this “visual dominance”, earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual–auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing effectiveness depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual–auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.

8.
The 'body schema' has traditionally been defined as a passively updated, proprioceptive representation of the body. However, recent work has suggested that body representations are more complex and flexible than previously thought. They may integrate current perceptual information from all sensory modalities, and can be extended to incorporate indirect representations of the body and functional portions of tools. In the present study, we investigate the source of a facilitatory effect of viewing the body on speeded visual discrimination reaction times. Participants responded to identical visual stimuli that varied only in their context: being presented on the participant's own body, on the experimenter's body, or in a neutral context. The stimuli were filmed and viewed in real-time on a projector screen. Careful controls for attention, biological saliency, and attribution confirmed that the facilitatory effect depends critically on participants attributing the context to a real body. An intermediate effect was observed when the stimuli were presented on another person's body, suggesting that the effect of viewing one's own body might represent a conjunction of an interpersonal body effect and an egocentric effect.

9.
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer's location; e.g., Philbeck et al. [Philbeck, J. W., Sargent, J., Arthur, J. C., & Dopkins, S. (2008). Large manual pointing errors, but accurate verbal reports, for indications of target azimuth. Perception, 37, 511-534]). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20-160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to -19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.
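For readers unfamiliar with the egocentric/exocentric distinction, the short sketch below works through the underlying geometry: given two targets specified by egocentric azimuth and distance, it computes the exocentric direction from one target to the other. The numerical values and the azimuth convention are hypothetical illustrations, not data or methods from the study.

```python
# Worked example (hypothetical values, not the study's data): egocentric vs.
# exocentric directions. The observer is at the origin; azimuth is measured
# clockwise from straight ahead (0 deg), in degrees.
import math

def polar_to_xy(azimuth_deg, distance_m):
    """Convert egocentric azimuth/distance to Cartesian coordinates (x = right, y = ahead)."""
    a = math.radians(azimuth_deg)
    return distance_m * math.sin(a), distance_m * math.cos(a)

def exocentric_direction(az1, d1, az2, d2):
    """Direction from target 1 to target 2, expressed in the same azimuth convention."""
    x1, y1 = polar_to_xy(az1, d1)
    x2, y2 = polar_to_xy(az2, d2)
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 360

# A pointer at 20 deg azimuth (1 m away) aimed at a target at 160 deg azimuth (2 m away):
print(round(exocentric_direction(20, 1.0, 160, 2.0), 1))   # ~173.1 deg
```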

10.
The role of attention in the occurrence of the affordance effect
It has been demonstrated that visual objects activate responses spatially corresponding to the orientation (left or right) of their graspable parts. To investigate the role of attention orienting in the generation of this effect, which we will refer to as the affordance effect, we ran three experiments in which the target stimulus either did or did not coincide with a dynamic, attention-capturing event. Participants were required to press a left or right key according to the vertical orientation (upward or inverted) of objects presented with their handles oriented to the right or to the left. In Experiments 1 and 2, the objects were located above or below fixation, while in Experiment 3, to assess the simultaneous presence of the affordance and Simon effects, the objects were located to the left or right of fixation. The results showed that while the affordance effect, when evident, was always relative to the target object, irrespective of its attentional capturing properties, the Simon effect occurred relative to the event capturing attention. These findings suggest that automatic and controlled processes of visual attention may play a differential role in the occurrence of the two effects.

11.
The present study combined exogenous spatial cueing with masked repetition priming to study attentional influences on the processing of subliminal stimuli. Participants performed an alphabetic decision task (letter versus pseudo-letter classification) with central targets and briefly presented peripherally located primes that were either cued or not cued by an abrupt onset. A relatively long delay between cue and prime was used to investigate the effect of inhibition of return (IOR) on the processing of subliminal masked primes. Primes presented to the left visual field showed standard effects of Cue Validity and no IOR (significant priming with valid cues only). Primes presented to the right visual field showed no priming from valid cues (an IOR effect), and priming with invalid cues that depended on hand of response to letter targets (right-hand in Experiment 1, left-hand in Experiment 2). The results are interpreted in terms of a differential speed of engagement and disengagement of attention to the right and left visual fields for alphabetic stimuli, coupled with a complex interaction that arises between Prime Relatedness and response-hand.

12.
Durgin, F. H., Doyle, E., & Egan, L. (2008). Acta Psychologica, 127(2), 428-448.
Three experiments with a total of 87 human observers revealed an upper-left spatial bias in the initial movement of gaze during visual search. The bias was present whether or not the explicit control of gaze was required for the task. This bias may be part of a search strategy that competed with the fixed-gaze parallel search strategy hypothesized by Durgin [Durgin, F. H. (2003). Translation and competition among internal representations in a reverse Stroop effect. Perception & Psychophysics, 65, 367-378.] for this task. When the spatial probabilities of the search target were manipulated either in accord with or in opposition to the existing upper-left bias, two orthogonal factors of interference in the latency data were differentially affected. The two factors corresponded to two different forms of representation and search. Target probabilities consistent with the gaze bias encouraged opportunistic serial search (including gaze shifts), while symmetrically opposing target probabilities produced latency patterns more consistent with parallel search based on a sensory code.

13.
Modern theories conceptualize visual selective attention as a competition between objects for the control of cortical receptive fields (RFs). Implicit in this framework is the suggestion that spatially proximal objects, which draw from overlapping pools of RFs, should be more difficult to represent in parallel and with excess capacity than spatially separated objects. The present experiments tested this prediction using analysis of response time distributions in a redundant-targets letter identification task. Data revealed that excess-capacity parallel processing is possible when redundant targets are widely separated within the visual field, but that capacity is near fixed when targets are adjacent. Even at the largest separations tested, however, processing capacity remained strongly limited.
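The capacity conclusions above rest on distributional response-time analyses. One standard tool for redundant-targets data of this kind is the workload capacity coefficient C(t) = H_AB(t) / [H_A(t) + H_B(t)] of Townsend and Nozawa (1995), where H is the cumulative hazard of the RT distribution; C(t) > 1 suggests supercapacity, C(t) near 1 unlimited-capacity parallel processing, and C(t) < 1 limited capacity. The sketch below shows how it can be computed; the RT samples and time grid are simulated placeholders, and this is not claimed to be the authors' exact procedure.

```python
# Illustrative sketch (not the authors' actual analysis): workload capacity
# coefficient C(t) for a redundant-targets task, computed from RT samples.
import numpy as np

def cumulative_hazard(rts, t_grid):
    """Empirical cumulative hazard H(t) = -log S(t) from a sample of RTs (ms)."""
    rts = np.asarray(rts, dtype=float)
    surv = np.array([(rts > t).mean() for t in t_grid])
    surv = np.clip(surv, 1e-6, 1.0)          # avoid log(0) in the tail
    return -np.log(surv)

def capacity_coefficient(rt_redundant, rt_single_a, rt_single_b, t_grid):
    """C(t) = H_AB(t) / (H_A(t) + H_B(t)) evaluated on a common time grid."""
    h_ab = cumulative_hazard(rt_redundant, t_grid)
    h_a = cumulative_hazard(rt_single_a, t_grid)
    h_b = cumulative_hazard(rt_single_b, t_grid)
    denom = np.clip(h_a + h_b, 1e-6, None)   # guard against division by zero early on
    return h_ab / denom

# Hypothetical RT samples (ms) standing in for one separation condition
rng = np.random.default_rng(0)
rt_ab = rng.gamma(shape=20, scale=20, size=500)   # redundant targets
rt_a = rng.gamma(shape=20, scale=23, size=500)    # single target, location A
rt_b = rng.gamma(shape=20, scale=23, size=500)    # single target, location B
t_grid = np.arange(300, 700, 10)
print(capacity_coefficient(rt_ab, rt_a, rt_b, t_grid).round(2))
```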

14.
We introduce the special issue on formal models of semantic concepts. After outlining the research questions that motivated the issue, we summarize the rich set of data provided by the Leuven Natural Concepts Database, and provide an overview of the seven research articles in the special issue. Each of these articles applies a formal modeling approach to one or more parts of the database, attempting to further our understanding of how people represent and use semantic concepts.

15.
Previous research has shown that irrelevant sounds can facilitate the perception of visual apparent motion. Here the effectiveness of a single sound in facilitating motion perception was investigated in three experiments. Observers were presented with two discrete lights temporally separated by stimulus onset asynchronies from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. A short sound presented temporally (and spatially) midway between the lights facilitated the impression of motion relative to baseline (lights without sound), whereas a sound presented either before the first or after the second light or simultaneously with the lights did not affect motion impression. The facilitation effect also occurred with sound presented far from the visual display, as well as with a continuous sound that started with the first light and terminated with the second light. No facilitation of visual motion perception occurred if the sound was part of a tone sequence that allowed for intramodal perceptual grouping of the auditory stimuli prior to the critical audiovisual stimuli. Taken together, the findings are consistent with a low-level audiovisual integration approach in which the perceptual system merges temporally proximate sound and light stimuli, thereby provoking the impression of a single multimodal moving object.

16.
Theories of shifts of visual attention based on attentional blink or dwell time do not directly address shifts of attention across different levels (global or local) involving multiple objects. Two experiments were conducted employing the attentional dwell time paradigm to investigate the shifts of visual attention between objects selected at the same or different levels. Participants were instructed to identify two successive compound stimuli at a pre-specified level (global or local) presented at two different locations with variable SOA. The initial pair of locations in which the stimulus was presented was fixed in Experiment 1 but not in Experiment 2. Experiment 1 results showed very little impairment for second target identification when both the targets were at the global level. Attentional shift was better with both targets at the same level compared to different levels. Experiment 2 results showed that local followed by global target identification is difficult at short SOAs compared to other conditions. The results indicate that the scope of attention affects the time course of visual attention. Global processing could be performed with very little capacity limitation simultaneously with distributed attention. The default mode of attention might be distributed, with attention becoming focused for target identification. Different mechanisms may underlie shifts in focused attention between different locations and changes in attentional set required by changes in perceptual levels.

17.
An experiment that utilized a 16-element movement sequence was designed to determine the impact of eye movements on sequence learning. The participants were randomly assigned to two experimental groups: a group that was permitted to use eye movements (FREE) and a second group (FIX) that was instructed to fixate on a marker during acquisition (ACQ). A retention test (RET) was designed to provide a measure of learning, and two transfer tests were designed to determine the extent to which eye movements influenced sequence learning. The results demonstrated that both groups decreased the response time to produce the sequence, but the participants in the FREE group performed the sequence more quickly than participants of the FIX group during the ACQ, RET and the two transfer tests. Furthermore, continuous visual control of response execution was reduced over the course of learning. The results of the transfer tests indicated that oculomotor information regarding the sequence can be stored in memory and enhances response production.

18.
Visual cuing is one paradigm often used to study object- and space-based visual selective attention. A primary finding is that shifts of attention within an object can be accomplished faster than equidistant shifts between objects. The present study used a visual cuing paradigm to examine how an object's size (i.e., internal distance) and shape influence object- and space-based visual selective attention. The first two experiments manipulated object size and compared attentional shift performance with objects where the within-object distance between cued and uncued target locations was either equal to the between-object distance (1:1 ratio condition) or three times the between-object distance (3:1 ratio condition). Within-object shifts took longer for the larger objects, but an advantage over between-object shifts was still evident. Influences associated with the shapes of the larger objects suggested by the results of the first two experiments were tested and rejected in Experiment 3. Overall, the results indicate that within-object shifts of attention become slower as the within-object distance increases, but nevertheless are still accomplished faster than between-object shifts.

19.
This study investigated whether explicit beat induction in the auditory, visual, and audiovisual (bimodal) modalities aided the perception of weakly metrical auditory rhythms, and whether it reinforced attentional entrainment to the beat of these rhythms. The visual beat-inducer was a periodically bouncing point-light figure, which was used to examine whether an observed rhythmic human movement could induce a beat that would influence auditory rhythm perception. In two tasks, participants listened to three repetitions of an auditory rhythm that were preceded and accompanied by (1) an auditory beat, (2) a bouncing point-light figure, (3) a combination of (1) and (2) presented synchronously, or (4) a combination of (1) and (2), with the figure moving in anti-phase to the auditory beat. Participants reproduced the auditory rhythm subsequently (Experiment 1), or detected a possible temporal change in the third repetition (Experiment 2). While an explicit beat did not improve rhythm reproduction, possibly due to the syncopated rhythms when a beat was imposed, bimodal beat induction yielded greater sensitivity to a temporal deviant in on-beat than in off-beat positions. Moreover, the beat phase of the figure movement determined where on-beat accents were perceived during bimodal induction. Results are discussed with regard to constrained beat induction in complex auditory rhythms, visual modulation of auditory beat perception, and possible mechanisms underlying the preferred visual beat consisting of rhythmic human motions.

20.
In the present study, we examined the developmental changes in the efficiency of saccadic inhibitory control. More specifically, the contribution of age-related changes in working-memory engagement was investigated. We manipulated the efficiency of inhibitory oculomotor control in antisaccade tasks by using fixation-offset conditions, which are supposed to affect inhibitory demands, and by adding increasing working-memory loads to the antisaccade task. In general, in comparison to the antisaccade performance of adults, the antisaccade performance of 8-year-old and 12-year-old children was characterized by an increase in direction errors and/or longer saccadic onset latencies on correct antisaccades. However, this pattern was not altered by the fixation-offset manipulations. In contrast, increased working-memory demands impaired 8-year-olds' antisaccade performance disproportionately compared with older children and young adults. These findings suggest that, at least in young children, the available functional working-memory capacity is engaged in oculomotor inhibition.
