Similar Documents
20 similar documents found (search time: 15 ms)
1.
People often talk to themselves, yet very little is known about the functions of this self-directed speech. We explore the effects of self-directed speech on visual processing by using a visual search task. According to the label feedback hypothesis (Lupyan, 2007a), verbal labels can change ongoing perceptual processing—for example, actually hearing “chair” compared to simply thinking about a chair can temporarily make the visual system a better “chair detector”. Participants searched for common objects while sometimes being asked to speak the target's name aloud. Speaking facilitated search, particularly when there was a strong association between the name and the visual target. As the discrepancy between the name and the target increased, speaking began to impair performance. Together, these results speak to the power of words to modulate ongoing visual processing.

2.
Using screen names as materials, three visual search experiments examined whether self-related online information enjoys a processing advantage. The results showed that participants detected their own screen names faster and more accurately when these served as targets; when serving as distractors, however, one's own screen name did not interfere with target detection any more strongly than control stimuli did. In a direct comparison with real names, performance for one's own screen name did not differ significantly from that for one's own real name, and both were processed better than famous names used as controls. These results demonstrate that self-related online information enjoys a processing advantage similar to that of self-related information in the physical world, fully consistent with findings from experiments using real names, suggesting that one's own screen name and one's real name may share the same processing mechanism.

3.
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3‐year‐old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: by influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic‐level known categories, and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information.

4.
In this research, the role of the right hemisphere (RH) in the comprehension of speech acts (or illocutionary force) was examined. Two split-screen experiments were conducted in which participants made lexical decisions for lateralized targets after reading a brief conversation remark. On one-half of the trials the target word named the speech act performed with the preceding conversation remark; on the remaining trials the target did not name the speech act that the remark performed. In both experiments, lexical decisions were facilitated for targets representing the speech act performed with the prior utterance, but only when the target was presented to the left visual field (and hence initially processed by the RH), not when presented to the right visual field. This effect occurred at both short (Experiment 1: 250 ms) and long (Experiment 2: 1,000 ms) delays. The results demonstrate the critical role played by the RH in conversation processing.

5.
Cancellation tests have been widely used in clinical practice and in research to evaluate visuospatial attention, visual scanning patterns, and neglect problems. The aim of the present work is to present a visualized interface for a visuospatial attentional assessment system that can be employed to monitor and analyze attention performance and the search strategies used during visuospatial processing of target cancellation. We introduce a pattern identification mechanism for visual search patterns and report our findings from examining visual search performance and patterns. We also present a comparison of results across various cancellation tests and age groups. The present study demonstrates that our system can obtain richer data about the spatiotemporal features of visual search than conventional tests can.

6.
Many experiments have shown that knowing a target's visual features improves search performance over knowing the target name. Other experiments have shown that scene context can facilitate object search in natural scenes. In this study, we investigated how scene context and target features affect search performance. We examined two possible sources of information from scene context—the scene's gist and the visual details of the scene—and how they potentially interact with target-feature information. Prior to commencing search, participants were shown a scene and a target cue depicting either a picture or the category name (or a no-information control). Using eye movement measures, we investigated how the target features and scene context influenced two components of search: early attentional guidance processes and later verification processes involved in the identification of the target. We found that both scene context and target features improved guidance, but that target features also improved the speed of target recognition. Furthermore, we found that a scene's visual details played an important role in improving guidance, much more so than did the scene's gist alone.

7.
Observers can resume a previously interrupted visual search trial significantly more quickly than they can start a new search trial (Lleras, Rensink, & Enns, 2005). This rapid resumption of search is possible because evidence accumulated during the previous exposure, a perceptual hypothesis, can carry over to a subsequent presentation. We present four interrupted visual search experiments in which the content of the perceptual hypotheses used during visual search trials was characterized. These experiments suggest that prior to explicit target identification, observers have accumulated evidence about the locations, but not the identities, of local, task-relevant distractors, as well as preliminary evidence for the identity of the target. Our results characterize the content of perceptual search hypotheses and highlight the utility of interrupted search for studying online search processing prior to target identification.

8.
During visual search, observers hold in mind a search template, which they match against the stimulus. To characterize the content of this template, we trained observers to discriminate a set of artificial objects at an individual level and at a category level. The observers then searched for the objects on backgrounds that camouflaged the features that defined either the object’s identity or the object’s category. Each search stimulus was preceded by the target’s individual name, its category name, or an uninformative cue. The observers’ task was to locate the target, which was always present and always the only figure in the stimulus. The results showed that name cues slowed search when the features associated with the name were camouflaged. Apparently, the observers required a match between their mental representation of the target and the stimulus, even though this was unnecessary for the task. Moreover, this match involved all distinctive features of the target, not just the features necessary for a definitive identification. We conclude that visual search for a specific target involves a verification process that is performed automatically on all of the target’s distinctive features.

9.
We are constantly exposed to our own face and voice, and we identify our own faces and voices as familiar. However, the influence of self-identity upon self-speech perception is still uncertain. Speech perception is a synthesis of both auditory and visual inputs; although we hear our own voice when we speak, we rarely see the dynamic movements of our own face. If visual speech and identity are processed independently, no processing advantage would obtain in viewing one’s own highly familiar face. In the present experiment, the relative contributions of facial and vocal inputs to speech perception were evaluated with an audiovisual illusion. Our results indicate that auditory self-speech conveys a processing advantage, whereas visual self-speech does not. The data thereby support a model of visual speech as dynamic movement processed separately from speaker recognition.

10.
On the Efficiency of Visual Selective Attention
The ability to ignore irrelevant peripheral distractors was assessed as a function of the efficiency of visual search for a target at the center of the display. Efficient target search, among dissimilar nontargets, led to greater distraction than inefficient search, among similar nontargets. This seemingly paradoxical result is predicted by the recent proposal (Lavie, 1995a) that irrelevant processing can be prevented only by increasing the load for relevant processing. Varying the set size of similar items in the central search task demonstrated that interference from irrelevant distractors was eliminated only with more than four relevant items. These results demonstrate how capacity limits determine the efficiency of selective attention, and raise questions about some standard assumptions of most visual search models.

11.
Most models assume that response time (RT) comprises the time required for successive processing stages, but they disagree about whether information is transmitted continuously or discretely between stages. We tested these alternative hypotheses by measuring when movement-related activity began in the frontal eye field (FEF) of macaque monkeys performing visual search. Previous work showed that RT was longer when visual neurons in FEF took longer to select the target, a finding consistent with prolonged perceptual processing during less efficient search. We now report that the buildup of saccadic movement-related activity in FEF is delayed in inefficient visual search. Variability in the delay of movement-related activity accounted for the difference in RT between search conditions and for the variability of RT within conditions. These findings provide neurophysiological support for the hypothesis that information is transmitted discretely between perceptual and response stages of processing during visual search.

12.
Phoneme monitoring and word monitoring are two experimental tasks that have frequently been used to assess the processing of fluent speech. Each task is purported to provide an “online” measure of the comprehension process, and each requires listeners to pay conscious attention to some aspect or property of the sound structure of the speech signal. The present study is primarily a methodological one directed at the following question: Does the allocation of processing resources for conscious analysis of the sound structure of the speech signal affect ongoing comprehension processes or the ultimate level of understanding achieved for the content of the linguistic message? Our subjects listened to spoken stories. Then, to measure their comprehension, they answered multiple-choice questions about each story. During some stories, they were required to detect a specific phoneme; during other stories, they were required to detect a specific word; during still other stories, they were not required to monitor the utterance for any target. The monitoring results replicated earlier findings showing longer detection latencies for phoneme monitoring than for word monitoring. Somewhat surprisingly, the ancillary phoneme- and word-monitoring tasks did not adversely affect overall comprehension performance. This result undermines the specific criticism that online monitoring paradigms of this kind should not be used to study spoken language understanding because these tasks interfere with normal comprehension.

13.
A single experiment is reported in which we provide a novel analysis of eye movements during visual search to disentangle the contributions of unattended guidance and focal target processing to visual search performance. This technique is used to examine the controversial claim that unattended affective faces can guide attention during search. Results indicated that facial expression influenced how efficiently the target was fixated for the first time as a function of set size. However, affective faces did not influence how efficiently the target was identified as a function of set size after it was first fixated. These findings suggest that, in the present context, facial expression can influence search before the target is attended and that the present measures are able to distinguish between the guidance of attention by targets and the processing of targets within the focus of attention.

14.
The “pip-and-pop effect” refers to the facilitation of search for a visual target (a horizontal or vertical bar whose color changes frequently) among multiple visual distractors (tilted bars also changing color unpredictably) by the presentation of a spatially uninformative auditory cue synchronized with the color change of the visual target. In the present study, the visual stimuli in the search display changed brightness instead of color, and the crossmodal congruency between the pitch of the auditory cue and the brightness of the visual target was manipulated. When cue presence and cue congruency were randomly varied between trials (Experiment 1), both congruent cues (low-frequency tones synchronized with dark target states or high-frequency tones synchronized with bright target states) and incongruent cues (the reversed mapping) facilitated visual search performance equally, relative to a no-cue baseline condition. However, when cue congruency was blocked and the participants were informed about the pitch–brightness mapping in the cue-present blocks (Experiment 2), performance was significantly enhanced when the cue and target were crossmodally congruent as compared to when they were incongruent. These results therefore suggest that the crossmodal congruency between auditory pitch and visual brightness can influence performance in the pip-and-pop task by means of top-down facilitation.

15.
Kim, J., Davis, C., & Krins, P. (2004). Cognition, 93(1), B39–B47.
This study investigated the linguistic processing of visual speech (video of a talker's utterance without audio) by determining whether it can prime subsequently presented word and nonword targets. The priming procedure is well suited to investigating whether speech perception is amodal, since visual speech primes can be used with targets presented in different modalities. To this end, a series of priming experiments was conducted using several tasks. It was found that visually spoken words (for which overt identification was poor) acted as reliable primes for repeated target words in the naming, written lexical decision, and auditory lexical decision tasks. These visual speech primes did not produce associative or reliable form priming. The lack of form priming suggests that the repetition priming effect was constrained by lexical-level processes. That priming was found in all tasks is consistent with the view that similar processes operate in both visual and auditory speech processing.

16.
Previous research has suggested that a person's own name or emotionally charged stimuli automatically "grab" attention, potentially challenging limited-capacity theories of perceptual processing. In this study, subjects were shown two digits surrounding a word and asked to make a speeded judgment about whether the parity of the two digits matched. When the subject's own name was presented on a few scattered trials, responses were markedly slowed (replicating a previous study). However, in a subsequent block of trials in which half the words were the subject's name, the slowing did not occur. The same slowing occurred (but even more fleetingly) when an emotionally charged word was presented between the digits. When the name was embedded among multiple distractor words, it ceased to slow reaction times. The results suggest that perceptual analysis of high-priority stimuli is subject to the usual capacity limitations of other stimuli, but when enough capacity is available for a high-priority stimulus to be perceived, a transient surprise reaction may interrupt ongoing processing.

17.
Information held in visual working memory (VWM) influences the allocation of attention during visual search, with targets matching the contents of VWM receiving processing benefits over those that do not. Such an effect could arise from multiple mechanisms: First, it is possible that the contents of working memory enhance the perceptual representation of the target. Alternatively, it is possible that when a target is presented among distractor items, the contents of working memory operate postperceptually to reduce uncertainty about the location of the target. In both cases, a match between the contents of VWM and the target should lead to facilitated processing. However, each account makes distinct predictions regarding set-size manipulations: whereas perceptual enhancement accounts predict processing benefits regardless of set size, uncertainty reduction accounts predict benefits only with set sizes larger than 1, when there is uncertainty regarding the target location. In the present study, when briefly presented, masked targets appeared in isolation, the information held in VWM had a negligible effect on target discrimination. However, in displays containing multiple masked items, information held in VWM strongly affected target discrimination. These results argue that working memory representations act at a postperceptual level to reduce uncertainty during visual search.
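The set-size logic that separates the two accounts can be made concrete with a toy Monte Carlo sketch. This is illustrative only, not the authors' model: the signal strength, the size of the enhancement boost, and the decision criterion are all arbitrary assumptions. A perceptual-enhancement account is modeled as a boost to the target's signal, which should help at every set size; an uncertainty-reduction account is modeled as excluding distractor locations from the decision, which can only help when set size exceeds 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(set_size, vwm_effect=None, d_prime=1.5, n_trials=50_000):
    """Toy signal-detection model of target discrimination (illustrative only).
    vwm_effect='enhance': boost the target's signal (perceptual account).
    vwm_effect='reduce':  exclude distractor locations (postperceptual account).
    """
    boost = 0.5 if vwm_effect == "enhance" else 0.0
    target = d_prime + boost + rng.standard_normal(n_trials)
    # Number of distractor locations still competing for the decision
    n_competing = 0 if vwm_effect == "reduce" else set_size - 1
    correct = target > d_prime / 2                 # fixed decision criterion
    if n_competing > 0:
        noise = rng.standard_normal((n_trials, n_competing))
        correct &= target > noise.max(axis=1)      # target must also win the race
    return float(correct.mean())

for n in (1, 8):
    print(f"set size {n}:",
          {label or "baseline": round(p_correct(n, label), 3)
           for label in (None, "enhance", "reduce")})
```

Under these assumptions, the "reduce" variant matches baseline exactly at set size 1 but yields a large benefit at set size 8, while "enhance" helps at both set sizes, mirroring the diagnostic pattern described in the abstract.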

18.
Eye movements depend on cognitive processes related to visual information processing. Much has been learned about the spatial selection of fixation locations, while the principles governing temporal control (fixation durations) are less clear. Here, we review current theories of the control of fixation durations in tasks like visual search, scanning, scene perception, and reading, and we propose a new model for the control of fixation durations. We distinguish two local principles from one global principle of control. First, an autonomous saccade timer initiates saccades after random time intervals (local-I). Second, foveal inhibition permits immediate prolongation of fixation durations by ongoing processing (local-II). Third, saccade timing is adaptive, so that the mean timer value depends on task requirements and fixation history (global). We demonstrate by numerical simulations that our model qualitatively reproduces patterns of mean fixation durations and fixation duration distributions observed in typical experiments. When combined with assumptions about saccade target selection and oculomotor control, the model accounts for both temporal and spatial aspects of eye movement control in two versions of a visual search task. We conclude that the model provides a promising framework for the control of fixation durations in saccadic tasks.
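The three control principles lend themselves to a minimal numerical sketch. The version below is not the authors' model: the gamma timer, the probability and size of foveal prolongation, and the adaptation rule (simplified here to drift toward a task-dependent set point) are all assumed values chosen only to show the qualitative behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fixations(n_fix, task_mean, adapt_rate=0.05,
                       inhibit_prob=0.3, inhibit_extra=80.0, start_mean=250.0):
    """Simulate fixation durations (ms) under three control principles."""
    timer_mean = start_mean
    durations = np.empty(n_fix)
    for i in range(n_fix):
        # local-I: autonomous saccade timer draws a random interval
        # (gamma keeps durations positive and right-skewed)
        shape = 9.0
        d = rng.gamma(shape, timer_mean / shape)
        # local-II: unfinished foveal processing sometimes prolongs the fixation
        if rng.random() < inhibit_prob:
            d += rng.exponential(inhibit_extra)
        durations[i] = d
        # global: mean timer value adapts toward the task's required pace
        timer_mean += adapt_rate * (task_mean - timer_mean)
    return durations

easy = simulate_fixations(2_000, task_mean=180.0)   # e.g., easy visual search
hard = simulate_fixations(2_000, task_mean=300.0)   # e.g., scene memorization
for name, durs in (("easy task", easy), ("hard task", hard)):
    print(f"{name}: mean = {durs.mean():.0f} ms, median = {np.median(durs):.0f} ms")
```

Even this stripped-down version produces task-dependent mean durations and the right-skewed distributions (mean above median) typical of empirical fixation-duration data.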

19.
The effects of orthographic and phonological relatedness between distractor word and object name in a picture–word interference task were investigated. In Experiment 1 distractors were presented visually, and consistent with previous findings, priming effects arising from phonological overlap were modulated by the presence or absence of orthographic similarity between distractor and picture name. This pattern is interpreted as providing evidence for cascaded processing in visual word recognition. In Experiment 2 distractors were presented auditorily, and here priming was not affected by orthographic match or mismatch. These findings provide no evidence for orthographic effects in speech perception and production, contrary to a number of previous reports.

20.
With the classic serial/parallel dichotomy of visual search mechanisms increasingly in doubt, we investigated which search mechanisms operate between the two poles termed "pop-out" and "strictly serial search" in an overt feature search paradigm. Since reaction time slopes do not contain sufficient information for this purpose, we developed a novel technique for analyzing reaction times. Individual reaction times are modeled as sums of the durations of successive search steps. Model parameters are task characteristics (similarity, number, and arrangement of target and distractors) and processing characteristics of the participant (e.g., attention dwell and shift durations). In Experiment 1, several model variants were fitted numerically to empirical reaction times. The best-fitting model suggested that more than one item can be processed in a single fixation, that movement of attention is abrupt rather than continuous, and that even in pop-out search, attention is often explicitly moved to the target. In Experiment 2, we measured the central model parameter, the so-called range of attention, more directly and thereby validated the model. The model provides an explanation for the strong variation in the slope of reaction time functions that is not based on an explicit distinction between parallel and serial search processes.
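The core idea, modeling each reaction time as a sum of successive search-step durations, can be sketched as follows. This is a simplified stand-in, not the authors' fitted model: the dwell, shift, and base times and the gamma noise are assumed values chosen only to show how a "range of attention" parameter flattens the RT-by-set-size slope without invoking a parallel/serial dichotomy.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_search_rt(set_size, attn_range=3, dwell=60.0, shift=40.0,
                       base=300.0, n_trials=5_000):
    """Model each RT as a sum of successive search-step durations.
    attn_range: items processed within one fixation (the 'range of attention');
    dwell/shift: per-step attention dwell and shift times (ms);
    base: residual sensory and motor time (ms)."""
    # The target sits at a random position, so the number of steps is the
    # number of attention groups inspected up to and including the target's.
    target_pos = rng.integers(0, set_size, n_trials)
    steps = target_pos // attn_range + 1
    rt = (base
          + steps * rng.gamma(16.0, dwell / 16.0, n_trials)  # noisy dwell per step
          + (steps - 1) * shift)                             # shifts between steps
    return rt

for attn_range in (1, 3):
    means = {n: simulate_search_rt(n, attn_range).mean() for n in (4, 8, 16)}
    slope = (means[16] - means[4]) / 12
    print(f"range of attention = {attn_range}: "
          + ", ".join(f"n={n}: {m:.0f} ms" for n, m in means.items())
          + f"  (slope ~ {slope:.1f} ms/item)")
```

Widening the assumed range of attention from 1 to 3 items cuts the simulated slope roughly threefold, illustrating how a single continuous parameter can account for the strong variation in reaction time slopes that the abstract describes.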
