Similar Articles
20 similar articles found (search time: 15 ms)
1.
Cognitive Psychology, 1987, 19(1): 63-89
It has been claimed that young children use object names overgenerally and undergenerally because they do not have notions of objects of particular kinds, but rather, complexive notions of objects and their habitual actions or locations. However, for overgeneral uses in particular, it is difficult to differentiate word meaning from word use because communicative functions are not explicitly expressed in single-word speech. In the present paper, we identify three types of overgeneral uses, and argue that two of these reflect communicative functions rather than complexive meanings. We obtained production data from 10 children in the single-word period, using a standardized method of recording utterance contexts. Most uses of object names were for appropriate instances of the adult categories. Of the overgeneral uses, most were attributable to communicative functions rather than complexive meanings, and there was no evidence of undergeneral use. The results provide strong evidence that, from the very start, children's object names, like those of adults, apply to objects of particular kinds.

2.
The current study investigated from how large a region around their current point of gaze viewers can take in information when searching for objects in real-world scenes. Visual span size was estimated using the gaze-contingent moving window paradigm. Experiment 1 featured window radii measuring 1, 3, 4, 4.7, 5.4, and 6.1°. Experiment 2 featured six window radii measuring between 5 and 10°. Each scene occupied a 24.8 × 18.6° field of view. Inside the moving window, the scene was presented in high resolution. Outside the window, the scene image was low-pass filtered to impede the parsing of the scene into constituent objects. Visual span was defined as the window size at which object search times became indistinguishable from search times in the no-window control condition; this occurred with windows measuring 8° and larger. Notably, as long as central vision was fully available (window radii ≥ 5°), the distance traversed by the eyes through the scene to the search target was comparable to baseline performance. However, to move their eyes to the target, viewers made shorter saccades, requiring more fixations to cover the same image space, and thus more time. Moreover, a gaze-data-based decomposition of search time revealed disruptions in specific subprocesses of search. In addition, nonlinear mixed-model analyses demonstrated reliable individual differences in visual span size and parameters of the search time function.
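The gaze-contingent display described above (full resolution inside a circular window around fixation, low-pass filtered outside it) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the Gaussian blur strength and the pixel-based window radius are assumptions standing in for the degree-based filtering used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def moving_window(image, gaze_xy, radius_px, blur_sigma=4.0):
    """Gaze-contingent moving window for a grayscale scene image:
    keep the image at full resolution inside a circular window
    centered on the gaze point, and low-pass filter (blur) it
    outside, impeding object parsing in the periphery.
    blur_sigma is an assumed filter strength."""
    blurred = gaussian_filter(image.astype(float), sigma=blur_sigma)
    h, w = image.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    inside = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius_px ** 2
    return np.where(inside, image.astype(float), blurred)
```

On each display refresh, `gaze_xy` would be updated from the eye tracker; the visual span is then estimated as the smallest window radius at which search times match the no-window control.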

3.
To conduct an efficient visual search, visual attention must be guided to a target appropriately. Previous studies have suggested that attention can be quickly guided to a target when the spatial configurations of search objects or the object identities have been repeated. This phenomenon is termed contextual cuing. In this study, we investigated the effect of learning spatial configurations, object identities, and a combination of both configurations and identities on visual search. The results indicated that participants could learn the contexts of spatial configurations, but not of object identities, even when both configurations and identities were completely correlated (Experiment 1). On the other hand, when only object identities were repeated, an effect of identity learning could be observed (Experiment 2). Furthermore, an additive effect of configuration learning and identity learning was observed when, in some trials, each context was the relevant cue for predicting the target (Experiment 3). Participants could learn only the context that was associated with target location (Experiment 4). These findings indicate that when multiple contexts are redundant, contextual learning occurs selectively, depending on the predictability of the target location.

4.
5.
Kazuya Inoue & Yuji Takeda, Visual Cognition, 2013, 21(9-10): 1135-1153
To investigate the properties of object representations constructed during a visual search task, we manipulated the proportion of trials of each task within a block: in a search-frequent block, 80% of trials were search tasks and the remaining trials presented a memory task; in a memory-frequent block, this proportion was reversed. In the search task, participants searched for a toy car (Experiments 1 and 2) or a T-shaped object (Experiment 3). In the memory task, participants had to memorize objects in a scene. Memory performance was worse in the search-frequent block than in the memory-frequent block in Experiments 1 and 3, but not in Experiment 2 (token change in Experiment 1; type change in Experiments 2 and 3). Experiment 4 demonstrated that the lower performance in the search-frequent block was not due to eye-movement behaviour. The results suggest that object representations constructed during visual search differ from those constructed during memorization and are modulated by the type of target.

6.
The authors examined how visual selection mechanisms may relate to developing cognitive functions in infancy. Twenty-two 3-month-old infants were tested in 2 tasks on the same day: perceptual completion and visual search. In the perceptual completion task, infants were habituated to a partly occluded moving rod and subsequently presented with unoccluded broken and complete rod test stimuli. In the visual search task, infants viewed displays in which single targets of varying levels of salience were cast among homogeneous static vertical distractors. Infants whose posthabituation preference indicated unity perception in the completion task provided evidence of a functional visual selective attention mechanism in the search task. The authors discuss the implications of the efficiency of attentional mechanisms for information processing and learning.

7.
The visual system represents object shapes in terms of intermediate-level parts. The minima rule proposes that the visual system uses negative minima of curvature to define boundaries between parts. We used visual search to test whether part structures consistent with the minima rule are computed preattentively, or at least rapidly and early in visual processing. The results of Experiments 1 and 2 showed that whereas the search for a non-minima-segmented shape is fast and efficient among minima-segmented shapes, the reverse search is slow and inefficient. This asymmetry is expected if parsing at negative minima occurs obligatorily. The results of Experiments 3 and 4 showed that although both minima- and non-minima-segmented shapes pop out among unsegmented shapes, the search for minima-segmented shapes is significantly slower. Together, these results demonstrate that the visual system segments shapes into parts, using negative minima of curvature, and that it does so rapidly in early stages of visual processing.
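The minima-rule computation itself, locating negative local minima of signed curvature along a closed contour, can be sketched numerically. This is an illustrative sketch, not the authors' stimuli or analysis; the finite-difference scheme, sampling density, and the dumbbell test shape are assumptions.

```python
import numpy as np

def negative_curvature_minima(x, y):
    """Indices of negative local minima of signed curvature along a
    closed contour sampled counterclockwise. By the minima rule,
    these points are candidate part boundaries."""
    # Periodic central differences (the contour wraps around)
    dx = (np.roll(x, -1) - np.roll(x, 1)) / 2.0
    dy = (np.roll(y, -1) - np.roll(y, 1)) / 2.0
    ddx = np.roll(x, -1) - 2 * x + np.roll(x, 1)
    ddy = np.roll(y, -1) - 2 * y + np.roll(y, 1)
    # Signed curvature: positive on convex stretches (CCW traversal),
    # negative in concavities
    kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    is_min = (kappa < np.roll(kappa, 1)) & (kappa < np.roll(kappa, -1))
    return np.flatnonzero(is_min & (kappa < 0))
```

A circle is everywhere convex and yields no part boundaries; a dumbbell shape such as r(θ) = 1 + 0.4·cos 2θ has two concave necks, and the function returns exactly those two points.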

8.
Cognitive Development, 1994, 9(3): 293-309
Many objects have multiple names. Nonlinguistic context may help to constrain object-name selection to a single alternative for use in speaking tasks. Children at three ages (5, 7, and 9 years old) named objects with multiple names in two contexts. In the neutral context, the intended object could be designated unambiguously with any of its alternative names. In the biased context, the intended referent could be clearly designated only with a name from a subset of possible alternatives. Children selected names in accord with nonlinguistic constraints, but at the cost of longer naming times. Both name selection success and associated cost were more evident in older than in younger children. The results are consistent with the hypotheses that name selection involves inhibition of competing alternative names and that efficient use of these inhibitory processes develops gradually.

9.
The Area Activation Model (Pomplun, Reingold, Shen, & Williams, 2000) is a computational model predicting the statistical distribution of saccadic endpoints in visual search tasks. Its basic assumption is that saccades in visual search tend to foveate display areas that provide a maximum amount of task-relevant information for processing during the subsequent fixation. In the present study, a counterintuitive prediction by the model is empirically tested, namely that saccadic selectivity towards stimulus features depends on the spatial arrangement of search items. We find good correspondence between simulated and empirically observed selectivity patterns, providing strong support for the Area Activation Model.

10.
11.
12.
Several studies have shown that people can selectively attend to stimulus colour, e.g., in visual search, and that preknowledge of a target colour can improve response speed and accuracy. The purpose of the present study was to use a form-identification task to determine whether valid colour precues can produce benefits and invalid cues costs. The subject had to identify the orientation of a "T"-shaped element in a ring of randomly oriented "L"s when either two or four of the elements were differently coloured. Contrary to Moore and Egeth's (1998) recent findings, colour-based attention did affect performance under data-limited conditions: colour cues produced benefits when processing load was high; when the load was reduced, they incurred only costs. Surprisingly, a valid colour cue improved performance in the high-load condition even when its validity was reduced to chance level. Overall, the results suggest that knowledge of a target colour does not facilitate the processing of the target itself, but makes it possible to prioritize it.

13.
Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. In Experiment 1 we asked whether the effect is capacity demanding, and therefore manipulated the set size of the display. The results indicated a clear processing-capacity requirement: the magnitude of the effect decreased at the larger set size. In Experiment 2, we investigated whether the enhancement occurs only at the level of the behaviorally relevant feature or at a level common to different features, by manipulating the discriminability of the behaviorally neutral feature (color). This manipulation again influenced the action enhancement of the behaviorally relevant feature, suggesting that the action effect biases the competition between different visual features rather than enhancing the processing of the relevant feature alone. We offer a theoretical account that integrates the action-intention effect within the biased-competition model of visual selective attention.

14.
Ten Wernicke's and ten Broca's aphasics were compared with normal controls and brain-damaged nonaphasics with respect to the time required for the auditory decoding of object names. This value was obtained by using a subtraction method with two reaction time determinations, one of which included an auditory processing phase while the other did not. The overall mean of approximately 200 msec for Broca's aphasics did not differ significantly from that of normal controls, while the mean of 650 msec for the Wernicke's aphasics was much slower. All groups responded more quickly to high-frequency than to low-frequency words, and all but the Wernicke's aphasics improved in the second trial block over their performance in the first.

15.
We contrasted visual search for targets presented in prototypical views and targets presented in nonprototypical views, when targets were defined by their names and when they were defined by the action that would normally be performed on them. The likelihood of the first fixation falling on the target was increased for prototypical-view targets falling in the lower visual field. When targets were defined by actions, the durations of fixations were reduced for targets in the lower field. The results are consistent with eye movements in search being affected by representations within the dorsal visual stream, where there is strong representation of the lower visual field. These representations are sensitive to the familiarity or the affordance offered by objects in prototypical views, and they are influenced by action-based templates for targets.

16.
A number of recent studies have found that objects are named more slowly in the context of same-category items than in the context of items from various semantic categories. Several experiments reported here indicated that this semantic effect is relatively persistent because it was essentially unaffected by the presence of interspersed filler items. The authors suggest that the effect is specific to the retrieval of lexical-semantic codes and characterize mechanisms that could support the effect at this processing level, such as incremental learning in the links between conceptual and lexical codes and the temporary increase of lexical resting levels. The results underscore the necessity of incorporating mechanisms of long-term adaptation into current models of spoken production.

17.
How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.

18.
Data from visual-search tasks are typically interpreted to mean that searching for targets defined by feature differences does not require attention and thus can be performed in parallel, whereas searching for other targets requires serial allocation of attention. The question addressed here was whether a parallel-serial dichotomy would be obtained if data were collected using a variety of targets representing each of several kinds of defining features. Data analyses included several computations in addition to search rate: (1) target-absent to target-present slope ratios; (2) two separate data transformations to control for errors; (3) minimum reaction time; and (4) slopes of standard deviation as a function of set size. Some targets showed strongly parallel or strongly serial search, but there was evidence for several intermediate search classes. Sometimes, for a given target-distractor pair, the results depended strongly on which character was the target and which was the distractor. Implications from theories of visual search are discussed.
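The search-rate and slope-ratio computations described above can be illustrated with a minimal sketch. The RT values below are hypothetical, chosen only to show how the slopes and the target-absent to target-present ratio are derived; they are not data from the study.

```python
import numpy as np

# Hypothetical mean correct RTs (ms) at each display set size
set_sizes = np.array([4, 8, 12, 16])
rt_present = np.array([520, 560, 600, 640])  # target-present trials
rt_absent = np.array([540, 620, 700, 780])   # target-absent trials

# Search rate = slope of the RT x set-size function (ms/item)
slope_present = np.polyfit(set_sizes, rt_present, 1)[0]
slope_absent = np.polyfit(set_sizes, rt_absent, 1)[0]

# A serial self-terminating search predicts an absent:present slope
# ratio near 2; near-zero slopes indicate parallel ("pop-out") search.
slope_ratio = slope_absent / slope_present
```

Intermediate search classes show up in this analysis as slopes, ratios, or standard-deviation slopes that fall between the canonical parallel and serial patterns.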

19.
Top-down inhibition of search distractors in parallel visual search (cited by 2: 0 self-citations, 2 by others)
In three experiments, we examined distractor inhibition in parallel ("pop-out") visual search. Distractor inhibition was measured in terms of reaction time (RT) to a simple luminance increment probe presented, after the search task response, at display locations that either contained a search distractor (on-probe) or were blank (off-probe). When the search stimuli remained in view, the on-probe (relative to off-probe) RT cost was larger than in a baseline condition in which observers had only to passively view, rather than search, the display. This differential on-probe RT cost, which discounts effects of masking, was interpreted as a measure of distractor inhibition associated with target selection in parallel visual search. Taken together, the results argue that the distractor inhibition is an object-based and local phenomenon that affects all distractors (of a particular type) in an equal manner.

20.
Two experiments using the interference paradigm are reported. In the first experiment, the participants spoke aloud the names of celebrities and the names of objects when presented with pictures while hearing distractors. In the case of proper names, we replicated the data obtained by Izaute and Bonin (2001) using the interference paradigm with a written proper-name naming task. In the case of common names, the results replicated those obtained by Schriefers, Meyer, and Levelt (1990). In the second experiment, the participants produced the names of celebrities when presented with their faces while hearing distractors that were proper names associated with the celebrities (associate condition), proper names from a different professional category (different condition), or the proper names of the celebrities themselves (identical condition). For negative SOAs, "associate" distractors were found to increase latencies compared to the "different category" condition. The implications of the findings for proper name retrieval are briefly discussed.
