Related articles
1.
Theories of object recognition that are based purely on part decomposition do not take into account the role of textural, shading, and color information, nor do they differentiate between stylistic factors in the preparation of line-drawn pictorial stimuli. To investigate these factors, naming and verification experiments were performed using line drawings, monochrome photographs, and color photographs of common objects. For line drawings, it was shown that line width, exposure, and contrast affected naming latency, which increased for lines of narrow width and extremes of exposure. Naming latencies were compared for objects drawn by a professional artist, with varying degrees of surface detail, and objects produced by a computer-aided design (CAD) system, with no surface detail. The mean naming latencies for the artist set were shorter than for the CAD set, though not significantly, with a significant degree of object correlation being observed. However, in certain cases there were significant differences between objects. These were investigated in a further experiment in which subsets with common properties of present or absent surface detail were selected from the artist-drawn stimuli. It was found that the presence of surface features resulted in lower response latencies even for those objects that intuitively could be recognized by parts alone. The time to name photographic and line-drawn stimuli was compared, and a progressive decrease in naming latency from line to monochrome to color stimuli was observed. In a verification task, no significant advantage for color or monochrome photographs over line drawings was found, whether the stimuli compared were of the same or of different modes. However, there was a tendency for the comparison of different modes to take longer than the comparison of same modes. The results are discussed in terms of theories of human visual processing and cognitive and computational models of object recognition.

2.
In three experiments, we explored how pigeons use edges, corresponding to orientation and depth discontinuities, in visual recognition tasks. In experiment 1, we compared the pigeon's ability to recognize line drawings of four different geons when trained with shaded images. The birds were trained with either a single view or five different views of each object. Because the five training views had markedly different appearances and locations of shaded surfaces, reflectance edges, etc., the pigeons might have been expected to rely more on the orientation and depth discontinuities that were preserved over rotation and in the line drawings. In neither condition, however, was there any transfer from the rendered images to the outline drawings. In experiment 2, some pigeons were trained with line drawings and shaded images of the same objects associated with the same response (consistent condition), whereas other pigeons were trained with a line drawing and a shaded image of two different objects associated with the same response (inconsistent condition). If the pigeons perceived any correspondence between the stimulus types, then birds in the consistent condition should have learned the discrimination more quickly than birds in the inconsistent condition. However, there was no difference in performance between birds in the consistent and inconsistent conditions. In experiment 3, we explored pigeons' processing of edges by comparing their discrimination of shaded images or line drawings of four objects. Once trained, the pigeons were tested with planar rotations of those objects. The pigeons exhibited different patterns of generalization depending on whether they were trained with line drawings or shaded images. The results of these three experiments suggest that pigeons may place greater importance on surface features indicating materials, such as food or water. Such substances do not have definite boundaries cued by edges, which are thought to be central to human recognition.

3.
In two experiments, subjects made timed decisions about the second of two sequentially presented rotated drawings of objects. When the two objects were physically identical, response times to decide whether the two drawings depicted the same object varied as a function of the shortest distance between the orientation of the second drawing and either the orientation of the previous drawing or the upright. This was found for both short (250-msec) and long (2-sec) interstimulus intervals. The result was also obtained when subjects named the second drawing after deciding whether the first drawing faced left or right. Following repeated experience with the drawings in the left/right task over four blocks of trials, time to name the second drawing in the same-object sequences was independent of orientation. These results suggest that, initially, object- and orientation-specific representations can be formed following a single presentation of a rotated object and subsequently used to identify drawings of the same object at either the same or different orientations. Alignment of the second drawing with either the canonical representation or the new representation at the previous orientation is achieved by normalization through the shortest path. Following experience with the objects, orientation-invariant representations are formed.
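The normalization-through-the-shortest-path account above implies a simple quantitative predictor: response time should track the smaller of the angular distances from the test orientation to the previously seen orientation or to the upright. The sketch below illustrates that predictor; the function names and the linear intercept/slope values are illustrative assumptions, not parameters reported in the study.

```python
def angular_distance(a: float, b: float) -> float:
    """Shortest rotation (in degrees) between two picture-plane orientations."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def predicted_rt(test_deg: float, previous_deg: float,
                 intercept_ms: float = 500.0, slope_ms_per_deg: float = 2.0) -> float:
    """Illustrative linear RT model (assumed values): time grows with the
    smaller of the distances to the previous view or to the upright (0 deg)."""
    shortest = min(angular_distance(test_deg, previous_deg),
                   angular_distance(test_deg, 0.0))
    return intercept_ms + slope_ms_per_deg * shortest

# Example: second drawing at 120 deg after a first drawing at 90 deg
# -> a 30 deg normalization path rather than 120 deg to the upright.
print(predicted_rt(120.0, 90.0))
```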

4.
Two experiments are reported in which subjects had to match pairs of pictures of objects. "Same" pairs could be either identical (Ps), pictures of different views of the same object (Pv), or pictures of different objects having the same name (Pd). With line drawings as stimuli, RTs for Condition Ps were shorter than for Condition Pv, which in turn were shorter than for Condition Pd. Visual similarity had no effect on Pd RTs. However, in Experiment II, where photographs of objects with high-frequency (HF) and low-frequency (LF) names were used, no difference was found between Conditions Ps(HF), Ps(LF) and Condition Pv(HF); and no difference occurred between Conditions Pd(HF), Pd(LF) and Condition Pv(LF), the latter set of conditions being associated with longer RTs than the former. This pattern of results was found with both a .25-sec and a 2-sec ISI. The results are discussed in terms of the levels of coding involved in processing information from picture stimuli. It is concluded that at least two levels are involved in matching photographs of real objects (an object-code level and a nonvisual semantic code level), while a third level may be used in matching tasks involving stylized line drawings (a picture-code level).

5.
We examined whether the expression of visual statistical learning (VSL) is flexible at the superordinate-categorical level. In the familiarization phase, participants viewed a sequence of line drawings. In the test phase, participants observed two test sequences (statistically related triplets versus unrelated foils) that consisted of either the same objects as those presented during the familiarization phase (the same condition) or different objects that shared the same categorical information with the familiarization drawings (the different condition). They then decided whether the first or the second sequence was more familiar. The results of Experiment 1 showed above-chance familiarity for statistically related triplets only in the same condition. In Experiment 2, even when word stimuli representing each superordinate-level category were included in the test phase (the categorical condition), the results showed VSL only in the same condition. Our findings suggest that the semantic flexibility of VSL is limited to the basic-level category.

6.
Two experiments explored how people create novel sentences referring to given entities presented either in line drawings or in nouns. The line drawings yielded more creative sentences than the words, both as rated by judges and objectively by a measure of the amount of information that the sentences conveyed. A hypothesis about the cognitive processes of creation predicted this result: Creativity depends on constraints. Line drawings of entities present more information about them than nouns denoting the same entities, and so the pictures provide more constraints than the nouns. Hence, line drawings yield more creative sentences than words.

7.
Subjects attempted to recognize simple line drawings of common objects using either touch or vision. In the touch condition, subjects explored raised line drawings using the distal pad of the index finger or the distal pads both of the index and of the middle fingers. In the visual condition, a computer-driven display was used to simulate tactual exploration. By moving an electronic pen over a digitizing tablet, the subject could explore a line drawing stored in memory; on the display screen a portion of the drawing appeared to move behind a stationary aperture, in concert with the movement of the pen. This aperture was varied in width, thus simulating the use of one or two fingers. In terms of average recognition accuracy and average response latency, recognition performance was virtually the same in the one-finger touch condition and the simulated one-finger vision condition. Visual recognition performance improved considerably when the visual field size was doubled (simulating two fingers), but tactual performance showed little improvement, suggesting that the effective tactual field of view for this task is approximately equal to one finger pad. This latter result agrees with other reports in the literature indicating that integration of two-dimensional pattern information extending over multiple fingers on the same hand is quite poor. The near equivalence of tactual picture perception and narrow-field vision suggests that the difficulties of tactual picture recognition must be largely due to the narrowness of the effective field of view.
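For readers unfamiliar with aperture-viewing simulations, the sketch below illustrates the general idea described above: the stored drawing scrolls with the pen while only a fixed window of it is visible, and doubling the window width stands in for adding a second finger pad. The array-based masking, the square window, and all sizes are assumptions for illustration, not details of the original apparatus.

```python
import numpy as np

def aperture_view(drawing: np.ndarray, pen_xy: tuple[int, int],
                  aperture_px: int = 20) -> np.ndarray:
    """Return the patch of `drawing` centred on the pen position, i.e. what
    appears behind a stationary square aperture while the drawing scrolls
    with the pen.  Doubling `aperture_px` simulates using two finger pads."""
    h, w = drawing.shape
    half = aperture_px // 2
    x = int(np.clip(pen_xy[0], half, w - half))
    y = int(np.clip(pen_xy[1], half, h - half))
    return drawing[y - half:y + half, x - half:x + half]

# Example: a 200x200 "drawing" containing one horizontal line, viewed at (100, 100).
img = np.zeros((200, 200))
img[100, :] = 1.0
patch = aperture_view(img, (100, 100))
print(patch.shape, patch.sum())
```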

8.
Four experiments examined the effect of visual similarity on immediate memory for order. Experiments 1 and 2 used easily nameable line drawings. Following a sequential presentation in either silent or suppression conditions, participants were presented with the drawings in a new, random order and were required to remember their original serial position. In Experiment 3, participants first learned to associate a verbal label with an abstract matrix pattern. Then they completed an immediate memory task in which they had to name the matrices aloud during presentation. At recall, the task required remembering either the order of the matrices or the order of their names. In Experiment 4, participants learned to associate nonword labels with schematic line drawings of faces; the phonemic similarity of the verbal labels was also manipulated. All four experiments indicate that the representations supporting performance comprise both verbal and visual features. The results are consistent with a multiattribute encoding view.

9.
The role of 'visual similarity' has been emphasised in object recognition and, in particular, for category-specific agnosias. Laws and Gale (2002) recently described a measure of pixel-level visual overlap for line drawings (Euclidean Overlap: EO[line]) that distinguished living and nonliving things and predicted normal naming errors and latencies (Laws et al., 2002). Nevertheless, it is important to extend such analyses to stimuli other than line drawings. We therefore developed the same measure for greyscale versions of the same stimuli (EO[grey]), i.e., versions that contain shading and texture information. EO[grey], however, failed to differentiate living from nonliving things and failed to correlate with naming latencies to the greyscale images. By contrast, EO[line] did correlate with the naming latencies. This suggests that similarity of edge information is more influential than similarity of surface characteristics for naming and for categorically separating living and nonliving things (be they line drawings or greyscale images).
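The published formulation of Laws and Gale's Euclidean Overlap is not reproduced here, so the following sketch only illustrates the general idea of a pixel-level Euclidean distance between two equal-sized images (smaller distance, greater overlap); the preprocessing steps and the function name are assumptions rather than the published measure.

```python
import numpy as np

def pixel_euclidean_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Euclidean distance between two same-sized images treated as pixel
    vectors; smaller values indicate greater pixel-level overlap."""
    if img_a.shape != img_b.shape:
        raise ValueError("images must share the same dimensions")
    a = img_a.astype(float).ravel() / 255.0
    b = img_b.astype(float).ravel() / 255.0
    return float(np.linalg.norm(a - b))

# Example with toy 8x8 "line drawings": identical images overlap perfectly,
# while adding an extra line increases the distance (reduces overlap).
a = np.zeros((8, 8), dtype=np.uint8)
a[4, :] = 255
b = a.copy()
b[2, :] = 255
print(pixel_euclidean_distance(a, a))  # 0.0
print(pixel_euclidean_distance(a, b))  # > 0
```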

10.
Six experiments investigated the nature of the object-file representation supporting object continuity. Participants viewed preview displays consisting of 2 stimuli (either line drawings or words) presented within square frames, followed by a target display consisting of a single stimulus (either a word or a picture) presented within 1 of the frames. The relationship between the target and preview stimuli was manipulated. The first 2 experiments found that participants responded more quickly when the target was identical to the preview stimulus in the same frame (object-specific priming). In Experiments 3, 4, 5, and 6, the physical form of the target stimulus (a word or picture in 1 frame) was changed completely from that of either preview stimulus (pictures or words in both frames). Despite this physical change, object-specific priming was observed. It is suggested that object files encode postcategorical information, rather than precise physical information.

11.
A prerequisite for comparative work on object recognition is a method for identifying the features actually extracted from the form. The method introduced here with pigeons is discrimination training between two simple line drawings, followed by a generalization test in which contour is deleted from the reinforced drawing. In Condition 1, the line drawings were a square (S+) versus a triangle (S-); for Condition 2, the line drawings were planar projections of a cube (S+) versus a truncated pyramid (S-). The generalization decrement between responses to S+ and responses to test stimuli provides a quantitative index of the weight assigned to each feature. Contour deletion at either vertices or midsegments produced a decrement in the rate of responding, showing that each contour was represented as a feature. The generalization decrement to forms containing vertices with midsegments deleted was larger than the generalization decrement to forms containing midsegments with vertices deleted. Therefore, it appears that midsegments are weighted more strongly as features than vertices. Contour deletion provides a direct method for identifying the visual features underlying object recognition and lays a foundation for the development of comparative theories of object recognition.
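As a worked illustration of the index described above, the sketch below expresses the generalization decrement as the proportional drop in response rate from S+ to a contour-deleted test form; the normalised form and the example rates are assumptions, since the paper may compute the decrement differently.

```python
def generalization_decrement(rate_s_plus: float, rate_test: float) -> float:
    """Proportional drop in response rate from the reinforced drawing (S+)
    to a contour-deleted test form; a larger decrement implies the deleted
    contours carried more weight as features."""
    if rate_s_plus <= 0:
        raise ValueError("response rate to S+ must be positive")
    return (rate_s_plus - rate_test) / rate_s_plus

# Example (made-up rates): deleting midsegments hurts responding more than
# deleting vertices, so midsegments would get the larger feature weight.
print(generalization_decrement(60.0, 20.0))  # midsegments deleted -> ~0.67
print(generalization_decrement(60.0, 45.0))  # vertices deleted    -> 0.25
```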

12.
Cognitive Development, 2000, 15(3): 263-280
Preschool children's use of novel predicates to make inferences about people was examined in three studies. In a procedure adapted from Gelman and Markman (Cognition, 1986, 23, 183), participants (ages 3 years 5 months–4 years 11 months) saw line drawings of three different faces. In Study 1 (N=16), the drawings were described as depicting children, and participants were asked to predict whether one of the children would share properties with a child who has the same novel predicate (e.g., “is zav,” which is never defined for participants) but is dissimilar in appearance, or with a child who has a different novel predicate but is similar in appearance. Participants tended to use the novel predicates rather than superficial resemblance to guide their inferences about people. In Study 2 (N=16), in which the line drawings were described as depicting dolls rather than children, participants showed no such emphasis on the novel predicate information. Study 3 (N=38) replicated the results of the first two studies. The results suggest that children have a general assumption that unfamiliar words hold rich inductive potential when applied to people but not when applied to dolls.

13.
Representational drawings of solid objects by young children
M. J. Chen, M. Cook. Perception, 1984, 13(4): 377-385
Two groups of children aged 6 and 8 years were given three tasks requiring graphical representations of solid geometric forms. These tasks were drawing from life models, copying from photographs, and copying from line drawings of these objects. Performance was assessed on the basis of level of approximation to correct perspective. Older children used more perspective features than younger children in their drawings. At all ages, the drawings from life were most difficult. Results on the two copying tasks were not consistent. Drawings made by copying photographs were either as advanced as or poorer than copies of line drawings. The results are explained in terms of the difficulties exhibited by young children in translating the three-dimensional scene to a two-dimensional picture plane and strategies adopted by them to cope with these problems.

14.
Studies of intellectual realism have shown that children aged 7 to 9 copy a line drawing of a cube less accurately than a non-object pattern composed of the same lines (Phillips, Hobbs, & Pratt, 1978). However, it remains unclear whether performance is worse on the cube because it is a three-dimensional representation, or because it is a meaningful object, or both. The accuracy with which twenty 7-year-old and twenty 9-year-old children reproduced 16 line drawings of two-dimensional and three-dimensional objects and non-objects was assessed. Older children copied all types of drawing more accurately than younger participants, and children of all ages copied two-dimensional drawings more accurately than three-dimensional drawings. Meaningfulness interacted with dimensionality for ratings of drawing accuracy, assisting the copying of two-dimensional drawings, but having no impact on the copying of three-dimensional drawings. For an objective measure based on position, length, and orientation of line, meaningfulness interacted with age group, being beneficial for 7- but not 9-year-olds. Overall, the results imply that, contrary to previous suggestions, meaningfulness can actually be beneficial to copying.
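The objective measure above scores each copied line on its position, length, and orientation. The sketch below shows one way such a line-by-line comparison could be computed; the specific error definitions and the example coordinates are illustrative assumptions, not the scoring scheme used in the study.

```python
import math

Segment = tuple[tuple[float, float], tuple[float, float]]

def line_copy_error(template: Segment, copy: Segment) -> dict:
    """Compare a copied line segment with its template on the three
    properties named in the abstract: position (midpoint offset), length,
    and orientation.  Error definitions here are illustrative assumptions."""
    def midpoint(seg):
        (x1, y1), (x2, y2) = seg
        return ((x1 + x2) / 2, (y1 + y2) / 2)

    def length(seg):
        (x1, y1), (x2, y2) = seg
        return math.hypot(x2 - x1, y2 - y1)

    def orientation(seg):
        (x1, y1), (x2, y2) = seg
        return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180  # undirected

    (tx, ty), (cx, cy) = midpoint(template), midpoint(copy)
    angle_diff = abs(orientation(template) - orientation(copy))
    return {
        "position_error": math.hypot(cx - tx, cy - ty),
        "length_error": abs(length(copy) - length(template)),
        "orientation_error": min(angle_diff, 180 - angle_diff),
    }

# Example: a copied line that is slightly shifted, shortened, and tilted.
print(line_copy_error(((0, 0), (10, 0)), ((1, 1), (9, 2))))
```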

15.
We used fMRI to investigate competition during language production in two word production tasks: object naming and color naming of achromatic line drawings. Generally, fMRI activation was higher for color naming. The line drawings were followed by a word (the distractor word) that referred to either the object, a related object, or an unrelated object. The effect of the distractor word on the BOLD response was qualitatively different for the two tasks. The activation pattern suggests two different kinds of competition during lexical retrieval: (1) Task-relevant responses (e.g., red in color naming) compete with task-irrelevant responses (i.e., the object’s name). This competition effect was dominant in prefrontal cortex. (2) Multiple task-relevant responses (i.e., target word and distractor word) compete for selection. This competition effect was dominant in ventral temporal cortex. This study provides further evidence for the distinct roles of frontal and temporal cortex in language production, while highlighting the effects of competition, albeit from different sources, in both regions.

16.
Various authors have suggested that learning associations between objects and pictures is a necessary condition for the perception of pictures. If so, then animals, never exposed to situations in which those associations might be learned, should not show transfer between objects and pictures. In this investigation, pigeons trained to discriminate between two solid objects were retrained with reinforcement reversal on either the objects themselves or photographs, line drawings, or silhouettes of the objects. Significant negative transfer indicated object-photograph and object-silhouette equivalence, but no transfer was found to line drawings. Positive transfer to photographs was also demonstrated. Transfer did not appear to be a function of object-picture confusability.

17.
Functional magnetic resonance imaging (fMRI) was used to examine neuronal activation in relation to increasing working memory load in an n-back task, using schematic drawings of facial expressions and scrambled drawings of the same facial features as stimuli. The main objective was to investigate whether working memory for drawings of facial features would yield specific activations compared to memory for scrambled drawings based on the same visual features as those making up the face drawings. fMRI-BOLD responses were acquired with a 1.5 T Siemens MR scanner while subjects watched the facial drawings alternated with the scrambled drawings, in a block design. Subjects had to hold either 1 or 2 items in working memory. We found that the main effect of increasing memory load from one to two items yielded significant activations in a bilaterally distributed cortical network consisting of regions in the occipitotemporal cortex, the inferior parietal lobule, the dorsolateral prefrontal cortex, supplementary motor area and the cerebellum. In addition, we found a memory load × drawings interaction in the right inferior frontal gyrus in favor of the facial drawings. These findings show that working memory is specific for facial features, which interact with a general cognitive load component to produce significant activations in prefrontal regions of the brain.
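For readers unfamiliar with the n-back paradigm used above, the sketch below shows the generic matching rule: a trial is a target when the current stimulus matches the one presented n trials earlier. The string labels and the example sequence are illustrative, not the study's stimuli.

```python
def nback_targets(stimuli: list[str], n: int) -> list[bool]:
    """Mark each position as a target if it matches the stimulus n trials back."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

# Example: a 2-back sequence of schematic-face labels.
seq = ["happy", "sad", "happy", "angry", "happy"]
print(nback_targets(seq, 2))  # [False, False, True, False, True]
```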

18.
19.
W. J. Matthews, A. Adams. Perception, 2008, 37(4): 628-630
Misperception of objects is a major cause of inaccuracies in adults' drawing. It has previously been established that participants' drawings are biased by their knowledge of the drawn object. We hypothesised that additional inaccuracy arises because drawings are biased towards participants' idiosyncratic canonical representations of the object. We report that participants' free drawings of a cylinder are correlated with their observational drawings of the same shape, providing evidence that people's observational drawings are distorted by their individual schematic representations of the objects in question. It is unclear whether this reflects a perceptual distortion or a bias in drawing production; in either case, this result provides a further explanation why people are poor at drawing from observation.

20.
Memory for pictures was investigated under conditions of difficult foil discriminability and lengthy retention intervals. The foils preserved the theme of the studied stimulus, but differed in the number and quality of nonessential physical details. In each experiment, subjects viewed colored photographs, black-and-white photographs, elaborated line drawings, and unelaborated line drawings, followed by an old/new (Experiment 1) or a four-alternative forced-choice (Experiment 2) test given either immediately, 1 day, 1 week, or 4 weeks following study; Experiment 3 replicated Experiment 1 but with a 12-week delay. For the old/new procedure, performance was best on colored photographs, with performance differences among the four stimulus types still significant after 4 weeks. For the forced-choice test, performance on colored photographs and unelaborated line drawings was best, with performance differences among stimulus types also still significant after 4 weeks. A confusion analysis indicated that errors were based on physical similarity, even after 12 weeks. These results refute the hypothesis that the memorial representations for pictorial variations converge to a common, thematic code after lengthy delays; instead, non-thematic, analogue information is encoded and preserved for lengthy time periods.
