Similar Articles
20 similar articles found (search time: 484 ms)
1.
Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within and across modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.

2.
Viewpoint dependence in visual and haptic object recognition
On the whole, people recognize objects best when they see the objects from a familiar view and worse when they see the objects from views that were previously occluded from sight. Unexpectedly, we found haptic object recognition to be viewpoint-specific as well, even though hand movements were unrestricted. This viewpoint dependence was due to the hands preferring the back "view" of the objects. Furthermore, when the sensory modalities (visual vs. haptic) differed between learning an object and recognizing it, recognition performance was best when the objects were rotated back-to-front between learning and recognition. Our data indicate that the visual system recognizes the front view of objects best, whereas the hand recognizes objects best from the back.

3.
Long-term memory of haptic, visual, and cross-modality information was investigated. In Experiment 1, subjects briefly explored 40 commonplace objects visually or haptically and then received a recognition test with categorically similar foils in the same or the alternative modality both immediately and after 1 week. Recognition was best for visual input and test, with haptic memory still apparent after a week's delay. Recognition was poorest in the cross-modality conditions, with performance on the haptic-visual and visual-haptic cross-modal conditions being nearly identical. Visual and haptic information decayed at similar rates across a week delay. In Experiment 2, subjects simultaneously viewed and handled the same objects, and transfer was tested in a successive cue-modality paradigm. Performance with the visual modality again exceeded that with the haptic modality. Furthermore, initial errors on the haptic test were often corrected when followed by the visual presentation, both immediately and after 1 week. However, visual test errors were corrected by haptic cuing on the immediate test only. These results are discussed in terms of shared information between the haptic and visual modalities, and the ease of transfer between these modalities immediately and after a substantial delay.

4.
Five-year-old children explored multidimensional objects either haptically or visually and then were tested for recognition with target and distractor items in either the same or the alternative modality. In Experiments 1 and 2, haptic, visual, and cross-modal recognition were all nearly perfect with familiar objects; haptic and visual recognition were also excellent with unfamiliar objects, but cross-modal recognition was less accurate. In Experiment 3, cross-modal recognition was also less accurate than within-mode recognition with familiar objects that were members of the same basic-level category. The results indicate that children's haptic recognition is remarkably good, that cross-modal recognition is otherwise constrained, and that cross-modal recognition may be accomplished differently for familiar and unfamiliar objects.

5.
Explicit memory tests such as recognition typically access semantic, modality-independent representations, while perceptual implicit memory tests typically access presemantic, modality-specific representations. By demonstrating comparable cross- and within-modal priming using vision and haptics with verbal materials (Easton, Srinivas, & Greene, 1997), we recently questioned whether the representations underlying perceptual implicit tests were modality specific. Unlike vision and audition, with vision and haptics verbal information can be presented in geometric terms to both modalities. The present experiments extend this line of research by assessing implicit and explicit memory within and between vision and haptics in the nonverbal domain, using both 2-D patterns and 3-D objects. Implicit test results revealed robust cross-modal priming for both 2-D patterns and 3-D objects, indicating that vision and haptics shared abstract representations of object shape and structure. Explicit test results for 3-D objects revealed modality specificity, indicating that the recognition system keeps track of the modality through which an object is experienced.

6.
In a series of experiments, we investigated the matching of objects across the visual and haptic modalities over different time delays and spatial dimensions. In all of the experiments, we used simple L-shaped figures as stimuli that varied in either the x or the y dimension or in both dimensions. In Experiment 1, we found that cross-modal matching performance decreased as a function of the time delay between the presentation of the objects. We found no difference in performance between the visual-haptic (VH) and haptic-visual (HV) conditions. Cross-modal performance was better when objects differed in both the x and y dimensions rather than in one dimension alone. In Experiment 2, we investigated the relative contribution of each modality to performance across different interstimulus delays. We found no differential effect of delay between the HH and VV conditions, although general performance was better for the VV condition than for the HH condition. Again, responses to xy changes were better than responses to changes in the x or y dimension alone. Finally, in Experiment 3, we examined performance in a matching task with simultaneous and successive presentation conditions. We failed to find any difference between simultaneous and successive presentation conditions. Our findings suggest that the short-term retention of object representations is similar in the visual and haptic modalities. Moreover, these results suggest that recognition is best within a temporal window that includes simultaneous or rapidly successive presentation of stimuli across the modalities, and is also best when objects are more discriminable from each other.

7.
Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8–9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children’s word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.
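The nested-model logic this abstract describes (does cross-modal binding still predict word recognition once constituent-element memory and within-modal binding are controlled for?) amounts to comparing the variance explained by nested regression models. The sketch below is an illustrative simulation, not the authors' analysis: the variable names, coefficients, and sample size are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # simulated children (hypothetical sample size)

# Simulated z-scored predictors: within-modal binding, memory for
# constituent elements, and cross-modal binding (the predictor of interest).
within = rng.normal(size=n)
constituent = rng.normal(size=n)
cross = rng.normal(size=n)
# Simulated outcome: word recognition loads on cross-modal binding only.
word_rec = 0.5 * cross + rng.normal(scale=0.8, size=n)

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Base model: control predictors only; full model adds cross-modal binding.
base = r_squared(np.column_stack([within, constituent]), word_rec)
full = r_squared(np.column_stack([within, constituent, cross]), word_rec)
print(f"incremental R^2 for cross-modal binding: {full - base:.3f}")
```

A positive increment in R² for the full model is the pattern the abstract reports; in a real analysis its significance would be tested with an F-test on the change in R².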

8.
Object recognition is a long and complex adaptive process, and its full maturation requires the combination of many different sensory experiences, as well as the cognitive ability to manipulate previous experiences in order to develop new percepts and subsequently to learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about the development of this ability. In this study, we explored the developmental course of object recognition in children from 4 years to 10 years and 11 months of age, using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer. Participants were tested through a clinical protocol involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities in the visual, haptic, and visuo-haptic modalities, with a significant effect of age on the development of unimodal and crossmodal recognition skills. Moreover, our data suggest that multisensory processes for common object recognition are active at 4 years of age. They facilitate recognition of common objects and, although not fully mature, contribute to adaptive behavior from the first years of life. The study of the typical development of visuo-haptic processes in childhood is a starting point for future studies of object recognition in impaired populations.

9.
Object recognition is a complex adaptive process that can be impaired in children with neurodevelopmental disabilities. Recently, we found a significant effect of age on the development of unimodal and crossmodal recognition skills for common objects in typical children, and this was a starting point for the study of visuo-haptic object recognition in impaired populations. In this study, we investigated unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in 30 children, from 4.0 to 10.11 years of age, with bilateral periventricular leukomalacia (PVL) and bilateral cerebral palsy. Results were compared with those of 116 controls. Participants were tested using the clinical protocol adopted in the previous study, involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show that in the PVL group, as in controls, there is an age-dependent development of object recognition abilities in the visual, haptic, and visuo-haptic modalities, although PVL children performed worse than the typical group in all three conditions. Furthermore, PVL children have a specific deficit in both visual and haptic information processing, which improves with age, probably thanks to everyday experience; the visual modality, however, shows a better and more rapid maturation, remaining more salient than the haptic one. Nonetheless, multisensory processes partially facilitate recognition of common objects in PVL children as well, and this finding could be useful for planning early intervention in children with brain lesions.

10.
In an earlier report (Harman, Humphrey, & Goodale, 1999), we demonstrated that observers who actively rotated three-dimensional novel objects on a computer screen later showed faster visual recognition of these objects than did observers who had passively viewed exactly the same sequence of images of these virtual objects. In Experiment 1 of the present study we showed that compared to passive viewing, active exploration of three-dimensional object structure led to faster performance on a "mental rotation" task involving the studied objects. In addition, we examined how much time observers concentrated on particular views during active exploration. As we found in the previous report, they spent most of their time looking at the "side" and "front" views ("plan" views) of the objects, rather than the three-quarter or intermediate views. This strong preference for the plan views of an object led us to examine the possibility in Experiment 2 that restricting the studied views in active exploration to either the plan views or the intermediate views would result in differential learning. We found that recognition of objects was faster after active exploration limited to plan views than after active exploration of intermediate views. Taken together, these experiments demonstrate (1) that active exploration facilitates learning of the three-dimensional structure of objects, and (2) that the superior performance following active exploration may be a direct result of the opportunity to spend more time on plan views of the object.

11.
Julia Mayas, Acta Psychologica Sinica (《心理学报》), 2009, 41(11): 1063-1074
Studies of within-modal repetition priming suggest that implicit memory is preserved in older adults, not only in the visual modality but also in other sensory modalities (e.g., touch, audition, and olfaction). However, few studies have examined whether priming tasks are modality specific. Research with young adults has found that cross-modal transfer (vision to touch and touch to vision) is comparable to within-modal transfer (vision to vision, touch to touch). A recent study further explored whether older adults are impaired on cross-modal priming tasks. The results showed that cross-modal priming between vision and touch is preserved and symmetrical in both young and older participants. Moreover, within- and cross-modal priming for natural sounds and pictures also remains preserved with aging. These behavioral results, together with other recent neuroscientific findings, suggest that cross-modal priming arises in posterior extrastriate occipital areas, regions that are intact in older adults. Future directions in this field include using well-designed cross-modal priming paradigms across different perceptual modalities, with both familiar and novel stimuli, combining behavioral and neuroimaging methods to study healthy older adults and patients with Alzheimer's disease, and incorporating well-designed priming tasks into programs aimed at improving memory function in older adults.

12.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating occurs in sighted individuals for haptic scenes of novel objects. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same orientation as at learning or a different one, whilst they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group were unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

13.
Although some studies have shown that haptic and visual identification seem to rely on similar processes, few studies have directly compared the two. We investigated haptic and visual object identification by asking participants to learn to recognize (Experiments 1 and 3), or to match (Experiment 2), novel objects that varied only in shape. Participants explored objects haptically, visually, or bimodally, and were then asked to identify objects haptically and/or visually. We demonstrated that patterns of identification errors were similar across identification modality, independently of learning and testing condition, suggesting that the haptic and visual representations in memory were similar. We also demonstrated that identification performance depended on both learning and testing conditions: visual identification surpassed haptic identification only when participants explored the objects visually or bimodally. When participants explored the objects haptically, haptic and visual identification were equivalent. Interestingly, when participants were simultaneously presented with two objects (one presented haptically, one visually), object similarity only influenced performance when participants were asked to indicate whether the two objects were the same, or when participants had learned about the objects visually, without any haptic input. The results suggest that haptic and visual object representations rely on similar processes, that they may be shared, and that visual processing may not always lead to the best performance.

14.
Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.

15.
We used a fully immersive virtual reality environment to study whether actively interacting with objects would affect subsequent recognition, when compared with passively observing the same objects. We found that when participants learned object structure by actively rotating the objects, the objects were recognized faster during a subsequent recognition task than when object structure was learned through passive observation. We also found that participants focused their study time during active exploration on a limited number of object views, while ignoring other views. Overall, our results suggest that allowing active exploration of an object during initial learning can facilitate recognition of that object, perhaps owing to the control that the participant has over the object views upon which they can focus. The virtual reality environment is ideal for studying such processes, allowing realistic interaction with objects while maintaining experimenter control.

16.
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners’ ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
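Statistical-learning designs of this kind define the structure of the familiarization stream by the transitional probabilities between adjacent elements: transitions inside a "word" are highly predictable, transitions across word boundaries are not. A minimal sketch of that computation (the two-element words and the stream below are invented for illustration, not the study's stimuli):

```python
from collections import Counter

def transitional_probabilities(sequence):
    """P(next | current) for every adjacent pair observed in the sequence."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A stream built from two hypothetical "words" (AB and CD): within-word
# transitions are perfectly predictable, between-word transitions are not.
stream = list("ABCDABABCDCDAB")
tps = transitional_probabilities(stream)
print(tps[("A", "B")])  # within-word transition: probability 1.0
print(tps[("B", "C")])  # between-word transition: lower probability
```

A learner who tracks these conditional probabilities can segment the stream into its words, which is the ability the cross-modal cues are shown to enhance or reduce.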

17.
C Hulme, A Smart, G Moran, A Raine, Perception, 1983, 12(4): 477-483
The ability of children between the ages of 5 and 10 years to match the length of lines within and between the modalities of vision and kinaesthesis was studied. No evidence was found for specific increases in cross-modal skill which could not be explained in terms of within-modal development. Performance in the perceptual task was related to measures of developing motor skill in the children. Substantial relationships were found between performance on the within-modal tasks and motor skill, but no significant relationships were found between cross-modal measures and motor skill development. It is concluded that the development of cross-modal integration is not a major determinant of motor skill development.

18.
Lawson R, Bracken S, Perception, 2011, 40(5): 576-597
Raised-line drawings of familiar objects are very difficult to identify with active touch only. In contrast, haptically explored real 3-D objects are usually recognised efficiently, albeit slower and less accurately than with vision. Real 3-D objects have more depth information than outline drawings, but also extra information about identity (eg texture, hardness, temperature). Previous studies have not manipulated the availability of depth information in haptic object recognition whilst controlling for other information sources, so the importance of depth cues has not been assessed. In the present experiments, people named plastic small-scale models of familiar objects. Five versions of bilaterally symmetrical objects were produced. Versions varied only in the amount of depth information: minimal for cookie-cutter and filled-in outlines, partial for squashed and half objects, and full for 3-D models. Recognition was faster and much more accurate when more depth information was available, whether exploration was with both hands or just one finger. Novices found it almost impossible to recognise objects explored with two hand-held probes whereas experts succeeded using probes regardless of the amount of depth information. Surprisingly, plane misorientation did not impair recognition. Unlike with vision, depth information, but not object orientation, is extremely important for haptic object recognition.

19.
Learning to perceive differences in solid shape through vision and touch
A single experiment was designed to investigate perceptual learning and the discrimination of 3-D object shape. Ninety-six observers were presented with naturally shaped solid objects either visually, haptically, or across the modalities of vision and touch. The observers' task was to judge whether the two sequentially presented objects on any given trial possessed the same or different 3-D shapes. The results of the experiment revealed that significant perceptual learning occurred in all modality conditions, both unimodal and cross-modal. The amount of the observers' perceptual learning, as indexed by increases in hit rate and d', was similar for all of the modality conditions. The observers' hit rates were highest for the unimodal conditions and lowest in the cross-modal conditions. Lengthening the inter-stimulus interval from 3 to 15 s led to increases in hit rates and decreases in response bias. The results also revealed the existence of an asymmetry between two otherwise equivalent cross-modal conditions: in particular, the observers' perceptual sensitivity was higher for the vision-haptic condition and lower for the haptic-vision condition. In general, the results indicate that effective cross-modal shape comparisons can be made between the modalities of vision and active touch, but that complete information transfer does not occur.
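Perceptual learning in this study is indexed by hit rate and the sensitivity measure d'. Under the standard equal-variance signal-detection model, d' and the response-bias measure c are computed from the hit and false-alarm rates; a minimal sketch (the example rates below are invented, not data from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Response bias c: -(z(hit rate) + z(false-alarm rate)) / 2."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical same/different data: 82% hits, 25% false alarms.
print(round(d_prime(0.82, 0.25), 2))   # d' of about 1.59
print(round(criterion(0.82, 0.25), 2))
```

A rise in d' with constant c corresponds to the learning pattern described, whereas the reported decrease in response bias with longer inter-stimulus intervals would show up as c moving toward zero.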

20.
Studying a list of items related to an item that is not presented (the lure item) produces a false memory. We investigated whether haptic study and test result in false recognition and, if so, whether congruency of presentation modality between study and test reduces it. After haptic or visual study of lists of real objects related to a lure object, participants were asked to recognise whether the objects were presented haptically or visually. False recognition occurred with haptic study and/or haptic test, and was reduced when the study and test modalities were congruent. After haptic study, false recognition was reduced on the haptic test, as compared with the visual test. In contrast, visual study always reduced visual false recognition. These results indicate a general effect of retrieval cues that reduces false recognition.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号