Similar Articles
1.
Preschoolers who explore objects haptically often fail to recognize those objects in subsequent visual tests. This suggests that children may represent qualitatively different information in vision and haptics and/or that children’s haptic perception may be poor. In this study, 72 children (2½-5 years of age) and 20 adults explored unfamiliar objects either haptically or visually and then chose a visual match from among three test objects, each matching the exemplar on one perceptual dimension. All age groups chose shape-based matches after visual exploration. Both 5-year-olds and adults also chose shape-based matches after haptic exploration, but younger children did not match consistently in this condition. Certain hand movements performed by children during haptic exploration reliably predicted shape-based matches but occurred at very low frequencies. Thus, younger children’s difficulties with haptic-to-visual information transfer appeared to stem from their failure to use their hands to obtain reliable haptic information about objects.

2.
Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness impact the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio-haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child-friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio-haptic performance resulted in a reduction of perceptual uncertainty compared to auditory-only and haptic-only performance, as predicted by the maximum-likelihood estimation model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focussing on low-vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult-like audio-haptic integration develops around 13–15 years of age and remains stable until late adulthood. While early-blind individuals, even at the youngest ages, integrate audio-haptic information in an optimal fashion, late-blind individuals do not. Optimal integration in low-vision individuals follows a similar developmental trajectory to that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio-haptic integration to emerge, but that consistency of sensory information across development is key for the functional outcome of optimal multisensory integration.
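For reference, the maximum-likelihood estimation (MLE) model invoked here makes a quantitative prediction about bimodal precision. The equations below are the standard MLE cue-combination formulation, stated as general background rather than quoted from the paper:

    \hat{S}_{AH} = w_A \hat{S}_A + w_H \hat{S}_H, \qquad w_A = \frac{\sigma_H^2}{\sigma_A^2 + \sigma_H^2}, \quad w_H = \frac{\sigma_A^2}{\sigma_A^2 + \sigma_H^2}

    \sigma_{AH}^2 = \frac{\sigma_A^2 \, \sigma_H^2}{\sigma_A^2 + \sigma_H^2} \le \min(\sigma_A^2, \sigma_H^2)

where \sigma_A^2 and \sigma_H^2 are the variances of the unimodal auditory and haptic size estimates. Integration counts as optimal when the empirically measured audio-haptic variance matches the predicted \sigma_{AH}^2; a suboptimal observer's bimodal variance is no better than that of the more reliable single modality.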

3.
In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain “know” which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem.

4.
Here, we investigate how audiovisual context affects perceived event duration with experiments in which observers reported which of two stimuli they perceived as longer. Target events were visual and/or auditory and could be accompanied by nontargets in the other modality. Our results demonstrate that the temporal information conveyed by irrelevant sounds is automatically used when the brain estimates visual durations, but that irrelevant visual information does not affect perceived auditory duration (Experiment 1). We further show that auditory influences on subjective visual durations occur only when the temporal characteristics of the stimuli promote perceptual grouping (Experiments 1 and 2). Placed in the context of the scalar expectancy theory of time perception, our third and fourth experiments imply that audiovisual context can lead both to changes in the rate of an internal clock and to temporal ventriloquism-like effects on perceived on- and offsets. Finally, intramodal grouping of auditory stimuli diminished any crossmodal effects, suggesting a strong preference for intramodal over crossmodal perceptual grouping (Experiment 5).
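For readers unfamiliar with scalar expectancy theory, it is usually formalized as a pacemaker-accumulator clock. The formulation below is the textbook version of the model, given as background rather than taken from this paper:

    \mathbb{E}[\hat{t}] = r\,T, \qquad \frac{\mathrm{SD}(\hat{t})}{\mathbb{E}[\hat{t}]} \approx \text{constant}

where T is the physical duration, r is the pacemaker rate, and \hat{t} is the accumulated pulse count; the constant coefficient of variation is the "scalar" property. A context-driven increase in r therefore lengthens subjective duration for a fixed T, which is what a change in the rate of the internal clock means here.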

5.
We investigated, in two experiments, the discrimination of bilateral symmetry in vision and touch using four sets of unfamiliar displays. They varied in complexity from 3 to 30 turns. Two sets were 2-D flat forms (raised-line shapes and raised surfaces), while the other two were 3-D objects constructed by extending the 2-D shapes in height (short and tall objects). Experiment 1 showed that visual accuracy was excellent but latencies increased for raised-line shapes compared with 3-D objects. Experiment 2 showed that unimanual exploration was more accurate for asymmetric than for symmetric judgments, but only for 2-D shapes and short objects. Bimanual exploration at the body midline facilitated the discrimination of symmetric shapes without changing performance with asymmetric ones. Accuracy for haptically explored symmetric stimuli improved as the stimuli were extended in the third dimension, while no such trend appeared for asymmetric stimuli. Unlike in vision, haptic response latency decreased for 2-D shapes compared with 3-D objects. The present results are relevant to the understanding of symmetry discrimination in vision and touch.

6.
Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.

7.
The correspondence problem is a classic issue in vision and cognition. Frequent perceptual disruptions, such as saccades and brief occlusion, create gaps in perceptual input. How does the visual system establish correspondence between objects visible before and after the disruption? Current theories hold that object correspondence is established solely on the basis of an object’s spatiotemporal properties and that an object’s surface feature properties (such as color or shape) are not consulted in correspondence operations. In five experiments, we tested the relative contributions of spatiotemporal and surface feature properties to establishing object correspondence across brief occlusion. Correspondence operations were strongly influenced both by the consistency of an object’s spatiotemporal properties across occlusion and by the consistency of an object’s surface feature properties across occlusion. These data argue against the claim that spatiotemporal cues dominate the computation of object correspondence. Instead, the visual system consults multiple sources of relevant information to establish continuity across perceptual disruption.

8.
The present study investigated the human ability to discriminate the size of 3-D objects by touch. Experiment 1 measured the just noticeable differences (JNDs) for three tasks: (1) discrimination of volume without availability of weight information, (2) discrimination of volume with weight information available, and (3) discrimination of surface area. Stimuli consisted of spheres, cubes, and tetrahedrons. For all shapes, two reference sizes were used (3.5 and 12 cm³). No significant effect of task on the discriminability of objects was found, but the effects of shape and size were significant, as well as the interaction between these two factors. Post hoc analysis revealed that for the small reference, the Weber fractions for the tetrahedron were significantly larger than the fractions for the cube and the sphere. In Experiment 2, the JNDs for haptic perception of weight were measured for the same objects as those used in Experiment 1. The shape of objects had no significant effect on the Weber fractions for weight, but the Weber fractions for the small stimuli were larger than the fractions for the large stimuli. Surprisingly, a comparison between the two experiments showed that the Weber fractions for weight were significantly larger than the fractions for volume with availability of weight information. Taken together, the results reveal that volume and weight information are not effectively combined in discrimination tasks. This study provides detailed insight into the accuracy of the haptic system in discriminating objects' size. This substantial set of data satisfies the need for more fundamental knowledge on haptic size perception, necessary for a greater understanding of the perception of related properties, as well as of more general perceptual processes.
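For context, a Weber fraction expresses the JND relative to the reference magnitude:

    W = \frac{\Delta I}{I_{\mathrm{ref}}}

For example, at the 3.5 cm³ reference, a purely illustrative Weber fraction of W = 0.1 (not a value reported in this abstract) would correspond to a just noticeable volume difference of \Delta I = 0.1 \times 3.5 = 0.35 cm³.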

9.
Slowed processing of sequential perceptual information is related to developmental dyslexia. We investigated this unimodally and crossmodally in developmentally dyslexic children and controls aged 8-12 years. The participants judged whether two spatially separate trains of brief stimuli, presented at various stimulus onset asynchronies (SOA) in one or two senses, were synchronous or not. The stimulus trains consisted of light flashes in vision, clicks in audition, and indentations of the skin in the tactile sense. The dyslexic readers required longer SOAs than controls for successful performance in all six comparisons. The crossmodal spatiotemporal resolution of the groups differed more than unimodal performance. The dyslexic readers' segregation performance was also less differentiated than that of the controls. Our results show that not only sensory but also polysensory nonverbal information processing is temporally impaired in dyslexic children.

10.
While many studies find women self-report higher disgust sensitivity than men, few studies have examined gender differences with behavioral tasks in senses other than vision. On a haptic task, we tested the hypothesis that women would report greater disgust but not greater unpleasantness than men. Forty-four undergraduates (29 women) touched 8 out-of-sight stimuli, with sensory (unpleasantness) and emotional (disgust) responses recorded. The stimuli consisted of 2 neutral, 2 pleasant, and 4 unpleasant (3 disgust-evoking) objects. No gender differences were found for reporting stimuli unpleasantness. In contrast, women rated their disgust significantly higher than men when touching the high disgust-evoking objects. Unpleasantness of the stimuli correlated with disgust to the objects, but disgust sensitivity (Disgust Scale-Revised) was not a strong predictor of disgust responses. Besides differentiating unpleasantness from disgust, this was also the first study to show gender differences in a disgust-evoking haptic task.

11.
Two experiments used visual-, verbal-, and haptic-interference tasks during encoding (Experiment 1) and retrieval (Experiment 2) to examine mental representation of familiar and unfamiliar objects in visual/haptic crossmodal memory. Three competing theories are discussed, which variously suggest that these representations are: (a) visual; (b) dual-code: visual for unfamiliar objects, but visual and verbal for familiar objects; or (c) amodal. The results suggest that representations of unfamiliar objects are primarily visual but that crossmodal memory for familiar objects may rely on a network of different representations. The pattern of verbal-interference effects suggests that verbal strategies facilitate encoding of unfamiliar objects regardless of modality, but only haptic recognition regardless of familiarity. The results raise further research questions about all three theoretical approaches.

12.
Object recognition is a long and complex adaptive process, and its full maturation requires the combination of many different sensory experiences, as well as the cognitive ability to manipulate previous experiences in order to develop new percepts and subsequently learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about the development of this ability. In this study, we explored the developmental course of object recognition capacity using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in children from 4 years to 10 years and 11 months of age. Participants were tested through a clinical protocol involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities for the visual, haptic, and visuo-haptic modalities. A significant effect of time on the development of unimodal and crossmodal recognition skills was found. Moreover, our data suggest that multisensory processes for common object recognition are active at 4 years of age. They facilitate recognition of common objects and, although not fully mature, contribute significantly to adaptive behavior from the first years of life. The study of the typical development of visuo-haptic processes in childhood is a starting point for future studies of object recognition in impaired populations.

13.
The aim of this paper was twofold: (1) to display the various competencies of the infant's hands for processing information about the shape of objects; and (2) to show that the infant's haptic mode shares some common mechanisms with the visual mode. Several experiments on infants from birth up to five months of age, using a habituation/dishabituation procedure, an intermodal transfer task between touch and vision, and various cognitive tasks, revealed that infants may perceive and understand the physical world through their hands without visual control. From birth, infants can habituate to shape and detect discrepancies between shapes. But information exchanges between vision and touch are partial in cross-modal transfer tasks. Plausibly, modal specificities, such as discrepancies in information gathering between the two modalities and the different functions of the hands (perceptual and instrumental), limit the links between the visual and haptic modes. In contrast, when infants abstract information from an event not totally felt or seen, amodal mechanisms underlie haptic and visual knowledge in early infancy. Despite various discrepancies between the sensory modes, conceiving the world is possible with the hands as with the eyes.

14.
According to the ecological theory of perception–action, perception is primarily of affordances, which are directly perceivable opportunities for behavior. The current study evaluated participants’ ability to use vision and haptic sensory-substitution devices to support perceptual judgments of affordances involving the task of passing through apertures. Sighted participants made perceptual judgments about whether they could walk through apertures of various widths and their level of confidence in each judgment, using unrestricted vision and, when blindfolded, using two haptic sensory-substitution instruments: a cane-like wooden rod and the Enactive Torch, a device that converts distance information into vibrotactile stimuli. The boundary between aperture widths that were judged as pass-through-able versus non-pass-through-able was statistically equivalent across sensory modalities. However, participants were not as confident in their judgments using the rod or Enactive Torch as they were using vision. Additionally, participants’ judgments with the haptic instruments were significantly more accurate than with vision. The results underscore the need to assess sensory-substitution devices in the context of functional behaviors.
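As a concrete illustration of the distance-to-vibrotactile principle the Enactive Torch embodies, the sketch below maps sensed distance onto vibration intensity. It is a hypothetical minimal example: the linear inverse mapping, the function name, and the 2.0 m maximum range are all illustrative assumptions, not the device's documented behavior.

    # Hypothetical distance-to-vibration mapping for a sensory-substitution
    # device. The linear inverse mapping and the 2.0 m maximum range are
    # illustrative assumptions, not the Enactive Torch's actual firmware.
    def vibration_intensity(distance_m: float, max_range_m: float = 2.0) -> float:
        """Map a sensed distance in meters to a vibration intensity in [0, 1]."""
        if distance_m <= 0.0:
            return 1.0  # contact or invalid reading: strongest vibration
        if distance_m >= max_range_m:
            return 0.0  # beyond sensing range: no vibration
        return 1.0 - distance_m / max_range_m  # closer surfaces vibrate harder

    # Example: an aperture edge sensed at 0.5 m yields 75% intensity.
    print(vibration_intensity(0.5))  # 0.75

Under such a mapping, the gradient of vibration across a sweep of the device carries the distance information that vision would otherwise supply for the aperture judgment.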

15.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.

16.
Object recognition is a complex adaptive process that can be impaired in children with neurodevelopmental disabilities. Recently, we found a significant effect of time on the development of unimodal and crossmodal recognition skills for common objects in typical children, and this was a starting point for the study of visuo-haptic object recognition skills in impaired populations. In this study, we investigated unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in 30 children, from 4.0 to 10.11 years of age, with bilateral periventricular leukomalacia (PVL) and bilateral cerebral palsy. Results were compared with those of 116 controls. Participants were tested using a clinical protocol, adopted in the previous study, involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show that in the PVL group, as in controls, there is an age-dependent development of object recognition abilities for the visual, haptic, and visuo-haptic modalities, although PVL children performed worse in all three conditions than the typical group. Furthermore, PVL children have a specific deficit in both visual and haptic information processing that improves with age, probably thanks to everyday experience; however, the visual modality shows a better and more rapid maturation, remaining more salient than the haptic one. Nevertheless, multisensory processes partially facilitate recognition of common objects in PVL children as well, and this finding could be useful for planning early intervention in children with brain lesions.

17.
Lawson R, Bracken S. Perception, 2011, 40(5): 576-597.
Raised-line drawings of familiar objects are very difficult to identify with active touch only. In contrast, haptically explored real 3-D objects are usually recognised efficiently, albeit more slowly and less accurately than with vision. Real 3-D objects have more depth information than outline drawings, but also extra information about identity (e.g., texture, hardness, temperature). Previous studies have not manipulated the availability of depth information in haptic object recognition whilst controlling for other information sources, so the importance of depth cues has not been assessed. In the present experiments, people named plastic small-scale models of familiar objects. Five versions of bilaterally symmetrical objects were produced. Versions varied only in the amount of depth information: minimal for cookie-cutter and filled-in outlines, partial for squashed and half objects, and full for 3-D models. Recognition was faster and much more accurate when more depth information was available, whether exploration was with both hands or just one finger. Novices found it almost impossible to recognise objects explored with two hand-held probes, whereas experts succeeded using probes regardless of the amount of depth information. Surprisingly, plane misorientation did not impair recognition. Unlike with vision, depth information, but not object orientation, is extremely important for haptic object recognition.

18.
Solving a map task requires transferring information acquired in one spatial context to another context, an ability that marks an important step in cognitive development. This study investigated how preschoolers’ mapping performance was affected by the extent of similarity between spaces. Whereas prior work examined effects of similarity in tasks involving matching individual objects, our tasks required considering spatial relations among objects. We found that the accuracy of mapping between two spaces with somewhat different perceptual features was higher than the accuracy of mapping between spaces with identical features. Yet, a further increase in differences between the two spaces had a detrimental effect on mapping. The results suggest that some degree of similarity between spaces is beneficial to children’s ability to transfer relational information. However, when the spaces have the same surface features, it may draw children’s attention to individual objects and inhibit their ability to focus on common relations across contexts.

19.
Does the magical number four characterize our visual working memory (VWM) capacity for all kinds of objects, or is the capacity of VWM inversely related to the perceptual complexity of those objects? To find out how perceptual complexity affects VWM, we used a change detection task to measure VWM capacity for six types of stimuli of different complexity: colors, letters, polygons, squiggles, cubes, and faces. We found that the estimated capacity decreased for more complex stimuli, suggesting that perceptual complexity was an important factor in determining VWM capacity. However, the considerable correlation between perceptual complexity and VWM capacity declined significantly if subjects were allowed to view the sample memory display longer. We conclude that when encoding limitations are minimized, perceptual complexity affects, but does not determine, VWM capacity.
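For reference, capacity in change detection tasks of this kind is conventionally estimated with Cowan's K; the abstract does not name its estimator, so the formula below is the standard one rather than necessarily the one used here:

    K = N \times (H - F)

where N is the set size, H the hit rate, and F the false-alarm rate. For example, with N = 6, H = 0.8, and F = 0.2, the estimate is K = 6 \times 0.6 = 3.6 items.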
