Similar Literature
20 similar records retrieved (search time: 15 ms)
1.
In two size-conflict experiments, children viewed various squares through a reducing (1/2) lens while manually grasping them through a hand-concealing cloth. Then, using either vision or touch, they selected a match from a set of comparison squares. Forty 6-, 9-, and 12-year-olds participated in Experiment 1. Vision dominated the visual estimates of all three age groups; however, for the haptic estimates, the dominant modality varied developmentally: Vision dominated the 6-year-olds' haptic estimates, whereas neither modality dominated the 9-year-olds' haptic estimates, and touch dominated the 12-year-olds' haptic estimates. In Experiment 2, 24 six-year-olds were tested, as in Experiment 1; however, half of them were shown the size-distorting effects of the lens just prior to testing. Although this reduced the visual dominance of their haptic estimates, the effect was weak and short-lived. The haptic dominance seen in the data of the 12-year-olds was conspicuously absent.

3.
Observers (72 college students) estimated the size of plastic squares that they held in their fingers and simultaneously viewed through a reducing lens that halved the squares’ visual size. The squares were grasped from below through a cloth that prevented direct sight of the hand. Each estimate was a match selected later, either haptically or visually, from a set of comparison squares. Vision dominated the visual estimates and touch dominated the haptic estimates, whether or not the observers knew in advance which type of estimate they would be asked to make. Neither modality inherently dominates perceived size.

4.
An account of intersensory integration is premised on knowing that different sensory inputs arise from the same object. Could, however, the combination of the inputs be impaired although the "unity assumption" holds? Forty observers viewed a square through a minifying (50%) lens while they simultaneously touched the square. Half could see and half could not see their haptic explorations of the square. Both groups, however, had reason to believe that they were touching and viewing the same square. Subsequent matches of the inspected square were mutually biased by touch and vision when the exploratory movements were visible. However, the matches were biased in the direction of the square's haptic size when observers could not see their exploratory movements. This impaired integration without the visible haptic explorations suggests that the unity assumption alone is not enough to promote intersensory integration.

5.
We investigated participants’ ability to identify and represent faces by hand. In Experiment 1, participants proved surprisingly capable of identifying unfamiliar live human faces using only their sense of touch. To evaluate the contribution of geometric and material information more directly, we biased participants toward encoding faces more in terms of geometric than material properties, by varying the exploration condition. When participants explored the faces both visually and tactually, identification accuracy did not improve relative to touch alone. When participants explored masks of the faces, thereby eliminating material cues, matching accuracy declined substantially relative to tactual identification of live faces. In Experiment 2, we explored intersensory transfer of face information between vision and touch. The findings are discussed in terms of their relevance to haptic object processing and to the face-processing literature in general.

7.
Preschoolers who explore objects haptically often fail to recognize those objects in subsequent visual tests. This suggests that children may represent qualitatively different information in vision and haptics and/or that children’s haptic perception may be poor. In this study, 72 children (2½-5 years of age) and 20 adults explored unfamiliar objects either haptically or visually and then chose a visual match from among three test objects, each matching the exemplar on one perceptual dimension. All age groups chose shape-based matches after visual exploration. Both 5-year-olds and adults also chose shape-based matches after haptic exploration, but younger children did not match consistently in this condition. Certain hand movements performed by children during haptic exploration reliably predicted shape-based matches but occurred at very low frequencies. Thus, younger children’s difficulties with haptic-to-visual information transfer appeared to stem from their failure to use their hands to obtain reliable haptic information about objects.

8.
In a number of experiments, blindfolded subjects traced convex curves whose verticals were equal to their horizontal extent at the base. Overestimation of verticals, as compared with horizontals, was found, indicating the presence of a horizontal-vertical illusion with haptic curves, as well as with visible curves. Experiment 1 showed that the illusion occurred with stimuli in the frontal plane and with stimuli that were flat on the table surface in vision and touch. In the second experiment, the stimuli were rotated, and differences between vision and touch were revealed, with a stronger illusion in touch. The haptic horizontal-vertical illusion was virtually eliminated when the stimuli were bimanually touched using free exploration at the body midline, but a strong illusion was obtained when curves were felt with two index fingers or with a single hand at the midline. Bimanual exploration eliminated the illusion for smaller 2.5- through 10.2-cm stimuli, but a weakened illusion remained for the largest 12.7-cm patterns. The illusion was present when the stimuli were bimanually explored in the left and right hemispace. Thus, the benefits of bimanual exploration derived from the use of the two hands at the body midline combined with free exploration, rather than from bimanual free exploration per se. The results indicate the importance of haptic exploration at the body midline, where the body can serve as a familiar reference metric for size judgments. Alternative interpretations of the results are discussed, including the impact of movement-based heuristics as a causal factor for the illusion. It was suggested that tracing the curve’s peak served to bisect the curve in haptics, because of the change in direction.

9.
The aim of the present study was to clarify the mechanisms underlying body understanding by examining the impact of visual experience (magnification and reduction) on perception of hand size and neutral external objects (squares). Independent groups of participants were asked to look through a 2× magnification lens, a ½× reduction lens, or a control UV filter and to make visual size judgments about square stimuli and their hands. In Experiment 1, participants used a measuring device with unmarked wooden slats orientated in horizontal and radial/vertical space for their visual judgments. In Experiment 2, participants used an upright frontal slat for visual length judgments of their hands to eliminate any potential foreshortening in viewing the measurement apparatus. The results from the two experiments demonstrate that participants significantly underestimated both the square stimuli and their hands when they viewed them under a reduction lens. While both overestimation and underestimation of squares were found for females in Experiment 2, males generally underestimated the squares. However, overestimation was not seen when the participants viewed their hands under a magnification lens. Implications of these findings are discussed.

10.
M. A. Heller, Perception, 1992, 21(5), 655-660
An experiment placed vision and touch in conflict by the use of a mirror placed perpendicular to a letter display. The mirror induced a discrepancy in direction and form. Subjects touched the embossed tangible letters p, q, b, d, W, and M, while looking at them in a mirror, and were asked to identify the letters. The upright mirror produced a vertical inversion of the letters, and visual inversion of the direction of finger movement. Thus, subjects touched the letter p, but saw themselves touching the letter b in the mirror. There were large individual differences in reliance on the senses. The majority of the subjects depended on touch, and only one showed visual dominance. Others showed a compromise between the senses. The results were consistent with an attentional explanation of intersensory dominance.

11.
The authors report a series of 6 experiments investigating crossmodal links between vision and touch in covert endogenous spatial attention. When participants were informed that visual and tactile targets were more likely on 1 side than the other, speeded discrimination responses (continuous vs. pulsed, Experiments 1 and 2; or up vs. down, Experiment 3) for targets in both modalities were significantly faster on the expected side, even though target modality was entirely unpredictable. When participants expected a target on a particular side in just one modality, corresponding shifts of covert attention also took place in the other modality, as evidenced by faster elevation judgments on that side (Experiment 4). Larger attentional effects were found when directing visual and tactile attention to the same position rather than to different positions (Experiment 5). A final study with crossed hands revealed that these visuotactile links in spatial attention apply to common positions in external space.

12.
It is still unclear how the visual system accurately perceives the size of objects at different distances. One suggestion, dating back to Berkeley’s famous essay, is that vision is calibrated by touch. If so, we may expect different mechanisms involved for near, reachable distances and far, unreachable distances. To study how the haptic system calibrates vision we measured size constancy in children (from 6 to 16 years of age) and adults, at various distances. At all ages, accuracy of visual size perception changes with distance, and is almost veridical inside the haptic workspace, in agreement with the idea that the haptic system acts to calibrate visual size perception. Outside this space, systematic errors occurred, which varied with age. Adults tended to overestimate the visual size of distant objects (over-compensation for distance), while children younger than 14 underestimated their size (under-compensation). At 16 years of age there seemed to be a transition point, with veridical perception of distant objects. When young subjects were allowed to touch the object inside the haptic workspace, the visual biases disappeared, while older subjects showed multisensory integration. All results are consistent with the idea that the haptic system can be used to calibrate visual size perception during development, more effectively within than outside the haptic workspace, and that the calibration mechanisms are different in children than in adults.
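The over- and under-compensation pattern described in this abstract can be sketched with a simple power-law scaling of retinal size by distance. This is an illustrative assumption, not the study's fitted model: the function `perceived_size` and the exponent `k` are hypothetical, chosen only to show how an exponent above or below 1 reproduces the adult-like overestimation and child-like underestimation of distant objects.

```python
def perceived_size(physical_size, distance, k):
    """Schematic size-constancy model (hypothetical, for illustration).

    Retinal size shrinks as 1/distance; the observer rescales it by
    distance**k. k = 1 gives veridical size constancy; k > 1
    over-compensates for distance (adult-like overestimation of far
    objects); k < 1 under-compensates (child-like underestimation).
    """
    retinal = physical_size / distance
    return retinal * distance ** k

# A 10-cm object at 4 m, under three compensation regimes:
veridical = perceived_size(10, 4, 1.0)   # exactly 10.0
adult     = perceived_size(10, 4, 1.2)   # > 10: overestimated
child     = perceived_size(10, 4, 0.8)   # < 10: underestimated
```

Inside the haptic workspace, where the paper reports near-veridical perception, this sketch corresponds to k close to 1.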

13.
The aim of this paper was twofold: (1) to display the various competencies of the infant's hands for processing information about the shape of objects; and (2) to show that the infant's haptic mode shares some common mechanisms with the visual mode. Several experiments on infants from birth up to five months of age, using a habituation/dishabituation procedure, an intermodal transfer task between touch and vision, and various cognitive tasks, revealed that infants may perceive and understand the physical world through their hands without visual control. From birth, infants can habituate to shape and detect discrepancies between shapes. But information exchanges between vision and touch are partial in cross-modal transfer tasks. Plausibly, modal specificities such as discrepancies in information gathering between the two modalities and the different functions of the hands (perceptual and instrumental) limit the links between the visual and haptic modes. In contrast, when infants abstract information from an event not totally felt or seen, amodal mechanisms underlie haptic and visual knowledge in early infancy. Despite various discrepancies between the sensory modes, conceiving the world is possible with hands as with eyes.

14.
The preschool years are an important time during which children gain proficiency using the hands for both performatory and perceptual functions that involve dynamic (kinesthetic) touch. We evaluated dynamic touch perception of object extent and found that preschool children are able to discriminate length by dynamic touch early, but perception is not very fine-tuned and perceptual attunement to inertial characteristics increased with age. An analysis comparing the performatory and perceptual functions of the hands showed links between performance and perception in dynamic touch tasks that did not require haptic–visual correspondence. We concluded that whereas dynamic touch is functional early in the preschool years, perceptual acuity is not very precise and haptic–visual correspondence remains immature. In addition, reliance on inertial properties as information to make judgments of length emerges between 3 and 5 years and attunement to inertial properties likely continues to develop throughout childhood because perceptual judgments of 5-year-olds did not reach adult levels. Tight links between the performatory and the perceptual functions of the hand suggest this is an important avenue for future research.

15.
A series of six experiments offers converging evidence that there is no fixed dominance hierarchy for the perception of textured patterns, and in doing so, highlights the importance of recognizing the multidimensionality of texture perception. The relative bias between vision and touch was reversed or considerably altered using both discrepancy and nondiscrepancy paradigms. This shift was achieved merely by directing observers to judge different dimensions of the same textured surface. Experiments 1, 4, and 5 showed relatively strong emphasis on visual as opposed to tactual cues regarding the spatial density of raised dot patterns. In contrast, Experiments 2, 3, and 6 demonstrated considerably greater emphasis on the tactual as opposed to visual cues when observers were instructed to judge the roughness of the same surfaces. The results of the experiments were discussed in terms of a modality appropriateness interpretation of intersensory bias. A weighted averaging model appeared to describe the nature of the intersensory integration process for both spatial density and roughness perception.
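The weighted averaging account named in this abstract can be sketched as a convex combination of the two unimodal estimates. The specific weights and stimulus values below are hypothetical, chosen only to illustrate how shifting the weight toward vision (spatial density) or toward touch (roughness) reverses the intersensory bias:

```python
def integrate(visual, tactual, w_visual):
    """Weighted averaging of unimodal size/texture estimates.

    w_visual is vision's weight (0..1); touch receives 1 - w_visual,
    so the percept is a convex combination of the two estimates.
    """
    return w_visual * visual + (1 - w_visual) * tactual

# Hypothetical conflicting estimates for the same surface:
# vision says 10.0 units, touch says 6.0 units.
density = integrate(10.0, 6.0, w_visual=0.8)    # ~9.2, pulled toward vision
roughness = integrate(10.0, 6.0, w_visual=0.2)  # ~6.8, pulled toward touch
```

The same model with task-dependent weights thus accommodates both the vision-biased spatial-density judgments and the touch-biased roughness judgments reported above.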

16.
Task-dependent information processing for the purpose of recognition or spatial perception is considered a principle common to all the main sensory modalities. Using a dual-task interference paradigm, we investigated the behavioral effects of independent information processing for shape identification and localization of object features within and across vision and touch. In Experiment 1, we established that color and texture processing (i.e., a “what” task) interfered with both visual and haptic shape-matching tasks and that mirror image and rotation matching (i.e., a “where” task) interfered with a feature-location-matching task in both modalities. In contrast, interference was reduced when a “where” interference task was embedded in a “what” primary task and vice versa. In Experiment 2, we replicated this finding within each modality, using the same interference and primary tasks throughout. In Experiment 3, the interference tasks were always conducted in a modality other than the primary task modality. Here, we found that resources for identification and spatial localization are independent of modality. Our findings further suggest that multisensory resources for shape recognition also involve resources for spatial localization. These results extend recent neuropsychological and neuroimaging findings and have important implications for our understanding of high-level information processing across the human sensory systems.

17.
A series of experiments was carried out to examine the effect of curvature on haptic judgments of extent in sighted and blind individuals. Experiment 1 showed that diameters connecting the endpoints of semicircular lines were underestimated with respect to straight lines, but failed to show an effect of visual experience on length judgments. In Experiment 2 we tested arc lengths. The effects of curvature on perceived path length were weaker, but were still present in this experiment. Visual experience had no effect on path length judgments. Another experiment was performed to examine the effect of repeated tracing (1, 5, 9, or an unlimited number of traces) on judgments of the lengths of straight lines and diameters of semicircles. Judgments of extent were more accurate when subjects engaged in larger numbers of traces. There was no effect of number of traces on curve-height judgments, suggesting that subjects were not using height estimates to judge diameters of semicircles. In a further experiment we tested the effect of number of traces on curves that varied in height. Restricting subjects to a single trace magnified the effect of path length on judgments of the distance between the endpoints of curves. Additional experiments showed that curvature effects on diameter judgments were not eliminated when stimuli were in the frontal plane or when the curves were explored with the use of two hands. Arm support had no effect on judged length in Experiment 7. A final experiment showed a robust horizontal-vertical illusion in haptic perception of convex curves, with overestimation of the heights of the curves compared with their widths. The practical and theoretical implications of the results are discussed.

18.
In traditional theories of perceptual learning, sensory modalities support one another. A good example comes from research on dynamic touch, the wielding of an unseen object to perceive its properties. Wielding provides the haptic system with mechanical information related to the length of the object. Visual feedback can improve the accuracy of subsequent length judgments; visual perception supports haptic perception. Such cross-modal support is not the only route to perceptual learning. We present a dynamic touch task in which we replaced visual feedback with the instruction to strike the unseen object against an unseen surface following length judgment. This additional mechanical information improved subsequent length judgments. We propose a self-organizing perspective in which a single modality trains itself.

19.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio–visual fluent speech in 12-month-old infants. German-learning infants’ audio–visual matching ability of German and French fluent speech was assessed by using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit a matching performance for the native, nor for the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might have an influence on the intersensory perception of fluent speech and shed further light on multisensory perceptual narrowing.
