Similar articles
20 similar articles found (search time: 31 ms).
1.
Two experiments used visual-, verbal-, and haptic-interference tasks during encoding (Experiment 1) and retrieval (Experiment 2) to examine mental representation of familiar and unfamiliar objects in visual/haptic crossmodal memory. Three competing theories are discussed, which variously suggest that these representations are: (a) visual; (b) dual-code (visual for unfamiliar objects but visual and verbal for familiar objects); or (c) amodal. The results suggest that representations of unfamiliar objects are primarily visual but that crossmodal memory for familiar objects may rely on a network of different representations. The pattern of verbal-interference effects suggests that verbal strategies facilitate encoding of unfamiliar objects regardless of modality, but only haptic recognition regardless of familiarity. The results raise further research questions about all three theoretical approaches.

3.
4.
Introduction: Most research to date on human categorization ability has concentrated on the visual and auditory domains. However, a limited – but non-negligible – range of studies has also examined the categorization of familiar or unfamiliar (i.e., novel) objects in the haptic (i.e., tactile-kinesthetic) modality.
Objective: In this paper, we describe how we developed a new set of parametrically defined objects, called widgets, that can be used as 3D (or 2D) materials for haptic (or visual) categorization purposes.
Method: Widgets are unfamiliar complex 3D shapes with an ovoid body and four types of elements attached to it (eyes, tail, crest, and legs). The stimulus set comprises 24 objects divided into four categories of six exemplars each (the files used for 3D printing are provided as Supplementary Material).
Results: We also assessed and demonstrated the validity of our stimulus set by conducting two separate studies of haptic and visual categorization, involving participants of different ages: young adults (Study 1), and children and adolescents (Study 2). Results showed that humans can categorize our 3D complex shapes on the basis of both haptically and visually perceived similarities in shape attributes.
Conclusion: Widgets are very useful new experimental stimuli for categorization studies using 3D printing technology.
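
For readers who want to reuse this kind of material, the short Python sketch below shows one way a parametrically defined stimulus set with the 4 categories x 6 exemplars structure could be organised in code. Every parameter name and value here is an invented placeholder for illustration; the authors' actual widget definitions are the 3D-printing files in their Supplementary Material.

from dataclasses import dataclass
from itertools import product

@dataclass
class Widget:
    # Hypothetical parameterization: an ovoid body plus four attached
    # element types (eyes, tail, crest, legs), as described in the abstract.
    category: int          # 1..4
    exemplar: int          # 1..6
    body_elongation: float
    eye_size: float
    tail_length: float
    crest_height: float
    leg_length: float

def build_stimulus_set():
    # Placeholder rule: the category fixes the overall parameter regime,
    # and the exemplar index adds within-category variation.
    widgets = []
    for category, exemplar in product(range(1, 5), range(1, 7)):
        base = 0.5 + 0.2 * category
        jitter = 0.02 * exemplar
        widgets.append(Widget(category, exemplar,
                              body_elongation=base + jitter,
                              eye_size=0.1 * category + jitter,
                              tail_length=base - jitter,
                              crest_height=0.3 + jitter,
                              leg_length=base))
    return widgets

stimuli = build_stimulus_set()
assert len(stimuli) == 24    # 4 categories x 6 exemplars each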

5.
Time perception is involved in various cognitive functions. This study investigated the characteristics of short-term memory for event duration by examining how the length of the retention period affects inter- and intramodal duration judgment. On each trial, a sample stimulus was followed by a comparison stimulus, after a variable delay period (0.5–5 s). The sample and comparison stimuli were presented in the visual or auditory modality. The participants determined whether the comparison stimulus was longer or shorter than the sample stimulus. The distortion pattern of subjective duration during the delay period depended on the sensory modality of the comparison stimulus but was not affected by that of the sample stimulus. When the comparison stimulus was visually presented, the retained duration of the sample stimulus was shortened as the delay period increased. In contrast, when the comparison stimulus was presented in the auditory modality, the delay period had little to no effect on the retained duration. Furthermore, when the participants did not know the sensory modality of the comparison stimulus beforehand, the effect of the delay period disappeared. These results suggest that the memory process for event duration is specific to sensory modality and that its performance depends on the sensory modality in which the retained duration will subsequently be used.

6.
Models of duration bisection have focused on the effects of stimulus spacing and stimulus modality. However, interactions between stimulus spacing and stimulus modality have not been examined systematically. Two duration bisection experiments that address this issue are reported. Experiment 1 showed that stimulus spacing influenced the classification of auditory, but not visual, stimuli. Experiment 2 used a wider stimulus range, and showed stimulus spacing effects for both visual and auditory stimuli, although the effects were larger for auditory stimuli. A version of Temporal Range Frequency Theory was applied to the data, and was used to demonstrate that the qualitative pattern of results can be captured with the single assumption that the durations of visual stimuli are less discriminable from one another than are the durations of auditory stimuli.
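
For readers unfamiliar with range-frequency accounts, the Python sketch below illustrates the general logic: each duration receives a range value (its position within the presented range) and a frequency value (its rank within the stimulus set), and judgments follow a weighted blend of the two, so changing the stimulus spacing shifts classifications even when the range endpoints stay fixed. This is the generic Parducci-style range-frequency form, not necessarily the specific Temporal Range Frequency Theory model fitted in the paper; the weight and the stimulus values are arbitrary placeholders.

def range_frequency_values(durations_ms, w=0.6):
    # Return each duration's blended range-frequency value in [0, 1];
    # durations with values above ~0.5 would tend to be classified "long".
    # w weighs the range component, (1 - w) the rank (frequency) component;
    # 0.6 is an arbitrary placeholder weight.
    lo, hi = min(durations_ms), max(durations_ms)
    n = len(durations_ms)
    values = {}
    for rank, d in enumerate(sorted(durations_ms)):
        range_value = (d - lo) / (hi - lo)     # position within the range
        freq_value = rank / (n - 1)            # rank-based (frequency) value
        values[d] = w * range_value + (1 - w) * freq_value
    return values

# Same endpoints, different spacing: the skewed set shifts the blended
# values for the middle durations, mimicking a stimulus-spacing effect
# on bisection classifications.
linear_spacing = [200, 300, 400, 500, 600, 700, 800]
skewed_spacing = [200, 250, 320, 400, 500, 630, 800]
print(range_frequency_values(linear_spacing))
print(range_frequency_values(skewed_spacing))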

7.
In traditional theories of perceptual learning, sensory modalities support one another. A good example comes from research on dynamic touch, the wielding of an unseen object to perceive its properties. Wielding provides the haptic system with mechanical information related to the length of the object. Visual feedback can improve the accuracy of subsequent length judgments; visual perception supports haptic perception. Such cross-modal support is not the only route to perceptual learning. We present a dynamic touch task in which we replaced visual feedback with the instruction to strike the unseen object against an unseen surface following length judgment. This additional mechanical information improved subsequent length judgments. We propose a self-organizing perspective in which a single modality trains itself.

8.
Task-dependent information processing for the purpose of recognition or spatial perception is considered a principle common to all the main sensory modalities. Using a dual-task interference paradigm, we investigated the behavioral effects of independent information processing for shape identification and localization of object features within and across vision and touch. In Experiment 1, we established that color and texture processing (i.e., a “what” task) interfered with both visual and haptic shape-matching tasks and that mirror image and rotation matching (i.e., a “where” task) interfered with a feature-location-matching task in both modalities. In contrast, interference was reduced when a “where” interference task was embedded in a “what” primary task and vice versa. In Experiment 2, we replicated this finding within each modality, using the same interference and primary tasks throughout. In Experiment 3, the interference tasks were always conducted in a modality other than the primary task modality. Here, we found that resources for identification and spatial localization are independent of modality. Our findings further suggest that multisensory resources for shape recognition also involve resources for spatial localization. These results extend recent neuropsychological and neuroimaging findings and have important implications for our understanding of high-level information processing across the human sensory systems.

9.
Here, we used functional magnetic resonance imaging to investigate the multisensory processing of object shape in the human cerebral cortex and explored the role of mental imagery in such processing. Regions active bilaterally during both visual and haptic shape perception, relative to texture perception in the respective modality, included parts of the superior parietal gyrus, the anterior intraparietal sulcus, and the lateral occipital complex. Of these bimodal regions, the lateral occipital complexes preferred visual over haptic stimuli, whereas the parietal areas preferred haptic over visual stimuli. Whereas most subjects reported little haptic imagery during visual shape perception, experiences of visual imagery during haptic shape perception were common. Across subjects, ratings of the vividness of visual imagery strongly predicted the amount of haptic shape-selective activity in the right, but not in the left, lateral occipital complex. Thus, visual imagery appears to contribute to activation of some, but not all, visual cortical areas during haptic perception.

11.
Multisensory prior entry.
Despite 2 centuries of research, the question of whether attending to a sensory modality speeds the perception of stimuli in that modality has yet to be resolved. The authors highlight weaknesses inherent in this previous research and report the results of 4 experiments in which a novel methodology was used to investigate the effects on temporal order judgments (TOJs) of attending to a particular sensory modality or spatial location. Participants were presented with pairs of visual and tactile stimuli from the left and/or right at varying stimulus onset asynchronies and were required to make unspeeded TOJs regarding which stimulus appeared first. The results provide the strongest evidence to date for the existence of multisensory prior entry and support previous claims for attentional biases toward the visual modality and toward the right side of space. These findings have important implications for studies in many areas of human and animal cognition.
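
As background on how TOJ data of this kind are usually quantified, the Python sketch below fits a cumulative Gaussian to the proportion of "visual first" responses as a function of stimulus onset asynchrony (SOA) and reads off the point of subjective simultaneity (PSS); a shift of the PSS between attention conditions is the usual index of prior entry. The SOAs and response proportions are made-up placeholders, not data from this study.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def p_visual_first(soa_ms, pss, sigma):
    # Cumulative Gaussian psychometric function; by convention here a
    # positive SOA means the visual stimulus led the tactile one.
    return norm.cdf(soa_ms, loc=pss, scale=sigma)

soas = np.array([-120, -80, -40, 0, 40, 80, 120])                  # ms, placeholder
responses = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.98])   # placeholder proportions

(pss, sigma), _ = curve_fit(p_visual_first, soas, responses, p0=[0.0, 50.0])
# The PSS is the SOA at which the two stimuli are judged to have started
# simultaneously; comparing PSS estimates across attention conditions
# quantifies any prior-entry effect.
print(f"PSS = {pss:.1f} ms; slope parameter sigma = {sigma:.1f} ms")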

12.
S. Appelle, F. Gravetter. Perception, 1985, 14(6): 763-773
Although the 'oblique effect' (poorer performance on oblique orientations as compared to performance on vertical and horizontal orientations) is generally understood as a strictly visual phenomenon, a haptic oblique effect occurs for blindfolded subjects required to set a stimulus rod by hand. Because oblique effects are often attributed to the observer's experience with a predominantly horizontal and vertical environment, we assessed the effect of visual and haptic experience by providing subjects with modality-specific inspection periods to familiarize them with the more poorly judged obliques. Oblique error was significantly reduced in magnitude for judgments made by the modality of experience, and for judgments made across modalities. Rate of improvement, consistency of transfer, and the subjective reports of subjects indicate that this haptic oblique effect is more strongly influenced by visual experience and imagery than by haptic experience. It need not be interpreted as an effect based on factors intrinsic to the haptic modality.

13.
A number of experiments have demonstrated that the learning of braille is affected by a variety of factors. The present experiment was carried out to determine the relative importance of these variables for braille learning. The variables were stimulus set discriminability (high, low), study modality (visual, haptic), test modality (visual, haptic), study size (large vs standard braille cell), test size (large vs standard braille cell), study rate (5 or 10 seconds per item), and test rate (5 or 10 seconds per item). The results showed that study modality, stimulus set discriminability, and test modality were the variables mainly responsible for differences in performance during acquisition. Some practical and theoretical implications of these results are considered.

14.
A consistent finding in the literature concerning visual selection is that Ss will spend more time viewing unfamiliar stimuli than stimuli with which they have been familiarized. In the present experiment, the relationship between the magnitude of this familiarity effect and the level of stimulus incongruity was examined and found to be monotonic and increasing. In addition, amount of stimulus preexposure had no significant effect on the magnitude of the familiarity effect. Furthermore, there was no overall difference in Ss' preference for familiar and unfamiliar stimuli. Results are interpreted as supporting a theory of visual selection based on information-conflict resolution.

15.
Summary: Braille-like patterns were presented unilaterally to both tactual and visual modalities. The subject's task was to identify the location of three dots in a 2 × 3 six-dot pattern. Specifically, tactual versus visual presentation, dynamic versus static presentation of tactual stimuli, learning, and gender were examined in relation to cerebral hemispheric differences. Data were analyzed in terms of both the number of individual stimulus dots and the number of complete three-dot patterns correctly identified with regard to their spatial location. Although no reliable laterality differences were obtained in the tactual-static condition, owing to a significant interaction between learning and side of stimulus presentation, dot positions were reported reliably more accurately when presented in a dynamic fashion, i.e., scanned by the subject, to the right hand. For the visual modality, both correct reports of individual dot positions and correct reports of the entire patterns were reliably more accurate for stimulus presentations to the right visual field. Increased familiarity with the task, i.e., learning across trials, generally increased report accuracy, particularly for static presentations to the left hand. The effect of gender was negligible. The results are discussed in terms of their theoretical implications for differential cerebral hemispheric specializations based on differential processing strategies.

16.
Five-year-old children explored multidimensional objects either haptically or visually and then were tested for recognition with target and distractor items in either the same or the alternative modality. In Experiments 1 and 2, haptic, visual, and cross-modal recognition were all nearly perfect with familiar objects; haptic and visual recognition were also excellent with unfamiliar objects, but cross-modal recognition was less accurate. In Experiment 3, cross-modal recognition was also less accurate than within-mode recognition with familiar objects that were members of the same basic-level category. The results indicate that children's haptic recognition is remarkably good, that cross-modal recognition is otherwise constrained, and that cross-modal recognition may be accomplished differently for familiar and unfamiliar objects.

17.
Structurally, bodies of organisms can be described as tensegrity systems, fractally self-similar from whole-body to cellular levels. Sensory receptors embedded within such somatic tensegrity systems comprise haptic perceptual systems. Because the elements of the organismic tensegrity system are all interconnected, that system becomes the medium for haptic perception. Forces acting on any element of a somatic tensegrity system radiate throughout the entire system and thereby affect the entire haptic medium. All perception, in the ecological view, requires active sampling of stimulus arrays. Such active perception always involves overt body movements, orienting responses, and sensory organ adjustments (e.g., eye movements). Any and all movements occasioned in active perception affect the organismic tensegrity system, and therefore the haptic medium. A surprising consequence is that all active perception necessarily entails tensegrity-based haptic medium involvement, with implications for perceptual research.

18.
The present study examined the role of vision and haptics in memory for stimulus objects that vary along the dimension of curvature. Experiment 1 measured haptic-haptic (T-T) and haptic-visual (T-V) discrimination of curvature in a short-term memory paradigm, using 30-second retention intervals containing five different interpolated tasks. Results showed poorest performance when the interpolated tasks required spatial processing or movement, thereby suggesting that haptic information about shape is encoded in a spatial-motor representation. Experiment 2 compared visual-visual (V-V) and visual-haptic (V-T) short-term memory, again using 30-second delay intervals. The results of the ANOVA failed to show a significant effect of intervening activity. Intra-modal visual performance and cross-modal performance were similar. Comparing the four modality conditions (inter-modal V-T, T-V; intra-modal V-V, T-T, by combining the data of Experiments 1 and 2) in a global analysis showed a reliable interaction between intervening activity and experiment (modality). Although there appears to be a general tendency for spatial and movement activities to exert the most deleterious effects overall, the patterns are not identical when the initial stimulus is encoded haptically (Experiment 1) and visually (Experiment 2).

19.
A series of six experiments offers converging evidence that there is no fixed dominance hierarchy for the perception of textured patterns, and in doing so, highlights the importance of recognizing the multidimensionality of texture perception. The relative bias between vision and touch was reversed or considerably altered using both discrepancy and nondiscrepancy paradigms. This shift was achieved merely by directing observers to judge different dimensions of the same textured surface. Experiments 1, 4, and 5 showed relatively strong emphasis on visual as opposed to tactual cues regarding the spatial density of raised dot patterns. In contrast, Experiments 2, 3, and 6 demonstrated considerably greater emphasis on the tactual as opposed to visual cues when observers were instructed to judge the roughness of the same surfaces. The results of the experiments were discussed in terms of a modality appropriateness interpretation of intersensory bias. A weighted averaging model appeared to describe the nature of the intersensory integration process for both spatial density and roughness perception.

20.
In this paper, results of a free sorting task of 124 different material samples are analysed using multidimensional scaling. The relevant number of dimensions for haptic perception of materials is estimated to be 4. In addition, the haptic material space is calibrated by means of physical measurements of compressibility and roughness. The relation between objective and perceived compressibility and that between objective and perceived roughness could be described by an exponential function.
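
A minimal Python sketch of this kind of analysis pipeline is given below: free-sorting data are converted to a dissimilarity matrix (pairs grouped together less often are treated as more dissimilar), the matrix is embedded with metric MDS, and the relation between a physically measured and a perceived dimension is fitted with an exponential function. All numbers are randomly generated placeholders (far fewer items than the study's 124 samples); only the overall pipeline mirrors the abstract.

import numpy as np
from scipy.optimize import curve_fit
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_items, n_participants = 12, 20    # placeholder sizes

# Simulated free sorting: each participant assigns every item to one of 4 groups.
sortings = rng.integers(0, 4, size=(n_participants, n_items))

# Dissimilarity = proportion of participants who did NOT sort two items together.
co_assigned = np.zeros((n_items, n_items))
for groups in sortings:
    co_assigned += (groups[:, None] == groups[None, :])
dissimilarity = 1.0 - co_assigned / n_participants

# Four-dimensional metric MDS embedding of the dissimilarity matrix.
embedding = MDS(n_components=4, dissimilarity="precomputed",
                random_state=0).fit_transform(dissimilarity)
print("MDS coordinates:", embedding.shape)

# Exponential calibration of perceived against physically measured roughness
# (simulated here so that the fit is well behaved).
def exponential(x, a, b):
    return a * np.exp(b * x)

roughness_physical = rng.uniform(0.1, 2.0, n_items)
roughness_perceived = 0.8 * np.exp(0.9 * roughness_physical) + rng.normal(0, 0.1, n_items)
(a, b), _ = curve_fit(exponential, roughness_physical, roughness_perceived, p0=[1.0, 1.0])
print(f"Fitted exponential parameters: a = {a:.2f}, b = {b:.2f}")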
