Similar Literature
20 similar records found (search time: 15 ms)
1.
Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness impact the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio-haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child-friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio-haptic performance resulted in a reduction of perceptual uncertainty compared to auditory-only and haptic-only performance, as predicted by the maximum-likelihood estimation (MLE) model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focussing on low-vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult-like audio-haptic integration develops around 13–15 years of age and remains stable until late adulthood. Whereas early-blind individuals, even at the youngest ages, integrate audio-haptic information in an optimal fashion, late-blind individuals do not. Optimal integration in low-vision individuals follows a similar developmental trajectory to that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio-haptic integration to emerge, but that consistency of sensory information across development is key for the functional outcome of optimal multisensory integration.
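The MLE prediction referenced in this abstract has a simple closed form: each cue is weighted by its reliability (inverse variance), and the integrated estimate has lower variance than either cue alone. A minimal sketch (the function name and example noise values are illustrative, not taken from the study):

```python
def mle_prediction(sigma_a, sigma_h):
    """Predict bimodal (audio-haptic) noise and cue weights under
    maximum-likelihood estimation (MLE) integration.

    sigma_a, sigma_h: unimodal discrimination noise (standard deviations).
    """
    var_a, var_h = sigma_a ** 2, sigma_h ** 2
    # Optimal weights are inversely proportional to each cue's variance.
    w_a = var_h / (var_a + var_h)
    w_h = var_a / (var_a + var_h)
    # Predicted bimodal variance is smaller than either unimodal variance.
    var_ah = (var_a * var_h) / (var_a + var_h)
    return w_a, w_h, var_ah ** 0.5
```

With, say, auditory noise twice the haptic noise, the model assigns most weight to the haptic cue and predicts a bimodal threshold below the better unimodal one, which is the signature of optimal integration the study tests for.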

2.
The modality effect occurs when audio/visual instructions are superior to visual-only instructions. The effect was explored in two experiments conducted within a cognitive load theory framework. In Experiment 1, two groups of primary school students (N = 24) were presented with either audio/visual or visual-only instructions on how to read a temperature graph. The group presented with visual text and a diagram rather than audio text and a diagram was superior, reversing most previous data on the modality effect. It was hypothesized that the reason for the reversal was that the transitory auditory text component was too long to be processed easily in working memory compared to more permanent written information. Experiment 2 (N = 64) replicated the experiment with a reduced length of both auditory and visual text instructions. Results indicated a reinstatement of the modality effect, with audio/visual instructions proving superior to visual-only instructions. Copyright © 2011 John Wiley & Sons, Ltd.

3.
In four experiments, reducing lenses were used to minify vision and generate intersensory size conflicts between vision and touch. Subjects made size judgments using either visual matching or haptic matching. In visual matching, the subjects chose from a set of visible squares that progressively increased in size. In haptic matching, the subjects selected matches from an array of tangible wooden squares. In Experiment 1, it was found that neither sense dominated when subjects exposed to an intersensory discrepancy made their size estimates using either visual matching or haptic matching. Size judgments were nearly identical for conflict subjects making visual or haptic matches. Thus, matching modality did not matter in Experiment 1. In Experiment 2, it was found that subjects were influenced by the sight of their hands, which led to increases in the magnitude of their size judgments. Sight of the hands produced more accurate judgments, with subjects being better able to compensate for the illusory effects of the reducing lens. In two additional experiments, it was found that when more precise judgments were required and subjects had to generate their own size estimates, the response modality dominated. Thus, vision dominated in Experiment 3, where size judgments derived from viewing a metric ruler, whereas touch dominated in Experiment 4, where subjects made size estimates with a pincers posture of their hands. It is suggested that matching procedures are inadequate for assessing intersensory dominance relations. These results qualify the position (Hershberger & Misceo, 1996) that the modality of size estimates influences the resolution of intersensory conflicts. Only when required to self-generate more precise judgments did subjects rely on one sense, either vision or touch. Thus, task and attentional requirements influence dominance relations, and vision does not invariably prevail over touch.

4.
To investigate the relationship between visual acuity and cognitive function with aging, we compared low-vision and normally-sighted young and elderly individuals on a spatial working memory (WM) task. The task required subjects to memorise target locations on different matrices after perceiving them visually or haptically. The haptic modality was included as a control to examine the effect of aging on memory without the confounding effect of visual deficit. Overall, age and visual status did not interact to affect WM accuracy, suggesting that age does not exaggerate the effects of visual deprivation. Young participants performed better than the elderly only when the task required more operational processes (i.e., integration of information). Sighted participants outperformed the visually impaired regardless of testing modality, suggesting that the effect of the visual deficit is not confined to only the most peripheral levels of information processing. These findings suggest that vision, being the primary sensory modality, tends to shape the general supramodal mechanisms of memory.

5.
A number of new psycholinguistic variables have been proposed in recent years within the embodied cognition framework: modality experience rating (i.e., the relationship between words and images of a particular perceptual modality: visual, auditory, haptic, etc.), manipulability (the necessity for an object to interact with human hands in order to perform its function), and vertical spatial localization. However, it is not clear how these new variables are related to each other and to such traditional variables as imageability, age of acquisition (AoA), and word frequency. In this article, normative data on the modality (visual, auditory, haptic, olfactory, and gustatory) ratings, vertical spatial localization of the object, manipulability, imageability, age of acquisition, and subjective frequency for 506 Russian nouns are presented. The strongest correlations were observed between olfactory and gustatory modalities (.81), visual modality and imageability (.78), and haptic modality and manipulability (.70). Other modalities also correlated significantly with imageability: olfactory (.35), gustatory (.24), and haptic (.67). Factor analysis divided the variables into four groups: visual and haptic modality ratings were combined with imageability, manipulability, and AoA (the first factor); word length, frequency, and AoA formed the second factor; olfactory modality was united with gustatory (the third factor); and spatial localization alone constituted the fourth factor. The present norms of imageability and AoA are consistent with previous ones, as correlation analysis revealed. The complete database can be downloaded from the supplementary material.

7.
Applying optimal stimulation theory, the present study explored the development of sustained attention as a dynamic process. It examined the interaction of modality and temperament over time in children and adults. Second-grade children and college-aged adults performed auditory and visual vigilance tasks. Using the Carey temperament questionnaires (S. C. McDevitt & W. B. Carey, 1995), the authors classified participants according to temperament composites of reactivity and task orientation. In a preliminary study, tasks were equated across age and modality using d' matching procedures. In the main experiment, 48 children and 48 adults performed these calibrated tasks. The auditory task proved more difficult for both children and adults. Intermodal relations changed with age: Performance across modality was significantly correlated for children but not for adults. Although temperament did not significantly predict performance in adults, it did for children. The temperament effects observed in children--specifically in those with the composite of reactivity--occurred in connection with the auditory task and in a manner consistent with theoretical predictions derived from optimal stimulation theory.
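The d' matching procedure mentioned above rests on the standard signal-detection sensitivity index, d' = z(hit rate) − z(false-alarm rate): tasks are calibrated until they yield comparable d' values across age and modality. A minimal sketch using only the standard library (the rates in the example are illustrative):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hits) minus z(false alarms).

    Both rates must lie strictly between 0 and 1 (in practice, extreme
    rates are corrected, e.g. with a log-linear adjustment, before use).
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

Equal hit and false-alarm rates give d' = 0 (no sensitivity); higher hit rates with lower false-alarm rates give larger d', so two tasks producing the same d' are matched in discriminability.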

8.
Maintaining balance is fundamentally a multisensory process, with visual, haptic, and proprioceptive information all playing an important role in postural control. The current project examined the interaction between such sensory inputs, manipulating visual (presence versus absence), haptic (presence versus absence of contact with a stable or unstable finger support surface), and proprioceptive (varying stance widths, including shoulder width stance, Chaplin [heels together, feet splayed at approximately 60°] stance, feet together stance, and tandem stance) information. Analyses of mean velocity of the Centre of Pressure (CoP) revealed significant interactions between these factors, with stability gains observed as a function of increasing sensory information (e.g., visual, haptic, visual + haptic), although the nature of these gains was modulated by the proprioceptive information and the reliability of the haptic support surface (i.e., unstable versus stable finger supports). Subsequent analyses on individual difference parameters (e.g., height, leg length, weight, and areas of base of support) revealed that these variables were significantly related to postural measures across experimental conditions. These findings are discussed relative to their implications for multisensory postural control, and with respect to inverted pendulum models of balance.
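Mean CoP velocity, the dependent measure analysed above, is conventionally computed as total sway-path length divided by trial duration. A minimal sketch (the coordinate units and sampling rate are assumptions, not details from the study):

```python
import math

def cop_mean_velocity(xs, ys, fs):
    """Mean CoP velocity: total sway-path length / trial duration.

    xs, ys: CoP coordinates (e.g., in mm) from a force platform,
    sampled at fs Hz. Returns velocity in the same spatial unit per second.
    """
    # Sum Euclidean distances between consecutive samples.
    path = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(zip(xs, ys),
                                             zip(xs[1:], ys[1:])))
    duration = (len(xs) - 1) / fs  # seconds spanned by the recording
    return path / duration
```

Larger values indicate faster ongoing postural corrections, which is why reductions in mean CoP velocity are read as stability gains when extra sensory information is available.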

9.
The present study investigated whether memory for a room-sized spatial layout learned through auditory localization of sounds exhibits orientation dependence similar to that observed for spatial memory acquired from stationary viewing of the environment. Participants learned spatial layouts by viewing objects or localizing sounds and then performed judgments of relative direction among remembered locations. The results showed that direction judgments following auditory learning were performed most accurately at a particular orientation in the same way as were those following visual learning, indicating that auditorily encoded spatial memory is orientation dependent. In combination with previous findings that spatial memories derived from haptic and proprioceptive experiences are also orientation dependent, the present finding suggests that orientation dependence is a general functional property of human spatial memory independent of learning modality.

10.
False recognition of an item that is not presented (the lure) can occur when participants study and are tested on their recognition of items related to the lure. False recognition is reduced when the study and test modalities are congruent (e.g., both visual) rather than different (e.g., visual study and auditory test). The present study examined whether such a congruency effect occurs for haptic and auditory modalities. After studying items presented haptically or auditorily, participants took a haptic or auditory recognition test. False recognition was reduced when both the study and test were haptic, but not when the study was auditory and the test was haptic. These results indicate that cues encoded through the haptic modality can reduce false recognition.

11.
Perceptual learning was used to study potential transfer effects in a duration discrimination task. Subjects were trained to discriminate between two empty temporal intervals marked with auditory beeps, using a two-alternative forced choice paradigm. The major goal was to examine whether perceptual learning would generalize to empty intervals that have the same duration but are marked by visual flashes. The experiment also included longer intervals marked with auditory beeps and filled auditory intervals of the same duration as the trained interval, in order to examine whether perceptual learning would generalize to these conditions within the same sensory modality. In contrast to previous findings showing a transfer from the haptic to the auditory modality, the present results do not indicate a transfer from the auditory to the visual modality; but they do show transfers within the auditory modality.

12.
Acta Psychologica (2013), 142(2), 184–194
Older adults are known to have reduced inhibitory control and therefore to be more distractible than young adults. Recently, we have proposed that sensory modality plays a crucial role in age-related distractibility. In this study, we examined age differences in vulnerability to unimodal and cross-modal visual and auditory distraction. A group of 24 younger (mean age = 21.7 years) and 22 older adults (mean age = 65.4 years) performed visual and auditory n-back tasks while ignoring visual and auditory distraction. Whereas reaction time data indicated that both young and older adults are particularly affected by unimodal distraction, accuracy data revealed that older adults, but not younger adults, are vulnerable to cross-modal visual distraction. These results support the notion that age-related distractibility is modality dependent.

13.
It is still unclear how the visual system accurately perceives the size of objects at different distances. One suggestion, dating back to Berkeley's famous essay, is that vision is calibrated by touch. If so, we may expect different mechanisms to be involved for near, reachable distances and far, unreachable distances. To study how the haptic system calibrates vision, we measured size constancy in children (from 6 to 16 years of age) and adults, at various distances. At all ages, accuracy of visual size perception changes with distance, and is almost veridical inside the haptic workspace, in agreement with the idea that the haptic system acts to calibrate visual size perception. Outside this space, systematic errors occurred, which varied with age. Adults tended to overestimate the visual size of distant objects (over-compensation for distance), while children younger than 14 underestimated their size (under-compensation). At 16 years of age there seemed to be a transition point, with veridical perception of distant objects. When young subjects were allowed to touch the object inside the haptic workspace, the visual biases disappeared, while older subjects showed multisensory integration. All results are consistent with the idea that the haptic system can be used to calibrate visual size perception during development, more effectively within than outside the haptic workspace, and that the calibration mechanisms are different in children than in adults.

14.
Processing multiple complex features to create cohesive representations of objects is an essential aspect of both the visual and auditory systems. It is currently unclear whether these processes are entirely modality specific or whether there are amodal processes that contribute to complex object processing in both vision and audition. We investigated this using a dual-stream target detection task in which two concurrent streams of novel visual or auditory stimuli were presented. We manipulated the degree to which each stream taxed processing conjunctions of complex features. In two experiments, we found that concurrent visual tasks that both taxed conjunctive processing strongly interfered with each other but that concurrent auditory and visual tasks that both taxed conjunctive processing did not. These results suggest that resources for processing conjunctions of complex features within vision and audition are modality specific.

15.
It is suggested that the distinction between global versus local processing styles exists across sensory modalities. Activation of one way of processing in one modality should affect processing styles in a different modality. In 12 studies, auditory, haptic, gustatory or olfactory global versus local processing was induced, and participants were tested with a measure of their global versus local visual attention; the content of this measure was unrelated to the inductions. In a different set of 4 studies, the effect of local versus global visual processing on the way people listen to a poem or touch, taste, and smell objects was examined. In all experiments, global/local processing in 1 modality shifted to global/local processing in the other modality. A final study found more pronounced shifts when compatible processing styles were induced in 2 rather than 1 modality. Moreover, the study explored mediation by relative right versus left hemisphere activation as measured with the line bisection task and accessibility of semantic associations. It is concluded that the effects reflect procedural rather than semantic priming effects that occurred out of participants' awareness. Because global/local processing has been shown to affect higher order processing, future research may activate processing styles in other sensory modalities to produce similar effects. Furthermore, because global/local processing is triggered by a variety of real world variables, one may explore effects on other sensory modalities than vision. The results are consistent with the global versus local processing model, a systems account (GLOMOsys; Förster & Dannenberg, 2010).

16.
Preschoolers who explore objects haptically often fail to recognize those objects in subsequent visual tests. This suggests that children may represent qualitatively different information in vision and haptics and/or that children's haptic perception may be poor. In this study, 72 children (2½-5 years of age) and 20 adults explored unfamiliar objects either haptically or visually and then chose a visual match from among three test objects, each matching the exemplar on one perceptual dimension. All age groups chose shape-based matches after visual exploration. Both 5-year-olds and adults also chose shape-based matches after haptic exploration, but younger children did not match consistently in this condition. Certain hand movements performed by children during haptic exploration reliably predicted shape-based matches but occurred at very low frequencies. Thus, younger children's difficulties with haptic-to-visual information transfer appeared to stem from their failure to use their hands to obtain reliable haptic information about objects.

17.
Statistical learning (SL), sensitivity to probabilistic regularities in sensory input, has been widely implicated in cognitive and perceptual development. Little is known, however, about the underlying mechanisms of SL and whether they undergo developmental change. One way to approach these questions is to compare SL across perceptual modalities. While a decade of research has compared auditory and visual SL in adults, we present the first direct comparison of visual and auditory SL in infants (8–10 months). Learning was evidenced in both perceptual modalities but with opposite directions of preference: Infants in the auditory condition displayed a novelty preference, while infants in the visual condition showed a familiarity preference. Interpreting these results within the Hunter and Ames model (1988), where familiarity preferences reflect a weaker stage of encoding than novelty preferences, we conclude that there is weaker learning in the visual modality than the auditory modality for this age. In addition, we found evidence of different developmental trajectories across modalities: Auditory SL increased while visual SL did not change for this age range. The results suggest that SL is not an abstract, amodal ability; for the types of stimuli and statistics tested, we find that auditory SL precedes the development of visual SL and is consistent with recent work comparing SL across modalities in older children.
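The "probabilistic regularities" tracked in statistical-learning studies of this kind are typically transitional probabilities between adjacent elements, P(next | current). A minimal sketch of how such statistics are computed from a stimulus sequence (the example sequence in the usage note is illustrative):

```python
from collections import Counter

def transitional_probs(sequence):
    """Return P(b | a) for every adjacent pair (a, b) in the sequence.

    Works on any ordered sequence of hashable elements (syllables,
    shapes, tones, ...).
    """
    pairs = Counter(zip(sequence, sequence[1:]))   # count adjacent pairs
    firsts = Counter(sequence[:-1])                # count pair-initial items
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}
```

For the sequence "ABAC", A is followed by B half the time and by C half the time, while B is always followed by A; infants' discrimination of high- versus low-probability transitions is what indexes learning.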

18.
Introduction: Most research to date on human categorization ability has concentrated on the visual and auditory domains. However, a limited, but non-negligible, range of studies has also examined the categorization of familiar or unfamiliar (i.e., novel) objects in the haptic (i.e., tactile-kinesthetic) modality. Objective: In this paper, we describe how we developed a new set of parametrically defined objects, called widgets, that can be used as 3D (or 2D) materials for haptic (or visual) categorization purposes. Method: Widgets are unfamiliar complex 3D shapes with an ovoid body and four types of elements attached to it (eyes, tail, crest, and legs). The stimulus set comprises 24 objects divided into four categories of six exemplars each (the files used for 3D printing are provided as Supplementary Material). Results: We assessed and demonstrated the validity of our stimulus set by conducting two separate studies of haptic and visual categorization, involving participants of different ages: young adults (Study 1), and children and adolescents (Study 2). Results showed that humans can categorize our 3D complex shapes on the basis of both haptically and visually perceived similarities in shape attributes. Conclusion: Widgets are very useful new experimental stimuli for categorization studies using 3D printing technology.

19.
In adults, decisions based on multisensory information can be faster and/or more accurate than those relying on a single sense. However, this finding varies significantly across development. Here we studied speeded responding to audio-visual targets, a key multisensory function whose development remains unclear. We found that when judging the locations of targets, children aged 4 to 12 years and adults had faster and less variable response times given auditory and visual information together compared with either alone. Comparison of response time distributions with model predictions indicated that children at all ages were integrating (pooling) sensory information to make decisions but that both the overall speed and the efficiency of sensory integration improved with age. The evidence for pooling comes from comparison with the predictions of Miller's seminal 'race model', as well as with a major recent extension of this model and a comparable 'pooling' (coactivation) model. The findings and analyses can reconcile results from previous audio-visual studies, in which infants showed speed gains exceeding race model predictions in a spatial orienting task (Neil et al., 2006) but children below 7 years did not in speeded reaction time tasks (e.g. Barutchu et al., 2009). Our results provide new evidence for early and sustained abilities to integrate visual and auditory signals for spatial localization from a young age.
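Miller's race-model inequality, used above to diagnose pooling, bounds the bimodal response-time distribution by the sum of the unimodal cumulative distributions: F_AV(t) ≤ F_A(t) + F_V(t) at every time t. A minimal sketch with empirical CDFs (the RT samples in the test are illustrative, not data from the study):

```python
def ecdf(sample, t):
    """Empirical probability that a response time in `sample` is <= t."""
    return sum(rt <= t for rt in sample) / len(sample)

def violates_race_model(rt_av, rt_a, rt_v, t):
    """True if the bimodal RT distribution at time t exceeds Miller's
    race-model bound F_A(t) + F_V(t) (capped at 1).

    A violation means bimodal responses are faster than any race between
    independent unimodal channels could produce, i.e. evidence for
    pooling (coactivation).
    """
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return ecdf(rt_av, t) > bound
```

In practice the inequality is evaluated at many quantiles of the RT distributions; a violation at any of them rules out the race (separate-activation) account for those data.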

20.
The phenomena of prismatically induced "visual capture" and adaptation of the hand were compared. In Experiment 1, it was demonstrated that when the subject's hand was transported for him by the experimenter (passive movement) immediately preceding the measure of visual capture, the magnitude of the immediate shift in felt limb position (visual capture) was enhanced relative to when the subject moved the hand himself (active movement). In Experiment 2, where the dependent measure was adaptation of the prism-exposed hand, the opposite effect was produced by the active/passive manipulation. It appears, then, that different processes operate to produce visual capture and adaptation. It was speculated that visual capture represents an immediate weighting of visual over proprioceptive input as a result of the greater precision of vision and/or the subject's tendency to direct his attention more heavily to this modality. In contrast, prism adaptation is probably a recalibration of felt limb position in the direction of vision, induced by the presence of a registered discordance between visual and proprioceptive inputs.
