Similar Literature
20 similar documents found.
1.
Augmented feedback, provided by coaches or displays, is a well-established strategy to accelerate motor learning. Frequent terminal feedback and concurrent feedback have been shown to be detrimental for simple motor task learning but supportive for complex motor task learning. However, conclusions on optimal feedback strategies have been mainly drawn from studies on artificial laboratory tasks with visual feedback only. Therefore, the authors compared the effectiveness of learning a complex, 3-dimensional rowing-type task with either concurrent visual, auditory, or haptic feedback to self-controlled terminal visual feedback. Results revealed that terminal visual feedback was most effective because it emphasized the internalization of task-relevant aspects. In contrast, concurrent feedback fostered the correction of task-irrelevant errors, which hindered learning. The concurrent visual and haptic feedback group performed much better during training with the feedback than in nonfeedback trials. Auditory feedback based on sonification of the movement error was not practical for training the 3-dimensional movement for most participants. Concurrent multimodal feedback in combination with terminal feedback may be most effective, especially if the feedback strategy is adapted to individual preferences and skill level.

2.
A substantial body of research has examined the speed-accuracy tradeoff captured by Fitts’ law, demonstrating increases in movement time that occur as aiming tasks are made more difficult by decreasing target width and/or increasing the distance between targets. Yet serial aiming movements guided by internal spatial representations, rather than by visual views of targets, have not been examined in this manner, and the value of confirmatory feedback via different sensory modalities within this paradigm is unknown. Here we examined goal-directed serial aiming movements (tapping back and forth between two targets), wherein targets were visually unavailable during the task. However, confirmatory feedback (auditory, haptic, visual, and bimodal combinations of each) was delivered upon each target acquisition, in a counterbalanced, within-subjects design. Each participant performed the aiming task with their pointer finger, represented within an immersive virtual environment as a 1 cm white sphere, while wearing a head-mounted display. Despite visual target occlusion, movement times increased in accordance with Fitts’ law. Though Fitts’ law captured performance for each of the sensory feedback conditions, the slopes differed. The effect of increasing difficulty on movement times was smallest in the haptic condition, suggesting more efficient processing of confirmatory haptic feedback during aiming movements guided by internal spatial representations.
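For reference, the speed-accuracy tradeoff invoked above is usually written in the Shannon formulation of Fitts' law; the equation below is the standard textbook form rather than anything taken from this particular study:

\[ \mathrm{MT} = a + b \log_2\left(\frac{D}{W} + 1\right) \]

Here MT is movement time, D is the distance between targets, W is the target width, and a and b are empirically fitted constants. The slope b indexes how strongly each added bit of difficulty lengthens the movement; this slope is the quantity reported above as differing across the haptic, auditory, and visual feedback conditions.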

3.
Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks that are representative of everyday learning scenarios: in a fully unconstrained task, objects were freely categorized; in a semi-constrained task, exactly three groups had to be created; and in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
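The perceptual-space analysis mentioned above rests on multidimensional scaling (MDS) of pairwise similarity ratings. The sketch below illustrates that step in Python; it is not the authors' code, and the object count, rating scale, and variable names are assumptions made purely for illustration.

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_objects = 21                      # hypothetical number of objects

# Pairwise similarity ratings (1 = very dissimilar ... 7 = identical),
# averaged over participants; filled here with random placeholder values.
sim = rng.uniform(1, 7, size=(n_objects, n_objects))
sim = (sim + sim.T) / 2             # averaged ratings are symmetric
np.fill_diagonal(sim, 7)            # an object is maximally similar to itself

# Convert similarities to dissimilarities and embed into three dimensions,
# matching the three-dimensional parametric object space.
dissim = sim.max() - sim
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
perceptual_space = mds.fit_transform(dissim)   # shape: (n_objects, 3)
print(perceptual_space.shape)

With a precomputed dissimilarity matrix, MDS places the objects in a low-dimensional space whose inter-object distances approximate the rated dissimilarities; this recovered topology is what the abstract compares against the physical parameter space.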

4.
The presence of complementary information across multiple sensory or motor modalities during learning, referred to as multimodal enrichment, can markedly benefit learning outcomes. Why is this? Here, we integrate cognitive, neuroscientific, and computational approaches to understanding the effectiveness of enrichment and discuss recent neuroscience findings indicating that crossmodal responses in sensory and motor brain regions causally contribute to the behavioral benefits of enrichment. The findings provide novel evidence for multimodal theories of enriched learning, challenge assumptions of longstanding cognitive theories, and provide counterevidence to unimodal neurobiologically inspired theories. Enriched educational methods are likely effective not only because they may engage greater levels of attention or deeper levels of processing, but also because multimodal interactions in the brain can enhance learning and memory.

5.
False recognition of an item that is not presented (the lure) can occur when participants study and are tested on their recognition of items related to the lure. False recognition is reduced when the study and test modalities are congruent (e.g., both visual) rather than different (e.g., visual study and auditory test). The present study examined whether such a congruency effect occurs for haptic and auditory modalities. After studying items presented haptically or auditorily, participants took a haptic or auditory recognition test. False recognition was reduced when both the study and test were haptic, but not when the study was auditory and the test was haptic. These results indicate that cues encoded through the haptic modality can reduce false recognition.

6.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and present new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities.

7.
The computations involved in statistical learning have long been debated. Here, we build on work suggesting that a basic memory process, chunking, may account for the processing of statistical regularities into larger units. Drawing on methods from the memory literature, we developed a novel paradigm to test statistical learning by leveraging a robust phenomenon observed in serial recall tasks: that short-term memory is fundamentally shaped by long-term distributional learning. In the statistically induced chunking recall (SICR) task, participants are exposed to an artificial language, using a standard statistical learning exposure phase. Afterward, they recall strings of syllables that either follow the statistics of the artificial language or comprise the same syllables presented in a random order. We hypothesized that if individuals had chunked the artificial language into word-like units, then the statistically structured items would be more accurately recalled relative to the random controls. Our results demonstrate that SICR effectively captures learning in both the auditory and visual modalities, with participants displaying significantly improved recall of the statistically structured items and even recalling specific trigram chunks from the input. SICR also exhibits greater test–retest reliability in the auditory modality, and greater sensitivity to individual differences in both modalities, than the standard two-alternative forced-choice task. These results thereby provide key empirical support to the chunking account of statistical learning and contribute a valuable new tool to the literature.
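The exposure phase described above relies on the statistical structure of an artificial language, in which syllable-to-syllable transitional probabilities are high within words and low across word boundaries. The following sketch uses hypothetical trisyllabic words (not the authors' materials) to show how such probabilities are computed from a continuous exposure stream:

from collections import Counter
import random

words = ["tupiro", "golabu", "bidaku", "padoti"]   # hypothetical artificial-language words

# Build a continuous exposure stream by concatenating the words in random order.
random.seed(0)
stream = [s for _ in range(100)
          for w in random.sample(words, len(words))
          for s in (w[0:2], w[2:4], w[4:6])]

# Transitional probability P(B | A) = count(A followed by B) / count(A with a successor).
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def tp(a, b):
    return pair_counts[(a, b)] / syll_counts[a]

print(tp("tu", "pi"))   # within-word transition: 1.0
print(tp("ro", "go"))   # across-boundary transition: roughly 1/3 or lower

Recall of strings that respect this structure is then compared with recall of the same syllables in random order, which is the contrast the SICR task scores.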

8.
The retention of discrete movements was examined under augmented and minimal feedback conditions. The augmented condition was presented for both the criterion and recall movements and consisted of providing visual, auditory, and heightened proprioceptive cues with each movement. Under minimal conditions, no visual, auditory, or heightened proprioceptive cues were provided. Absolute and constant error revealed that recall accuracy was improved under augmented conditions. The retention interval × feedback condition interaction failed to reach significance for both sources of error, indicating that there was no evidence of differential decay rates. Variable error appeared to be an informative index of forgetting. The results were interpreted to be in support of the view that a memory trace is imprinted with feedback from all modalities and that the amount of such feedback determines memory trace strength.
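The error scores named above are the standard measures of reproduction accuracy in motor memory studies. Their usual definitions, given here for clarity rather than quoted from the paper, are, for n recall attempts x_i of a criterion movement T:

\[ \mathrm{CE} = \frac{1}{n}\sum_{i=1}^{n}(x_i - T), \qquad \mathrm{AE} = \frac{1}{n}\sum_{i=1}^{n}|x_i - T|, \qquad \mathrm{VE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2} \]

Constant error (CE) captures directional bias, absolute error (AE) overall accuracy, and variable error (VE) the consistency of recall, which is the sense in which VE is treated above as an index of forgetting.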

9.
A number of new psycholinguistic variables have been proposed in recent years within the embodied cognition framework: modality experience rating (i.e., the relationship between words and images of a particular perceptual modality, such as visual, auditory, or haptic), manipulability (the necessity for an object to be manipulated by human hands in order to perform its function), and vertical spatial localization. However, it is not clear how these new variables are related to each other and to such traditional variables as imageability, AoA, and word frequency. In this article, normative data on modality ratings (visual, auditory, haptic, olfactory, and gustatory), vertical spatial localization of the object, manipulability, imageability, age of acquisition, and subjective frequency are presented for 506 Russian nouns. The strongest correlations were observed between the olfactory and gustatory modalities (.81), the visual modality and imageability (.78), and the haptic modality and manipulability (.70). Other modalities also correlate significantly with imageability: olfactory (.35), gustatory (.24), and haptic (.67). Factor analysis divided the variables into four groups: visual and haptic modality ratings were combined with imageability, manipulability, and AoA (the first factor); word length, frequency, and AoA formed the second factor; the olfactory modality was united with the gustatory modality (the third factor); and spatial localization alone constituted the fourth factor. The present norms of imageability and AoA are consistent with previous norms, as a correlation analysis revealed. The complete database can be downloaded from the supplementary material.

10.
This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.

11.
A perception of coherent motion can be obtained in an otherwise ambiguous or illusory visual display by directing one's attention to a feature and tracking it. We demonstrate an analogous auditory effect in two separate sets of experiments. The temporal dynamics associated with the attention-dependent auditory motion closely matched those previously reported for attention-based visual motion. Since attention-based motion mechanisms appear to exist in both modalities, we also tested for multimodal (audiovisual) attention-based motion, using stimuli composed of interleaved visual and auditory cues. Although subjects were able to track a trajectory using cues from both modalities, no one spontaneously perceived "multimodal motion" across both visual and auditory cues. Rather, they reported motion perception only within each modality, thereby revealing a spatiotemporal limit on putative cross-modal motion integration. Together, results from these experiments demonstrate the existence of attention-based motion in audition, extending current theories of attention-based mechanisms from visual to auditory systems.

12.
In traditional theories of perceptual learning, sensory modalities support one another. A good example comes from research on dynamic touch, the wielding of an unseen object to perceive its properties. Wielding provides the haptic system with mechanical information related to the length of the object. Visual feedback can improve the accuracy of subsequent length judgments; visual perception supports haptic perception. Such cross-modal support is not the only route to perceptual learning. We present a dynamic touch task in which we replaced visual feedback with the instruction to strike the unseen object against an unseen surface following length judgment. This additional mechanical information improved subsequent length judgments. We propose a self-organizing perspective in which a single modality trains itself.

13.
Tasks that require tracking visual information reveal the severe limitations of our capacity to attend to multiple objects that vary in time and space. Although these limitations have been extensively characterized in the visual domain, very little is known about tracking information in other sensory domains. Does tracking auditory information exhibit characteristics similar to those of tracking visual information, and to what extent do these two tracking tasks draw on the same attention resources? We addressed these questions by asking participants to perform either single or dual tracking tasks from the same (visual–visual) or different (visual–auditory) perceptual modalities, with the difficulty of the tracking tasks being manipulated across trials. The results revealed that performing two concurrent tracking tasks, whether they were in the same or different modalities, affected tracking performance as compared to performing each task alone (concurrence costs). Moreover, increasing task difficulty also led to increased costs in both the single-task and dual-task conditions (load-dependent costs). The comparison of concurrence costs between visual–visual and visual–auditory dual-task performance revealed slightly greater interference when two visual tracking tasks were paired. Interestingly, however, increasing task difficulty led to equivalent costs for visual–visual and visual–auditory pairings. We concluded that visual and auditory tracking draw largely, though not exclusively, on common central attentional resources.

14.
How do people automatize their dual-task performance through bottleneck bypassing (i.e., accomplish parallel processing of the central stages of two tasks)? In the present work we addressed this question, evaluating the impact of sensory–motor modality compatibility—the similarity in modality between the stimulus and the consequences of the response. We hypothesized that incompatible sensory–motor modalities (e.g., visual–vocal) create conflicts within modality-specific working memory subsystems, and therefore predicted that tasks producing such conflicts would be performed less automatically after practice. To probe for automaticity, we used a transfer psychological refractory period (PRP) procedure: Participants were first trained on a visual task (Exp. 1) or an auditory task (Exp. 2) by itself, which was later presented as Task 2, along with an unpracticed Task 1. The Task 1–Task 2 sensory–motor modality pairings were either compatible (visual–manual and auditory–vocal) or incompatible (visual–vocal and auditory–manual). In both experiments we found converging indicators of bottleneck bypassing (small dual-task interference and a high rate of response reversals) for compatible sensory–motor modalities, but indicators of bottlenecking (large dual-task interference and few response reversals) for incompatible sensory–motor modalities. Relatedly, the proportion of individuals able to bypass the bottleneck was high for compatible modalities but very low for incompatible modalities. We propose that dual-task automatization is within reach when the tasks rely on codes that do not compete within a working memory subsystem.

15.
Nabeta, T., Ono, F., & Kawahara, J. (2003). Perception, 32(11), 1351-1358.
Under incidental learning conditions, spatial layouts can be acquired implicitly and facilitate visual search (contextual-cueing effect). We examined whether the contextual-cueing effect is specific to the visual modality or transfers to the haptic modality. The participants performed 320 (experiment 1) or 192 (experiment 2) visual search trials based on a typical contextual-cueing paradigm, followed by haptic search trials in which half of the trials had layouts used in the previous visual search trials. The visual contextual-cueing effect was obtained in the learning phase. More importantly, the effect was transferred from visual to haptic searches; there was greater facilitation of haptic search trials when the spatial layout was the same as in the previous visual search trials, compared with trials in which the spatial layout differed from those in the visual search. This suggests that a common spatial memory is used to allocate focused attention in both the visual and haptic modalities.

16.
Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within and across modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.

17.
Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.

18.
Perceptual learning was used to study potential transfer effects in a duration discrimination task. Subjects were trained to discriminate between two empty temporal intervals marked with auditory beeps, using a two-alternative forced-choice paradigm. The major goal was to examine whether perceptual learning would generalize to empty intervals that have the same duration but are marked by visual flashes. The experiment also included longer intervals marked with auditory beeps and filled auditory intervals of the same duration as the trained interval, in order to examine whether perceptual learning would generalize to these conditions within the same sensory modality. In contrast to previous findings showing a transfer from the haptic to the auditory modality, the present results do not indicate a transfer from the auditory to the visual modality, but they do show transfers within the auditory modality.

19.
This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10–12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.

20.
It is not clear what role visual information plays in the development of space perception. It has previously been shown that, in the absence of vision, both the ability to judge orientation in the haptic modality and the ability to bisect intervals in the auditory modality are severely compromised (Gori, Sandini, Martinoli, & Burr, 2010; Gori, Sandini, Martinoli, & Burr, 2014). Here we also report, for the first time, a strong deficit in proprioceptive reproduction and auditory distance evaluation in early-blind children and adults. Interestingly, the deficit is not present in a small group of adults with acquired visual disability. Our results support the idea that, in the absence of vision, the auditory and proprioceptive spatial representations may be delayed or drastically weakened owing to the lack of visual calibration of the auditory and haptic modalities during the critical period of development.
