Similar Articles
20 similar articles found (search time: 31 ms)
1.
Gallace A, Tan HZ, Spence C. Perception, 2006, 35(2): 247-266
A large body of research now supports the claim that two different and dissociable processes are involved in making numerosity judgments regarding visual stimuli: subitising (fast and nearly errorless) for up to 4 stimuli, and counting (slow and error-prone) when more than 4 stimuli are presented. We studied tactile numerosity judgments for combinations of 1-7 vibrotactile stimuli presented simultaneously over the body surface. In experiment 1, the stimuli were presented once, while in experiment 2 conditions of single presentation and repeated presentation of the stimulus were compared. Neither experiment provided any evidence for a discontinuity in the slope of either the RT or error data, suggesting that subitisation does not occur for tactile stimuli. By systematically varying the intensity of the vibrotactile stimuli in experiment 3, we were able to demonstrate that participants were not simply using the 'global intensity' of the whole tactile display to make their tactile numerosity judgments, but were, instead, using information concerning the number of tactors activated. The results of the three experiments reported here are discussed in relation to current theories of counting and subitising, and potential implications for the design of tactile user interfaces are highlighted.

2.
Multisensory integration of nonspatial features between vision and touch was investigated by examining the effects of redundant signals of visual and tactile inputs. In the present experiments, visual letter stimuli and/or tactile letter stimuli were presented, which participants were asked to identify as quickly as possible. The results of Experiment 1 demonstrated faster reaction times for bimodal stimuli than for unimodal stimuli (the redundant signals effect, RSE). The RSE was due to coactivation of figural representations from the visual and tactile modalities. This coactivation did not occur for a simple stimulus detection task (Experiment 2) or for bimodal stimuli with the same semantic information but different physical stimulus features (Experiment 3). The findings suggest that the integration process might occur at a relatively early stage of object identification, prior to the decision level.
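Coactivation claims of this kind conventionally rest on Miller's (1982) race-model inequality: redundant-target responses are faster than any race between independent unimodal processes can explain whenever the bimodal RT distribution exceeds the sum of the two unimodal distributions. A minimal sketch of such a test on hypothetical RT samples (illustrative values, not data from this study):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution function of RTs evaluated at t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_v, rt_t, rt_vt, n_points=50):
    """Largest positive deviation of F_VT(t) above the race bound F_V(t) + F_T(t).

    A positive value at any t violates Miller's (1982) race-model inequality
    and is commonly taken as evidence for coactivation.
    """
    lo = min(map(min, (rt_v, rt_t, rt_vt)))
    hi = max(map(max, (rt_v, rt_t, rt_vt)))
    t = np.linspace(lo, hi, n_points)
    bound = np.minimum(ecdf(rt_v, t) + ecdf(rt_t, t), 1.0)
    return float(np.max(ecdf(rt_vt, t) - bound))

# Hypothetical reaction-time samples (ms): bimodal responses are faster
# than either unimodal condition, producing a clear violation.
rng = np.random.default_rng(0)
rt_v = rng.normal(450, 40, 200)   # visual-only trials
rt_t = rng.normal(460, 40, 200)   # tactile-only trials
rt_vt = rng.normal(380, 35, 200)  # bimodal trials
print(race_model_violation(rt_v, rt_t, rt_vt) > 0)
```

In published analyses the deviation is usually evaluated at several quantiles with a statistical test across participants rather than read off a single maximum.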

3.
The aim of this study was to investigate the extent to which tactile information that is unavailable for full conscious report can be accessed using partial-report procedures. In Experiment 1, participants reported the total number of tactile stimuli (up to six) presented simultaneously to their fingertips (numerosity judgment task). In another condition, after being presented with the tactile display, they had to detect whether or not the position indicated by a (visual or tactile) probe had previously contained a tactile stimulus (partial-report task). Participants correctly reported up to three stimuli in the numerosity judgment task, but their performance was far better in the partial-report task: Up to six stimuli were perceived at the shortest target-probe intervals. A similar pattern of results was observed when the participants performed a concurrent articulatory suppression task (Exp. 2). The results of a final experiment revealed that performance in the partial-report task was overall better for stimuli presented on the fingertips than for stimuli presented across the rest of the body surface. These results demonstrate that tactile information that is unavailable for report in a numerosity task can nevertheless sometimes still be accessed when a partial-report procedure is used instead.

4.
Gallace A, Tan HZ, Spence C. Perception, 2008, 37(5): 782-800
There is a growing interest in the question whether the phenomenon of subitising (fast and accurate detection of fewer than 4-5 stimuli presented simultaneously), widely thought to affect numerosity judgments in vision, can also affect the processing of tactile stimuli. In a recent study, in which multiple tactile stimuli were simultaneously presented across the body surface, Gallace et al (2006 Perception 35 247-266) concluded that tactile stimuli cannot be subitised. By contrast, Riggs et al (2006 Psychological Science 17 271-275), who presented tactile stimuli to participants' fingertips, came to precisely the opposite conclusion, arguing instead that subitising does occur in touch. Here, we re-analyse the data from both studies using more powerful statistical procedures. We show that Riggs et al's error data do not offer strong support for the subitising account and, what is more, Gallace et al's data are not entirely compatible with a linear model account of numerosity judgments in humans either. We then report an experiment in which we compare numerosity judgments for stimuli presented on the fingertips with those for stimuli presented on the rest of the body surface. The results show no major differences between the fingers and the rest of the body, and an absence of subitising in either condition. On the basis of these observations, we discuss whether the purported existence of subitisation in touch reflects a genuine cognitive phenomenon, or whether, instead, it may reflect a bias in the interpretation of the particular psychometric functions that happen to have been chosen by researchers to fit their data.
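The contrast between a subitising account and a linear account can be framed as a model comparison: one regression line versus the best two-segment ("bilinear") fit with a free breakpoint, the latter winning decisively only when a genuine slope discontinuity exists. A rough sketch on hypothetical mean RTs (made-up values, not data from either study):

```python
import numpy as np

def linear_sse(n, rt):
    """Sum of squared residuals for a single straight-line fit."""
    coef = np.polyfit(n, rt, 1)
    return float(np.sum((rt - np.polyval(coef, n)) ** 2))

def bilinear_sse(n, rt):
    """Best total residual over all two-segment fits with a free breakpoint."""
    best = np.inf
    for bp in n[1:-2]:  # keep at least two points in each segment
        left, right = n <= bp, n > bp
        best = min(best, linear_sse(n[left], rt[left]) + linear_sse(n[right], rt[right]))
    return float(best)

# Hypothetical mean RTs (ms): nearly flat up to 3 items, steeply rising
# beyond, i.e. the signature pattern a subitising account predicts.
n = np.arange(1, 8)
rt = np.array([420.0, 430.0, 445.0, 700.0, 950.0, 1200.0, 1450.0])
print(bilinear_sse(n, rt) < linear_sse(n, rt))
```

Because the bilinear model has extra free parameters it will always reduce raw residuals somewhat; a real comparison penalises that flexibility (e.g. with an F test, AIC, or BIC) rather than comparing raw sums of squares.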

5.
We assessed the influence of multisensory interactions on the exogenous orienting of spatial attention by comparing the ability of auditory, tactile, and audiotactile exogenous cues to capture visuospatial attention under conditions of no perceptual load versus high perceptual load. In Experiment 1, participants discriminated the elevation of visual targets preceded by either unimodal or bimodal cues under conditions of either a high perceptual load (involving the monitoring of a rapidly presented central stream of visual letters for occasionally presented target digits) or no perceptual load (when the central stream was replaced by a fixation point). All of the cues captured spatial attention in the no-load condition, whereas only the bimodal cues captured visuospatial attention in the high-load condition. In Experiment 2, we ruled out the possibility that the presentation of any changing stimulus at fixation (i.e., a passively monitored stream of letters) would eliminate exogenous orienting, which instead appears to be a consequence of high perceptual load conditions (Experiment 1). These results demonstrate that multisensory cues capture spatial attention more effectively than unimodal cues under conditions of concurrent perceptual load.

6.
This study examined tactile and visual temporal processing in adults with early loss of hearing. The tactile task consisted of punctate stimulations that were delivered to one or both hands by a mechanical tactile stimulator. Pairs of light emitting diodes were presented on a display for visual stimulation. Responses consisted of YES or NO judgments as to whether the onset of the pairs of stimuli was perceived simultaneously or non-simultaneously. Tactile and visual temporal thresholds were significantly higher for the deaf group when compared to controls. In contrast to controls, tactile and visual temporal thresholds for the deaf group did not differ when presentation locations were examined. Overall findings of this study support the notion that temporal processing is compromised following early deafness regardless of the spatial location in which the stimuli are presented.

7.
Manual reaction times to visual, auditory, and tactile stimuli presented simultaneously, or with a delay, were measured to test for multisensory interaction effects in a simple detection task with redundant signals. Responses to trimodal stimulus combinations were faster than those to bimodal combinations, which in turn were faster than reactions to unimodal stimuli. Response enhancement increased with decreasing auditory and tactile stimulus intensity and was a U-shaped function of stimulus onset asynchrony. Distribution inequality tests indicated that the multisensory interaction effects were larger than predicted by separate activation models, including the difference between bimodal and trimodal response facilitation. The results are discussed with respect to previous findings in a focused attention task and are compared with multisensory integration rules observed in bimodal and trimodal superior colliculus neurons in the cat and monkey.

8.
Extracting general rules from specific examples is important, as we must face the same challenge displayed in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is not only dependent on better statistical probability and redundant sensory information, but also the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ

9.
In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.

10.
The participants in this study discriminated the position of tactile target stimuli presented at the tip or the base of the forefinger of one of the participants' hands, while ignoring visual distractor stimuli. The visual distractor stimuli were presented from two circles on a display aligned with the tactile targets in Experiment 1 or orthogonal to them in Experiment 2. Tactile discrimination performance was slower and less accurate when the visual distractor stimuli were presented from incongruent locations relative to the tactile target stimuli (e.g., tactile target at the base of the finger with top visual distractor) highlighting a cross-modal congruency effect. We examined whether the presence and orientation of a simple line drawing of a hand, which was superimposed on the visual distractor stimuli, would modulate the cross-modal congruency effects. When the tactile targets and the visual distractors were spatially aligned, the modulatory effects of the hand picture were small (Experiment 1). However, when they were spatially misaligned, the effects were much larger, and the direction of the cross-modal congruency effects changed in accordance with the orientation of the picture of the hand, as if the hand picture corresponded to the participants' own stimulated hand (Experiment 2). The results suggest that the two-dimensional picture of a hand can modulate processes maintaining our internal body representation. We also observed that the cross-modal congruency effects were influenced by the postures of the stimulated and the responding hands. These results reveal the complex nature of spatial interactions among vision, touch, and proprioception.

11.
Recent developments in the study of tactile attention.
The last few years have seen a rapid growth of research on the topic of tactile attention. We review the evidence showing that attention can be directed to the tactile modality, or to the region of space where tactile stimuli are presented, in either an endogenous or exogenous (top-down or bottom-up) manner. We highlight the latest findings on the interaction between these two forms of attentional orienting in touch. We also review the latest research on tactile numerosity judgments and change detection, highlighting the severe cognitive (attentional) limitations that constrain people's ability to process more complex tactile information displays. These findings are particularly important given that tactile interfaces are currently being developed for a number of different application domains.

12.
Previous research has demonstrated that the localization of auditory or tactile stimuli can be biased by the simultaneous presentation of a visual stimulus from a different spatial position. We investigated whether auditory localization judgments could also be affected by the presentation of spatially displaced tactile stimuli, using a procedure designed to reveal perceptual interactions across modalities. Participants made left-right discrimination responses regarding the perceived location of sounds, which were presented either in isolation or together with tactile stimulation to the fingertips. The results demonstrate that the apparent location of a sound can be biased toward tactile stimulation when it is synchronous, but not when it is asynchronous, with the auditory event. Directing attention to the tactile modality did not increase the bias of sound localization toward synchronous tactile stimulation. These results provide the first demonstration of the tactile capture of audition.

13.
Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli. Although, to date, only a few studies have explicitly investigated the existence of Gestalt grouping principles in the tactile modality, we argue that many more studies have indirectly provided evidence relevant to this topic. Reviewing this body of research, we argue that similar principles to those reported previously in visual and auditory studies also govern the perceptual grouping of tactile stimuli. In particular, we highlight evidence showing that the principles of proximity, similarity, common fate, good continuation, and closure affect tactile perception in both unimodal and crossmodal settings. We also highlight that the grouping of tactile stimuli is often affected by visual and auditory information that happens to be presented simultaneously. Finally, we discuss the theoretical and applied benefits that might pertain to the further study of Gestalt principles operating in both unisensory and multisensory tactile perception.

14.
Two studies were conducted to examine the effects of unimodal and multimodal cueing techniques for indicating the location of threats on target acquisition, the recall of information from concurrent communications, and perceived workload. One visual, two auditory (i.e., nonspatial speech and spatial tones [3-D]), and one tactile cue were assessed in Experiment 1. Experiment 2 examined the effects of combinations of the cues assessed in the first investigation: visual + nonspatial speech, visual + spatial tones, visual + tactile, and nonspatial speech + tactile. A unimodal, "visual only" condition was included as a baseline to determine the extent to which a supplementary cue might influence changes in performance and workload. The results of the studies indicated that time to first shot and the percentage of hits can be improved and workload reduced by providing cues about target location. The multimodal cues did not yield significant improvements in performance or workload beyond that achieved by the unimodal visual cue.

15.
Previous research has demonstrated that the localization of auditory or tactile stimuli can be biased by the simultaneous presentation of a visual stimulus from a different spatial position. We investigated whether auditory localization judgments could also be affected by the presentation of spatially displaced tactile stimuli, using a procedure designed to reveal perceptual interactions across modalities. Participants made left-right discrimination responses regarding the perceived location of sounds, which were presented either in isolation or together with tactile stimulation to the fingertips. The results demonstrate that the apparent location of a sound can be biased toward tactile stimulation when it is synchronous, but not when it is asynchronous, with the auditory event. Directing attention to the tactile modality did not increase the bias of sound localization toward synchronous tactile stimulation. These results provide the first demonstration of the tactile capture of audition.

16.
Participants made visual temporal order judgments (TOJs) about which of two lights appeared first while task-irrelevant vibrotactile stimuli delivered to the index finger were presented before the first and after the second light. Temporally misaligned tactile stimuli captured the onsets of the lights, thereby improving sensitivity on the visual TOJ task, indicative of tactile-visual (TV) temporal ventriloquism (Experiment 1). The size of this effect was comparable to auditory-visual (AV) temporal ventriloquism (Experiment 2). Spatial discordance between the TV stimuli, as in the AV case, did not harm the effect (Experiments 3 and 4). TV stimuli thus behaved like AV stimuli, demonstrating that spatial co-occurrence is not a necessary constraint for intersensory pairing to occur.

17.
Participants made visual temporal order judgments (TOJs) about which of two lights appeared first while task-irrelevant vibrotactile stimuli delivered to the index finger were presented before the first and after the second light. Temporally misaligned tactile stimuli captured the onsets of the lights, thereby improving sensitivity on the visual TOJ task, indicative of tactile-visual (TV) temporal ventriloquism (Experiment 1). The size of this effect was comparable to auditory-visual (AV) temporal ventriloquism (Experiment 2). Spatial discordance between the TV stimuli, as in the AV case, did not harm the effect (Experiments 3 and 4). TV stimuli thus behaved like AV stimuli, demonstrating that spatial co-occurrence is not a necessary constraint for intersensory pairing to occur.

18.
Responses are typically faster and more accurate when both auditory and visual modalities are stimulated than when only one is. This bimodal advantage is generally attributed to a speeding of responding on bimodal trials, relative to unimodal trials. It remains possible that this effect might be due to a performance decrement on unimodal ones. To investigate this, two levels of auditory and visual signal intensities were combined in a double-factorial paradigm. Responses to the onset of the imperative signal were measured under go/no-go conditions. Mean reaction times to the four types of bimodal stimuli exhibited a superadditive interaction. This is evidence for the parallel self-terminating processing of the two signal components. Violations of the race model inequality also occurred, and measures of processing capacity showed that efficiency was greater on the bimodal than on the unimodal trials. These data are discussed in terms of a possible underlying neural substrate.
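The processing-capacity measure referred to in such double-factorial analyses is typically Townsend and Nozawa's capacity coefficient, which compares the cumulative hazard of bimodal RTs against the sum of the two unimodal cumulative hazards; C(t) > 1 indicates greater-than-race efficiency on bimodal trials. A sketch on hypothetical samples (not this study's data):

```python
import numpy as np

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    rts = np.sort(np.asarray(rts))
    return 1.0 - np.searchsorted(rts, t, side="right") / len(rts)

def capacity_coefficient(rt_a, rt_v, rt_av, t):
    """Capacity coefficient C(t) = log S_AV(t) / [log S_A(t) + log S_V(t)].

    Equivalent to the ratio of cumulative hazards H_AV / (H_A + H_V), since
    H(t) = -log S(t). C(t) > 1 indicates supercapacity (bimodal processing
    more efficient than an unlimited-capacity parallel race predicts).
    """
    num = np.log(survivor(rt_av, t))
    den = np.log(survivor(rt_a, t)) + np.log(survivor(rt_v, t))
    return num / den

# Hypothetical RT samples (ms): markedly faster bimodal responses.
rng = np.random.default_rng(1)
rt_a = rng.normal(450, 40, 500)   # auditory-only trials
rt_v = rng.normal(450, 40, 500)   # visual-only trials
rt_av = rng.normal(370, 35, 500)  # bimodal trials
print(capacity_coefficient(rt_a, rt_v, rt_av, t=400.0) > 1)
```

C(t) is only defined where all three survivor functions are strictly between 0 and 1, so in practice it is evaluated over the mid-range of the RT distributions.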

19.
In Experiment 1, participants were presented with pairs of stimuli (one visual and the other tactile) from the left and/or right of fixation at varying stimulus onset asynchronies and were required to make unspeeded temporal order judgments (TOJs) regarding which modality was presented first. When the participants adopted an uncrossed-hands posture, just noticeable differences (JNDs) were lower (i.e., multisensory TOJs were more precise) when stimuli were presented from different positions, rather than from the same position. This spatial redundancy benefit was reduced when the participants adopted a crossed-hands posture, suggesting a failure to remap visuotactile space appropriately. In Experiment 2, JNDs were also lower when pairs of auditory and visual stimuli were presented from different positions, rather than from the same position. Taken together, these results demonstrate that people can use redundant spatial cues to facilitate their performance on multisensory TOJ tasks and suggest that previous studies may have systematically overestimated the precision with which people can make such judgments. These results highlight the intimate link between spatial and temporal factors in determining our perception of the multimodal objects and events in the world around us.
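JNDs in TOJ tasks are conventionally estimated by fitting a psychometric function to the proportion of, say, "visual first" responses across SOAs, and reading off the interval between the 50% and 75% points. A minimal sketch using a cumulative Gaussian, with made-up response proportions (not this study's data):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, sigma):
    """Psychometric function: P('visual first') as a function of SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

def fit_toj(soas, p_visual_first):
    """Fit a cumulative Gaussian to TOJ data; return (PSS, JND).

    PSS = point of subjective simultaneity (the 50% point);
    JND = SOA separating the 50% and 75% points (0.6745 * sigma).
    """
    (pss, sigma), _ = curve_fit(cum_gauss, soas, p_visual_first,
                                p0=[0.0, 50.0])
    return float(pss), float(norm.ppf(0.75) * sigma)

# Hypothetical data: SOA in ms (negative = tactile led), P('visual first').
soas = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
p = np.array([0.05, 0.12, 0.30, 0.52, 0.73, 0.90, 0.96])
pss, jnd = fit_toj(soas, p)
print(round(pss, 1), round(jnd, 1))
```

A steeper psychometric function (smaller sigma) yields a smaller JND, i.e. more precise temporal order judgments.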

20.
When attempting to detect a near-threshold signal, participants often incorrectly report the presence of a signal, particularly when a stimulus in a different modality is presented. Here we investigated the effect of prior experience of bimodal visuotactile stimuli on the rate of falsely reported touches in the presence of a light. In Experiment 1, participants made more false alarms in light-present than light-absent trials, despite having no experience of the experimental visuotactile pairing. This suggests that light-evoked false alarms are a consequence of an existing association, rather than one learned during the experiment. In Experiment 2, we sought to manipulate the strength of the association through prior training, using supra-threshold tactile stimuli that were given a high or low association with the light. Both groups still exhibited an increased number of false alarms during light-present trials; however, the low-association group made significantly fewer false alarms across conditions, and there was no corresponding group difference in the number of tactile stimuli correctly identified. Thus, while training did not affect the boosting of the tactile signal by the visual stimulus, the low-association training affected perceptual decision-making more generally, leading to a lower number of illusory touch reports, independent of the light.
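Hit and false-alarm patterns of this kind are naturally summarised in signal detection terms, separating sensitivity (d') from response criterion (c). A hedged sketch with invented trial counts (illustrative only, not this study's data):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from raw trial counts.

    Applies a log-linear correction (add 0.5 to each cell) so that
    hit/false-alarm rates of exactly 0 or 1 remain finite under the
    z-transform.
    """
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(h) - z(f), -(z(h) + z(f)) / 2

# Hypothetical counts: a concurrent light raises false alarms while
# leaving hits unchanged, lowering d' and shifting the criterion.
d_absent, c_absent = sdt_measures(hits=60, misses=40,
                                  false_alarms=10, correct_rejections=90)
d_present, c_present = sdt_measures(hits=60, misses=40,
                                    false_alarms=25, correct_rejections=75)
print(d_present < d_absent and c_present < c_absent)
```

Comparing d' across light-present and light-absent trials distinguishes a genuine sensitivity change from a mere shift in willingness to report a touch.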
