Similar Documents
19 similar documents found
1.
Cognition, 2014, 130(2): 227-235
The sense of control over the consequences of one’s actions depends on predictions about these consequences. According to an influential computational model, consistency between predicted and observed action consequences attenuates perceived stimulus intensity, which might provide a marker of agentic control. An important assumption of this model is that these predictions are generated within the motor system. However, previous studies of sensory attenuation have typically confounded motor-specific perceptual modulation with perceptual effects of stimulus predictability that are not specific to motor action. As a result, these studies cannot unambiguously attribute sensory attenuation to a motor locus. We present a psychophysical experiment on auditory attenuation that avoids this pitfall. Subliminal masked priming of motor actions with compatible prime–target pairs has previously been shown to modulate both reaction times and the explicit feeling of control over action consequences. Here, we demonstrate reduced perceived loudness of tones caused by compatibly primed actions. Importantly, this modulation results from a manipulation of motor processing and is not confounded by stimulus predictability. We discuss our results with respect to theoretical models of the mechanisms underlying sensory attenuation and subliminal motor priming.
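The comparator logic this abstract alludes to can be caricatured in a toy calculation (our own sketch, not the authors' model): perceived intensity is reduced in proportion to how closely the motor prediction matches the observed consequence, so a well-predicted self-produced tone is attenuated while an unpredicted external tone is not. The function name and the `gain` and `tolerance` parameters below are illustrative assumptions.

```python
import math

def perceived_intensity(observed, predicted, gain=0.1, tolerance=1.0):
    """Toy comparator model of sensory attenuation.

    Attenuates perceived intensity in proportion to the match between the
    predicted and observed consequence. `gain` (maximum fractional
    attenuation) and `tolerance` (how close a prediction must be to count
    as a match) are illustrative values, not estimates from the paper.
    """
    # Gaussian similarity between prediction and observation (1 = perfect match).
    match = math.exp(-0.5 * ((observed - predicted) / tolerance) ** 2)
    return observed * (1.0 - gain * match)

# A well-predicted, self-produced 70 dB tone is perceived as quieter ...
print(perceived_intensity(observed=70.0, predicted=70.0))  # 63.0
# ... while an unpredicted, externally generated tone is not attenuated.
print(perceived_intensity(observed=70.0, predicted=0.0))   # 70.0
```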

2.
Implicit statistical learning (ISL) is exclusive to neither a particular sensory modality nor a single domain of processing. Even so, differences in perceptual processing may substantially affect learning across modalities. In three experiments, statistically equivalent auditory and visual familiarizations were presented under different timing conditions that either facilitated or disrupted temporal processing (fast or slow presentation rates). We find an interaction of rate and modality of presentation: At fast rates, auditory ISL was superior to visual. However, at slow presentation rates, the opposite pattern of results was found: Visual ISL was superior to auditory. Thus, we find that changes to presentation rate differentially affect ISL across sensory modalities. Additional experiments confirmed that this modality-specific effect was not due to cross-modal interference or attentional manipulations. These findings suggest that ISL is rooted in modality-specific, perceptually based processes.
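Familiarization streams in statistical-learning studies of this kind are typically structured so that transitional probabilities are high within embedded "words" and low across word boundaries. A minimal sketch of that statistic, using a hypothetical two-word stream rather than the study's actual materials (the function name and stream are ours):

```python
from collections import Counter

def transitional_probabilities(stream):
    """P(next | current) for each adjacent pair of elements in a stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical familiarization stream built from the "words" ABC and DEF.
stream = list("ABCDEFABCABCDEFDEFABCDEF")
tps = transitional_probabilities(stream)
print(tps[("A", "B")])  # within-word transition: 1.0
print(tps[("C", "D")])  # across-word transition: 0.75
```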

3.
How does the attentional system coordinate the processing of stimuli presented simultaneously to different sensory modalities? We investigated this question with individuals with neurological damage who suffered from deficits of attention. In these individuals, we examined how the processing of tactile stimuli is affected by the simultaneous presentation of visual or auditory stimuli. The investigation demonstrated that two stimuli from different modalities are in competition when attention is directed to the perceptual attributes of both, but not when attention is directed to the perceptual attributes of one and the semantic attributes of the other. These findings reveal a differentiated attentional system in which competition is modulated by the level of stimulus representation to which attention is directed.

4.
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.

5.
Statistical learning (SL), sensitivity to probabilistic regularities in sensory input, has been widely implicated in cognitive and perceptual development. Little is known, however, about the underlying mechanisms of SL and whether they undergo developmental change. One way to approach these questions is to compare SL across perceptual modalities. While a decade of research has compared auditory and visual SL in adults, we present the first direct comparison of visual and auditory SL in infants (8–10 months). Learning was evidenced in both perceptual modalities but with opposite directions of preference: Infants in the auditory condition displayed a novelty preference, while infants in the visual condition showed a familiarity preference. Interpreting these results within the Hunter and Ames model (1988), in which familiarity preferences reflect a weaker stage of encoding than novelty preferences, we conclude that learning is weaker in the visual modality than in the auditory modality at this age. In addition, we found evidence of different developmental trajectories across modalities: Auditory SL increased across this age range while visual SL did not change. The results suggest that SL is not an abstract, amodal ability; for the types of stimuli and statistics tested, auditory SL precedes the development of visual SL, consistent with recent work comparing SL across modalities in older children.

6.
康冠兰, 罗霄骁. 《心理科学》(Psychological Science), 2020, (5): 1072-1078
Cross-modal information interaction refers to the set of processes by which information from one sensory modality interacts with, and influences, information from another sensory modality. It comprises two main questions: how inputs from different sensory modalities are integrated, and how conflicts between cross-modal signals are controlled. This article reviews the behavioral and neural mechanisms of audiovisual cross-modal integration and conflict control, and discusses how attention affects both. Future work should investigate the brain-network mechanisms of audiovisual cross-modal processing and examine cross-modal integration and conflict control in special populations, to help reveal the mechanisms underlying their cognitive and social impairments.

7.
Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.

8.
Previous research on dual-tasks has shown that, under some circumstances, actions impair the perception of action-consistent stimuli, whereas, under other conditions, actions facilitate the perception of action-consistent stimuli. We propose a new model to reconcile these contrasting findings. The planning and control model (PCM) of motorvisual priming proposes that action planning binds categorical representations of action features so that their availability for perceptual processing is inhibited. Thus, the perception of categorically action-consistent stimuli is impaired during action planning. Movement control processes, on the other hand, integrate multi-sensory spatial information about the movement and, therefore, facilitate perceptual processing of spatially movement-consistent stimuli. We show that the PCM is consistent with a wider range of empirical data than previous models on motorvisual priming. Furthermore, the model yields previously untested empirical predictions. We also discuss how the PCM relates to motorvisual research paradigms other than dual-tasks.

9.
A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones—both containing two varying features—were presented simultaneously. In Experiment 2, two gratings and two tones—each containing only one varying feature—were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to those of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.
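The shared-resource conclusion can be caricatured in a toy model (our own sketch, with an assumed scaling exponent, not the authors' fit): the discrimination threshold depends only on the total number of stored features, regardless of how they are split between vision and audition.

```python
def memory_threshold(base_threshold, n_visual, n_auditory, exponent=1.0):
    """Toy shared-resource model of working memory precision.

    The threshold grows with the *total* feature load; the visual/auditory
    split is irrelevant. `exponent` controls how steeply precision falls
    with load and is an assumption, not a value from the paper.
    """
    total_load = n_visual + n_auditory
    return base_threshold * total_load ** exponent

# Two visual features cost the same as one visual plus one auditory feature.
print(memory_threshold(1.0, n_visual=2, n_auditory=0))  # 2.0
print(memory_threshold(1.0, n_visual=1, n_auditory=1))  # 2.0
```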

10.
We can easily discriminate self-produced from externally generated sensory signals. Recent studies suggest that the prediction of the sensory consequences of one’s own actions made by a forward model can be used to attenuate the sensory effects of self-produced movements, thereby enabling a differentiation of self-produced sensations from externally generated ones. The present study showed that attenuation occurred both when participants themselves performed a goal-directed action and when they observed the experimenter performing the same action, although they clearly reported that the tones were produced by another person during action observation and by themselves during their own action. These results suggest that sensory prediction of action modulates ongoing auditory processing irrespective of who produces the sounds, and that the explicit judgment of agency does not necessarily rely on the same mechanisms as implicit perceptual measures such as sensory attenuation.

11.
Motion information available to different sensory modalities can interact at both perceptual and post-perceptual (i.e., decisional) stages of processing. However, to date, researchers have only been able to demonstrate the influence of one of these components at any given time, hence the relationship between them remains uncertain. We addressed the interplay between the perceptual and post-perceptual components of information processing by assessing their influence on performance within the same experimental paradigm. We used signal detection theory to discriminate changes in perceptual sensitivity (d') from shifts in response criterion (c) in performance on a detection (Experiment 1) and a classification (Experiment 2) task regarding the direction of auditory apparent motion streams presented in noise. In the critical conditions, a visual motion distractor moving either leftward or rightward was presented together with the auditory motion. The results demonstrated a significant decrease in sensitivity to the direction of the auditory targets in the crossmodal conditions as compared to the unimodal baseline conditions that was independent of the relative direction of the visual distractor. In addition, we also observed significant shifts in response criterion, which were dependent on the relative direction of the distractor apparent motion. These results support the view that the perceptual and decisional components involved in audiovisual interactions in motion processing can coexist but are largely independent of one another.
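The two measures separated here have standard signal detection theory definitions; below is a minimal sketch of how sensitivity (d') and criterion (c) are computed from hit and false-alarm rates. The example rates are made up, not data from the study.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Standard equal-variance SDT measures.

    d' = z(H) - z(FA)         (sensitivity)
    c  = -(z(H) + z(FA)) / 2  (response criterion; 0 = unbiased)
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Hypothetical rates for one crossmodal condition.
d_prime, c = sdt_measures(hit_rate=0.80, fa_rate=0.30)
print(f"d' = {d_prime:.2f}, c = {c:.2f}")  # d' = 1.37, c = -0.16
```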

12.
Subjects were required to perform perceptual tasks when stimuli were presented simultaneously in the auditory and tactile modalities and when they were presented in one of the modalities alone. The results indicated that when the demands on cognitive processes are small, auditory and tactile stimuli presented simultaneously can be processed as well as when stimuli are presented in only one modality. In a task which required a large amount of cognitive processing, it became difficult for subjects to maintain high levels of performance in both modalities and the distribution of attention became an important determinant of performance. The data were consistent with a theory that cognitive, but not perceptual, processing is disrupted when subjects have difficulty performing two perceptual tasks simultaneously.

13.
This study investigated whether aging increases the attentional blink effect in older adults as compared to young adults. A rapid serial visual presentation (RSVP) paradigm was employed in which participants were asked to identify two targets (dual-task condition) presented in rapid succession, separated by various intervals in a stream of stimuli. Performance in identifying the second target is typically diminished relative to identification of a single target. Various combinations of tasks, such as two perceptual tasks or one perceptual and one action task, as well as different types of pointing action (pointing to a displaced target, pointing to a stationary target, or pointing to a target that had disappeared), were manipulated to see whether aging further affects these variables. In young adults, successful identification of the first target interfered with identifying the second target and with the initiation time (action planning) of pointing to it, but did not interfere with pointing movement time or pointing accuracy, even when the target was displaced and thus required online control of action. For older adults, in contrast, successful identification of the first target interfered not only with identifying the second target and with pointing initiation time, but also with pointing movement time for a displaced target. This suggests that, unlike young adults, older adults are unable to concurrently identify the first target and correct an already-initiated pointing movement.

14.
Understanding how the human brain integrates features of perceived events calls for the examination of binding processes within and across different modalities and domains. Recent studies of feature-repetition effects have demonstrated interactions between shape, color, and location in the visual modality and between pitch, loudness, and location in the auditory modality: repeating one feature is beneficial if other features are also repeated, but detrimental if not. These partial-repetition costs suggest that co-occurring features are spontaneously bound into temporary event files. Here, we investigated whether these observations can be extended to features from different sensory modalities, combining visual and auditory features in Experiment 1 and auditory and tactile features in Experiment 2. The same types of interactions as for unimodal feature combinations were obtained, including interactions between stimulus and response features. However, the size of the interactions varied with the particular combination of features, suggesting that the salience of features and the temporal overlap between feature-code activations play a mediating role.
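Partial-repetition costs of the kind described here are commonly quantified as an interaction contrast over the four repetition conditions. A sketch with hypothetical mean reaction times (the function name, feature pairing, and values are made up for illustration):

```python
def partial_repetition_cost(rt_both_repeat, rt_first_only, rt_second_only, rt_neither):
    """Interaction contrast for binding effects.

    Positive values mean partial repetitions (one feature repeats while the
    other changes) are slower than complete repetitions or complete
    alternations, the signature of spontaneous feature binding.
    """
    partial = (rt_first_only + rt_second_only) / 2
    complete = (rt_both_repeat + rt_neither) / 2
    return partial - complete

# Hypothetical mean RTs (ms) for a visual-shape x auditory-pitch design.
print(partial_repetition_cost(450, 480, 485, 455))  # 30.0 ms cost
```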

15.
Object-based auditory and visual attention
Theories of visual attention argue that attention operates on perceptual objects, and thus that interactions between object formation and selective attention determine how competing sources interfere with perception. In auditory perception, theories of attention are less mature and no comprehensive framework exists to explain how attention influences perceptual abilities. However, the same principles that govern visual perception can explain many seemingly disparate auditory phenomena. In particular, many recent studies of 'informational masking' can be explained by failures of either auditory object formation or auditory object selection. This similarity suggests that the same neural mechanisms control attention and influence perception across different sensory modalities.

16.
Working memory, deafness and sign language
Working memory (WM) for sign language has an architecture similar to that for speech-based languages at both functional and neural levels. However, there are some processing differences between language modalities that are not yet fully explained, although a number of hypotheses have been mooted. This article reviews some of the literature on differences in sensory, perceptual and cognitive processing systems induced by auditory deprivation and sign language use and discusses how these differences may contribute to differences in WM architecture for signed and speech-based languages. In conclusion, it is suggested that left-hemisphere reorganization of the motion-processing system as a result of native sign-language use may interfere with the development of the order processing system in WM.

17.
Beyond perceiving the features of individual objects, we also have the intriguing ability to efficiently perceive average values of collections of objects across various dimensions. Over what features can perceptual averaging occur? Work to date has been limited to visual properties, but perceptual experience is intrinsically multimodal. In an initial exploration of how this process operates in multimodal environments, we explored statistical summarizing in audition (averaging pitch from a sequence of tones) and vision (averaging size from a sequence of discs), and their interaction. We observed two primary results. First, not only was auditory averaging robust, but if anything, it was more accurate than visual averaging in the present study. Second, when uncorrelated visual and auditory information were simultaneously present, observers showed little cost for averaging in either modality when they did not know until the end of each trial which average they had to report. These results illustrate that perceptual averaging can span different sensory modalities, and they also illustrate how vision and audition can both cooperate and compete for resources.

18.
Traditional approaches to human information processing tend to deal with perception and action planning in isolation, so that an adequate account of the perception-action interface is still missing. On the perceptual side, the dominant cognitive view largely underestimates, and thus fails to account for, the impact of action-related processes on both the processing of perceptual information and on perceptual learning. On the action side, most approaches conceive of action planning as a mere continuation of stimulus processing, thus failing to account for the goal-directedness of even the simplest reaction in an experimental task. We propose a new framework for a more adequate theoretical treatment of perception and action planning, in which perceptual contents and action plans are coded in a common representational medium by feature codes with distal reference. Perceived events (perceptions) and to-be-produced events (actions) are equally represented by integrated, task-tuned networks of feature codes, cognitive structures we call event codes. We give an overview of evidence from a wide variety of empirical domains, such as spatial stimulus-response compatibility, sensorimotor synchronization, and ideomotor action, showing that our main assumptions are well supported by the data.

19.
It is well known that the nervous system combines information from different cues within and across sensory modalities to improve performance on perceptual tasks. In this article, we present results showing that in a visual motion-detection task, concurrent auditory motion stimuli improve accuracy even when they do not provide any useful information for the task. When participants judged which of two stimulus intervals contained visual coherent motion, the addition of identical moving sounds to both intervals improved accuracy. However, this enhancement occurred only with sounds that moved in the same direction as the visual motion. Therefore, it appears that the observed benefit of auditory stimulation is due to auditory-visual interactions at a sensory level. Thus, auditory and visual motion-processing pathways interact at a sensory-representation level in addition to the level at which perceptual estimates are combined.
