Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
2.
In the present review, we focus on how commonalities in the ontogenetic development of the auditory and tactile sensory systems may inform the interplay between these signals in the temporal domain. In particular, we describe the results of behavioral studies that have investigated temporal resolution (in temporal order, synchrony/asynchrony, and simultaneity judgment tasks), as well as temporal numerosity perception and similarities in the perception of frequency across touch and hearing. The evidence reviewed here highlights features of audiotactile temporal perception that are distinctive from those seen for other pairings of sensory modalities. For instance, audiotactile interactions are characterized in certain tasks (e.g., temporal numerosity judgments) by a more balanced reciprocal influence than are other modality pairings. Moreover, relative spatial position plays a different role in the temporal order and temporal recalibration processes for audiotactile stimulus pairings than for other modality pairings. The effects exerted by both the spatial arrangement of stimuli and attention on temporal order judgments are described. In addition, a number of audiotactile interactions occurring during sensory-motor synchronization are highlighted. We also look at the audiotactile perception of rhythm and how it may be affected by musical training. The differences emerging from this body of research highlight the need for more extensive investigation into audiotactile temporal interactions. We conclude with a brief overview of some of the key issues deserving of further research in this area.

3.
Acta Psychologica, 1986, 63(2), 175-210
Image processing in the visual system is described utilizing some basic neurophysiological data. We propose that both sensory and cognitive operations address features already conjoined in critical receptive fields. As both sensory perception and further processing stages are critically dependent upon movement, the theory emphasizes sensory-motor reciprocity in imaging and in object perception.

4.
Auvray, M., & Myin, E. Cognitive Science, 2009, 33(6), 1036-1058
Sensory substitution devices provide through an unusual sensory modality (the substituting modality, e.g., audition) access to features of the world that are normally accessed through another sensory modality (the substituted modality, e.g., vision). In this article, we address the question of which sensory modality the acquired perception belongs to. We have recourse to the four traditional criteria that have been used to define sensory modalities: sensory organ, stimuli, properties, and qualitative experience (Grice, 1962), to which we have added the criteria of behavioral equivalence (Morgan, 1977), dedication (Keeley, 2002), and sensorimotor equivalence (O'Regan & Noë, 2001). We discuss which of them are fulfilled by perception through sensory substitution devices and whether this favors the view that perception belongs to the substituting or to the substituted modality. Though the application of a number of criteria might be taken to point to the conclusion that perception with a sensory substitution device belongs to the substituted modality, we argue that the evidence leads to an alternative view on sensory substitution. According to this view, the experience after sensory substitution is a transformation, extension, or augmentation of our perceptual capacities, rather than being something equivalent or reducible to an already existing sensory modality. We develop this view by comparing sensory substitution devices to other "mind-enhancing tools" such as pen and paper, sketchpads, or calculators. An analysis of sensory substitution in terms of mind-enhancing tools unveils it as a thoroughly transforming perceptual experience and as giving rise to a novel form of perceptual interaction with the environment.

5.
Cortical operational synchrony during audio-visual speech integration (cited 3 times: 0 self-citations, 3 by others)
Information from different sensory modalities is processed in different cortical regions. However, our daily perception is based on the overall impression resulting from the integration of information from multiple sensory modalities. At present it is not known how the human brain integrates information from different modalities into a unified percept. Using a robust phenomenon known as the McGurk effect, it was shown in the present study that audio-visual synthesis takes place within distributed and dynamic cortical networks with emergent properties. Various cortical sites within these networks interact with each other by means of so-called operational synchrony (Kaplan, Fingelkurts, Fingelkurts, & Darkhovsky, 1997). The temporal synchronization of cortical operations processing unimodal stimuli at different cortical sites reveals the importance of the temporal features of auditory and visual stimuli for audio-visual speech integration.

6.
Extensive research has investigated societal and behavioral consequences of social group affiliation and identification but has been relatively silent on the role of perception in intergroup relations. We propose the perceptual model of intergroup relations to conceptualize how intergroup relations are grounded in perception. We review the growing literature on how intergroup dynamics shape perception across different sensory modalities and argue that these perceptual processes mediate intergroup relations. The model provides a starting point for social psychologists to study perception as a function of social group dynamics and for perception researchers to consider social influences. We highlight several gaps in the literature and outline areas for future research. Uncovering the role of perception in intergroup relations offers novel insights into the construction of shared reality and may help devise new and unique interventions targeted at the perceptual level.

7.
We describe an account of lexically guided tuning of speech perception based on interactive processing and Hebbian learning. Interactive feedback provides lexical information to prelexical levels, and Hebbian learning uses that information to retune the mapping from auditory input to prelexical representations of speech. Simulations of an extension of the TRACE model of speech perception are presented that demonstrate the efficacy of this mechanism. Further simulations show that acoustic similarity can account for the patterns of speaker generalization. This account addresses the role of lexical information in guiding both perception and learning with a single set of principles of information propagation.
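The retuning mechanism summarized in this abstract is easy to illustrate in code. The Python sketch below is a hypothetical toy, not the authors' TRACE extension: lexical feedback biases prelexical activation, and a Hebbian outer-product update shifts the auditory-to-prelexical mapping toward the lexically supported interpretation. The row normalization and all names are assumptions added here for the example.

```python
import numpy as np

def hebbian_retune(W, auditory_input, lexical_feedback, lr=0.05):
    """One lexically guided Hebbian retuning step. W maps auditory
    features to prelexical units; lexical feedback biases prelexical
    activation before the Hebbian update (names are illustrative)."""
    prelexical = W @ auditory_input + lexical_feedback  # bottom-up + top-down
    W += lr * np.outer(prelexical, auditory_input)      # Hebbian co-activity
    W /= np.linalg.norm(W, axis=1, keepdims=True)       # keep weights bounded
    return W

# Toy scenario: an ambiguous fricative heard in a word whose lexical
# entry demands /s/; feedback gradually retunes the mapping toward /s/.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))                   # 2 prelexical units, 4 features
ambiguous = np.array([0.5, 0.5, 0.2, 0.2])    # hypothetical feature vector
feedback = np.array([1.0, 0.0])               # lexicon supports unit 0 (/s/)
print("before:", W @ ambiguous)
for _ in range(20):
    W = hebbian_retune(W, ambiguous, feedback)
print("after: ", W @ ambiguous)               # unit 0 responds more strongly
```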

8.
The own-race bias in memory for faces has been a rich source of empirical work on the mechanisms of person perception. This effect is thought to arise because the face-perception system differentially encodes the relevant structural dimensions of features and their configuration based on experiences with different groups of faces. However, the effects of sociocultural experiences on person perception abilities in other identity-conveying modalities like audition have not been explored. Investigating an own-race bias in the auditory domain provides a unique opportunity for studying whether person identification is a modality-independent construct and how it is sensitive to asymmetric cultural experiences. Here we show that an own-race bias in talker identification arises from asymmetric experience with different spoken dialects. When listeners categorized voices by race (White or Black), a subset of the Black voices were categorized as sounding White, while the opposite case was unattested. Acoustic analyses indicated listeners’ perceptions about race were consistent with differences in specific phonetic and phonological features. In a subsequent person-identification experiment, the Black voices initially categorized as sounding White elicited an own-race bias from White listeners, but not from Black listeners. These effects are inconsistent with person-perception models that strictly analogize faces and voices based on recognition from only structural features. Our results demonstrate that asymmetric exposure to spoken dialect, independent from talkers’ physical characteristics, affects auditory perceptual expertise for talker identification. Person perception thus additionally relies on socioculturally-acquired dynamic information, which may be represented by different mechanisms in different sensory modalities.

9.
A basic question in cognition is how visual information obtained in separate glances can produce a stable, continuous percept. Previous explanations have included theories such as integration in a trans-saccadic buffer or storage in visual memory, or even that perception begins anew with each fixation. Converging evidence from primate neurophysiology, human psychophysics, and neuroimaging indicates an additional explanation: the intention to make a saccadic eye movement leads to a fundamental alteration in visual processing itself before and after the saccadic eye movement. We outline five principles of 'trans-saccadic perception' that could help to explain how it is possible - despite discrete sensory input and limited memory - that conscious perception across saccades seems smooth and predictable.

10.
Bisection tasks are used in research on normal space and time perception and to assess the perceptual distortions accompanying neurological disorders. Several variants of the bisection task are used, which often yield inconsistent results, prompting the question of which variant is most dependable and which results are to be trusted. We addressed this question using theoretical and experimental approaches. Theoretical performance in bisection tasks is derived from a general model of psychophysical performance that includes sensory components and decisional processes. The model predicts how performance should differ across variants of the task, even when the sensory component is fixed. To test these predictions, data were collected in a within-subjects study with several variants of a spatial bisection task, including a two-response variant in which observers indicated whether a line was transected to the right or left of the midpoint, a three-response variant (which included the additional option to respond “midpoint”), and a paired-comparison variant of the three-response format. The data supported the model predictions, revealing that estimated bisection points were least dependable with the two-response variant, because this format confounds perceptual and decisional influences. Only the three-response paired-comparison format can separate out these influences. Implications for research in basic and clinical fields are discussed.
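The separation of sensory and decisional components described here can be made concrete with a short simulation. The Python sketch below is a hypothetical illustration, not the authors' model: it assumes a Gaussian internal estimate of the transection offset, a decisional indifference region, and a guessing bias that the two-response format forces into play. All parameter names are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_two_response(offsets, pse=0.3, sigma=1.0, delta=0.8,
                          guess_right=0.7, n=20000):
    """Simulate 'right'-of-midpoint responses in a two-response bisection
    task. pse is the sensory bisection point; delta is the half-width of
    the decisional indifference region; guess_right is the bias used to
    resolve trials falling inside that region."""
    props = {}
    for off in offsets:
        x = rng.normal(off - pse, sigma, n)   # noisy internal estimate
        sure_right = x > delta                # clearly right of midpoint
        unsure = np.abs(x) <= delta           # would answer "midpoint" if allowed
        guessed_right = unsure & (rng.random(n) < guess_right)
        props[off] = (sure_right | guessed_right).mean()
    return props

# With a biased guessing rule, the offset yielding 50% "right" responses
# no longer equals pse: the two-response format confounds sensory and
# decisional influences, as the model predicts.
for off, p in simulate_two_response(np.linspace(-2, 2, 9)).items():
    print(f"offset {off:+.1f}: P(right) = {p:.2f}")
```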

11.
Attention is central to perception, yet a clear understanding of how attention influences the latency of perception has proven surprisingly elusive. Recent research has indicated that spatially attended stimuli are perceived earlier than unattended stimuli across a range of sensory modalities, an effect termed prior entry. However, the method commonly used to measure this, the temporal order judgment (TOJ) task, has been criticized as susceptible to response bias, despite deliberate attempts to minimize such bias. A preferred alternative is the simultaneity judgment (SJ) task. We tested the prior-entry hypothesis for somatosensory stimuli using both a TOJ task (replicating an earlier experiment) and an SJ task. Prior-entry effects were found for both, though the effect was reduced in the SJ task. Additional experiments (TOJ and SJ) using visual cues established that the earlier perception of cued tactile targets does not result from intramodal sensory interactions between tactile cues and targets.

12.
Language as context for the perception of emotion (cited 1 time: 0 self-citations, 1 by others)
In the blink of an eye, people can easily see emotion in another person's face. This fact leads many to assume that emotion perception is given and proceeds independently of conceptual processes such as language. In this paper we suggest otherwise and offer the hypothesis that language functions as a context in emotion perception. We review a variety of evidence consistent with the language-as-context view and then discuss how a linguistically relative approach to emotion perception allows for intriguing and generative questions about the extent to which language shapes the sensory processing involved in seeing emotion in another person's face.

13.
Integrating emotional cues from different senses is critical for adaptive behavior. Much of the evidence on cross-modal perception of emotions has come from studies of vision and audition. This research has shown that an emotion signaled by one sense modulates how the same emotion is perceived in another sense, especially when the input to the latter sense is ambiguous. We tested whether olfaction causes similar sensory modulation of emotion perception. In two experiments, the chemosignal of fearful sweat biased women toward interpreting ambiguous expressions as more fearful, but had no effect when the facial emotion was more discernible. Our findings provide direct behavioral evidence that social chemosignals can communicate emotions and demonstrate that fear-related chemosignals modulate humans' visual emotion perception in an emotion-specific way—an effect that has been hitherto unsuspected.

14.
When an object looks red to an observer, the visual experience of the observer has two important features. The experience visually represents the object as having a property—being red. And the experience has a phenomenological character; that is, there is something that it is like to have an experience of seeing an object as red. Let qualia be the properties that give our sensory and perceptual experiences their phenomenological character. This essay takes up two related problems for a nonreductive account of qualia. Some have argued that on such an account there is no room in a physicalist ontology for qualia. Section 1 shows how qualia might fit into a physicalist ontology. The second problem begins with the observation that there is a gap in scientific accounts of color experience; there is no explanation of why the features of the brain that determine our color experiences give those experiences their phenomenological character. Building on the results of Section 1, Section 2 develops an account of color perception that bridges this gap and shows how qualia give color perception its phenomenological character. To get a grip on the issues involved, the paper begins by considering some aspects of a physicalist account of color.

15.
Significant attention has been paid to Berkeley's account of perception; however, the interpretations of Berkeley's account of perception by suggestion are either incomplete or mistaken. In this paper I begin by examining a common interpretation of suggestion, the ‘Propositional Account’. I argue that the Propositional Account is inadequate and defend an alternative, non-propositional, account. I then address George Pitcher's objection that Berkeley's view of sense perception forces him to adopt a ‘non-conciliatory’ attitude towards common sense. I argue that Pitcher's charge is no longer plausible once we recognize that Berkeley endorses the non-propositional sense of mediate perception. I close by urging that the non-propositional interpretation of Berkeley's account of mediate perception affords a greater appreciation of Berkeley's attempt to bring a philosophical account of sense perception in line with some key principles of common sense. While Berkeley's account of perception and physical objects permits physical objects to be immediately perceived by some of the senses, they are, most often, mediately perceived. But for Berkeley this is not a challenge to common sense since common sense requires only that we perceive objects by our senses and that they are, more or less, as we perceive them. Mediate perception by suggestion is, for Berkeley, as genuine a form of perception as immediate perception, and both are compatible with Berkeley's understanding of the demands of common sense.

16.
Crossmodal correspondences are a feature of human perception in which two or more sensory dimensions are linked together; for example, high-pitched noises may be more readily linked with small than with large objects. However, no study has yet systematically examined the interaction between different visual–auditory crossmodal correspondences. We investigated how the visual dimensions of luminance, saturation, size, and vertical position can influence decisions when matching particular visual stimuli with high-pitched or low-pitched auditory stimuli. For multidimensional stimuli, we found a general pattern of summation of the individual crossmodal correspondences, with some exceptions that may be explained by Garner interference. These findings have applications for the design of sensory substitution systems, which convert information from one sensory modality to another.

17.
Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual perception can directly determine how the information is going to be selected, consolidated, and maintained in VWM. We demonstrate the validity of this hypothesis by investigating what kinds of perceptual information can be stored as integrated objects in VWM. Three criteria for object-based storage are introduced: (a) automatic selection of task-irrelevant features, (b) synchronous consolidation of multiple features, and (c) stable maintenance of feature conjunctions. The results show that the outputs of parallel perception meet all three criteria, as opposed to the outputs of serial attentive processing, which fail all three criteria. These results indicate that (a) perception and VWM are not two sequential processes, but are dynamically intertwined; (b) there are dissociated mechanisms in VWM for storing information identified at different stages of perception; and (c) the integrated object representations in VWM originate from the "preattentive" or "proto" objects created by parallel perception. These results suggest how visual perception, attention, and VWM can be explained by a unified framework.

18.
How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory features that optimally explain the unisensory features arising in individual sensory modalities. The model qualitatively accounts for several important aspects of multisensory perception: (a) it integrates information from multiple sensory sources in such a way that it leads to superior performances in, for example, categorization tasks; (b) its performances suggest that multisensory training leads to better learning than unisensory training, even when testing is conducted in unisensory conditions; (c) its multisensory representations are modality invariant; and (d) it predicts "missing" sensory representations in modalities when the input to those modalities is absent. Our rational analysis indicates that all of these aspects emerge as part of the optimal solution to the problem of learning to represent complex multisensory environments.
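A heavily simplified version of this idea can be sketched in code. The authors' model is Bayesian nonparametric; the Python toy below instead fixes the latent dimensionality and recovers shared multisensory features from two simulated modalities with an SVD, then reconstructs a "missing" modality from the other, loosely illustrating points (c) and (d). The generative setup and all names are assumptions for this sketch, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: two unisensory views generated from shared latent features.
n_items, k = 200, 3
Z = rng.normal(size=(n_items, k))                 # true multisensory features
A_vis = rng.normal(size=(k, 10))                  # visual loading matrix
A_aud = rng.normal(size=(k, 8))                   # auditory loading matrix
X_vis = Z @ A_vis + 0.1 * rng.normal(size=(n_items, 10))
X_aud = Z @ A_aud + 0.1 * rng.normal(size=(n_items, 8))

# Learn a shared latent space from the concatenated modalities via SVD
# (a fixed-dimension stand-in for the paper's nonparametric inference).
X = np.hstack([X_vis, X_aud])
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z_hat = U[:, :k] * S[:k]                          # inferred latent features
W_vis, W_aud = Vt[:k, :10], Vt[:k, 10:]           # per-modality loadings

# Point (d): with visual input absent, infer the latents from audition
# alone and predict the "missing" visual representation.
Z_from_aud = X_aud @ np.linalg.pinv(W_aud)
X_vis_pred = Z_from_aud @ W_vis
rel_err = np.mean((X_vis_pred - X_vis) ** 2) / np.mean(X_vis ** 2)
print(f"relative error reconstructing vision from audition: {rel_err:.3f}")
```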

19.
A model of social perception is presented and tested. The model is based on cognitive neuroscience models and proposes that the right cerebral hemisphere is more efficient at processing combinations of features, whereas the left hemisphere is superior at identifying single features. These processes are hypothesized to produce person-based and group-based representations, respectively. Individuating or personalizing experience with an outgroup member was expected to facilitate the perception of the individuating features and inhibit the perception of the group features. In the study presented here, participants were asked to learn about various ingroup and outgroup targets. Later, participants' categorization responses to old targets were slower in the left hemisphere than in the right, particularly for outgroup members, as predicted. These findings are discussed for their relevance to models of social perception and stereotyping.

20.