Similar articles
 20 similar records found.
1.
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners’ ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
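As background for the transitional probabilities tracked in this study, the sketch below shows how such probabilities can be estimated from a familiarization sequence. It is a minimal, hypothetical Python illustration (the element labels and the toy stream are invented here), not the authors' materials or analysis.

```python
from collections import Counter, defaultdict

def transitional_probabilities(sequence):
    """Estimate P(next element | current element) from adjacent pairs.

    A transitional probability is the conditional probability that element B
    follows element A, estimated as count(A followed by B) / count(A followed by anything).
    """
    pair_counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        pair_counts[current][nxt] += 1
    return {
        current: {nxt: n / sum(followers.values()) for nxt, n in followers.items()}
        for current, followers in pair_counts.items()
    }

# Hypothetical familiarization stream built from three "triplets" (ABC, DEF, GHI):
# transitions inside a triplet are deterministic, transitions across triplet
# boundaries are not, and that difference is what a statistical learner can track.
stream = list("ABCDEFGHIABCGHIDEFABCDEFGHI")
probs = transitional_probabilities(stream)
print(probs["A"]["B"])  # within-triplet transition -> 1.0
print(probs["C"])       # boundary transitions -> probability split between D and G
```

In a structured stream of this kind, within-unit transitions approach 1.0 while transitions across unit boundaries are markedly lower; it is this contrast that learners in such studies are assumed to exploit.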

2.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and report new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific, and that cross-modal recognition is relatively efficient when object views are matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first sight, this seems to contrast with the findings of Newell et al. (2001). However, a detailed video analysis of visual and haptic exploration behaviour during learning and recognition revealed that one view of each object was explored far more than all others; active visual and haptic exploration is thus not balanced across object views. The cross-modal cost for actively explored objects can be attributed to the fact that this predominantly learned view was not matched between learning and test in the cross-modal conditions. It seems that participants naturally adopt an exploration strategy during visual and haptic object learning that constrains the orientation of the objects: although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities.

3.
Learning to perceive differences in solid shape through vision and touch
A single experiment was designed to investigate perceptual learning and the discrimination of 3-D object shape. Ninety-six observers were presented with naturally shaped solid objects either visually, haptically, or across the modalities of vision and touch. The observers' task was to judge whether the two sequentially presented objects on any given trial possessed the same or different 3-D shapes. The results revealed that significant perceptual learning occurred in all modality conditions, both unimodal and cross-modal. The amount of perceptual learning, as indexed by increases in hit rate and d', was similar across the modality conditions. Hit rates were highest in the unimodal conditions and lowest in the cross-modal conditions. Lengthening the inter-stimulus interval from 3 to 15 s led to increases in hit rates and decreases in response bias. The results also revealed an asymmetry between two otherwise equivalent cross-modal conditions: perceptual sensitivity was higher in the vision-to-haptics condition than in the haptics-to-vision condition. In general, the results indicate that effective cross-modal shape comparisons can be made between vision and active touch, but that information transfer is incomplete.
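The abstract above indexes learning by hit rate and d'. For readers unfamiliar with signal detection theory, d' is the difference between the z-transformed hit and false-alarm rates, and the criterion c indexes response bias. The sketch below is a minimal, hypothetical illustration of that computation (the trial counts are invented); it is not the authors' analysis code.

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    and criterion c = -(z(hit rate) + z(false-alarm rate)) / 2.

    A log-linear correction (add 0.5 to each count, 1 to each trial total)
    keeps the z-transform finite when a rate would otherwise be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    c = -(norm.ppf(hit_rate) + norm.ppf(fa_rate)) / 2
    return d, c

# Hypothetical session of 40 "same" and 40 "different" trials:
d, c = dprime(hits=30, misses=10, false_alarms=12, correct_rejections=28)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```

On this convention, an increase in d' across sessions reflects a gain in perceptual sensitivity, whereas a change in c reflects a shift in response bias, which is how sensitivity and bias effects of the kind reported above are kept separate.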

4.
If the mere exposure effect is based on implicit memory, recognition and affect judgments should be dissociated by experimental variables in the same manner as other explicit and implicit measures. Consistent with results from recognition and picture naming or object decision priming tasks (e.g., Biederman & E. E. Cooper, 1991, 1992; L. A. Cooper, Schacter, Ballesteros, & Moore, 1992), the present research showed that recognition memory but not affective preference was impaired by reflection or size transformations of three-dimensional objects between study and test. Stimulus color transformations had no effect on either measure. These results indicate that representations that support recognition memory code spatial information about an object’s left-right orientation and size, whereas representations that underlie affective preference do not. Insensitivity to surface feature changes that do not alter object form appears to be a general characteristic of implicit memory measures, including the affective preference task.

5.
Explicit memory tests such as recognition typically access semantic, modality-independent representations, whereas perceptual implicit memory tests typically access presemantic, modality-specific representations. By demonstrating comparable cross-modal and within-modal priming between vision and haptics with verbal materials (Easton, Srinivas, & Greene, 1997), we recently questioned whether the representations underlying perceptual implicit tests are modality specific. Unlike the case of vision and audition, verbal information can be presented to both vision and haptics in geometric terms. The present experiments extend this line of research by assessing implicit and explicit memory within and between vision and haptics in the nonverbal domain, using both 2-D patterns and 3-D objects. The implicit test results revealed robust cross-modal priming for both 2-D patterns and 3-D objects, indicating that vision and haptics share abstract representations of object shape and structure. The explicit test results for 3-D objects revealed modality specificity, indicating that the recognition system keeps track of the modality through which an object is experienced.

6.
Previous research has shown that visual perception is affected by sensory information from other modalities. For example, sound can alter the perceived intensity or number of visual objects. However, when touch and vision are combined, vision normally dominates, a phenomenon known as visual capture. Here we report a cross-modal interaction between active touch and vision: the perceived number of brief visual events (flashes) is affected by the number of concurrently performed finger movements (keypresses). This sensorimotor illusion occurred despite little ambiguity in the visual stimuli themselves, and it depended on close temporal proximity between movement execution and vision.

7.
Martino G, Marks LE (2000). Perception, 29(6), 745-754.
At each moment, we experience a melange of information arriving at several senses, and often we focus on inputs from one modality and 'reject' inputs from another. Does input from a rejected sensory modality modulate one's ability to make decisions about information from a selected one? When the modalities are vision and hearing, the answer is "yes", suggesting that vision and hearing interact. In the present study, we asked whether similar interactions characterize vision and touch. As with vision and hearing, results obtained in a selective attention task show cross-modal interactions between vision and touch that depend on the synesthetic relationship between the stimulus combinations. These results imply that similar mechanisms may govern cross-modal interactions across sensory modalities.

8.
A new theory of mind–body interaction in healing is proposed, based on considerations from the field of perception. It is suggested that the combined effect of visual imagery and mindful meditation on physical healing is simply another example of cross-modal adaptation in perception, much like adaptation to prism-displaced vision. It is argued that psychological interventions produce a conflict between the perceptual modalities of the immune system and vision (or touch), which leads to a change in the immune system that realigns the modalities. On this view, mind–body interactions do not arise from higher-order cognitive thoughts or beliefs influencing the body; they result from ordinary interactions between lower-level perceptual modalities whose function is to detect when a sensory system has made an error. The theory helps explain why certain illnesses may be more amenable to mind–body interaction, such as autoimmune conditions in which a sensory system (the immune system) has made an error. It also makes sense of erroneous changes, such as those brought about by "faith healers," as conflicts between modalities that are resolved in favor of the wrong modality. The present view provides one of very few psychological accounts of how guided imagery and mindfulness meditation bring about positive physical change. Also discussed are issues of self versus non-self, pain, cancer, body schema, attention, consciousness, and, importantly, the concept that the immune system is a perceptual modality in its own right. Recognizing mind–body healing as perceptual cross-modal adaptation implies that a century of cross-modal perception research is applicable to the immune system.

9.
An increase in affective preference for stimuli to which a person has been repeatedly exposed is known as the mere exposure effect. This effect has been shown for stimuli that are processed subliminally, that is, below the threshold of awareness. This study fills a research gap by investigating mere exposure effects under preconscious processing, which arises from high stimulus strength in the absence of top-down amplification. In three experiments (N = 240 in total), preconscious processing was evoked using an inattentional blindness paradigm, which allowed stimuli (nonwords or Chinese symbols) to be processed under complete inattention. Contrary to our hypothesis, we did not find a mere exposure effect in any of the experiments. We extend the current state of knowledge by discussing the distractor devaluation effect and participants' attentional set as possible reasons for the absence of the mere exposure effect, and we outline directions for future investigations.

10.
Cognitive Development (2006), 21(2), 81-92.
Two experiments investigated 5-month-old infants' amodal sensitivity to numerical correspondences between sets of objects presented in the tactile and visual modes. A classical cross-modal transfer task from touch to vision was adopted. Infants were first tactually familiarized with two or three different objects, presented one by one in their right hand. They were then presented with visual displays containing two or three objects, shown successively (Experiment 1) or simultaneously (Experiment 2). In both experiments, infants looked longer at the visual display that contained a different number of objects from the tactile familiarization phase. Taken together, the results show that infants can detect numerical correspondences between a sequence of tactile stimulation and a visual display, and they strengthen the hypothesis of an amodal, abstract representation of small numbers of objects (two or three) across sensory modalities in 5-month-old infants.

11.
Preference for previously seen but unfamiliar objects reflects a memory bias on affective judgment known as the "mere exposure effect" (MEE). Here, we investigated the effects of time, post-exposure sleep, and the cerebral hemisphere engaged on the generalization of preference to objects viewed from different perspectives. When objects were presented in the right visual field (RVF), which promotes preferential processing in the left hemisphere, same and mirrored exemplars were preferred immediately after exposure, and the MEE generalized to much more dissimilar views after three nights of sleep. Conversely, presentation in the left visual field (LVF), which promotes right-hemisphere processing, elicited an MEE for same views immediately after exposure and for mirror views only after sleep. Most importantly, sleep deprivation during the first post-exposure night, even when followed by two recovery nights, extinguished the MEE for all views in the LVF but not in the RVF. Besides demonstrating that post-exposure time and sleep facilitate the generalization process by which we integrate various representations of an object, our results suggest that, mostly in the right hemisphere, sleep may be mandatory to consolidate the memory bias underlying affective preference. These interhemispheric differences tentatively call for a reappraisal of the role of cerebral asymmetries in wake- and sleep-dependent processes of memory consolidation.

12.
The mere exposure effect is defined as an enhanced attitude toward a stimulus that has been repeatedly presented; repetition priming is defined as facilitated processing of a previously presented stimulus. We conducted a direct comparison between the two phenomena to test the assumption that the mere exposure effect is an instance of repetition priming. In two experiments, after studying a set of words or nonwords, participants were given a repetition priming task (perceptual identification) or one of two mere exposure tasks (affective liking or preference judgment). Repetition priming was obtained for both words and nonwords, but only nonwords produced a mere exposure effect. This demonstrates a key boundary condition for observing the mere exposure effect, one not readily accommodated by a perceptual representation systems account (Tulving & Schacter, 1990), which assumes that both phenomena should show some sensitivity to nonwords and words alike.

13.
Experimental evidence is presented supporting Nuttin's (1985, 1987) conclusion that the name letter effect (i.e., a preference for letters occurring in one's own name over letters not in one's own name) is an affective consequence of mere ownership. We argue that 'evaluative conditioning' (e.g., Martin & Levey, 1987) was not fully ruled out by Hoorens (1990) as an alternative explanation for the name letter effect. In the present experiment, we tried to separate evaluative conditioning from ownership induction. An essential requirement for 'mere' ownership postulated by Nuttin (1987) is that preferences for owned versus non-owned objects are measured in the absence of subjects' awareness that the objects belong to the self; this criterion was perhaps not fully satisfied. Even so, our results are more in agreement with the mere ownership view than with an account based solely on evaluative conditioning. The mere ownership effect (i.e., a preference for any object belonging to the self over any similar object belonging to another) is described as disclosing a purely affective self-bias.

14.
Repeated exposure to a nonreinforced stimulus results in an increased preference for that stimulus: the mere exposure effect. The present study repeatedly presented faces with positive, negative, and neutral expressions to 48 participants while they made judgments about the emotional expression. Participants then rated the likeability of novel neutrally expressive faces and of some of the previously presented faces, this time shown with a neutral expression. Faces originally presented as happy were rated as the most likeable, followed by faces originally presented as neutral; negative and novel faces were not rated significantly differently from each other. These findings support the notion that the increase in preference for repeatedly presented stimuli results from a reduction in negative affect, consistent with the modified two-factor uncertainty-reduction model and the classical conditioning model of the mere exposure effect.

15.
Bedford FL (2011). Perception, 40(10), 1265-1267.
The five senses were handed down by Aristotle. I argue that it has taken only two millennia to recognize that the immune system has been the hidden sensory modality. The immune system completes the range of operation, allowing detection of meaningful entities at all distances, from very near to very far. It also satisfies the often implicit criteria for being a sense modality. Finally, cross-modal interactions between the immune system and vision and other sense modalities should be possible, opening up new research directions.

16.
The present research investigates newborn infants' perception of the shape and texture of objects through studies of the bi-directionality of cross-modal transfer between vision and touch. Using an intersensory procedure, four experiments examined newborns' ability to transfer shape and texture information from vision to touch and from touch to vision. The results showed that cross-modal transfer of shape is not bi-directional at birth: newborns visually recognized a shape they had previously held, but they failed to tactually recognize a shape they had previously seen. In contrast, a bi-directional cross-modal transfer of texture was observed. Taken together, the results suggest that newborn infants, like older children and adults, gather information differently in the visual and tactile modes for different object properties. The findings provide evidence for continuity in the development of mechanisms for perceiving object properties.

17.
Two hypotheses were contrasted. One posited a positive relationship between mere repeated exposure and preference for the stimuli concerned; the other predicted that preference for stimuli would be modulated by their relative novelty and complexity. These hypotheses were tested within the context of a naturalistic play situation in which forty 4- to 5-year-old children were repeatedly exposed to, and interacted with, play settings differing in complexity. Overt preference for play objects declined with repeated exposure, the rate of decline being inversely related to the complexity of the play stimuli. Preference for peers, however, increased as a function of repeated exposure, with the amount of increase being an inverse function of the complexity of the external setting.

18.
In this study, we evaluated observers' ability to compare naturally shaped three-dimensional (3-D) objects, using their senses of vision and touch. In one experiment, the observers haptically manipulated 1 object and then indicated which of 12 visible objects possessed the same shape. In the second experiment, pairs of objects were presented, and the observers indicated whether their 3-D shape was the same or different. The 2 objects were presented either unimodally (vision-vision or haptic-haptic) or cross-modally (vision-haptic or haptic-vision). In both experiments, the observers were able to compare 3-D shape across modalities with reasonably high levels of accuracy. In Experiment 1, for example, the observers' matching performance rose to 72% correct (chance performance was 8.3%) after five experimental sessions. In Experiment 2, small (but significant) differences in performance were obtained between the unimodal vision-vision condition and the two cross-modal conditions. Taken together, the results suggest that vision and touch have functionally overlapping, but not necessarily equivalent, representations of 3-D shape.

19.
Mayas J (2009). Acta Psychologica Sinica (心理学报), 41(11), 1063-1074.
Studies of within-modal repetition priming suggest that implicit memory is preserved in older adults, not only in the visual modality but also in other sensory modalities (e.g., touch, audition, and olfaction). However, few studies have examined whether priming tasks are modality specific. Research with young adults has found that cross-modal transfer (vision to touch and touch to vision) is comparable to within-modal transfer (vision to vision, touch to touch). A recent study further explored whether older adults are impaired on cross-modal priming tasks. The results showed that cross-modal priming between vision and touch is preserved, and symmetrical, in both young and older participants. Moreover, within-modal and cross-modal priming for environmental sounds and pictures also remains intact with aging. These behavioral results, together with other recent neuroscience findings, indicate that cross-modal priming occurs in posterior extrastriate (occipital) cortex, a region that is spared in older adults. Future directions in this field include using well-designed cross-modal priming paradigms that span different perceptual modalities and use both familiar and novel stimuli, combining behavioral and neuroimaging methods, to study healthy older adults and patients with Alzheimer's disease, and incorporating well-designed priming tasks into programs aimed at improving memory function in older adults.

20.
Hulme C, Smart A, Moran G, Raine A (1983). Perception, 12(4), 477-483.
The ability of children between the ages of 5 and 10 years to match the length of lines within and between the modalities of vision and kinaesthesis was studied. No evidence was found for specific increases in cross-modal skill which could not be explained in terms of within-modal development. Performance in the perceptual task was related to measures of developing motor skill in the children. Substantial relationships were found between performance on the within-modal tasks and motor skill, but no significant relationships were found between cross-modal measures and motor skill development. It is concluded that the development of cross-modal integration is not a major determinant of motor skill development.
