Similar documents
20 similar documents retrieved
1.
Our motor and perceptual representations of actions seem to be intimately linked and the human mirror neuron system (MNS) has been proposed as the mediator. In two experiments, we presented biological or non-biological movement stimuli that were either congruent or incongruent to a required response prompted by a tone. When the tone occurred with the onset of the last movement in a series, i.e., it was perceived during the movement presentation, congruent biological stimuli resulted in faster reaction times than congruent non-biological stimuli. The opposite was observed for incongruent stimuli. When the tone was presented after visual movement stimulation, however, no such interaction was present. This implies that biological movement stimuli only affect motor behaviour during visual processing but not thereafter. These data suggest that the MNS is an "online" system; longstanding repetitive visual stimulation (Experiment 1) has no benefit in comparison to only one or two repetitions (Experiment 2).

2.
The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data were obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by N=56 participants (Study 1). This multimodal stimulus set was employed in two subsequent crossmodal priming experiments that used semantically congruent and incongruent stimulus pairs in an S1-S2 paradigm. Task-relevant targets were either auditory (Study 2) or visual stimuli (Study 3). The behavioral data of both experiments showed a crossmodal priming effect with shorter reaction times for congruent as compared to incongruent stimulus pairs. The observed facilitation effect suggests that object identification in one modality is influenced by input from another modality. This result implies that congruent visual and auditory stimulus pairs were perceived as the same object and provides a first validation of the multimodal stimulus set.

3.
It has been suggested that representing an action through observation and imagery share neural processes with action execution. In support of this view, motor-priming research has shown that observing an action can influence action initiation. However, there is little motor-priming research showing that imagining an action can modulate action initiation. The current study examined whether action imagery could prime subsequent execution of a reach and grasp action. Across two motion analysis tracking experiments, 40 participants grasped an object following congruent or incongruent action imagery. In Experiment 1, movement initiation was faster following congruent compared to incongruent imagery, demonstrating that imagery can prime the initiation of grasping. In Experiment 2, incongruent imagery resulted in slower movement initiation compared to a no-imagery control. These data show that imagining a different action to that which is performed can interfere with action production. We propose that the most likely neural correlates of this interference effect are brain regions that code imagined and executed actions. Further, we outline a plausible mechanistic account of how priming in these brain regions through imagery could play a role in action cognition.

4.
Priming effects were tested on the planning of the grasping of common objects under full vision during action performance. Healthy participants took part in four experiments which manipulated the nature of the prime (objects, circular block, rectangular bar) and priming context (blocked vs. mixed). Each experiment relied on four priming conditions: (1) congruent orientation, (2) incongruent orientation, (3) neutral prime, and (4) no prime. Priming was observed to have a facilitating effect on visually guided grasping when the object to be grasped was primed by a congruently oriented identical object. This effect was rather independent of the priming context (experimental set-up). Our data suggest an object's functional identity may contribute to the priming effect, as well as its intrinsic (e.g., shape, size) and extrinsic (orientation) visual characteristics. We showed that the planning of visually guided grasping is influenced by prior visual experience, and thus that grasping is not based exclusively on real-time processing of visual information.

5.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly modulated visual dominance. In Experiment 1, visual dominance was markedly weakened when the auditory stimulus was highly salient. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, visual dominance was further reduced but still present. These results support biased-competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and hold a processing advantage during multisensory integration.

6.
Participants respond more quickly to two simultaneously presented target stimuli of two different modalities (redundant targets) than would be predicted from their reaction times to the unimodal targets. To examine the neural correlates of this redundant-target effect, event-related potentials (ERPs) were recorded to auditory, visual, and bimodal standard and target stimuli presented at two locations (left and right of central fixation). Bimodal stimuli were combinations of two standards, two targets, or a standard and a target, presented either from the same or from different locations. Responses generally were faster for bimodal stimuli than for unimodal stimuli and were faster for spatially congruent than for spatially incongruent bimodal events. ERPs to spatially congruent and spatially incongruent bimodal stimuli started to differ over the parietal cortex as early as 160 msec after stimulus onset. The present study suggests that hearing and seeing interact at sensory-processing stages by matching spatial information across modalities.
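The redundancy gain described in this abstract is conventionally tested against a race-model prediction. The abstract does not name the test used; a standard formalization is Miller's race-model inequality, which requires the bimodal reaction-time distribution to satisfy F_AV(t) ≤ F_A(t) + F_V(t) at every t. A minimal sketch of that check, with hypothetical (simulated) reaction-time data:

```python
import numpy as np

def race_model_violation(rt_audio, rt_visual, rt_bimodal, t_grid):
    """Maximum violation of Miller's race-model inequality,
    F_AV(t) <= F_A(t) + F_V(t).
    A positive value means bimodal responses are faster than any
    race (probability-summation) model allows."""
    def ecdf(samples, t):
        # Empirical CDF evaluated on the grid t.
        samples = np.sort(samples)
        return np.searchsorted(samples, t, side="right") / len(samples)
    f_a = ecdf(rt_audio, t_grid)
    f_v = ecdf(rt_visual, t_grid)
    f_av = ecdf(rt_bimodal, t_grid)
    return np.max(f_av - (f_a + f_v))

# Hypothetical reaction times in ms (illustrative values only):
# bimodal responses are made faster than either unimodal condition.
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 500)
rt_v = rng.normal(340, 40, 500)
rt_av = rng.normal(260, 30, 500)  # redundancy gain
grid = np.linspace(150, 500, 100)
print(race_model_violation(rt_a, rt_v, rt_av, grid) > 0)  # → True
```

With a true violation, no race between independent unimodal processes can explain the speed-up, which is why the effect is attributed to multisensory coactivation.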

7.
This study tested the influence of orientation priming on grasping decisions. Two groups of 20 healthy participants had to select a preferred grasping orientation (horizontal, vertical) based on drawings of everyday objects, geometric blocks or object names. Three priming conditions were used: congruent, incongruent and neutral. The facilitating effects of priming were observed in the grasping decision task for drawings of objects and blocks but not object names. The visual information about congruent orientation in the prime quickened participants’ responses but had no effect on response accuracy. The results are discussed in the context of the hypothesis that an object automatically potentiates grasping associated with it, and that the on-line visual information is necessary for grasping potentiation to occur. The possibility that the most frequent orientation of familiar objects might be included in object-action representation is also discussed.

8.
Participants performed a priming task during which emotional faces served as prime stimuli and emotional words served as targets. Prime-target pairs were congruent or incongruent, and two levels of prime visibility were obtained by varying the duration of the masked primes. To probe a neural signature of the impact of the masked primes, lateralized readiness potentials (LRPs) were recorded over motor cortex. In the high-visibility condition, responses to word targets were faster when the prime-target pairs were congruent than when they were incongruent, providing evidence of priming effects. In line with the behavioral results, the electrophysiological data showed that high-visibility face primes resulted in LRP differences between congruent and incongruent trials, suggesting that prime stimuli initiated motor preparation. Contrary to the above pattern, no evidence for reaction time or LRP differences was observed in the low-visibility condition, revealing that the depth of facial expression processing is dependent on stimulus visibility.

9.
Previous research shows that simultaneously executed grasp and vocalization responses are faster when the precision grip is performed with the vowel [i] and the power grip is performed with the vowel [ɑ]. Research also shows that observing an object that is graspable with a precision or power grip can activate the grip congruent with the object. Given the connection between vowel articulation and grasping, this study explores whether grasp-related size of observed objects can influence not only grasp responses but also vowel pronunciation. The participants had to categorize small and large objects into natural and manufactured categories by pronouncing the vowel [i] or [ɑ]. As predicted, [i] was produced faster when the object's grasp-related size was congruent with the precision grip while [ɑ] was produced faster when the size was congruent with the power grip (Experiment 1). The effect was not, however, observed when the participants were presented with large objects that are not typically grasped by the power grip (Experiment 2). This study demonstrates that vowel production is systematically influenced by grasp-related size of a viewed object, supporting the account that sensory-motor processes related to grasp planning and representing grasp-related properties of viewed objects interact with articulation processes. The paper discusses these findings in the context of size-sound symbolism, suggesting that mechanisms that transform size-grasp affordances into corresponding grasp- and articulation-related motor programs might provide a neural basis for size-sound phenomena that links small objects with closed-front vowels and large objects with open-back vowels.

10.
Differential effects of cast shadows on perception and action
Bonfiglioli C, Pavani F, Castiello U. Perception, 2004, 33(11): 1291-1304
In two experiments we investigated the effects of cast shadows on different real-life tasks. In experiment 1, participants were required to make a speeded verbal identification of the target object (perceptual task), whereas in experiment 2 participants were required to reach for and grasp the target object (motor task). In both experiments real three-dimensional (3-D) objects were presented, one at a time, either with their own natural cast shadow (congruent condition) or with the cast shadow of a different object (incongruent condition). Shadows were cast either to the left or to the right of the object. We asked whether the features of the shadow (i.e., whether it is congruent or incongruent with the object, and whether it is cast to the left or to the right of the object) could influence perception and action differently. Results showed that cast shadows did not influence identification of real 3-D objects (experiment 1), but they affected movement kinematics, producing distractor-like interference, particularly on movement trajectory (experiment 2). These findings suggest a task-dependent influence of cast shadows on human performance. In the case of object-oriented actions, cast shadows may represent further affordances of the object, and as such compete for the control of the action.

11.
Multisensory integration can play a critical role in producing unified and reliable perceptual experience. When sensory information in one modality is degraded or ambiguous, information from other senses can crossmodally resolve perceptual ambiguities. Prior research suggests that auditory information can disambiguate the contents of visual awareness by facilitating perception of intermodally consistent stimuli. However, it is unclear whether these effects are truly due to crossmodal facilitation or are mediated by voluntary selective attention to audiovisually congruent stimuli. Here, we demonstrate that sounds can bias competition in binocular rivalry toward audiovisually congruent percepts, even when participants have no recognition of the congruency. When speech sounds were presented in synchrony with speech-like deformations of rivalling ellipses, ellipses with crossmodally congruent deformations were perceptually dominant over those with incongruent deformations. This effect was observed in participants who could not identify the crossmodal congruency in an open-ended interview (Experiment 1) or detect it in a simple 2AFC task (Experiment 2), suggesting that the effect was not due to voluntary selective attention or response bias. These results suggest that sound can automatically disambiguate the contents of visual awareness by facilitating perception of audiovisually congruent stimuli.

12.
The gestures that accompany speech are more than just arbitrary hand movements or communicative devices. They are simulated actions that can both prime and facilitate speech and cognition. This study measured participants’ reaction times for naming degraded images of objects while simultaneously adopting a gesture that was congruent with the target object, adopting one that was incongruent with it, or making no hand gesture. A within-subjects design was used, with participants (N = 122) naming 10 objects under each condition. Participants named the objects significantly faster when adopting a congruent gesture than when not gesturing at all. Adopting an incongruent gesture resulted in significantly slower naming times. The findings are discussed in the context of the intrapersonal cognitive and facilitatory effects of gestures and underline the relatedness between language, action, and cognition.

13.
The author investigated the conditions under which a congruent or incongruent orientation word affects processing of the orientation of visual objects. Participants named the orientation of a rectangle that partially occluded another rectangle. Congruent or incongruent orientation words appeared in the relevant object, in the irrelevant object, or in the background. There were two main results. First, congruent orientation words produced faster orientation-naming responses than incongruent orientation words. This finding constituted a new Stroop effect for spatial orientation. Second, only words in the relevant (i.e., attended) object produced Stroop effects, whereas words outside the relevant object had no significant effects. This object-dependent modulation of Stroop effects resembled previous findings with color-naming tasks, and hence indicated that these modulations are not restricted to a particular type of task. In summary, results suggested that object-based attention plays an important role in processing of irrelevant words.
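Across the Stroop-type studies in this listing, the dependent measure is the congruency (Stroop) effect: the mean reaction time on incongruent trials minus the mean on congruent trials. A minimal sketch of that computation, with hypothetical data (values illustrative, not taken from any of the studies above):

```python
import numpy as np

# Hypothetical per-trial naming latencies in ms (illustrative only).
rt_congruent = np.array([512, 498, 530, 505, 520], dtype=float)
rt_incongruent = np.array([560, 548, 575, 552, 566], dtype=float)

# Positive values indicate interference from the incongruent word.
stroop_effect = rt_incongruent.mean() - rt_congruent.mean()
print(f"Stroop effect: {stroop_effect:.1f} ms")  # → Stroop effect: 47.2 ms
```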

14.
Eye gaze conveys rich information concerning the states of mind of others, playing a critical role in social interactions, signaling internal states, and guiding others’ attention. On the basis of its social significance, some researchers have proposed that eye gaze may represent a unique attentional stimulus. However, contrary to this notion, the majority of the literature has shown indistinguishable attentional effects when eye gaze and arrows have been used as cues. Taking a different approach, in this study we aimed at finding qualitative attentional differences between gazes and arrows when they were used as targets instead of as cues. We used a spatial Stroop task, in which participants were required to identify the direction of eyes or arrows presented to the left or the right of a fixation point. The results showed that the two types of stimuli led to opposite spatial interference effects, with arrows producing faster reaction times when the stimulus direction was congruent with the stimulus position (a typical spatial Stroop effect), and eye gaze producing faster reaction times when it was incongruent (a “reversed” spatial Stroop effect). This reversed Stroop is interpreted as an eye-contact effect, therefore revealing the unique nature of eyes as special social-attention stimuli.

15.
According to the ideomotor principle, action preparation involves the activation of associations between actions and their effects. However, there is only sparse research on the role of action effects in saccade control. Here, participants responded to lateralized auditory stimuli with spatially compatible saccades toward peripheral targets (e.g., a rhombus in the left hemifield and a square in the right hemifield). Prior to the imperative auditory stimulus (e.g., a left tone), an irrelevant central visual stimulus was presented that was congruent (e.g., a rhombus), incongruent (e.g., a square), or unrelated (e.g., a circle) to the peripheral saccade target (i.e., the visual effect of the saccade). Saccade targets were present throughout a trial (Experiment 1) or appeared after saccade initiation (Experiment 2). Results showed shorter response times and fewer errors in congruent (vs. incongruent) conditions, suggesting that associations between oculomotor actions and their visual effects play an important role in saccade control.

16.
The authors investigated the impact of different motor demands on space- and object-based attention allocation. Responses to targets were either lifting a finger, or pointing to the target, or grasping a clay object placed on the target location. Reaction times and movement times were recorded to assess covert and overt attention, respectively. Both reaction times and movement times showed more space-based attention for pointing than for finger lifting and more object-based attention for grasping than for pointing. That result supports the view that visual selectivity is tuned to specific motor intentions (H. Bekkering & F. W. Neggers, 2002) and illustrates the tight coupling of perception and action.

17.
The "pip-and-pop effect" refers to the facilitation of search for a visual target (a horizontal or vertical bar whose color changes frequently) among multiple visual distractors (tilted bars also changing color unpredictably) by the presentation of a spatially uninformative auditory cue synchronized with the color change of the visual target. In the present study, the visual stimuli in the search display changed brightness instead of color, and the crossmodal congruency between the pitch of the auditory cue and the brightness of the visual target was manipulated. When cue presence and cue congruency were randomly varied between trials (Experiment 1), both congruent cues (low-frequency tones synchronized with dark target states or high-frequency tones synchronized with bright target states) and incongruent cues (the reversed mapping) facilitated visual search performance equally, relative to a no-cue baseline condition. However, when cue congruency was blocked and the participants were informed about the pitch-brightness mapping in the cue-present blocks (Experiment 2), performance was significantly enhanced when the cue and target were crossmodally congruent as compared to when they were incongruent. These results therefore suggest that the crossmodal congruency between auditory pitch and visual brightness can influence performance in the pip-and-pop task by means of top-down facilitation.

18.
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need a longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

19.
When we recognize an object, do we automatically know how big it is in the world? We employed a Stroop-like paradigm, in which two familiar objects were presented at different visual sizes on the screen. Observers were faster to indicate which was bigger or smaller on the screen when the real-world size of the objects was congruent with the visual size than when it was incongruent, demonstrating a familiar-size Stroop effect. Critically, the real-world size of the objects was irrelevant for the task. This Stroop effect was also present when only one item was present at a congruent or incongruent visual size on the display. In contrast, no Stroop effect was observed for participants who simply learned a rule to categorize novel objects as big or small. These results show that people access the familiar size of objects without the intention of doing so, demonstrating that real-world size is an automatic property of object representation.

20.
In two experiments, we examined the spatial integration of two viewpoints onto dynamic scenes. We tested the spatial-alignment hypothesis (which predicts integration by alignment along the shortest path) against the spatial-heuristic hypothesis (which predicts integration by observation of the left-right orientation on the screen). The stimuli consisted of film clips comprising two shots, each showing a car driving by. In Experiment 1, the dynamic scenes were ambiguous with regard to their interpretation: The cars could have been driving either in the same or in opposite directions. In line with the spatial-heuristic hypothesis, participants responded with "same direction" if the cars shared a screen direction. In Experiment 2, environmental cues disambiguated the dynamic scenes, and the screen direction of the cars was either congruent or incongruent with the depicted environmental cues. As compared with congruent dynamic scenes, incongruent dynamic scenes elicited prolonged reaction times, thus suggesting that heuristic spatial updating was used with congruent stimuli, whereas spatial-alignment processes were used with incongruent stimuli.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号