Similar Articles
20 similar articles found (search time: 62 ms)
1.
According to the ideomotor principle, action preparation involves the activation of associations between actions and their effects. However, there is only sparse research on the role of action effects in saccade control. Here, participants responded to lateralized auditory stimuli with spatially compatible saccades toward peripheral targets (e.g., a rhombus in the left hemifield and a square in the right hemifield). Prior to the imperative auditory stimulus (e.g., a left tone), an irrelevant central visual stimulus was presented that was congruent (e.g., a rhombus), incongruent (e.g., a square), or unrelated (e.g., a circle) to the peripheral saccade target (i.e., the visual effect of the saccade). Saccade targets were present throughout a trial (Experiment 1) or appeared after saccade initiation (Experiment 2). Results showed shorter response times and fewer errors in congruent (vs. incongruent) conditions, suggesting that associations between oculomotor actions and their visual effects play an important role in saccade control.

2.
In this article, we ask what serves as the “glue” that temporarily links information to form an event in an active observer. We examined whether forming a single action event in an active observer is contingent on the temporal presentation of the stimuli (hence, on the temporal availability of the action information associated with these stimuli), or on the learned temporal execution of the actions associated with the stimuli, or on both. A partial-repetition paradigm was used to assess the boundaries of an event for which the temporal properties of the stimuli (i.e., presented either simultaneously or temporally separate) and the intended execution of the actions associated with these stimuli (i.e., executed as one, temporally integrated, response or as two temporally separate responses) were manipulated. The results showed that the temporal features of action execution determined whether one or more events were constructed; the temporal presentation of the stimuli (and hence the availability of their associated actions) did not. This suggests that the action representation, or “task goal,” served as the “glue” in forming an event in an active observer. These findings emphasize the importance of action planning in event construction in an active observer.

3.
It has been shown that, when observing an action, infants can rely on either outcome selection information (i.e., actions that express a choice between potential outcomes) or means selection information (i.e., actions that are causally efficient toward the outcome) in their goal attribution. However, no research has investigated the relationship between these two types of information when they are present simultaneously. In an experiment that addressed this question directly, we found that when outcome selection information could disambiguate the goal of the action (e.g., the action is directed toward one of two potential targets), but means selection information could not (i.e., the action is not efficiently adjusted to the situational constraints), 7- and 9-month-old infants did not attribute a goal to an observed action. This finding suggests that means selection information takes primacy over outcome selection information. The early presence of this bias sheds light on the nature of the notion of goal in action understanding.

4.
The goal-directed theory of imitation (GOADI) states that copying of action outcomes (e.g., turning a light switch) takes priority over imitation of the means by which those outcomes are achieved (e.g., choice of effector or grip). The object < effector < grip error pattern in the pen-and-cups task provides strong support for GOADI. Experiment 1 replicated this effect using video stimuli. Experiment 2 showed that shifting the color cue from objects to effectors makes imitation of effector selection more accurate than imitation of object and grip selection. Experiment 3 replicated this result when participants were required to describe actions. Experiment 4 indicated that, when participants are imitating and describing actions, enhancing grip discriminability makes grip selection the most accurately executed component of the task. Consistent with theories that hypothesize that imitation relies on task-general mechanisms (e.g., the associative sequence learning model, ideomotor theory), these findings suggest that imitation is no more or less goal directed than other tasks involving action observation.

5.
The implications of an ideomotor approach to action control were investigated. In Experiment 1, participants made manual responses to letter stimuli and were presented with response-contingent color patches, i.e., colored action effects. As a result, stimuli sharing the color of an action's effect became effective primes of that action, suggesting that bilateral associations had been created between actions and the effects they produced. Experiment 2 combined this setup with a manual Stroop task, i.e., participants responded to congruent, neutral, or incongruent color-word compounds. Standard Stroop effects were observed in a control group without action effects and in a group with target-incompatible action effects, but the reaction time Stroop effect was eliminated if actions produced target-compatible color effects (e.g., blue word → left key → blue patch). Experiment 3 did not replicate this interaction between target-effect compatibility and color-word congruency with color words as action effects, which rules out semantically based accounts. Theoretical implications for both action-effect acquisition and the Stroop effect are discussed. It is suggested that learning action effects whose features overlap with the target allows and motivates people to recode their actions in ways that make them more stimulus-compatible. This provides a processing shortcut for translating the relevant stimulus into the correct response and, thus, shields processing from the impact of competing word distractors.

6.
Two experiments investigated priming in free association, a conceptual implicit memory task. The stimuli consisted of bidirectionally associated word pairs (e.g., BEACH-SAND) and unidirectionally associated word pairs that have no association from the target response back to the stimulus cue (e.g., BONE-DOG). In the study phase, target words (e.g., SAND, DOG) were presented in an incidental learning task. In the test phase, participants generated an associate to the stimulus cues (e.g., BEACH, BONE). In both experiments, priming was obtained for targets (e.g., SAND) that had an association back to the cue, but not for targets (e.g., DOG) for which such a backward association was absent. These results are problematic for theoretical accounts that attribute priming in free association to the strengthening of target responses. It is argued that priming in free association depends on the strengthening of cue-target associations.

7.
The Simon effect denotes faster responses when the task-irrelevant stimulus position corresponds to the response position than when it does not. A common explanation is that a spatial stimulus code is formed automatically and activates a spatially corresponding response code. Previous research on stimulus–response (S–R) compatibility has focused on the ability to initiate movements to stimulus onsets. The present study investigates spatial-compatibility effects (i.e., the Simon effect) in the ability to initiate and to terminate actions both to stimulus onsets and to stimulus offsets. There were four major results. Firstly, offset stimuli produced normal Simon effects, suggesting that stimulus offsets can automatically produce spatial codes. Secondly, onset stimuli produced larger Simon effects than offset stimuli, which is consistent with the attention-shift account of spatial coding. Thirdly, Simon effects were also observed in action termination. Fourthly, Simon effects in action initiation and in action termination were of similar size.

8.
Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear what stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effect of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest conjunction learning involving such stimuli might be an emergent phenomenon that reflects multiple different learning processes, each of which capitalizes on different types of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target and distractor stimuli, they also learn stimulus identity mappings that contribute to performance improvements.

9.
Behavioral research has shown that infants use both behavioral cues and verbal cues when processing the goals of others’ actions. For instance, 18-month-olds selectively imitate an observed goal-directed action depending on its (in)congruence with a model’s previous verbal announcement of a desired action goal. This EEG-study analyzed the electrophysiological underpinnings of these behavioral findings on the two functional levels of conceptual action processing and motor activation. Mid-latency mean negative ERP amplitude and mu-frequency band power were analyzed while 18-month-olds (N = 38) watched videos of an adult who performed one out of two potential actions on a novel object. In a within-subjects design, the action demonstration was preceded by either a congruent or an incongruent verbally announced action goal (e.g., “up” or “down” and upward movement). Overall, ERP negativity did not differ between conditions, but a closer inspection revealed that in two subgroups, about half of the infants showed a broadly distributed increased mid-latency ERP negativity (indicating enhanced conceptual action processing) for either the congruent or the incongruent stimuli, respectively. As expected, mu power at sensorimotor sites was reduced (indicating enhanced motor activation) for congruent relative to incongruent stimuli in the entire sample. Both EEG correlates were related to infants’ language skills. Hence, 18-month-olds integrate action-goal-related verbal cues into their processing of others’ actions, at the functional levels of both conceptual processing and motor activation. Further, cue integration when inferring others’ action goals is related to infants’ language proficiency.

10.
Facial emotional expressions can serve both as emotional stimuli and as communicative signals. The research reported here was conducted to illustrate how responses to both roles of facial emotional expressions unfold over time. As an emotion elicitor, a facial emotional expression (e.g., a disgusted face) activates a response that is similar to responses to other emotional stimuli of the same valence (e.g., a dirty, nonflushed toilet). As an emotion messenger, the same facial expression (e.g., a disgusted face) serves as a communicative signal by also activating the knowledge that the sender is experiencing a specific emotion (e.g., the sender feels disgusted). By varying the duration of exposure to disgusted, fearful, angry, and neutral faces in two subliminal-priming studies, we demonstrated that responses to faces as emotion elicitors occur prior to responses to faces as emotion messengers, and that both types of responses may unfold unconsciously.

11.
We examined the claim that the autobiographical Implicit Association Test (aIAT) can detect concealed memories. Subjects read action statements (e.g., “break the toothpick”) and either performed the action or completed math problems. They then imagined some of these actions and some new actions. Two weeks later, the subjects completed a memory test and then an aIAT in which they categorized true and false statements (e.g., “I am in front of the computer”) and whether they had or had not performed actions from Session 1. For half of the subjects, the nonperformed statements were actions that they saw but did not perform; for the remaining subjects, these statements were actions that they saw and imagined but did not perform. Our results showed that the aIAT can distinguish between true autobiographical events (performed actions) and false events (nonperformed actions), but that it is less effective, the more that subjects remember performing actions that they did not really perform. Thus, the diagnosticity of the aIAT may be limited.

12.
Three studies explored how infants parse a stream of motion into distinct actions. Results show that infants (a) can perceptually discriminate different actions performed by a puppet and (b) can individuate and enumerate heterogeneous sequences of such actions (e.g., jump-fall-jump) when the actions are separated by brief motionless pauses, but (c) are not able to individuate such actions when embedded within a continuous stream of motion. Combined with previous research showing that infants can individuate homogeneous actions from an ongoing stream of motion, these findings suggest that infants can use repeating patterns of motion in the perceptual input to define action boundaries. Results have implications as well for infants' conceptual structure for actions.

13.
This review article provides a summary of the findings from empirical studies that investigated recognition of an action's agent by using music and/or other auditory information. Embodied cognition accounts ground higher cognitive functions in lower level sensorimotor functioning. Action simulation, the recruitment of an observer's motor system and its neural substrates when observing actions, has been proposed to be particularly potent for actions that are self-produced. This review examines evidence for such claims from the music domain. It covers studies in which trained or untrained individuals generated and/or perceived (musical) sounds, and were subsequently asked to identify who was the author of the sounds (e.g., the self or another individual) in immediate (online) or delayed (offline) research designs. The review is structured according to the complexity of auditory–motor information available and includes sections on: 1) simple auditory information (e.g., clapping, piano, drum sounds), 2) complex instrumental sound sequences (e.g., piano/organ performances), and 3) musical information embedded within audiovisual performance contexts, when action sequences are both viewed as movements and/or listened to in synchrony with sounds (e.g., conductors' gestures, dance). This work has proven to be informative in unraveling the links between perceptual–motor processes, supporting embodied accounts of human cognition that address action observation. The reported findings are examined in relation to cues that contribute to agency judgments, and their implications for research concerning action understanding and applied musical practice.

14.
Visuomotor priming occurs when our actions are influenced by observing a compatible or incompatible action. Here we ask whether visuomotor priming is specific to human, biological actions or generalises to non-biological movements, such as abstract shapes or robots. Reviewing the evidence indicates that priming occurs for both types of stimuli and emphasises the contributions of both bottom-up (e.g. stimulus saliency, appearance, kinematics) and top-down (e.g. attention and prior knowledge) factors. We propose a model suggesting that although bottom-up features play a critical role, the degree of difference in priming for biological versus non-biological stimuli can be ultimately shaped by top-down factors.

15.
Age differences in adults' memory for performed actions (e.g., wave hand) are sometimes smaller than age differences in memory for nonperformed phrases. In this study, we examined the conditions under which performance reduces age differences in recall. Younger and older adults performed or read verb-noun phrases that were either related (e.g., actions performed in a kitchen) or unrelated. Performance did not reduce age differences in recall of the exact verbs and nouns used to describe an action, but performance did reduce age differences in memory for the gist of related actions. Older adults especially had difficulty recalling the exact verb used to describe the action. These results suggest that older adults may have better memory for actions than is revealed by tests of verbatim recall. They may remember performing the action but not remember the exact words used to describe the action.

16.
Previous research indicates that infants’ prediction of the goals of observed actions is influenced by own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds’ action prediction. Infants’ (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants’ predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants’ action predictions are based on interactive effects of action-relevant object features (e.g., size) and own action experience.

17.
Precision and power grip priming by observed grasping
The coupling of hand grasping stimuli and the subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right- or left-hand to the accuracy of an observed (dynamic) grasp while they were holding precision or power grasp response devices in their hands (e.g., precision device/right-hand; power device/left-hand). The observed hand was making either accurate or inaccurate precision or power grasps, and participants signalled the accuracy of the observed grip by making one or the other response depending on instructions. Responses were made faster when they matched the observed grip type. The two grasp types differed in their sensitivity to the end-state (i.e., accuracy) of the observed grip. The end-state influenced the power grasp congruency effect more than the precision grasp effect when the observed hand was performing the grasp without any goal object (Experiments 1 and 2). However, the end-state also influenced the precision grip congruency effect (Experiment 3) when the action was object-directed. The data are interpreted as behavioural evidence of the automatic imitation coding of the observed actions. The study suggests that, in goal-oriented imitation coding, the context of an action (e.g., being object-directed) is a more important factor in coding precision grips than power grips.

18.
19.
This article reviews situations in which stimuli produce an increase or a decrease in nociceptive responses through basic associative processes and provides an associative account of such changes. Specifically, the literature suggests that cues associated with stress can produce conditioned analgesia or conditioned hyperalgesia, depending on the properties of the conditioned stimulus (e.g., contextual cues and audiovisual cues vs. gustatory and olfactory cues, respectively) and the properties of the unconditioned stimulus (e.g., appetitive, aversive, or analgesic, respectively). When such cues are associated with reducers of exogenous pain (e.g., opiates), they typically increase sensitivity to pain. Overall, the evidence concerning conditioned stress-induced analgesia, conditioned hyperalgesia, conditioned tolerance to morphine, and conditioned reduction of morphine analgesia suggests that selective associations between stimuli underlie changes in pain sensitivity.

20.
Task co-representation has been proposed to rely on the motor brain areas’ capacity to represent others’ action plans similarly to one's own. The joint memory (JM) effect suggests that working in parallel with others influences the depth of incidental encoding: Other-relevant items are better encoded than non-task-relevant items. Using this paradigm, we investigated whether task co-representation could also emerge for non-motor tasks. In Experiment 1, we found enhanced recall performance for stimuli relevant to the co-actor even when the participants’ task required non-motor responses (counting the target words) instead of key-presses. This suggests that the JM effect did not depend on simulating the co-actor's motor responses. In Experiment 2, direct visual access to the co-actor and his actions was found to be unnecessary to evoke the JM effect in the non-motor task, but not in the motor task. Prior knowledge of the co-actor's target category is sufficient to evoke deeper incidental encoding. Overall, these findings indicate that the capacity of task co-representation extends beyond the realm of motor tasks: Simulating the other's motor actions is not necessary in this process.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | 京ICP备09084417号