Similar Documents
20 similar records found (search time: 15 ms)
1.
Previous research investigated the contributions of target objects, situational context and movement kinematics to action prediction separately. The current study addresses how these three factors combine in the prediction of observed actions. Participants observed an actor whose movements were constrained by the situational context or not, and object-directed or not. After several steps, participants had to indicate how the action would continue. Experiment 1 shows that predictions were most accurate when the action was constrained and object-directed. Experiments 2A and 2B investigated whether these predictions relied more on the presence of a target object or cues in the actor's movement kinematics. The target object was artificially moved to another location or occluded. Results suggest a crucial role for kinematics. In sum, observers predict actions based on target objects and situational constraints, and they exploit subtle movement cues of the observed actor rather than the direct visual information about target objects and context.

2.
Natural face and head movements were mapped onto a computer rendered three-dimensional average of 100 laser-scanned heads in order to isolate movement information from spatial cues and nonrigid movements from rigid head movements (Hill & Johnston, 2001). Experiment 1 investigated whether subjects could recognize, from a rotated view, facial motion that had previously been presented at a full-face view using a delayed match-to-sample experimental paradigm. Experiment 2 compared recognition for views that were either between or outside initially presented views. Experiment 3 compared discrimination at full face, three-quarters, and profile after learning at each of these views. A significant face inversion effect in Experiments 1 and 2 indicated subjects were using face-based information rather than more general motion or temporal cues for optimal performance. In each experiment recognition performance only ever declined with a change in viewpoint between sample and test views when rigid motion was present. Nonrigid, face-based motion appears to be encoded in a viewpoint-invariant, object-centred manner, whereas rigid head movement is encoded in a more view-specific manner.

3.
Speakers often use gesture to demonstrate how to perform actions—for example, they might show how to open the top of a jar by making a twisting motion above the jar. Yet it is unclear whether listeners learn as much from seeing such gestures as they learn from seeing actions that physically change the position of objects (i.e., actually opening the jar). Here, we examined participants' implicit and explicit understanding about a series of movements that demonstrated how to move a set of objects. The movements were either shown with actions that physically relocated each object or with gestures that represented the relocation without touching the objects. Further, the end location that was indicated for each object covaried with whether the object was grasped with one or two hands. We found that memory for the end location of each object was better after seeing the physical relocation of the objects, that is, after seeing action, than after seeing gesture, regardless of whether speech was absent (Experiment 1) or present (Experiment 2). However, gesture and action built similar implicit understanding of how a particular handgrasp corresponded with a particular end location. Although gestures miss the benefit of showing the end state of objects that have been acted upon, the data show that gestures are as good as action in building knowledge of how to perform an action.

4.
In the present paper, we investigated whether observation of bodily cues—that is, hand action and eye gaze—can modulate the onlooker's visual perspective taking. Participants were presented with scenes of an actor gazing at an object (or straight ahead) and grasping an object (or not) in a 2 × 2 factorial design and a control condition with no actor in the scene. In Experiment 1, two groups of subjects were explicitly required to judge the left/right location of the target from their own (egocentric group) or the actor's (allocentric group) point of view, whereas in Experiment 2 participants did not receive any instruction on the point of view to assume. In both experiments, allocentric coding (i.e., the actor's point of view) was triggered when the actor grasped the target, but not when he gazed towards it, or when he adopted a neutral posture. In Experiment 3, we demonstrate that the actor's gaze but not action affected participants' attention orienting. The different effects of others' grasping and eye gaze on observers' behaviour demonstrated that specific bodily cues convey distinctive information about other people's intentions.

5.
Across five experiments, we investigated the parameters involved in the observation and in the execution of the action of lifting an object. The observers were shown minimal information on movements, consisting of either the working-point displacement only (i.e., two points representing the hand and object) or additional configural information on the kinematics of the trunk, shoulder, arm, forearm, and hand, joined by a stick diagram. Furthermore, displays showed either a participant's own movements or those of another person, when different weights were lifted. The participants' task was to estimate the weight of the lifted objects. The results revealed that, although overall performance was not dependent on the visual conditions (working point versus stick diagram) or ownership conditions (self versus other), the kinematic cues used to perform the task differed as a function of these conditions. In addition, the kinematic parameters relevant for action observation did not match those relevant for action execution. This was confirmed in experiments by using artificially altered movement samples, where the variations in critical kinematic variables were manipulated separately or in combination. We discuss the implications of these results for the roles of motor simulation and visual analysis in action observation.

6.

Purpose

This study aims to investigate the influence of both individual consumer differences and the purchase decision context on the effectiveness of consensus information in advertising.

Design/Methodology/Approach

Three experiments explore the effectiveness of consensus information. In Experiment 1, gender serves as a moderator. Experiment 2 examines the susceptibility to interpersonal influence (SII) and the purchase decision context as two potential moderators. Finally, Experiment 3 explores the need for cognitive closure (NFC) together with the purchase decision context as two possible moderators.

Findings

In Experiment 1, female participants, but not male participants, generate higher purchase intentions for ads with consensus cues as opposed to those without them. Experiment 2 demonstrates that the effectiveness of consensus cues increases for a group (vs. personal) purchase decision, but only for people with high susceptibility to interpersonal influence. In Experiment 3, the effectiveness of consensus cues is relatively greater for a group (vs. personal) purchase decision, but only for consumers with a high need for cognitive closure.

Implications

Understanding what moderates the effectiveness of consensus information in advertising has the potential to help practitioners apply consensus information more effectively to improve their advertising returns.

Originality/Value

This study provides initial evidence about the impact of consensus information in advertising on purchase intentions, which is contingent on the situational context and individual differences.

7.
Studies of deception detection traditionally have focused on verbal communication. Nevertheless, people commonly deceive others through nonverbal cues. Previous research has shown that intentions can be inferred from the ways in which people move their bodies. Furthermore, motor expertise within a given domain has been shown to increase visual sensitivity to other people’s movements within that domain. Does expertise also enhance deception detection from bodily movement? In two psychophysical studies, experienced basketball players and novices attempted to distinguish deceptive intentions (fake passes) and veridical intentions (true passes) from an observed individual’s actions. Whereas experts and novices performed similarly with postural cues, only experts could detect deception from kinematics alone. These results demonstrate a link between action expertise and the detection of nonverbal deception.

8.
A left-handers’ performance advantage in interactive sports is assumed to result from their relative rarity compared to right-handers. Part of this advantage may be explained by athletes facing difficulties anticipating left-handers’ action intentions, particularly when anticipation is based on kinematic cues available at an early stage of an opponent’s movement. Here we tested whether the type of volleyball attack is predicted better against right- vs. left-handed opponents’ movements and whether such handedness effects are evident at earlier time points in skilled players than novices. In a video-based experiment volleyball players and novices predicted the type of shot (i.e., smash vs. lob) of left- and right-handed volleyball attacks occluded at six different time points. Overall, right-handed attacks were better anticipated than left-handed attacks, volleyball players outperformed novices, and performance improved in later occlusion conditions. Moreover, in skilled players the handedness effect was most pronounced when attacks were occluded 480 ms prior to hand-ball-contact, whereas in novices it was most evident 240 ms prior to hand-ball-contact. Our findings provide further evidence of the effect of an opponent’s handedness on action outcome anticipation and suggest that its occurrence in the course of an opponent’s unfolding action likely depends on an observer’s domain-specific skill.

9.
Internal knowledge and visual cues about an object's weight play an important role in grasping and lifting objects. It has been shown that both visual cues and internal knowledge can influence movement kinematics and force production depending on the action goal (use vs. transport). However, there is little evidence about the influence of weight on action planning as reflected by initiation time. In the present study we investigated this issue. In Experiment 1, participants had to grasp light and heavy objects (without moving them) to either use or transport them. In Experiment 2 we asked another group of participants to actually use or transport the same objects. We observed that initiation times were faster for heavy objects than for light objects in both the transport and use tasks, but only in Experiment 2. Thus, weight influenced the planning of use and transport actions only when the end-goal of the action was actually achieved. These data are incompatible with the hypothesis that only use actions are supported by stored object representations. Rather, they suggest that in some circumstances, depending on the end-goal of the action and its physical constraints, the planning of both use and transport actions is based on stored object representations.

10.
Observing the movements of another person influences the observer's intention to execute similar movements. However, little is known about how action intentions formed prior to movement planning influence this effect. In the experiment reported here, we manipulated the congruency of movement intentions and action intentions in a pair of jointly acting individuals (i.e., a participant paired with a confederate coactor) and investigated how congruency influenced performance. Overall, participants initiated actions faster when they had the same action intention as the coactor (i.e., when participants and the coactor were pursuing the same conceptual goal). Participants' responses were also faster when their and the coactor's movement intentions were directed to the same spatial location, but only when participants had the same action intention as the coactor. These findings suggest that observers use the same representation to implement their own action intentions that they use to infer other people's action intentions and also that a dynamic, multitiered intentional mechanism is involved in the processing of other people's actions.

11.
In a behavioral study we analyzed the influence of visual action primes on abstract action sentence processing. We thereby aimed at investigating mental motor involvement during processes of meaning constitution of action verbs in abstract contexts. In the first experiment, participants executed either congruous or incongruous movements parallel to a video prime. In the second experiment, we added a no-movement condition. After the execution of the movement, participants rendered a sensibility judgment on action sentence targets. It was expected that congruous movements would facilitate both concrete and abstract action sentence comprehension in comparison to the incongruous and the no-movement condition. Results in Experiment 1 showed a concreteness effect but no effect of motor priming. Experiment 2 revealed a concreteness effect as well as an interaction effect of the sentence and the movement condition. The findings indicate an involvement of motor processes in abstract action language processing on a behavioral level.

12.
The present study examined the contributions of efficiency reasoning and statistical learning to visual action anticipation in preschool children, adolescents, and adults. To this end, Experiment 1 assessed proactive eye movements of 5-year-old children, 15-year-old adolescents, and adults, who observed an agent stating the intent to reach a goal as quickly as possible. Subsequently, on four occasions, the agent could take either a short, hence efficient, or a long, hence inefficient, path to get to the goal. The results showed that in the first trial participants in none of the age groups predicted above chance level that the agent would produce the efficient action. Instead, we observed an age-dependent increase in action predictions across the subsequent repeated presentations of the same action. Experiment 2 ruled out that participants’ nonconsideration of the efficient path was due to a lack of understanding of the agent's action goal. Moreover, it demonstrated that 5-year-old children do predict that the agent will act efficiently when verbally reasoning about his future action. Overall, the study supports the view that rapid learning from frequency information guides visual action anticipation.

13.
We demonstrate in two experiments that real and imagined body movements appropriate to metaphorical phrases facilitate people's immediate comprehension of these phrases. Participants first learned to make different body movements given specific cues. In two reading time studies, people were faster to understand a metaphorical phrase, such as push the argument, when they had previously just made an appropriate body action (e.g., a push movement) (Experiment 1), or imagined making a specific body movement (Experiment 2), than when they first made a mismatching body action (e.g., a chewing movement) or no movement. These findings support the idea that appropriate body action, or even imagined action, enhances people's embodied, metaphorical construal of abstract concepts that are referred to in metaphorical phrases.

14.
While bimanual interference effects can be observed when symbolic cues indicate the parameter values of simultaneous reaching movements, these effects disappear under conditions in which the target locations of the two movements are cued directly. The present study investigates the generalizability of these target-location cuing benefits to conditions in which symbolic cues are used to indicate target locations (i.e., the end points of bimanual movements). Participants were asked to move to two of four possible target locations, located either at the same or at different distances (Experiment 1), or in the same or in different directions (Experiment 2). Circles and crosses served as symbolic target-location cues and were arranged in a symmetric or non-symmetric fashion over the four target locations. Each trial was preceded by a variable precuing interval. Results revealed faster initiation times for equivalent as compared to non-equivalent target locations (same vs. different cues). Moreover, the time course of preparation suggests that this effect is in fact due to target equivalence and not to cue similarity. Bimanual interference relative to movement parameter values was not observed. These findings suggest that cuing target locations can dominate potential intermanual interference effects during the concurrent programming of different movement parameter values.

15.
Human movement initiation: specification of arm, direction, and extent
This article presents a method for discovering how the defining values of forthcoming body movements are specified. In experiments using this movement precuing technique, information is given about some, none, or all of the defining values of a movement that will be required when a reaction signal is presented. It is assumed that the reaction time (RT) reflects the time to specify those values that were not precued. With RTs for the same movements in different precue conditions, it is possible to make detailed inferences about the value specification process for each of the movements under study. The present experiments were concerned with the specification of the arm, direction, and extent (or distance) of aimed hand movements. In Experiment 1 it appeared that (a) specification times during RTs were longest for arm, shorter for direction, and shortest for extent, and (b) these values were specified serially but not in an invariant order. Experiment 2 suggested that the precuing effects obtained in Experiment 1 were not attributable to stimulus identification. Experiment 3 suggested that subjects in Experiment 1 did not use precues to prepare sets of possible movements from which the required movement was later selected. The model of value specification supported by the data is consistent with a distinctive-feature view, rather than a hierarchical view, of motor programming.
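The precuing logic described in this abstract can be illustrated with a toy additive model: RT is treated as a base time plus the serial specification times of whichever parameters were not given in the precue. This is only a sketch of the inference scheme; the millisecond values and the `predicted_rt` helper below are invented for illustration and are not the paper's estimates.

```python
# Hypothetical illustration of the movement-precuing logic: RT is modeled as a
# base time plus the serial specification times of the parameters that were
# NOT precued. All times (in ms) are invented, not empirical values.
SPEC_TIME_MS = {"arm": 150, "direction": 100, "extent": 60}  # assumed values
BASE_MS = 250  # assumed residual time (stimulus identification, initiation)

def predicted_rt(precued):
    """Predicted RT when the parameters in `precued` were specified in
    advance and the remaining ones must be specified during the RT."""
    remaining = set(SPEC_TIME_MS) - set(precued)
    return BASE_MS + sum(SPEC_TIME_MS[p] for p in remaining)

print(predicted_rt({"arm", "direction", "extent"}))  # all precued -> 250
print(predicted_rt(set()))                           # none precued -> 560
```

Comparing predicted RTs across precue conditions mirrors how the method separates per-parameter specification times from the shared base time.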

16.
Human observers demonstrate impressive visual sensitivity to human movement. What defines this sensitivity? If motor experience influences the visual analysis of action, then observers should be most sensitive to their own movements. If view-dependent visual experience determines visual sensitivity to human movement, then observers should be most sensitive to the movements of their friends. To test these predictions, participants viewed sagittal displays of point-light depictions of themselves, their friends, and strangers performing various actions. In actor identification and discrimination tasks, sensitivity to one's own motion was highest. Visual sensitivity to friends', but not strangers', actions was above chance. Performance was action dependent. Control studies yielded chance performance with inverted and static displays, suggesting that form and low-motion cues did not define performance. These results suggest that both motor and visual experience define visual sensitivity to human action.

17.
Converging evidence has shown that action observation and execution are tightly linked. The observation of an action directly activates an equivalent internal motor representation in the observer (direct matching). However, whether direct matching is primarily driven by basic perceptual features of the observed movement or is influenced by more abstract interpretative processes is an open question. A series of behavioral experiments tested whether direct matching, as measured by motor priming, can be modulated by inferred action goals and attributed intentions. Experiment 1 tested whether observing an unsuccessful attempt to execute an action is sufficient to produce a motor-priming effect. Experiment 2 tested alternative perceptual explanations for the observed findings. Experiment 3 investigated whether the attribution of intention modulates motor priming by comparing motor-priming effects during observation of intended and unintended movements. Experiment 4 tested whether participants' interpretation of the movement as triggered by an external source or the actor's intention modulates the motor-priming effect by a pure instructional manipulation. Our findings support a model in which direct matching can be top-down modulated by the observer's interpretation of the observed movement as intended or not.

18.
5 blindfolded Ss were required to make absolute judgments of the extent to which their extended right arm was voluntarily moved in the horizontal plane. The first experiment entailed the judgment of 20 different amplitudes, and from these data a scale of equal discriminability was constructed for each S. From these individual scales amplitudes were selected for 5 additional absolute judgment experiments in which the number of amplitudes was varied from 4 to 16. Analysis of the mean equal discriminability scale showed that kinesthetic sensitivity varied over the continuum of movements. The primary analysis of information transfer between amplitudes of movement and responses indicated that information transfer varied considerably over the 5 experiments, with a maximum transfer of 2.48 bits occurring when 16 amplitudes were used. These results were discussed in terms of the possible cues involved in movement discrimination and whether kinesthetic cues could be used in a closed-loop model of voluntary movement control.
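The information-transfer measure reported in this abstract is the mutual information between presented amplitudes and responses, conventionally estimated from a stimulus-response count matrix. A minimal sketch of that computation (the function name and the toy matrix are illustrative assumptions, not the paper's data):

```python
import numpy as np

def transmitted_information(confusion):
    """Estimate transmitted information (in bits) between stimuli and
    responses from a stimulus x response matrix of judgment counts,
    as in classic absolute-judgment analyses."""
    p = confusion / confusion.sum()          # joint probabilities
    ps = p.sum(axis=1, keepdims=True)        # stimulus marginals (column)
    pr = p.sum(axis=0, keepdims=True)        # response marginals (row)
    nz = p > 0                               # skip zero cells: 0*log(0) = 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Perfect identification of 4 equiprobable stimuli transmits log2(4) bits.
perfect = np.eye(4) * 10
print(transmitted_information(perfect))  # -> 2.0
```

With 16 amplitudes, perfect performance would transmit log2(16) = 4 bits, so the reported maximum of 2.48 bits reflects substantial response confusion between neighboring amplitudes.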

19.
This study examined the role of facial parts (the eyes and the nose) in judging the attention direction of individuals and groups. Experiment 1 used images containing different numbers of faces and asked participants to report the attention direction of the group or of an individual. Results showed that estimates of group attention direction were more accurate in the multiple-face condition than in the single-face condition. Experiment 2 used eye tracking to examine the spatial and temporal distribution of fixations on the eyes and nose during attention-direction judgments. Results showed that when judging from a single face, total fixation time on the nose was longer than on the eyes; when judging from multiple faces, total fixation times on the eyes and nose did not differ. Overall, the study indicates that perceiving an individual's attention relies mainly on the nose, whereas perceiving a group's attention relies on both the eyes and the nose.

20.
The effects of viewing the face of the talker (visual speech) on the processing of clearly presented intact auditory stimuli were investigated using two measures likely to be sensitive to the articulatory motor actions produced in speaking. The aim of these experiments was to highlight the need for accounts of the effects of audio-visual (AV) speech that explicitly consider the properties of articulated action. The first experiment employed a syllable-monitoring task in which participants were required to monitor for target syllables within foreign carrier phrases. An AV effect was found in that seeing a talker's moving face (moving face condition) assisted in more accurate recognition (hits and correct rejections) of spoken syllables than of auditory-only still face (still face condition) presentations. The second experiment examined processing of spoken phrases by investigating whether an AV effect would be found for estimates of phrase duration. Two effects of seeing the moving face of the talker were found. First, the moving face condition had significantly longer duration estimates than the still face auditory-only condition. Second, estimates of auditory duration made in the moving face condition reliably correlated with the actual durations whereas those made in the still face auditory condition did not. The third experiment was carried out to determine whether the stronger correlation between estimated and actual duration in the moving face condition might have been due to generic properties of AV presentation. Experiment 3 employed the procedures of the second experiment but used stimuli that were not perceived as speech although they possessed the same timing cues as those of the speech stimuli of Experiment 2. It was found that simply presenting both auditory and visual timing information did not result in more reliable duration estimates. 
Further, when released from the speech context (used in Experiment 2), duration estimates for the auditory-only stimuli were significantly correlated with actual durations. In all, these results demonstrate that visual speech can assist in the analysis of clearly presented auditory stimuli in tasks concerned with information provided by viewing the production of an utterance. We suggest that these findings are consistent with there being a processing link between perception and action such that viewing a talker speaking will activate speech motor schemas in the perceiver.
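The duration-estimate analysis in this abstract rests on correlating estimated with actual durations separately per condition, with a reliable correlation only in the moving-face condition. A minimal Pearson-correlation sketch of that comparison; all durations below are invented for illustration, not data from the experiments:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences,
    e.g. estimated vs. actual phrase durations."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

actual = [1.2, 1.8, 2.5, 3.1, 3.9]           # invented durations (s)
moving_face_est = [1.4, 1.9, 2.7, 3.0, 4.2]  # tracks the actual values
still_face_est = [2.6, 1.5, 3.8, 2.0, 2.9]   # does not track them
print(pearson_r(actual, moving_face_est))  # near +1
print(pearson_r(actual, still_face_est))   # much weaker
```

A strong correlation in one condition and a weak one in the other is the pattern the abstract reports for the moving-face vs. still-face conditions of Experiment 2.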
