Similar Articles
 20 similar articles found (search time: 46 ms)
1.
2.
University of Illinois at Urbana-Champaign, Urbana, Illinois. Much of our learning comes from interacting with objects. Two experiments investigated whether arbitrary actions used during category learning with objects might be incorporated into object representations and influence later recognition judgments. In a virtual-reality chamber, participants used distinct arm movements to make different classification responses. During a recognition test phase, these same objects required arm movements that were consistent or inconsistent with the classification movement. In both experiments, consistent movements were facilitated relative to inconsistent movements, suggesting that arbitrary action information is incorporated into the representations.

3.
Eye movements of 12 Ss were examined during learning and recognition of two-dimensional random shapes to determine the nature of the memorial representation of a stimulus and the utilization of this memorial representation in pattern recognition. Specifically, the purpose of this study was to test the scanpath model of pattern perception by determining whether scanpaths exist and, if so, how they influence recognition performance. Scanpaths, defined as overlapping fixation patterns in learning and recognition tasks, were observed in over half of all eye-movement records regardless of shape complexity. Presence of scanpaths did not increase recognition performance as measured by errors in recognition and Ss’ ability to reproduce the shapes. Although scanpaths did not influence recognition performance, their occurrence implicates them as a potential factor in the recognition process.

4.
5.
Between-arm performance asymmetry can be seen in different arm movements requiring specific interjoint coordination to generate the desired hand trajectory. In the current investigation, we assessed between-arm asymmetry of shoulder-elbow coordination and its stability in the performance of circular movements. Participants were 16 healthy right-handed university students. The task consisted of performing cyclic circular movements with either the dominant right arm or the nondominant left arm at movement frequencies ranging from 40% of maximum to maximum frequency in steps of 15%. Kinematic analysis of shoulder and elbow motions was performed with an optoelectronic system in three-dimensional space. Results showed that as movement frequency increased, the circularity of left arm movements diminished, taking on an elliptical shape and becoming significantly different from the right arm at higher movement frequencies. Shoulder-elbow coordination was found to be asymmetric between the two arms across movement frequencies, with lower shoulder-elbow angle coefficients and higher relative phase for the left compared to the right arm. Results also revealed greater variability of left arm movements in all variables assessed, an outcome observed from low to high movement frequencies. From these findings, we propose that the specialization of the left cerebral hemisphere for motor control resides in its higher capacity to generate appropriate and stable interjoint coordination leading to the planned hand trajectory.
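The relative-phase measure mentioned in this abstract can be sketched computationally. The paper does not specify its exact relative-phase algorithm, so the following is an illustrative reconstruction using one common approach from coordination research: estimating each joint's phase angle from its normalized phase plane (angle vs. angular velocity) and averaging the wrapped phase difference. All names and the synthetic signals are assumptions, not the study's data.

```python
import math

# Illustrative sketch: continuous relative phase between two joint-angle
# time series (e.g. shoulder and elbow), computed from the phase plane.

def phase(angle, dt):
    """Phase angle (rad) at each interior sample of a roughly sinusoidal
    signal, from atan2 of normalized finite-difference velocity vs. the
    normalized angle."""
    amp = max(abs(a) for a in angle)  # crude amplitude normalization
    vel = [(angle[i + 1] - angle[i - 1]) / (2 * dt)
           for i in range(1, len(angle) - 1)]
    vmax = max(abs(v) for v in vel)
    return [math.atan2(v / vmax, a / amp)
            for v, a in zip(vel, angle[1:-1])]

def mean_relative_phase(shoulder, elbow, dt):
    """Mean of the phase difference (elbow - shoulder), each difference
    wrapped into (-pi, pi], in radians."""
    diffs = [(pe - ps + math.pi) % (2 * math.pi) - math.pi
             for ps, pe in zip(phase(shoulder, dt), phase(elbow, dt))]
    return sum(diffs) / len(diffs)

# Synthetic one-second movement cycle: elbow lags the shoulder by pi/6.
dt, f, lag = 0.001, 1.0, math.pi / 6
t = [i * dt for i in range(1000)]
shoulder = [math.sin(2 * math.pi * f * ti) for ti in t]
elbow = [math.sin(2 * math.pi * f * ti - lag) for ti in t]
print(mean_relative_phase(shoulder, elbow, dt))  # close to pi/6 ≈ 0.524
```

On this measure, a perfectly in-phase coordination pattern gives a mean relative phase near zero, and larger (or more variable) values indicate the kind of coordination asymmetry the study reports for the left arm.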

6.
Most theories of semantic memory characterize knowledge of a given object as comprising a set of semantic features. But how does conceptual activation of these features proceed during object identification? We present the results of a pair of experiments that demonstrate that object recognition is a dynamically unfolding process in which function follows form. We used eye movements to explore whether activating one object's concept leads to the activation of others that share perceptual (shape) or abstract (function) features. Participants viewed 4-picture displays and clicked on the picture corresponding to a heard word. In critical trials, the conceptual representation of 1 of the objects in the display was similar in shape or function (i.e., its purpose) to the heard word. Importantly, this similarity was not apparent in the visual depictions (e.g., for the target Frisbee, the shape-related object was a triangular slice of pizza, a shape that a Frisbee cannot take); preferential fixations on the related object were therefore attributable to overlap of the conceptual representations on the relevant features. We observed relatedness effects for both shape and function, but shape effects occurred earlier than function effects. We discuss the implications of these findings for current accounts of the representation of semantic memory.

7.
In this paper we present a methodology for recognizing three fundamental movements of the human forearm (extension, flexion and rotation) using pattern recognition applied to the data from a single wrist-worn inertial sensor. We propose that this technique could be used as a clinical tool to assess rehabilitation progress in neurological conditions such as stroke or cerebral palsy by tracking the number of times a patient performs specific arm movements (e.g. prescribed exercises) with their paretic arm throughout the day. We demonstrate this with healthy subjects and stroke patients in a simple proof-of-concept study in which these arm movements are detected during an archetypal activity of daily living (ADL) – ‘making a cup of tea’. Data are collected from a tri-axial accelerometer and a tri-axial gyroscope located proximal to the wrist. In a training phase, movements are first performed in a controlled environment and represented by a ranked set of 30 time-domain features. Using a sequential forward selection technique, for each feature combination three clusters are formed using k-means clustering, followed by 10 runs of 10-fold cross-validation on the training data to determine the best feature combinations. In the testing phase, movements performed during the ADL are assigned to cluster labels using a minimum-distance classifier in a multi-dimensional feature space comprised of the best-ranked features, with Euclidean or Mahalanobis distance as the metric. Experiments were performed with four healthy subjects and four stroke survivors, and our results show that the proposed methodology can detect the three movements performed during the ADL with an overall average accuracy of 88% using the accelerometer data and 83% using the gyroscope data across all healthy subjects and arm movement types. The average accuracy across all stroke survivors was 70% using accelerometer data and 66% using gyroscope data.
We also apply a Linear Discriminant Analysis (LDA) classifier and a Support Vector Machine (SVM) classifier to the same set of features to detect the three arm movements, and compare the results to demonstrate the effectiveness of the proposed methodology.
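The classification step of the pipeline described above can be sketched as follows. This is a minimal, illustrative reduction: each movement class (extension, flexion, rotation) is summarized by a centroid in feature space, and a test feature vector is assigned to the nearest centroid by Euclidean distance. Feature extraction, feature ranking, k-means clustering, Mahalanobis distance, and cross-validation from the full methodology are omitted, and all numbers are toy values, not the paper's data.

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Return the label of the centroid nearest to x (Euclidean metric)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Toy training features per movement class
# (e.g. two time-domain features of the wrist signal).
train = {
    "extension": [[0.9, 0.1], [1.1, 0.2]],
    "flexion":   [[0.1, 0.9], [0.2, 1.1]],
    "rotation":  [[0.5, 0.5], [0.6, 0.4]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

print(classify([1.0, 0.15], centroids))  # prints "extension"
```

In the paper's setting the centroids would come from k-means clusters formed on the best-ranked feature combination, but the nearest-centroid assignment at test time has the same shape as this sketch.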

8.
Humans are very good at perceiving each other's movements. In this article, we investigate the role of time-based information in the recognition of individuals from point light biological motion sequences. We report an experiment in which we used an exaggeration technique that changes temporal properties while keeping spatial information constant; differences in the durations of motion segments are exaggerated relative to average values. Participants first learned to recognize six individuals on the basis of a simple, unexaggerated arm movement. Subsequently, they recognized positively exaggerated versions of those movements better than the originals. Absolute duration did not appear to be the critical cue. The results show that time-based cues are used for the recognition of movements and that exaggerating temporal differences improves performance. The results suggest that exaggeration may reflect general principles of how diagnostic information is encoded for recognition in different domains.
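The temporal exaggeration described above — pushing each motion segment's duration away from the average — can be written as a one-line transform. The function name and gain value are illustrative assumptions; the abstract does not give the exact formula or parameters used.

```python
# Hedged sketch of temporal exaggeration: each segment duration is moved
# away from its across-individual average by a gain factor.
def exaggerate_durations(durations, mean_durations, gain=1.5):
    """Scale each motion segment's duration relative to its average.

    gain > 1 gives positive exaggeration; gain = 1 reproduces the original;
    0 < gain < 1 would produce an "anti-caricature".
    """
    return [m + gain * (d - m) for d, m in zip(durations, mean_durations)]

# Example: one segment shorter and one longer than the average.
original = [0.8, 1.3]   # seconds
average = [1.0, 1.0]
print(exaggerate_durations(original, average, gain=1.5))  # ≈ [0.7, 1.45]
```

Note that spatial information is untouched: only the timing of segments changes, which is what lets the study attribute the recognition benefit to time-based cues.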

9.
Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for basic but not superordinate-level animal recognition. Experiment 3 found that inverting animals eliminates the right hemisphere advantage for basic-level animal recognition. This pattern of results suggests that the cognitive and neural mechanisms underlying face recognition are recruited when the computational demands of a shape representation task are best served through the use of coordinate (rather than categorical) spatial relations.

10.
The ability to express emotions is a protective factor for infant development. Despite the multimodal nature of emotion expression, research has mainly focused on facial expressions of emotions. The present study examined motor activity and spatial proximity in relation to positive and negative infant facial expressions and maternal postpartum depression during face-to-face interactions at four months. Video cameras and a motion capture system recorded mother-infant interactions. Repeated measures ANOVAs were conducted to analyze the effect of micro-coded infant positive and negative facial affect and maternal depression diagnosis on automatically extracted measures of motor activity and spatial proximity: the speed of mothers’ arm movements (nondepressed = 32; PPD = 16), the speed of infants’ arm movements (nondepressed = 29; PPD = 17), and head distance (nondepressed = 45; PPD = 27). Results showed that the speed of infants’ arm movements and head distance were greater during negative compared to positive infant affect. Further, the results demonstrated that the speed of PPD mothers’ arm movements was slower than the speed of nondepressed mothers’ arm movements. In the discussion, it is suggested that increased speed of infant arm movements during negative affect functions to elicit faster caregiving responses, and that increased head distance during negative infant affect functions to decrease the intensity of the interaction. Finally, the slower speed of arm movements in PPD mothers suggests psychomotor retardation, which is proposed to limit these mothers’ abilities to engage their infants during the interaction.

11.
It has been found that the estimate of relative target direction is consistently biased. Relative target direction refers to the direction in which a target is located relative to another location in space (e.g., a starting position in the case of goal-directed movements). In this study, we have tested two models that could underlie this biased estimate. The first proposed model is based on a distorted internal representation of locations (i.e., we perceive a target at the “wrong” location). We call this the distorted location model. The second model is based on the idea that the derivation of target direction from spatial information about starting and target position is biased. We call this the biased direction model. These two models lead to different predictions of the deviations that occur when the distance between the starting position and the target position is increased. Since we know from previous studies that the initial direction of slow arm movements reflects the target direction estimate, we tested the two models by analyzing the initial direction of slow arm movements. The results show that the biased direction model can account for the biases we find in the target direction estimate for various target distances, whereas the distorted location model cannot. In two additional experiments, we explored this model further. The results show that the biases depend only on the orientation of the line through starting position and target position relative to the plane through the longitudinal head or body axis and the starting position. We conclude that the initial part of (slow) goal-directed arm movements is planned on the basis of a (biased) target direction estimate and not on the basis of a wrong internal representation of target location. This supports the hypothesis that we code displacements of our limbs in space as a vector.

12.
Two experiments dissociated the roles of the intrinsic orientation of a shape and participants’ study viewpoint in shape recognition. In Experiment 1, participants learned shapes with a rectangular background that was oriented differently from their viewpoint, and then recognized target shapes, which were created by splitting study shapes along different intrinsic axes, at different views. Results showed that recognition was quicker when the study shapes were split along the axis parallel to the orientation of the rectangular background than when they were split along the axis parallel to participants’ viewpoint. In Experiment 2, participants learned shapes without the rectangular background. The results showed that recognition was quicker when the study shape was split along the axis parallel to participants’ viewpoint. In both experiments, recognition was quicker at the study view than at a novel view. An intrinsic model of object representation and recognition was proposed to explain these findings.

13.
14.
After-effects following sensorimotor adaptation are generally considered evidence for the formation of an internal model, although evidence is lacking on whether the absence of after-effects necessarily indicates that the adaptation did not result in the formation of an internal model. Here, we examined direct- and after-effects of dynamic adaptation with one arm at one workspace on subsequent performance with the other arm, as well as with the same arm at another workspace. During training, subjects performed reaching movements under a novel dynamic condition with the right arm; during testing, they performed reaching movements with the left or right arm at a new workspace, under either the same dynamic condition (direct-effects) or a normal condition (after-effects). Results showed significant transfer within the same arm in terms of both direct- and after-effects, but significant transfer across the arms only in terms of direct-effects. These findings suggest that the formation of an internal model does not always result in after-effects. They also support the idea that the neural representation developed after sensorimotor adaptation comprises some aspects that are effector independent and others that are effector dependent, and that direct- and after-effects following sensorimotor adaptation mainly reflect the effector-independent and the effector-dependent aspects, respectively.

15.
How do we individuate body parts? Here, we investigated the effect of body segmentation between hand and arm in tactile and visual perception. In a first experiment, we showed that two tactile stimuli felt farther away when they were applied across the wrist than when they were applied within a single body part (palm or forearm), indicating a “category boundary effect”. In the following experiments, we excluded two hypotheses, which attributed tactile segmentation to other, nontactile factors. In Experiment 2, we showed that the boundary effect does not arise from motor cues. The effect was reduced during a motor task involving flexion and extension movements of the wrist joint. Action brings body parts together into functional units, instead of pulling them apart. In Experiments 3 and 4, we showed that the effect does not arise from perceptual cues of visual discontinuities. We did not find any segmentation effect for the visual percept of the body in Experiment 3, nor for a neutral shape in Experiment 4. We suggest that the mental representation of the body is structured in categorical body parts delineated by joints, and that this categorical representation modulates tactile spatial perception.

17.
Speech-associated gestures, Broca's area, and the human mirror system   (Times cited: 3; self-citations: 0; other citations: 3)
Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca's area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a "mirror" or "observation-execution matching" system). We asked whether the role that Broca's area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca's area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca's area and other cortical areas because speech-associated gestures are goal-directed actions that are "mirrored"). We compared the functional connectivity of Broca's area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca's area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements.

18.
Perceiving Real-World Viewpoint Changes   (Times cited: 10; self-citations: 0; other citations: 10)
Retinal images vary as observers move through the environment, but observers seem to have little difficulty recognizing objects and scenes across changes in view. Although real-world view changes can be produced both by object rotations (orientation changes) and by observer movements (viewpoint changes), research on recognition across views has relied exclusively on display rotations. However, research on spatial reasoning suggests a possible dissociation between orientation and viewpoint. Here we demonstrate that scene recognition in the real world depends on more than the retinal projection of the visible array; viewpoint changes have little effect on detection of layout changes, but equivalent orientation changes disrupt performance significantly. Findings from our three experiments suggest that scene recognition across view changes relies on a mechanism that updates a viewer-centered representation during observer movements, a mechanism not available for orientation changes. These results link findings from spatial tasks to work on object and scene recognition and highlight the importance of considering the mechanisms underlying recognition in real environments.

19.
20.
Numerous studies use arm movements (arm flexion and extension) to investigate the interaction between emotional stimuli and approach/avoidance behaviour. In many experiments, however, these arm movements are ambiguous: arm flexion can be interpreted either as pulling (approach) or as withdrawing (avoidance), while arm extension can be interpreted as reaching (approach) or as pushing (avoidance). This ambiguity can be resolved by regarding approach and avoidance as flexible action plans that are represented in terms of their effects. Approach actions reduce the distance between a stimulus and the self, whereas avoidance actions increase that distance. In this view, action effects are an integral part of the representation of an action. As a result, a neutral action can become an approach or avoidance reaction if it repeatedly results in decreasing or increasing the distance to a valenced stimulus. This hypothesis was tested in the current study. Participants responded to positive and negative words using key-presses. These “neutral” responses (not involving arm flexion or extension) were consistently followed by a stimulus movement toward or away from the participant. Responses to emotional words were faster when the response's effect was congruent with stimulus valence, suggesting that approach/avoidance actions are indeed defined in terms of their outcomes.
