Similar Documents
 A total of 20 similar documents were found.
1.
When performing a giant circle on high bar a gymnast flexes at the hips in the lower part of the circle, increasing the kinetic energy, and extends in the upper part of the circle, decreasing the kinetic energy. In order to perform a sequence of giant circles at an even tempo, any variation in angular velocity at the end of the flexion phase needs to be reduced by the end of the extension phase. The aim of this study was to determine the nature and contribution of such adjustments. A computer simulation model of a gymnast performing giant circles on high bar was used to investigate strategies of (a) fixed timing of the extension phase (feedforward control) and (b) stretched timing in order to extend at the same point of the giant circle (feedforward with additional feedback control). For three elite gymnasts fixed timing reduced the angular velocity variation on average by 36%, whereas stretched timing reduced the variation by 63%. The mean reduction for the actual gymnast techniques was 61%. It was concluded that gymnasts use both feedforward and feedback control strategies to control such movements.
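The multi-segment model used in the study is not reproduced here, but the two timing strategies can be illustrated with a much cruder toy: a point mass on the bar whose radius is shortened near the bottom of the circle ("flexion") and lengthened again ("extension") either after a fixed delay (feedforward) or once a fixed bar angle is reached (feedback). A minimal sketch, with all parameter values invented for illustration and no attempt to reproduce the paper's numbers:

```python
# Toy illustration only (not the paper's multi-segment gymnast model): a point
# mass on the bar whose radius shortens near the bottom of the circle
# ("flexion") and lengthens again ("extension") either after a fixed delay
# (feedforward) or once a fixed bar angle is reached (feedback).
# All parameter values are invented.
import numpy as np

G, DT = 9.81, 0.0005
R_FLEXED, R_EXTENDED, EXT_DURATION = 0.9, 1.0, 0.25   # m, m, s

def omega_at_top(omega0, strategy, ext_time=0.23, ext_angle=2.2):
    """Integrate theta (0 = hanging straight down) from just past the bottom
    of the circle until the mass reaches the top (theta = pi)."""
    theta, omega, r, t = 0.3, omega0, R_FLEXED, 0.0
    ext_start = None
    while theta < np.pi:
        if omega <= 0.0:                      # failed to carry the circle over
            return float("nan")
        # decide when the extension phase begins
        if ext_start is None:
            if strategy == "feedforward" and t >= ext_time:
                ext_start = t
            elif strategy == "feedback" and theta >= ext_angle:
                ext_start = t
        # prescribed radius profile: linear ramp back to full extension
        if ext_start is None:
            r_new = R_FLEXED
        else:
            frac = min((t - ext_start) / EXT_DURATION, 1.0)
            r_new = R_FLEXED + frac * (R_EXTENDED - R_FLEXED)
        r_dot = (r_new - r) / DT
        # variable-length pendulum: d(r^2 * omega)/dt = -g * r * sin(theta)
        omega += DT * (-G * np.sin(theta) / r - 2.0 * r_dot * omega / r)
        r = r_new
        theta += DT * omega
        t += DT
    return omega                              # angular velocity at the top

rng = np.random.default_rng(0)
omega0s = 9.5 + 0.3 * rng.standard_normal(200)        # variation after flexion
for strategy in ("feedforward", "feedback"):
    tops = [omega_at_top(w0, strategy) for w0 in omega0s]
    print(f"{strategy:12s} spread of omega at the top: {np.nanstd(tops):.3f} rad/s")
```

Running the script prints, for each strategy, the spread of angular velocity at the top of the circle across trials with a perturbed post-flexion angular velocity.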

2.
The purpose of this investigation was to develop and evaluate a wobbling mass model of a female performing a drop landing and to examine the influence of soft tissue properties on the impact loads experienced. A planar model comprising a foot, shank, thigh and upper body segment was developed. Spring-damper systems coupled the foot to the ground and the wobbling masses to the rigid masses. Unlike traditional wobbling mass models of landing, the model included a foot segment, which allowed replication of forefoot-heel landing techniques, and it used subject- and movement-specific properties to simulate the landings. Kinematics and force data collected for three drop landings (height 0.46 m) performed by a female were used separately to drive and evaluate the model. The wobbling mass model reproduced the measured force profiles to within 9% (RMS difference) of the measured range and the measured peak vertical ground reaction forces to within 6%. The accuracies of the wobbling mass model and a corresponding rigid mass model were compared. The inclusion of soft tissue properties in the model contributed a reduction of up to 8.6 bodyweights in peak impact loading and produced a 52% more accurate replication of the measured force profiles, highlighting the prominent role soft tissues play in load attenuation and the benefits of modelling soft tissue in simulations of landings. The success of the wobbling mass model in replicating the kinetics of actual landing performances suggests the model may be used in the future to gain a realistic insight into load attenuation strategies used by females.
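The planar four-segment model itself is not shown here; the sketch below only illustrates the coupling idea in one dimension: a rigid mass meets the ground through a stiff spring-damper, with a soft-coupled wobbling mass either attached or lumped into the rigid mass. All stiffness, damping and mass values are invented, not the subject-specific properties used in the study.

```python
# Minimal 1-D illustration of the wobbling-mass idea (not the paper's planar
# four-segment model): a rigid mass meets the ground through a stiff
# spring-damper, with a soft-coupled wobbling mass either attached or lumped
# into the rigid mass.  All parameter values are invented.
G, DT, T_END = 9.81, 1e-5, 0.20
K_GROUND, C_GROUND = 80_000.0, 1_000.0   # ground-contact spring-damper (N/m, N*s/m)
K_SOFT,   C_SOFT   =  6_000.0,   400.0   # wobbling-mass coupling
M_RIGID, M_WOBBLE  = 40.0, 20.0          # kg
V_IMPACT = -3.0                          # vertical velocity at touch-down (m/s)
BODYWEIGHT = (M_RIGID + M_WOBBLE) * G

def peak_grf(wobbling: bool) -> float:
    """Peak vertical ground reaction force, in bodyweights."""
    m_rigid = M_RIGID if wobbling else M_RIGID + M_WOBBLE
    y_r, v_r = 0.0, V_IMPACT             # rigid segment (0 = position at touch-down)
    y_w, v_w = 0.0, V_IMPACT             # soft-tissue (wobbling) mass
    peak = 0.0
    for _ in range(int(T_END / DT)):
        grf = max(0.0, -K_GROUND * y_r - C_GROUND * v_r) if y_r <= 0.0 else 0.0
        f_couple = -K_SOFT * (y_w - y_r) - C_SOFT * (v_w - v_r)  # on wobbling mass
        a_r = -G + (grf - (f_couple if wobbling else 0.0)) / m_rigid
        a_w = -G + f_couple / M_WOBBLE
        v_r += a_r * DT; y_r += v_r * DT
        v_w += a_w * DT; y_w += v_w * DT
        peak = max(peak, grf)
    return peak / BODYWEIGHT

print(f"rigid-only model   : {peak_grf(False):.1f} BW peak vertical GRF")
print(f"wobbling-mass model: {peak_grf(True):.1f} BW peak vertical GRF")
```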

3.
The aim of the study was to determine the effects of variations in eccentric loading and knee joint range of motion on the performance enhancement associated with the stretch-shortening cycle (SSC) in vertical jumping. Seventeen male elite volleyball players performed three variations of the vertical jump which served as the research model: the squat jump (SJ), countermovement jump (CMJ) and drop jump from a height of 30 cm (DJ30). Knee joint angle (70 degrees and 90 degrees of flexion) at the commencement of the propulsive phase for each jump type was experimentally controlled, with the trunk kept as erect as possible. Force and motion data were recorded for each performance and used to compute a range of kinematic and kinetic variables, including hip, knee and ankle angles, angular velocities, work done, net joint moments and a number of temporal variables. The average of 12 trials for each participant was used in a series of repeated-measures ANOVAs (jump × knee, alpha = .05). From both knee joint angles, an increase in eccentric loading resulted in a significant increase in jump height (DJ30 > CMJ > SJ; p < .05). These enhancements were significantly greater (p < .05) for 70 degrees than for 90 degrees of knee flexion. From 70 degrees of knee flexion, the enhancements were due to significant increases in work done at all three joints, while from 90 degrees of knee flexion only the hip and ankle joints appeared to contribute (p < .05). The amount of enhancement associated with employing the SSC in jumping is therefore dependent upon the interaction of the magnitude of eccentric loading and the range of motion used.

4.
Performing the vertical jump: movement adaptations for submaximal jumping
The purpose of this study was to gain insight into the kinematics and kinetics of the vertical jump when jumping for different heights and to investigate movement effectiveness as a criterion for movement control in submaximal jumping. In order to jump high, a countermovement is used and large body segments are rotated, both of which consume energy that is not directly used to gain extra jump height. It was hypothesized that the energy used to reach a specified jump height is minimized by limiting the non-effective energy consumed. Standing vertical jumps attempting 100%, 75%, 50% and 25% of maximal height were performed by a group of 10 subjects. Force and motion data were recorded simultaneously during each performance. We found that jump height increased due to increasing vertical velocity at take-off, which was primarily related to an increase in countermovement amplitude. Accordingly, flexion amplitude of the hip joint increased with jump height whereas ankle and knee joint flexion did not. These findings revealed that for submaximal jumping a consistent strategy was used of maximizing the contribution of distal joints and minimizing the contribution of proximal joints. Taking into account the high inertia of proximal segments, the potential energy deficit due to countermovement prior to joint extension, the advantageous horizontal orientation of the foot segment during stance and the tendon lengths in distal muscles, it was concluded that movement effectiveness is a likely candidate for the driving criterion of this strategy.
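The link between take-off velocity and jump height mentioned above is plain projectile motion, h = v²/(2g); the short snippet below works through the velocities needed for the four target heights, using an illustrative maximal take-off velocity rather than any value from the study.

```python
# Jump height is determined by vertical take-off velocity alone (projectile
# motion): h = v_to^2 / (2 g).  The take-off velocity below is an illustrative
# value, not one measured in the study.
G = 9.81
v_max = 2.8                                   # m/s, assumed maximal take-off velocity
h_max = v_max ** 2 / (2 * G)                  # ~0.40 m
for frac in (1.00, 0.75, 0.50, 0.25):
    v_needed = (2 * G * frac * h_max) ** 0.5
    print(f"{frac:4.0%} of max height ({frac * h_max:.2f} m) needs v_to = {v_needed:.2f} m/s")
```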

5.
Inadequate rest intervals may contribute to impaired performance during functional tests. However, the effect of different rest intervals on performance of the Star Excursion Balance Test (SEBT) in individuals with and without chronic ankle instability (CAI) is not known. Our purposes were to determine whether different rest intervals affect ankle kinematics during the SEBT and whether there are differences between these two populations. Twenty-four controls and 24 individuals with CAI completed 3 trials in 3 reach directions (anteromedial, AM; medial, M; posteromedial, PM). The order of rest intervals and reach distances was randomized and counterbalanced. Three visits were required to complete the 3 rest-interval conditions (10, 20, 40 s). Rest interval did not affect ankle kinematics in either controls or individuals with CAI during the SEBT. Dorsiflexion (DF) excursions in all directions (AM: partial η² = 0.18; M: partial η² = 0.23; PM: partial η² = 0.23) and tibial internal rotation (TIR) excursion in the AM direction (partial η² = 0.20) were greater in individuals with CAI regardless of rest-interval length. Rest intervals ranging from 10 to 40 s did not influence ankle kinematics. Differences in DF and TIR exist between controls and individuals with CAI during the SEBT. These findings suggest that clinicians can use any rest interval between 10 and 40 s when administering the SEBT. However, triplanar motion during a complex functional movement differs in individuals with CAI compared with controls.
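For reference, the partial η² values quoted above are computed from the effect and error sums of squares; a minimal sketch of the arithmetic with made-up sums of squares (not the study's data):

```python
# Partial eta squared, as quoted above: SS_effect / (SS_effect + SS_error).
# The sums of squares below are made up purely to show the arithmetic.
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    return ss_effect / (ss_effect + ss_error)

# a hypothetical effect of roughly the size reported for dorsiflexion
print(round(partial_eta_squared(ss_effect=4.6, ss_error=20.0), 2))   # 0.19
```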

6.
Wendy S. Parker. Synthese, 2009, 169(3): 483-496.
A number of recent discussions comparing computer simulation and traditional experimentation have focused on the significance of “materiality.” I challenge several claims emerging from this work and suggest that computer simulation studies are material experiments in a straightforward sense. After discussing some of the implications of this material status for the epistemology of computer simulation, I consider the extent to which materiality (in a particular sense) is important when it comes to making justified inferences about target systems on the basis of experimental results.

7.
We show that simple perceptual competences can emerge from an internal simulation of action effects and are thus grounded in behavior. A simulated agent learns to distinguish between dead ends and corridors without the need to represent these concepts in the sensory domain. Initially, the agent is endowed only with a simple value system and the means to extract low-level features from an image. In interaction with the environment, it acquires a visuo-tactile forward model that allows it to predict how the visual input will change under its movements, and whether movements will lead to a collision. From short-term predictions based on the forward model, the agent learns an inverse model. The inverse model in turn produces suggestions about which actions should be simulated in long-term predictions, and long-term predictions eventually give rise to the perceptual ability.
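The sketch below is a heavily simplified, invented stand-in for this setup: an agent in a one-dimensional corridor learns a table mapping (distance reading, action) to the predicted next reading and a collision flag, then classifies situations by internally simulating a few steps ahead and checking whether a collision is predicted.

```python
# Invented stand-in for the paper's visuo-tactile setup: from random
# exploration the agent learns how a distance reading changes when it steps
# forward, then classifies situations by internally simulating several steps
# and checking whether a collision is predicted.
import random

MAX_RANGE = 5                       # the distance sensor saturates at this value

def sense(pos, wall):
    return min(wall - pos, MAX_RANGE)

# exploration phase: learn (reading, action) -> (next reading, collision flag)
random.seed(0)
forward_model = {}
for _ in range(500):
    wall = random.randint(3, 15)
    pos = random.randint(0, wall - 1)
    reading = sense(pos, wall)
    collided = pos + 1 >= wall
    next_reading = reading if collided else sense(pos + 1, wall)
    forward_model[(reading, "forward")] = (next_reading, collided)

# internal simulation: predict several steps ahead without actually moving
def collision_predicted(reading, horizon=4):
    for _ in range(horizon):
        # unseen situations are treated as unsafe
        reading, collided = forward_model.get((reading, "forward"), (reading, True))
        if collided:
            return True
    return False

for reading in range(1, MAX_RANGE + 1):
    verdict = "collision predicted" if collision_predicted(reading) else "path looks open"
    print(f"sensor reading {reading}: {verdict}")
```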

8.
People recognize faces of their own race more accurately than faces of other races. The “contact” hypothesis suggests that this “other‐race effect” occurs as a result of the greater experience we have with own‐ versus other‐race faces. The computational mechanisms that may underlie different versions of the contact hypothesis were explored in this study. We replicated the other‐race effect with human participants and evaluated four classes of computational face recognition algorithms for the presence of an other‐race effect. Consistent with the predictions of a developmental contact hypothesis, “experience‐based models” demonstrated an other‐race effect only when the representational system was developed through experience that warped the perceptual space in a way that was sensitive to the overall structure of the model's experience with faces of different races. When the model's representation relied on a feature set optimized to encode the information in the learned faces, experience‐based algorithms recognized minority‐race faces more accurately than majority‐race faces. The results suggest a developmental learning process that warps the perceptual space to enhance the encoding of distinctions relevant for own‐race faces. This feature space limits the quality of face representations for other‐race faces.

9.
The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During recognition, the model compares incoming preprocessed natural input to stored representations. By complementing the recognition memory process with a perceptual front end, the NIM model is able to make predictions about memorability based directly on individual natural stimuli. We demonstrate that the NIM model is able to simulate experimentally obtained similarity ratings and recognition memory for individual stimuli (i.e., face images).
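A stripped-down sketch of the NIM idea, with random arrays standing in for face images and a fixed grid standing in for fixation locations (both simplifications of the model's biologically informed preprocessing): study images are stored as sets of local feature vectors, and a test image is scored by the distance of its samples to the stored ones.

```python
# Stripped-down sketch of the NIM idea: study images are stored as sets of
# local feature vectors sampled at "fixation" locations, and a test image is
# scored by how close its samples lie to the stored ones.  Random arrays stand
# in for face images, and a fixed grid stands in for fixation sampling.
import numpy as np

rng = np.random.default_rng(1)
PATCH = 8                                     # side length of a local sample

def sample_features(image):
    """Local patches at a fixed grid of 'fixation' locations (a simplification)."""
    locations = [(r, c) for r in (0, 12, 24) for c in (0, 12, 24)]
    return np.array([image[r:r + PATCH, c:c + PATCH].ravel() for r, c in locations])

def familiarity_distance(test_image, memory):
    """Mean distance from each test sample to its nearest stored sample
    (lower means more familiar)."""
    samples = sample_features(test_image)
    return float(np.mean([min(np.linalg.norm(s - m) for m in memory) for s in samples]))

studied = [rng.normal(size=(32, 32)) for _ in range(5)]           # "study phase"
memory = np.vstack([sample_features(img) for img in studied])

old_item = studied[0] + 0.05 * rng.normal(size=(32, 32))          # studied, slightly degraded
new_item = rng.normal(size=(32, 32))                              # never studied
print(f"old item: {familiarity_distance(old_item, memory):.2f}  "
      f"new item: {familiarity_distance(new_item, memory):.2f}")
```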

10.
Morrison points out many similarities between the roles of simulation models and other sorts of models in science. On the basis of these similarities she claims that running a simulation is epistemologically on a par with doing a traditional experiment and that the output of a simulation therefore counts as a measurement. I agree with her premises but reject the inference. The epistemological payoff of a traditional experiment is greater (or less) confidence in the fit between a model and a target system. The source of this payoff is the existence of a causal interaction with the target system. A computer experiment, which does not go beyond the simulation system itself, lacks any such interaction. So computer experiments cannot confer any additional confidence in the fit (or lack thereof) between the simulation model and the target system.

11.
An angle-driven computer simulation model of aerial movement was used to determine the maximum amount of twist that can be produced in a reverse 1½ somersault dive from a three-metre springboard using various aerial and contact twisting techniques. The segmental inertia parameters of an elite springboard diver were used in the simulations, and lower bounds were placed on the durations of arm and hip angle changes based on recorded performances of twisting somersaults. A limiting dive was identified as that producing the largest possible odd number of half twists. Simulated annealing optimisation was used to find simulations of the limiting dives that produced the required amounts of somersault, tilt and twist after a flight time of 1.5 s. Additional optimisations were then run to seek solutions with the arms less adducted during the twisting phase. The upper limits ranged from 3½ to 5½ twists, with arm abduction ranges lying between 8° and 23°. Similar results were obtained when the inertia parameters of two other springboard divers were used. It may be concluded that a reverse 1½ somersault dive using aerial asymmetrical arm and hip movements to produce 5½ twists is a realistic possibility. To accomplish this limiting dive the diver needs to be able to coordinate the timing of configurational changes with the progress of the twist to a precision of 10 ms or better.
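The diver model itself is not reproduced here, but the optimiser named in the abstract is generic; the sketch below shows a simulated annealing loop of that kind applied to a made-up objective over two "timing" parameters, standing in for the difference between simulated and required somersault, tilt and twist.

```python
# Generic simulated-annealing loop of the kind used to tune timing parameters
# in the simulation model.  The objective here is a made-up stand-in for
# "difference from the required somersault, tilt and twist", not the paper's
# angle-driven diver model.
import math
import random

def cost(timing):
    t_arm, t_hip = timing
    # invented smooth objective with a minimum near (0.30, 0.55)
    return (t_arm - 0.30) ** 2 + (t_hip - 0.55) ** 2 + 0.01 * math.sin(40 * t_arm)

random.seed(0)
current = [0.1, 0.9]                       # initial arm / hip change times (s)
current_cost = cost(current)
best, best_cost = list(current), current_cost
temperature = 1.0

for step in range(20_000):
    candidate = [max(0.0, min(1.5, t + random.gauss(0, 0.02))) for t in current]
    c = cost(candidate)
    # always accept improvements; accept worse moves with a temperature-dependent probability
    if c < current_cost or random.random() < math.exp((current_cost - c) / temperature):
        current, current_cost = candidate, c
        if c < best_cost:
            best, best_cost = list(candidate), c
    temperature *= 0.9995                  # geometric cooling schedule

print(f"best timing found: arm {best[0]:.3f} s, hip {best[1]:.3f} s, cost {best_cost:.5f}")
```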

12.
This article reviews the application of dynamical systems methods in social psychology. Social psychological phenomena are complex, time-dependent and dynamic, and dynamic analysis methods, such as identifying the behavioural patterns of dynamical systems, analysing their stability, and applying computer models such as artificial neural networks and cellular automata, can effectively capture these characteristics of human thought and behaviour. Dynamic analysis methods have already made some progress in research on the dynamic properties of social psychology.
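Cellular automata are one of the dynamical-systems tools mentioned in the review; the sketch below is a minimal example of that class of model (not any specific study): agents on a grid repeatedly adopt the majority attitude of their neighbours, so purely local interactions produce global clustering.

```python
# Minimal cellular-automaton example of the class of model mentioned in the
# review (not any specific study): agents on a grid repeatedly adopt the
# majority attitude (+1 / -1) of themselves and their four neighbours, so
# purely local interactions produce global clustering of attitudes.
import numpy as np

rng = np.random.default_rng(42)
grid = rng.choice([-1, 1], size=(30, 30))     # random initial attitudes

def step(g):
    total = (g + np.roll(g, 1, 0) + np.roll(g, -1, 0) +
             np.roll(g, 1, 1) + np.roll(g, -1, 1))   # self + 4 neighbours (torus)
    return np.where(total > 0, 1, -1)

def local_agreement(g):
    """Fraction of adjacent agent pairs holding the same attitude."""
    return 0.5 * (np.mean(g == np.roll(g, 1, 0)) + np.mean(g == np.roll(g, 1, 1)))

print(f"local agreement before: {local_agreement(grid):.2f}")
for _ in range(40):
    grid = step(grid)
print(f"local agreement after 40 steps: {local_agreement(grid):.2f}")
```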

13.
Human sensorimotor control involves inter-segmental coordination to cope with the complexity of a multi-segment system. The combined activation of hip and ankle muscles during upright stance represents the hip–ankle coordination. This study postulates that the coordination emerges from interactions at the sensory level in the feedback control. The hypothesis was tested in a model-based approach that compared human experimental data with model simulations. Seven subjects stood with eyes closed on an anterior–posterior tilting motion platform. Postural responses, in terms of angular excursions of the trunk and legs with respect to vertical, were measured and characterized using spectral analysis. The control model consists of separate feedback modules for the hip and ankle joints, which exchange sensory information with each other. The feedback modules utilize sensor-derived disturbance estimates rather than 'raw' sensory signals. The comparison of the human data with the simulation data revealed close correspondence, suggesting that the model captures important aspects of human sensory feedback control. For verification, the model was re-embodied in a humanoid robot that was tested in the human laboratory. The findings show that the hip–ankle coordination can be explained by interactions between the feedback control modules of the hip and ankle joints.
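The sketch below caricatures the modular idea rather than the paper's model: each of two segments (legs about the ankle, trunk about the hip) is reduced to an independent linearised inverted pendulum with its own PD feedback module, and the ankle module may share its platform-tilt estimate with the hip module. Dynamics and parameter values are invented simplifications.

```python
# Caricature of the modular feedback idea (not the paper's model): the legs
# (about the ankle) and the trunk (about the hip) are each reduced to an
# independent linearised inverted pendulum with its own PD controller, and the
# ankle module can share its platform-tilt estimate with the hip module.
# All dynamics and parameter values are invented simplifications.
import math

G, DT, T_END = 9.81, 0.001, 20.0
SEGMENTS = {                            # mass (kg), CoM height (m), inertia (kg m^2), PD gains
    "legs (ankle)": dict(m=30.0, h=0.5, i=12.0, kp=400.0, kd=100.0),
    "trunk (hip)":  dict(m=40.0, h=0.3, i=8.0,  kp=400.0, kd=100.0),
}

def peak_sway(share_estimate: bool) -> dict:
    state = {name: [0.0, 0.0] for name in SEGMENTS}      # [angle, angular velocity]
    peak = {name: 0.0 for name in SEGMENTS}
    for step in range(int(T_END / DT)):
        t = step * DT
        platform = 0.05 * math.sin(2 * math.pi * 0.2 * t)   # support-surface tilt (rad)
        tilt_estimate = platform         # the ankle module is given a perfect estimate
        for name, par in SEGMENTS.items():
            theta, omega = state[name]
            measured = theta - platform                      # angle relative to the support
            if name.startswith("legs") or share_estimate:
                estimate = measured + tilt_estimate          # corrected to space coordinates
            else:
                estimate = measured                          # hip module left uncorrected
            torque = -par["kp"] * estimate - par["kd"] * omega
            accel = (par["m"] * G * par["h"] * theta + torque) / par["i"]
            omega += accel * DT
            theta += omega * DT
            state[name] = [theta, omega]
            if t > 10.0:                                     # skip the initial transient
                peak[name] = max(peak[name], abs(theta))
    return peak

for share in (False, True):
    sway = peak_sway(share)
    label = "with   " if share else "without"
    print(label, "shared tilt estimate:",
          {name: round(v, 3) for name, v in sway.items()})
```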

14.
Computer simulation through an error-statistical lens
Wendy S. Parker. Synthese, 2008, 163(3): 371-384.
After showing how Deborah Mayo’s error-statistical philosophy of science might be applied to address important questions about the evidential status of computer simulation results, I argue that an error-statistical perspective offers an interesting new way of thinking about computer simulation models and has the potential to significantly improve the practice of simulation model evaluation. Though intended primarily as a contribution to the epistemology of simulation, the analysis also serves to fill in details of Mayo’s epistemology of experiment.

15.
The philosophy of simulation: hot new issues or same old stew?
Roman Frigg, Julian Reiss. Synthese, 2009, 169(3): 593-613.
Computer simulations are an exciting tool that plays important roles in many scientific disciplines. This has attracted the attention of a number of philosophers of science. The main tenor in this literature is that computer simulations not only constitute interesting and powerful new science, but that they also raise a host of new philosophical issues. The protagonists in this debate claim no less than that simulations call into question our philosophical understanding of scientific ontology, the epistemology and semantics of models and theories, and the relation between experimentation and theorising, and submit that simulations demand a fundamentally new philosophy of science in many respects. The aim of this paper is to critically evaluate these claims. Our conclusion will be sober. We argue that these claims are overblown and that simulations, far from demanding a new metaphysics, epistemology, semantics and methodology, raise few if any new philosophical problems. The philosophical problems that do come up in connection with simulations are not specific to simulations and most of them are variants of problems that have been discussed in other contexts before.

16.
With reference to Henderson's (2004) assumption that inventors are "expert problem solvers", we studied the complex problem solving (CPS) ability of 46 German inventors. The participants had to use FSYS 2.0, a computer-simulated microworld. Additionally, we assessed metacognition, in particular the participants' ability to make deliberate use of divergent and convergent thinking. This ability was expected to be an important skill involved in solving complex problems (Dörner, Kreuzig, Reither, & Stäudel, 1983). We assumed a positive correlation between the individual success of inventors (number of granted and marketed patents) and CPS ability. Controllability of divergent and convergent thinking turned out to be a predictor of the success of inventors and allowed us to identify the top 10% of performers. Oddly, however, the best problem solvers were inventors with exactly one granted patent. Data from interviews conducted a posteriori help to explain these results.

17.
To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within variable environments. This raises the question, how do cognitive systems achieve similarly high degrees of robustness? The aim of this study was to identify a set of mechanisms that enhance robustness in cognitive systems. We identify three mechanisms that enhance robustness in biological and engineered systems: system control, redundancy, and adaptability. After surveying the psychological literature for evidence of these mechanisms, we provide simulations illustrating how each contributes to robust cognition in a different psychological domain: psychomotor vigilance, semantic memory, and strategy selection. These simulations highlight features of a mathematical approach for quantifying robustness, and they provide concrete examples of mechanisms for robust cognition.
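Of the three mechanisms, redundancy is the easiest to quantify; the snippet below shows the standard calculation of how majority voting over several unreliable components outperforms a single component. The reliability figures are illustrative and unrelated to the paper's simulations.

```python
# Redundancy as a robustness mechanism: the probability that a majority vote
# over N independent, unreliable components is correct, versus a single
# component.  Reliability figures are illustrative, not from the paper.
from math import comb

def majority_correct(p_single: float, n: int) -> float:
    """P(more than half of n independent components are correct)."""
    return sum(comb(n, k) * p_single ** k * (1 - p_single) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 9):
    print(f"{n} components at 80% reliability -> "
          f"{majority_correct(0.80, n):.3f} correct overall")
```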

18.
Humans and many other species selectively attend to stimuli or stimulus dimensions—but why should an animal constrain information input in this way? To investigate the adaptive functions of attention, we used a genetic algorithm to evolve simple connectionist networks that had to make categorization decisions in a variety of environmental structures. The results of these simulations show that while learned attention is not universally adaptive, its benefit is not restricted to the reduction of input complexity in order to keep it within an organism's processing capacity limitations. Instead, being able to shift attention provides adaptive benefit by allowing faster learning with fewer errors in a range of ecologically plausible environments.
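A minimal, invented version of this kind of setup is sketched below: a genetic algorithm evolves an attention-weight vector feeding a fixed readout on a toy categorisation task in which only one input dimension is diagnostic. (The paper evolves networks that learn to shift attention; evolving the attention weights directly is a simplification.)

```python
# Minimal, invented version of the evolve-networks-for-categorization setup:
# a genetic algorithm evolves an attention-weight vector for a task in which
# only the first input dimension is diagnostic.  The readout is fixed, so good
# performance requires attending to the diagnostic dimension and suppressing
# the irrelevant ones.
import numpy as np

rng = np.random.default_rng(0)
N_DIM, POP, GENS = 6, 40, 80

def make_task(n=300):
    x = rng.normal(size=(n, N_DIM))
    y = x[:, 0] > 0                        # only dimension 0 is diagnostic
    return x, y

def fitness(attention, x, y):
    pred = np.tanh(x * attention).sum(axis=1) > 0
    return float(np.mean(pred == y))

x, y = make_task()
pop = rng.normal(size=(POP, N_DIM))
for _ in range(GENS):
    scores = np.array([fitness(g, x, y) for g in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]              # truncation selection
    children = parents + 0.1 * rng.normal(size=parents.shape)  # mutation only, no crossover
    pop = np.vstack([parents, children])

best = max(pop, key=lambda g: fitness(g, x, y))
print("accuracy of best evolved individual:", fitness(best, x, y))
print("evolved attention magnitudes (dimension 0 usually dominates):",
      np.round(np.abs(best), 2))
```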

19.
Plausibility has been implicated as playing a critical role in many cognitive phenomena from comprehension to problem solving. Yet, across cognitive science, plausibility is usually treated as an operationalized variable or metric rather than being explained or studied in itself. This article describes a new cognitive model of plausibility, the Plausibility Analysis Model (PAM), which is aimed at modeling human plausibility judgment. This model uses commonsense knowledge of concept-coherence to determine the degree of plausibility of a target scenario. In essence, a highly plausible scenario is one that fits prior knowledge well: with many different sources of corroboration, without complexity of explanation, and with minimal conjecture. A detailed simulation of empirical plausibility findings is reported, which shows a close correspondence between the model and human judgments. In addition, a sensitivity analysis demonstrates that PAM is robust in its operations.

20.
The ability to combine words into novel sentences has been used to argue that humans have symbolic language production abilities. Critiques of connectionist models of language often center on the inability of these models to generalize symbolically (Fodor & Pylyshyn, 1988; Marcus, 1998). To address these issues, a connectionist model of sentence production was developed. The model had variables (role‐concept bindings) that were inspired by spatial representations (Landau & Jackendoff, 1993). In order to take advantage of these variables, a novel dual‐pathway architecture with event semantics is proposed and shown to be better at symbolic generalization than several variants. This architecture has one pathway for mapping message content to words and a separate pathway that enforces sequencing constraints. Analysis of the model's hidden units demonstrated that the model learned different types of information in each pathway, and that the model's compositional behavior arose from the combination of these two pathways. The model's ability to balance symbolic and statistical behavior in syntax acquisition and to model aphasic double dissociations provided independent support for the dual‐pathway architecture.
