121.
Human sensorimotor control involves inter-segmental coordination to cope with the complexity of a multi-segment system. The combined activation of hip and ankle muscles during upright stance represents the hip–ankle coordination. This study postulates that the coordination emerges from interactions on the sensory levels in the feedback control. The hypothesis was tested in a model-based approach that compared human experimental data with model simulations. Seven subjects were standing with eyes closed on an anterior–posterior tilting motion platform. Postural responses in terms of angular excursions of trunk and legs with respect to vertical were measured and characterized using spectral analysis. The presented control model consists of separate feedback modules for the hip and ankle joints, which exchange sensory information with each other. The feedback modules utilize sensor-derived disturbance estimates rather than ‘raw’ sensory signals. The comparison of the human data with the simulation data revealed close correspondence, suggesting that the model captures important aspects of the human sensory feedback control. For verification, the model was re-embodied in a humanoid robot that was tested in the human laboratory. The findings show that the hip–ankle coordination can be explained by interactions between the feedback control modules of the hip and ankle joints.
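The disturbance-estimate idea can be sketched with a minimal single-joint simulation (a deliberate simplification of the two-module hip–ankle model; the pendulum dynamics, gains, and tilt profile below are illustrative assumptions, not the published model). A controller that servos only the proprioceptive ankle angle sways along with the tilting platform, whereas one that feeds back a reconstructed platform-tilt estimate keeps the body near gravitational vertical:

```python
import math

# Minimal single-joint sketch: inverted pendulum standing on a slowly
# tilting platform. All parameters and gains are illustrative assumptions.
dt = 0.01                            # integration step (s)
kp, kd = 2000.0, 600.0               # hypothetical PD gains
J, m, g, h = 80.0, 75.0, 9.81, 0.9   # body inertia, mass, gravity, CoM height

def simulate(use_disturbance_estimate, t_end=15.0):
    body_space, vel = 0.0, 0.0       # body angle w.r.t. gravity (rad)
    peak = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        # Platform tilts sinusoidally, +/-4 deg at 0.2 Hz.
        platform = math.radians(4.0) * math.sin(2 * math.pi * 0.2 * t)
        body_foot = body_space - platform         # proprioceptive cue
        if use_disturbance_estimate:
            # Reconstruct the platform tilt from the difference between the
            # body-in-space cue and the proprioceptive cue, and compensate
            # for it, rather than feeding back the 'raw' ankle signal.
            platform_est = body_space - body_foot
            error = body_foot + platform_est      # = body angle in space
        else:
            error = body_foot                     # servo to the platform only
        torque = -kp * error - kd * vel
        acc = (m * g * h * math.sin(body_space) + torque) / J
        vel += acc * dt
        body_space += vel * dt
        if t > 10.0:                              # steady-state window
            peak = max(peak, abs(body_space))
    return math.degrees(peak)

print(f"proprioceptive-only peak sway: {simulate(False):.2f} deg")
print(f"with disturbance estimate:     {simulate(True):.2f} deg")
```

In this noise-free sketch the estimate trivially recovers the body-in-space angle; the point is the structure — feeding back reconstructed disturbances instead of raw sensor signals — which in the full model involves fusing noisy multimodal cues exchanged between joint modules.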
122.
Sighted individuals are less accurate and slower at localizing sounds coming from the peripheral space than sounds coming from the frontal space. This bias in favour of the frontal auditory space seems reduced in early blind individuals, who are particularly better than sighted individuals at localizing sounds coming from the peripheral space. It is currently unclear to what extent this bias in the auditory space is a general phenomenon or whether it applies only to spatial processing (i.e. sound localization). We compared the performance of early blind participants with that of sighted subjects during a frequency discrimination task with sounds originating from either frontal or peripheral locations. Results showed that early blind participants discriminated both peripheral and frontal sounds faster than sighted subjects did. In addition, sighted subjects were faster at discriminating frontal sounds than peripheral ones, whereas early blind participants showed equal discrimination speed for frontal and peripheral sounds. We conclude that the spatial bias observed in sighted subjects reflects an imbalance in the spatial distribution of auditory attention resources that is induced by visual experience.
123.
In a haptic search task, one has to determine the presence of a target among distractors. It has been shown that if the target differs from the distractors in two properties, shape and texture, performance is better than in either single-property condition (Van Polanen, Bergmann Tiest, & Kappers, 2013). The search for a smooth sphere among rough cubical distractors was faster than both the search for a rough sphere (shape information only) and that for a smooth cube (texture information only). This effect was replicated in the present study as a baseline. The main focus here was to further investigate the nature of this integration. Performance was shown to be better when the two properties were combined in a single target (smooth sphere) than when they were located in two separate targets (rough sphere and smooth cube) that were simultaneously present. A race model that assumes independent parallel processing of the two properties could explain the enhanced performance with two properties, but this facilitation only took place effectively when the two properties were located in a single target.
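The race-model account can be illustrated with a toy simulation (the finishing-time distributions and parameters are invented for illustration, not fitted to the study's data): with a redundant target, two independent detection processes run in parallel and whichever finishes first triggers the response, so the mean search time drops by statistical facilitation.

```python
import random
import statistics

random.seed(1)

# Hypothetical finishing-time distributions (ms) for the two independent
# detection processes; the values are illustrative only.
def shape_process():
    return max(1.0, random.gauss(420.0, 90.0))

def texture_process():
    return max(1.0, random.gauss(420.0, 90.0))

def mean_rt(draw, n=20000):
    return statistics.mean(draw() for _ in range(n))

# Single-property targets: only one process can detect the target.
shape_only = mean_rt(shape_process)
texture_only = mean_rt(texture_process)

# Redundant target (smooth sphere): both processes race; the first to
# finish wins, so the response time is the minimum of the two.
redundant = mean_rt(lambda: min(shape_process(), texture_process()))

print(f"shape only:   {shape_only:.0f} ms")
print(f"texture only: {texture_only:.0f} ms")
print(f"redundant:    {redundant:.0f} ms")
```

The facilitation arises purely from taking the minimum of two independent samples; no co-activation or integration stage is needed, which is what makes the race model the parsimonious baseline against which integration accounts are tested.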
124.
Embodied theories of object representation propose that the same neural networks are involved in encoding and retrieving object knowledge. In the present study, we investigated whether motor programs play a causal role in the retrieval of object names. Participants performed an object-naming task while squeezing a sponge with either their right or left hand. The objects were artifacts (e.g. hammer) or animals (e.g. giraffe) and were presented in an orientation that favored a grasp or not. We hypothesized that, if activation of motor programs is necessary to retrieve object knowledge, then concurrent motor activity would interfere with naming manipulable artifacts but not non-manipulable animals. In Experiment 1, we observed naming interference for all objects oriented towards the occupied hand. In Experiment 2, we presented the objects in more ‘canonical orientations’. Participants named all objects more quickly when they were oriented towards the occupied hand. Together, these interference/facilitation effects suggest that concurrent motor activity affects naming for both categories. These results also suggest that picture-plane orientation interacts with an attentional bias that is elicited by the objects and their relationship to the occupied hand. These results may be more parsimoniously accounted for by a domain-general attentional effect, constraining the embodied theory of object representations. We suggest that researchers should scrutinize attentional accounts of other embodied cognitive effects.
125.
We investigated the discrimination of two neighboring intra- or inter-modal empty time intervals marked by three successive stimuli. Each of the three markers was a flash (visual—V) or a sound (auditory—A). The first and last markers were of the same modality, while the second one was either A or V, resulting in four conditions: VVV, VAV, AVA and AAA. Participants judged whether the second interval, whose duration was systematically varied, was shorter or longer than the 500-ms first interval. Compared with VVV and AAA, discrimination was impaired with VAV, but not so much with AVA (Experiment 1). Although VAV and AVA consisted of the same set of single intermodal intervals (VA and AV), discrimination was impaired in the VAV condition compared to the AVA condition. This difference between VAV and AVA could not be attributed to the participants' strategy for performing the discrimination task, e.g., ignoring the standard interval or mentally replacing the visual stimuli with sounds (Experiment 2). These results are discussed in terms of sequential grouping according to sensory similarity.
126.
Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech–gesture integration processes.
127.
Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°·s−1) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt, contrary to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt.
128.
The purpose of this study was to induce trunk extensor and abdominal muscle fatigue, on separate occasions, compare their effects on standing postural control and trunk proprioception, and examine the effects of a recovery period on these outcome measures. A total of 20 individuals participated, with 10 (5 males and 5 females) completing either a standing postural control or a lumbar axial repositioning protocol. Participants completed their randomly assigned protocol on two occasions, separated by at least 4 days, with either their trunk extensor or abdominal muscles being fatigued on a given day. Postural control centre-of-pressure variables and trunk proprioception errors were compared pre- and post-fatigue. Results showed that both trunk extensor and abdominal muscle fatigue significantly degraded standing postural control immediately post-fatigue, with recovery occurring within 2 min post-fatigue. In general, these degradative effects on postural control appeared to be greater when the trunk extensor muscles were fatigued than when the abdominal muscles were. No statistically significant changes in trunk proprioception were found after either fatigue protocol. The present findings demonstrate the body's ability to quickly adapt and reweight somatosensory information to maintain postural control and trunk proprioception, and illustrate the importance of considering the abdominal muscles, along with the trunk extensors, when assessing the impact of fatigue on trunk movement and postural control.
129.
Based on the claim from grounded cognition theory that perceptual and memory processes use the same distributed systems, the present study investigated the temporal aspects of access to memory traces through the haptic and auditory modalities. Unlike a visual or auditory component, a vibrotactile component is perceived more sequentially and therefore cannot be fully processed before the signal ends. The present study explores the dynamics of component activation under audio-vibrotactile asynchrony. We used a short-term priming paradigm consisting of an association phase (between a vibration and a sound) and a test phase assessing the priming effect of a vibrotactile stimulation on the processing of a target sound. Results showed interference with simultaneous processing and facilitation with sequential processing. The temporal course of perceptual-component processing thus also matters at the memory level.
130.
This study investigated the influence of culture on people's sensory responses, such as smell, taste, sound and touch, to visual stimuli. The sensory responses of university students from four countries (Japan, South Korea, Britain and France) to six images were evaluated. The images combined real and abstract objects and were presented on a notebook computer. Overall, 280 participants (144 men and 136 women; n = 70 per country) were included in the statistical analysis. Chi-square tests of independence showed both differences and similarities in the sensory responses across countries. Most differences were detected in smell and taste, whereas few variations were observed for sound responses. Large variations in response were observed for the abstract coral and butterfly images, but few differences were detected in response to the real leaf image. These variations were found mostly among the British and Japanese participants.