31.
When a prominent attribute looms larger in one response procedure than in another, a violation of procedure invariance occurs. A hypothesis based on compatibility between the structure of the input information and the required output was tested as an explanation of this phenomenon and compared with other existing hypotheses in the field. The study had two aims: (1) to illustrate the prominence effect in a selection of preference tasks (choice, acceptance decisions, and preference ratings); and (2) to demonstrate the processing differences between a matching procedure and the selected preference tasks. Verbal protocols were therefore collected both in a matching task and in subsequent preference tasks; silent control conditions were also employed. The structure compatibility hypothesis was confirmed: the prominence effect obtained in the preference tasks was accompanied by a lower degree of attention to the attribute levels in those tasks. Furthermore, as predicted by the structure compatibility hypothesis, fewer comparisons between attribute levels were performed in the preference tasks than in the matching task. It was therefore concluded that both of these processing differences may explain the occurrence of the prominence effect. © 1998 John Wiley & Sons, Ltd.
32.
This study investigated the sensitivity of 9-month-old infants to the alignment between prosodic and gesture prominences in pointing–speech combinations. Results revealed that the perception of prominence is multimodal and that infants are aware of the timing of gesture–speech combinations well before they can produce them.
33.
Previous work has found that repetitive auditory stimulation (click trains) increases the subjective velocity of subsequently presented moving stimuli. We ask whether the effect of click trains is stronger for retinal velocity signals (produced when the target moves across the retina) or for extraretinal velocity signals (produced during smooth pursuit eye movements, when target motion across the retina is limited). In Experiment 1, participants viewed leftward or rightward moving single dot targets, travelling at speeds from 7.5 to 17.5 deg/s. They estimated velocity at the end of each trial. Prior presentation of auditory click trains increased estimated velocity, but only in the pursuit condition, where estimates were based on extraretinal velocity signals. Experiment 2 generalized this result to vertical motion. Experiment 3 found that the effect of clicks during pursuit disappeared when participants tracked across a visually textured background that provided strong local motion cues. Together these results suggest that auditory click trains selectively affect extraretinal velocity signals. This novel finding suggests that the cross-modal integration required for auditory click trains to influence subjective velocity operates at later stages of processing.
34.
We examined whether children's ability to integrate speech and gesture follows the pattern of a broader developmental shift between 3‐ and 5‐year‐old children (Ramscar & Gitcho, 2007) regarding the ability to process two pieces of information simultaneously. In Experiment 1, 3‐year‐olds, 5‐year‐olds, and adults were presented with an iconic gesture, a spoken sentence, or a combination of the two on a computer screen, and they were instructed to select a photograph that best matched the message. The 3‐year‐olds did not integrate information in speech and gesture, but 5‐year‐olds and adults did. In Experiment 2, 3‐year‐old children were presented with the same speech and gesture as in Experiment 1, but produced live by an experimenter. When presented live, 3‐year‐olds could integrate speech and gesture. We concluded that development of the integration ability is part of the broader developmental shift; however, live presentation facilitates the nascent integration ability in 3‐year‐olds.
35.

Introduction
Psychological mechanisms associated with academic motivation and academic commitment are promising targets for understanding undergraduate students' well-being during a developmental period that is particularly critical in terms of identity formation and vulnerability to psychopathology.

Objective
The present study explored the associations between self-determination theory's seven types of academic motivation and the multimodal commitment model's three modes of academic commitment among undergraduate students.

Method
Data were collected via self-report questionnaires from a sample of 188 undergraduate students. Multiple regression analyses were performed.

Results
Although several results supported the initial hypotheses, some were surprising: certain highly self-determined types of motivation were positively associated with certain commitment difficulties.

Conclusion
The discussion emphasizes the value of using these two models in combination to obtain a rich and nuanced understanding of psychological functioning among undergraduate students. Several identity-related hypotheses are also formulated to explain the results.
36.
Knowledge about en-trip mode-switching behavior in the presence of multimodal traveler information is so far very limited. This study investigated the impacts of smartphone multimodal traveler information systems (SMTIS), which integrate dynamic information on auto driving and subway park-and-ride (P&R), on commuting drivers' en-trip mode-switch decisions, based on data collected from a stated preference survey in Shanghai, China. A panel mixed probit model was developed that accounts for potential correlations among observations from the same driver and for heterogeneity in preferences for travel time savings and for the comfort level of the subway car. The panel model has a much better goodness of fit than a model that ignores panel effects and heterogeneity. The results show that SMTIS have a significant impact on commuting drivers' decisions to switch from auto driving to P&R; that the impact depends on personal attributes including gender, age, education level, income, and P&R use experience; and that sensitivity to time savings in the case of non-incident-induced delays, as well as sensitivity to the comfort level of the subway, varies significantly across the driver sample.
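The abstract names the estimator (a panel mixed probit) but not its functional form. The sketch below shows one standard way such a model is estimated, via simulated maximum likelihood with random coefficients on the attributes where the paper reports taste heterogeneity (time savings and subway comfort). Everything here is illustrative: the variable names, covariates, dimensions, and the use of plain normal draws (rather than, say, Halton draws) are assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_sim_loglik(params, y, X, Z, draws):
    """Negative simulated log-likelihood of a panel mixed probit.

    y: (N, T) binary switch decisions per driver and trip.
    X: (N, T, k_fixed) covariates with fixed coefficients.
    Z: (N, T, k_rand) covariates with random (driver-specific)
       coefficients, e.g. time savings and subway comfort.
    draws: (R, k_rand) standard normal draws shared across drivers.
    params: [beta (k_fixed), mu (k_rand), log_sigma (k_rand)].
    """
    k_fixed, k_rand = X.shape[2], Z.shape[2]
    beta = params[:k_fixed]
    mu = params[k_fixed:k_fixed + k_rand]
    sigma = np.exp(params[k_fixed + k_rand:])        # keep std devs positive
    b = mu + draws * sigma                           # (R, k_rand) coefficient draws
    idx = (X @ beta)[:, :, None] + np.einsum("ntk,rk->ntr", Z, b)
    sign = (2.0 * y - 1.0)[:, :, None]               # map {0, 1} to {-1, +1}
    # Take the product over a driver's T trips *before* averaging over
    # draws: this within-driver product is what makes the model "panel".
    p = norm.cdf(sign * idx).prod(axis=1).mean(axis=1)
    return -np.log(np.maximum(p, 1e-300)).sum()

# Simulate toy data and recover the parameters.
rng = np.random.default_rng(0)
N, T, R = 200, 6, 100
X = rng.normal(size=(N, T, 2))                       # e.g. incident dummy, info accuracy
Z = rng.normal(size=(N, T, 2))                       # e.g. time savings, comfort rating
beta0, mu0, sig0 = np.array([0.8, -0.5]), np.array([1.0, 0.4]), np.array([0.6, 0.3])
b_i = mu0 + rng.normal(size=(N, 2)) * sig0           # driver-specific tastes
y = ((X @ beta0) + np.einsum("ntk,nk->nt", Z, b_i)
     + rng.normal(size=(N, T)) > 0).astype(float)

draws = rng.normal(size=(R, 2))
res = minimize(neg_sim_loglik, np.zeros(6), args=(y, X, Z, draws), method="BFGS")
print(res.x)                                         # [beta, mu, log(sigma)] estimates
```

The driver-specific coefficients b_i capture the preference heterogeneity the abstract describes, and the within-driver product in the likelihood captures the correlation among a driver's repeated stated choices.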
37.
In spite of a large body of empirical research demonstrating the importance of multisensory integration in cognition, there is still little research on multimodal encoding and maintenance effects in working memory. In this study we investigated multimodal encoding in working memory by means of an immediate serial recall task with different modality and format conditions. In the first, non-verbal condition, participants were presented with sequences of non-verbal inputs representing familiar (concrete) objects in visual, auditory, or audio-visual formats. In the second, verbal condition, participants were presented with written, spoken, or bimodally presented words denoting the same objects that were represented by pictures or sounds in the non-verbal condition. The effects of articulatory suppression were assessed in both conditions. We found a bimodal superiority effect on memory span with non-verbal material, and a larger span with auditory (or bimodal) than with visual presentation with verbal material, with a significant effect of articulatory suppression in both conditions.
38.
39.
Natural spoken-language Human-Robot Interaction (HRI) requires robots to understand spoken language and to extract intention-related information from the working scenario. Object affordance recognition is a feasible way to ground the intended object in the working environment. To this end, we propose a dataset and a deep Convolutional Neural Network (CNN) based architecture for learning human-centered object affordances. We further present an affordance-based multimodal fusion framework that realizes intended-object grasping according to the spoken instructions of human users. The framework contains an intention-semantics extraction module that extracts the intention from spoken language, a deep CNN based affordance recognition module that recognizes human-centered object affordances, and a multimodal fusion module that bridges the extracted intentions and the recognized object affordances. We validate the feasibility and practicality of the framework through multiple intended-object grasping experiments on a PR2 platform.
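The abstract describes a three-module pipeline (intention-semantics extraction, CNN based affordance recognition, multimodal fusion) without implementation details. Below is a minimal, hypothetical sketch of how such a pipeline could fit together: the affordance label set, the verb-to-affordance mapping, the toy untrained CNN, and all function names are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

AFFORDANCES = ["graspable", "pourable", "cuttable"]    # hypothetical label set

def extract_intent(utterance: str) -> str:
    """Toy intention-semantics extraction: map instruction verbs to an
    affordance label (a real system would use a speech/NLU model)."""
    verb_map = {"grab": "graspable", "pick": "graspable",
                "pour": "pourable", "cut": "cuttable"}
    for verb, affordance in verb_map.items():
        if verb in utterance.lower():
            return affordance
    return "graspable"                                 # default intention

class AffordanceCNN(nn.Module):
    """Toy CNN that scores each affordance for one object crop; in
    practice this would be trained on the affordance dataset."""
    def __init__(self, n_affordances: int = len(AFFORDANCES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_affordances)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

def fuse_and_select(utterance: str, object_crops: torch.Tensor) -> int:
    """Multimodal fusion: pick the candidate object whose affordance
    scores best match the affordance implied by the instruction."""
    model = AffordanceCNN().eval()                     # untrained, for illustration
    target = AFFORDANCES.index(extract_intent(utterance))
    with torch.no_grad():
        scores = model(object_crops).softmax(dim=1)    # (n_objects, n_affordances)
    return int(scores[:, target].argmax())             # index of the intended object

# Usage: four candidate 64x64 object crops and one spoken instruction.
crops = torch.rand(4, 3, 64, 64)
print(fuse_and_select("please pour me some water", crops))
```

The returned index would then be handed to the robot's grasp planner; the fusion rule here (argmax over the target affordance column) is deliberately the simplest possible bridge between the two modalities.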
40.
Parr LA. Animal Cognition, 2004, 7(3): 171-178.
The ability of organisms to discriminate social signals, such as affective displays, using different sensory modalities is important for social communication. However, a major problem for understanding the evolution and integration of multimodal signals is determining how humans and animals attend to different sensory modalities, and how these different modalities contribute to the perception and categorization of social signals. Using a matching-to-sample procedure, chimpanzees discriminated videos of conspecifics' facial expressions that contained only auditory or only visual cues by selecting one of two facial expression photographs that matched the expression category represented by the sample. Other videos were edited to contain incongruent sensory cues, i.e., the visual features of one expression but the auditory features of another. In these cases, subjects were free to select the expression that matched either the auditory or the visual modality, whichever was more salient for that expression type. Results showed that chimpanzees were able to discriminate facial expressions using only auditory or only visual cues, and also when these modalities were mixed. In the mixed trials, however, clear preferences for either the visual or the auditory modality emerged, depending on the expression category. Pant-hoots and play faces were discriminated preferentially using the auditory modality, while screams were discriminated preferentially using the visual modality. Thus, depending on the type of expressive display, the auditory and visual modalities were differentially salient, in ways that appear consistent with the ethological importance of that display's social function.