2.
Representing a world or a physical/social environment in an agent’s cognitive system is essential for creating human-like artificial intelligence. This study takes a story-centered approach to this issue. In this context, a story refers to an internal representation involving a narrative structure, which is assumed to be a common form of organizing past, present, future, and fictional events and situations. In the artificial intelligence field, a story or narrative is traditionally treated as a symbolic representation. However, a symbolic story representation is limited in its representational power to construct a rich world. For example, a symbolic story representation is unfit to handle the sensory/bodily dimension of a world. In search of a computational theory for narrative-based world representation, this study proposes the conceptual framework of a Cogmic Space for a comic strip-like representation of a world. In the proposed framework, a story is positioned as a mid-level representation, in which the conceptual and sensory/bodily dimensions of a world are unified. The events and their background situations that constitute a story are unified into a sequence of panels. Based on this structure, a representation (i.e., a story) and the represented environment are connected via an isomorphism of their temporal, spatial, and relational structures. Furthermore, the framework of a Cogmic Space is associated with the generative aspect of representations, which is conceptualized in terms of unconscious- and conscious-level processes/representations. Finally, a proof-of-concept implementation is presented to provide a concrete account of the proposed framework.
3.
Recent studies of naturalistic face‐to‐face communication have demonstrated coordination patterns such as the temporal matching of verbal and non‐verbal behavior, which provides evidence for the proposal that verbal and non‐verbal communicative control derives from one system. In this study, we argue that the observed relationship between verbal and non‐verbal behaviors depends on the level of analysis. In a reanalysis of a corpus of naturalistic multimodal communication (Louwerse, Dale, Bard, & Jeuniaux, 2012), we focus on measuring the temporal patterns of specific communicative behaviors in terms of their burstiness. We examined burstiness estimates across different roles of the speaker and different communicative modalities. We observed more burstiness for verbal versus non‐verbal channels, and for more versus less informative language subchannels. Using this new method for analyzing temporal patterns in communicative behaviors, we show that there is a complex relationship between verbal and non‐verbal channels. We propose a “temporal heterogeneity” hypothesis to explain how the language system adapts to the demands of dialog.
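The abstract does not name its burstiness estimator; a common choice for inter-event intervals is Goh and Barabási's coefficient B = (σ − μ)/(σ + μ), which is −1 for perfectly regular events, near 0 for a Poisson process, and approaches +1 for highly clustered events. A minimal sketch under that assumption (function names are illustrative, not from the study):

```python
import statistics

def burstiness(intervals):
    """Goh & Barabasi burstiness B = (sigma - mu) / (sigma + mu),
    computed over inter-event intervals.
    B = -1: perfectly regular; B ~ 0: Poisson; B -> +1: highly bursty."""
    mu = statistics.mean(intervals)
    sigma = statistics.pstdev(intervals)
    return (sigma - mu) / (sigma + mu)

def event_times_to_burstiness(onsets):
    """Score a sorted list of event onset times (e.g., onsets of a given
    communicative behavior, in seconds) by the burstiness of their gaps."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return burstiness(intervals)

# Equally spaced events are maximally regular...
print(event_times_to_burstiness([0, 1, 2, 3, 4, 5, 6, 7, 8]))  # -1.0
# ...while clustered events score positive.
print(event_times_to_burstiness([0, 0.1, 0.2, 5.0, 5.1, 5.15, 9.0, 9.05, 9.1]))
```

Comparing such scores across channels (e.g., speech onsets versus gesture onsets) is one way to operationalize the verbal/non-verbal contrast the study describes.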
4.
The driver of a conditionally automated vehicle equivalent to level 3 of the SAE is obligated to accept a takeover request (TOR) issued by the vehicle. Considerable research has been conducted on the TOR, especially in terms of the effectiveness of multimodal methods. Therefore, in this study, the effectiveness of various multimodalities was compared and analyzed. Thirty-six volunteers were recruited to compare the effects of the multimodalities, and vehicle and physiological data were obtained using a driving simulator. Eight combinations of TOR warnings, including those implemented through LED lights on the A-pillar, earcon, speech message, or vibrations in the back support and seat pan, were analyzed to clarify the corresponding effects. When LED lights were implemented on the A-pillar, the driver reaction was faster (p = 0.022) and the steering deviation was larger (p = 0.024) than when no LED lights were used. The speech message resulted in a larger steering deviation than the earcon (p = 0.044). When vibrations were provided through the haptic seat, the reaction time was faster (p < 0.001) and the steering deviation was larger (p = 0.001) than without vibration. An interaction effect was noted between the visual and auditory modalities; notably, the earcon resulted in a small steering deviation and skin conductance response amplitude (SCR amplitude) when implemented with LED lights on the A-pillar, whereas the speech message led to a small steering deviation and SCR amplitude without the LED lights. In the design of a multimodal warning to be used to issue a TOR, the effects of each individual modality and the corresponding interaction effects must be considered. These effects must be evaluated through application to various takeover situations.
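As an illustration of the takeover measures reported above (reaction time and steering deviation), the following sketch computes both from a steering trace; the 2° reaction threshold, the sampling scheme, and all names are assumptions for illustration, not the study's actual criteria:

```python
import statistics

def takeover_metrics(t, steering, tor_onset, threshold_deg=2.0):
    """From sampled timestamps t (s) and steering-wheel angles (deg),
    return (reaction_time, steering_sd) for a takeover request (TOR)
    issued at tor_onset. Reaction time: first post-TOR sample whose
    angle departs from the TOR-onset angle by more than threshold_deg.
    Steering SD: dispersion of the post-TOR steering angle."""
    post = [(ti, a) for ti, a in zip(t, steering) if ti >= tor_onset]
    baseline = post[0][1]  # angle at (or just after) the TOR
    reaction_time = None   # stays None if the driver never reacts
    for ti, a in post:
        if abs(a - baseline) > threshold_deg:
            reaction_time = ti - tor_onset
            break
    steering_sd = statistics.pstdev(a for _, a in post)
    return reaction_time, steering_sd

# Synthetic 10 Hz drive: steering stays neutral until 0.6 s after the TOR.
t = [i * 0.1 for i in range(21)]
steering = [0.0] * 16 + [3.0, 5.0, 6.0, 5.0, 4.0]
rt, sd = takeover_metrics(t, steering, tor_onset=1.0)
print(round(rt, 2))  # 0.6
```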
5.
Emotional expression and how it is lateralized across the two sides of the face may influence how we detect audiovisual speech. To investigate how these components interact we conducted experiments comparing the perception of sentences expressed with happy, sad, and neutral emotions. In addition we isolated the facial asymmetries for affective and speech processing by independently testing the two sides of a talker's face. These asymmetrical differences were exaggerated using dynamic facial chimeras in which left- or right-face halves were paired with their mirror image during speech production. Results suggest that there are facial asymmetries in audiovisual speech such that the right side of the face and right-facial chimeras supported better speech perception than their left-face counterparts. Affective information was also found to be critical in that happy expressions tended to improve speech performance on both sides of the face relative to all other emotions, whereas sad emotions generally inhibited visual speech information, particularly from the left side of the face. The results suggest that approach information may facilitate visual and auditory speech detection.
6.
Classification and Distribution of Sentence Stress in Mandarin Chinese
王韫佳 (Wang Yunjia), 初敏 (Chu Min), 贺琳 (He Lin). 《心理学报》 (Acta Psychologica Sinica), 2003, 35(6): 734-742
The classification and distribution of sentence stress in Mandarin Chinese were investigated in two independent stress-annotation experiments. Experiment 1 was a perceptual experiment in which 60 naive listeners rated the prominence of each syllable. In Experiment 2, the three authors labeled stress categories, dividing sentence stress into rhythmic stress and semantic stress. The categorical labels of Experiment 2 were supported by the naive listeners' prominence ratings in Experiment 1, indicating that listeners can indeed perceive two distinct types of stress. The results also show that rhythmic stress tends to fall on the final syllable of the last prosodic word within a larger prosodic unit and co-occurs with an appropriate pause or lengthening, whereas the distribution of semantic stress bears little relation to the sentence's prosodic structure.
7.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.
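Gaze-allocation measures like those above are typically computed as dwell-time proportions over areas of interest (AOIs) such as the face and the gesture space. A minimal sketch, assuming rectangular, non-overlapping AOIs (the coordinates and names are hypothetical, not taken from the study):

```python
def gaze_proportions(fixations, aois):
    """fixations: (x, y, duration_ms) triples; aois: name -> (x0, y0, x1, y1)
    rectangles, assumed non-overlapping. Returns each AOI's share of total
    fixation time (fixations outside every AOI still count toward the
    total, so shares need not sum to 1)."""
    totals = {name: 0.0 for name in aois}
    grand = 0.0
    for x, y, dur in fixations:
        grand += dur
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break
    return {name: (d / grand if grand else 0.0) for name, d in totals.items()}

# Hypothetical screen layout: face above, gesture space below.
aois = {"face": (300, 50, 500, 250), "gesture": (250, 300, 550, 600)}
fixations = [(400, 100, 600.0), (400, 120, 300.0), (350, 400, 100.0)]
print(gaze_proportions(fixations, aois))  # {'face': 0.9, 'gesture': 0.1}
```

Per-participant proportions of this kind can then be entered as predictors of the behavioral benefit, as in the regression the abstract alludes to.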
8.
Previous studies have mainly focused on tailoring message content to match individual characteristics and preferences. This study investigates the effect of a website tailored to individual preferences for the mode of information presentation, compared to 4 nontailored websites on younger and older adults' attention and recall of information, employing a 5 (condition: tailored vs. text, text with illustrations, audiovisual, combination) × 2 (age: younger [25–45] vs. older [≥65] adults) design (N = 559). The mode‐tailored condition (relative to nontailored conditions) improved attention to the website and, consequently, recall in older adults, but not in younger adults. Younger adults recalled more from nontailored information such as text only or text with illustrations, relative to tailored information.
9.
Driving simulators are valuable tools for traffic safety research as they allow for systematic reproductions of challenging situations that cannot be easily tested during real-world driving. Unfortunately, simulator sickness (i.e., nausea, dizziness, etc.) is common in many driving simulators and may limit their utility. The experience of simulator sickness is thought to be related to the sensory feedback provided to the user and is also thought to be greater in older compared to younger users. Therefore, the present study investigated whether adding auditory and/or motion cues to visual inputs in a driving simulator affected simulator sickness in younger and older adults. Fifty-eight healthy younger adults (age 18–39) and 63 healthy older adults (age 65+) performed a series of simulated drives under one of four sensory conditions: (1) visual cues alone, (2) combined visual + auditory cues (engine, tire, wind sounds), (3) combined visual + motion cues (via hydraulic hexapod motion platform), or (4) a combination of all three sensory cues (visual, auditory, motion). Simulator sickness was continuously recorded while driving and up to 15 min after driving session termination. Results indicated that older adults experienced more simulator sickness than younger adults overall and that females were more likely to drop out and drove for less time compared to males. No differences between sensory conditions were observed. However, older adults needed significantly longer time to fully recover from the driving session than younger adults, particularly in the visual-only condition. Participants reported that driving in the simulator was least realistic in the visual-only condition compared to the other conditions. Our results indicate that adding auditory and/or motion cues to the visual stimulus does not guarantee a reduction of simulator sickness per se, but might accelerate the recovery process, particularly in older adults.
10.
Processing the various features from different feature maps and modalities in coherent ways requires a dedicated integration mechanism (“the binding problem”). Many authors have related feature binding to conscious awareness, but little is known about how tight this relationship really is. We presented subjects with asynchronous audiovisual stimuli and tested whether the two features were integrated. The results show that binding took place at feature-onset asynchronies of up to 350 ms, suggesting that integration covers a relatively wide temporal window. We also asked subjects to judge explicitly whether the two features belonged to the same event or to different events. Unsurprisingly, synchrony judgments decreased with increasing asynchrony. Most importantly, feature binding was entirely unaffected by conscious experience: features were bound whether they were experienced as occurring together or as belonging to separate events, suggesting that the conscious experience of unity is neither a prerequisite for nor a direct consequence of binding.
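The explicit synchrony judgments described above amount to estimating, for each stimulus-onset asynchrony (SOA), the proportion of "same event" responses, which can then be compared against the binding measure. A minimal sketch (the trial layout is an assumption, not the study's actual data format):

```python
from collections import defaultdict

def synchrony_curve(trials):
    """trials: (soa_ms, judged_same) pairs with judged_same in {0, 1}.
    Returns {soa_ms: proportion of 'same event' judgments}, the curve
    that is expected to decline as audiovisual asynchrony grows."""
    counts = defaultdict(lambda: [0, 0])  # soa -> [n_same, n_total]
    for soa, same in trials:
        counts[soa][0] += same
        counts[soa][1] += 1
    return {soa: n_same / n for soa, (n_same, n) in sorted(counts.items())}

# Toy data: judged synchrony falls off with asynchrony, even though
# binding (measured separately) may persist across the same SOAs.
trials = [(0, 1), (0, 1), (0, 0), (350, 1), (350, 0), (350, 0), (700, 0)]
print(synchrony_curve(trials))
```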