Sort order: 60 query results in total (items 41–50 shown below).
41.
Empirical findings from cross-linguistic studies reveal three different frames of spatial reference: intrinsic, relative, and absolute. Relative and absolute systems are of special interest because they have antagonistic logical implications concerning dependence on the standpoint and orientation of the speaker/hearer. Against the background of these findings, it becomes crucial to show how an agent can form such language-specific spatial representations. In this paper, the system Locator is introduced as a model of concept formation in the spatial domain. It is assumed that an agent creates the necessary discriminative features through processes of self-organization and selection, rather than simply discovering them in its environment. A number of simulations show that agents successfully create concepts of either a relative system (German) or an absolute system (Marquesan), relying solely on multimodal input (visual and linguistic).
Matthias Rehm
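The abstract does not spell out Locator's algorithm; the following is a minimal Python sketch of the general mechanism it invokes, a discrimination game in which feature detectors are invented on demand (self-organization) and reinforced or pruned by success (selection). The channel names, threshold scheme, and scoring are illustrative assumptions, not the paper's implementation.

```python
import random

CHANNELS = ["angle_to_speaker", "angle_to_slope", "distance"]  # assumed channels

class Detector:
    """A one-channel threshold test, invented on demand (self-organization)."""
    def __init__(self, channel, threshold):
        self.channel, self.threshold = channel, threshold
        self.score = 0.0  # usage score; drives selection

    def matches(self, obj):
        return obj[self.channel] <= self.threshold

def discrimination_game(repertoire, topic, context):
    """Try to single out `topic` from `context`; grow the repertoire on failure."""
    for det in sorted(repertoire, key=lambda d: -d.score):
        if det.matches(topic) and not any(det.matches(o) for o in context):
            det.score += 1.0                      # selection: reinforce success
            return det
    channel = random.choice(CHANNELS)             # self-organization: new detector
    repertoire.append(Detector(channel, topic[channel]))
    return None

def prune(repertoire, min_score=0.5):
    """Selection: drop detectors that never proved discriminative."""
    repertoire[:] = [d for d in repertoire if d.score >= min_score]

# toy usage: objects are dicts of perceptual channel values
agent = []
topic = {"angle_to_speaker": 0.2, "angle_to_slope": 0.9, "distance": 1.5}
context = [{"angle_to_speaker": 0.8, "angle_to_slope": 0.4, "distance": 2.0}]
for _ in range(10):
    discrimination_game(agent, topic, context)
print(len(agent), "detectors after 10 games")
```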
42.
Lee CS, Todd NP (2004). Cognition, 93(3), 225–254.
The world's languages display important differences in their rhythmic organization; in particular, different languages seem to privilege different phonological units (the mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language acquisition and processing. Several questions remain, however, as to what exactly characterizes the rhythmic differences, how they are manifested at the auditory/acoustic level, and how listeners, whether adult native speakers or young infants, process rhythmic information. In this paper it is proposed that the crucial determinant of rhythmic organization is the variability in the auditory prominence of phonetic events. To test this auditory prominence hypothesis, an auditory model is run on two multi-language data sets, the first consisting of matched pairs of English and French sentences, and the second consisting of French, Italian, English, and Dutch sentences. The model is based on a theory of the auditory primal sketch and generates a primitive representation of the acoustic signal (the rhythmogram), which yields a crude segmentation of the speech signal and assigns prominence values to the resulting sequence of events. Its performance is compared with that of several recently proposed phonetic measures of vocalic and consonantal variability.
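The "recently proposed phonetic measures" referred to are interval-based variability metrics; below is a hedged Python sketch in the style of Ramus and colleagues' %V and ΔV/ΔC, computed from an assumed list of vocalic/consonantal interval durations. The input format and metric set are assumptions for illustration, not the paper's exact comparison set.

```python
from statistics import pstdev

def rhythm_metrics(intervals):
    """intervals: list of (kind, seconds) where kind is "V" or "C"."""
    v = [dur for kind, dur in intervals if kind == "V"]
    c = [dur for kind, dur in intervals if kind == "C"]
    return {
        "%V": 100 * sum(v) / (sum(v) + sum(c)),  # proportion of vocalic time
        "dV": pstdev(v),   # variability of vocalic interval durations
        "dC": pstdev(c),   # variability of consonantal interval durations
    }

# toy CV-segmented utterance (durations in seconds)
print(rhythm_metrics([("C", 0.08), ("V", 0.12), ("C", 0.15),
                      ("V", 0.09), ("C", 0.06), ("V", 0.20)]))
```

Under such metrics, stress-timed languages such as English and Dutch tend toward higher consonantal variability and lower %V than syllable-timed languages such as French and Italian, which is the kind of contrast the rhythmogram-based model is evaluated against.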
43.
This research developed a multimodal picture-word task for assessing the influence of visual speech on phonological processing in 100 children between 4 and 14 years of age. We assessed how manipulating seemingly to-be-ignored auditory (A) and audiovisual (AV) phonological distractors affected picture naming, without participants consciously trying to respond to the manipulation. Results varied in complex ways as a function of age and of the type and modality of the distractors. Results for congruent AV distractors yielded an inverted U-shaped function, with a significant influence of visual speech in 4-year-olds and 10- to 14-year-olds but not in 5- to 9-year-olds. In concert with dynamic systems theory, we proposed that the temporary loss of sensitivity to visual speech reflects reorganization of relevant knowledge and processing subsystems, particularly phonology. We speculated that this reorganization may be associated with (a) formal literacy instruction and (b) developmental changes in multimodal processing and in auditory perceptual, linguistic, and cognitive skills.
44.
A number of recent models of semantics combine linguistic information, derived from text corpora, with visual information, derived from image collections, demonstrating that the resulting multimodal models account for behavioral data better than either of their unimodal counterparts. Empirical work on semantic processing has shown that emotion also plays an important role, especially for abstract concepts; however, models integrating emotion along with linguistic and visual information have been lacking. Here, we first improve on visual and affective representations derived from state-of-the-art existing models by choosing the models that best fit available human semantic data and extending the number of concepts they cover. Crucially, we then assess whether adding affective representations (obtained from a neural network model designed to predict emojis from co-occurring text) improves the model's ability to fit semantic similarity/relatedness judgments beyond a purely linguistic model and a linguistic–visual model. We find that, given specific weights assigned to the component models, adding both visual and affective representations improves performance, with visual representations providing an improvement especially for more concrete words, and affective representations especially improving the fit for more abstract words.
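The abstract describes weighting and combining modality-specific vectors and scoring the result against human similarity/relatedness judgments; here is a minimal sketch of that pipeline. The normalization scheme, weights, and use of cosine similarity with a Spearman correlation are assumptions standing in for the paper's actual procedure.

```python
import numpy as np
from scipy.stats import spearmanr

def fuse(ling, vis, aff, w=(1.0, 0.5, 0.5)):
    """Concatenate L2-normalized modality vectors, each scaled by its weight."""
    parts = [wi * v / np.linalg.norm(v) for wi, v in zip(w, (ling, vis, aff))]
    return np.concatenate(parts)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def fit_to_humans(vectors, judged_pairs):
    """vectors: {word: fused vector}; judged_pairs: [(w1, w2, rating), ...].
    Returns the rank correlation between model and human similarities."""
    model = [cosine(vectors[a], vectors[b]) for a, b, _ in judged_pairs]
    human = [r for _, _, r in judged_pairs]
    return spearmanr(model, human).correlation
```

Sweeping the weight tuple `w` and refitting is one straightforward way to obtain the "specific weights" at which each modality helps most, with the visual weight mattering more for concrete items and the affective weight for abstract ones.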
45.
The goal of this article was to outline issues critical to evaluating the literature on the incremental benefit of multiple effective treatments used together, versus a single effective treatment, for childhood ADHD. These issues include: (1) the sequencing and dosage of the treatments being combined and compared; (2) the difficulty of drawing valid conclusions about individual components of treatment when treatment packages are employed; (3) differing results emerging from measurement tools that purportedly measure the same domain; and (4) the resultant difficulty of reaching a summary conclusion when multiple outcome measures yield conflicting results. The implications of these issues for the design and conduct of future studies are discussed, and recommendations are made for future research.
46.
I am clearly located where my body is located. But is there one particular place inside my body where I am? Recent studies have provided apparently contradictory findings on this question. Here, we addressed the issue using a more direct approach than previous studies have used. In a simple pointing task, we asked participants to point directly at themselves, either by manually manipulating the pointer whilst blindfolded or by visually discerning when the pointer was in the correct position. Self-location judgements in the haptic and visual modalities were highly similar and were clearly modulated by the starting location of the pointer. Participants most frequently chose to point to one of two likely regions, the upper face or the upper torso, according to which they reached first. These results suggest that the experienced self is neither spread out homogeneously across the entire body nor localised at any single point. Rather, two distinct regions, the upper face and the upper torso, appear to be judged as where "I" am.
47.
Travelers have different concerns about traffic safety, which may affect their transportation choices and risk-taking behaviors, as well as the overall safety performance of multimodal transportation systems. The objective of this study was to examine factors associated with stated traffic safety concerns among travelers using multiple transport modes. The analysis used data from an online questionnaire survey completed by over 2,000 students and employees at Utah State University in Logan, Utah, US. Four latent variables (concerns about pedestrians and cyclists, auto drivers, modal interactions, and roadway conditions) were confirmed using factor analysis of 16 questions about traffic safety concerns. These four types of safety concern were then analyzed, using a structural equation model, to understand their associations with mode choice, commuting behavior, and socio-demographics. Results showed that safety concerns varied systematically across mode users and demographic groups. Auto drivers perceived interactions with pedestrians and cyclists as concerning, while non-auto users felt more concerned about automobile traffic. Commuters who had recently been involved in a crash were especially concerned about non-motorized modes. Women, lower-income, and non-white road users were more concerned about traffic safety overall. These findings about multimodal traffic safety concerns provide insights into people's perceptions, which can be useful in developing designs, plans, and policies that make the transportation system safer for all road users.
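As a rough illustration of the analysis pipeline (confirmatory factor analysis feeding a structural equation model), here is a hedged sketch using the Python semopy package; all item names, predictor names, and the data file are placeholders, and the study's actual model specification is not reproduced here.

```python
import pandas as pd
from semopy import Model

# lavaan-style specification; q1..q16 and the predictors are placeholders
desc = """
PedCyc   =~ q1 + q2 + q3 + q4
Drivers  =~ q5 + q6 + q7 + q8
ModalIx  =~ q9 + q10 + q11 + q12
RoadCond =~ q13 + q14 + q15 + q16
PedCyc   ~ drives_auto + recent_crash + female + income
Drivers  ~ drives_auto + recent_crash + female + income
"""

df = pd.read_csv("survey.csv")   # hypothetical tidy survey data file
model = Model(desc)
model.fit(df)
print(model.inspect())           # factor loadings and path coefficients
```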
48.
To date, joint attention skill assessments have focused on children's responses to multimodal bids (RJA) and their initiation of bids (IJA) toward multimodal spectacles. Here we gain a systematic view of auditory joint attention skills using a novel assessment that measures both auditory and multimodal RJA and IJA. In Study 1, 47 typically developing (TD) children were tested 5 times from 12 to 30 months of age to document the development of auditory joint attention skills. In Study 2, 113 toddlers (39 TD, 33 with autism spectrum disorder [ASD], and 41 with non-ASD developmental disorders [DD]; average age 22.4 months) were tested to discern the effects of ASD. Our findings fit well within the established depiction of joint attention skills, with one important caveat: auditory items were far more difficult to execute than multimodal ones. By 24 months, TD children passed multimodal RJA items at near-ceiling levels, an accomplishment not reached even by 30 months for auditory RJA items. Intentional communicative IJA bids also emerged more slowly for auditory spectacles than for multimodal spectacles. Toddlers with DD outperformed toddlers with ASD on multimodal RJA items, but toddlers in both groups rarely passed any auditory RJA items. Toddlers with ASD often monitored their partner's attention during IJA items, albeit less often than toddlers with DD and TD toddlers, but they essentially never produced higher-level IJA bids, regardless of modality. Future studies should investigate further how variations in bids and targets affect auditory joint attention skills and probe the relation between these skills and language development.
49.
Audiovisual integration (AVI) has been demonstrated to play a major role in speech comprehension. Previous research suggests that AVI in speech comprehension tolerates a temporal window of audiovisual asynchrony. However, few studies have employed audiovisual presentation to investigate AVI in person recognition. Here, participants completed an audiovisual voice familiarity task in which the synchrony of the auditory and visual stimuli was manipulated, and in which the visual speaker's identity could correspond or not correspond to the voice. Recognition of personally familiar voices systematically improved when corresponding visual speakers were presented near synchrony or with a slight auditory lag. Moreover, when faces of differing familiarity were presented with a voice, recognition accuracy suffered only from near synchrony to a slight auditory lag. These results provide the first evidence of a temporal window for AVI in person recognition, extending from approximately 100 ms of auditory lead to 300 ms of auditory lag.
50.
McNorgan C, Reid J, McRae K (2011). Cognition, (2), 211–233.
Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes, differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, within- and between-modality communication is accomplished either through direct connectivity or through a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models, using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants' knowledge of relations between within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1–4 is consistent with a deep integration hierarchy.
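The contrasting predictions follow from connectivity alone, as a toy graph makes clear: in a hub model, any two features are the same number of links apart, whereas in a cascading hierarchy, same-modality features converge earlier than cross-modal ones. The topologies below are illustrative assumptions about the two model classes, not the authors' simulation.

```python
from collections import deque

def hops(edges, start, goal):
    """Shortest path length between two nodes in an undirected graph."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in adj[node] - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))

# shallow: every feature connects directly to a single semantic hub
shallow = [("vis1", "hub"), ("vis2", "hub"), ("aud1", "hub")]
# deep: features converge within modality first, then across modalities
deep = [("vis1", "vis_zone"), ("vis2", "vis_zone"), ("aud1", "aud_zone"),
        ("vis_zone", "xmodal"), ("aud_zone", "xmodal")]

for name, graph in [("shallow", shallow), ("deep", deep)]:
    print(name, "within-modal:", hops(graph, "vis1", "vis2"),
          "cross-modal:", hops(graph, "vis1", "aud1"))
# shallow: 2 vs 2 (no difference); deep: 2 vs 4 (within-modal pairs meet earlier)
```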