41.
To evaluate the claim that correct performance on unexpected transfer false-belief tasks specifically involves mental-state understanding, two experiments were carried out with children with autism, intellectual disabilities, and typical development. In both experiments, children were given a standard unexpected transfer false-belief task and a mental-state-free, mechanical analogue task in which participants had to predict the destination of a train based on true or false signal information. In both experiments, performance on the mechanical task was found to correlate with that on the false-belief task for all groups of children. Logistic regression showed that performance on the mechanical analogue significantly predicted performance on the false-belief task even after accounting for the effects of verbal mental age. The findings are discussed in relation to possible common mechanisms underlying correct performance on the two tasks.
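A minimal sketch, assuming hypothetical per-child data, of the kind of logistic regression the abstract describes: predicting pass/fail on the false-belief task from mechanical-task performance while controlling for verbal mental age. The column names and data are illustrative, not the authors'.

```python
# Illustrative sketch (not the authors' analysis): does mechanical-task
# performance still predict false-belief performance once verbal mental
# age (vma, in months) is in the model? All data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "fb_pass":   [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],   # 1 = passed false-belief task
    "mech_pass": [1, 0, 1, 1, 1, 1, 0, 1, 0, 0],   # 1 = passed mechanical analogue
    "vma":       [52, 48, 60, 55, 45, 63, 50, 58, 47, 54],
})

# Logistic regression with verbal mental age as a covariate.
model = smf.logit("fb_pass ~ mech_pass + vma", data=df).fit(disp=False)
print(model.summary())
```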
42.
By 4 years of age, children have been reinforced repeatedly for searching where they see someone point. In two studies, we asked whether this history of reinforcement could interfere with young children's ability to discriminate between a knowledgeable and an ignorant informant. Children watched as one informant hid a sticker while another turned around, and then both informants indicated where they thought the sticker was, either by pointing or by using a less practiced means of reference. Children failed to discriminate between the two informants when they pointed, but they chose the location indicated by the knowledgeable informant when the informants used a cue other than pointing. Pointing can disrupt as basic an understanding as the link between seeing and knowing.
43.
Visual word recognition is facilitated by the presence of orthographic neighbors that mismatch the target word by a single letter substitution. However, researchers typically do not consider where neighbors mismatch the target. In light of evidence that some letter positions are more informative than others, we investigate whether the influence of orthographic neighbors differs across letter positions. To do so, we quantify the number of enemies at each letter position (how many neighbors mismatch the target word at that position). Analyses of reaction time data from a visual word naming task indicate that the influence of enemies differs across letter positions, with the negative impacts of enemies being most pronounced at letter positions where readers have low prior uncertainty about which letters they will encounter (i.e., positions with low entropy). To understand the computational mechanisms that give rise to such positional entropy effects, we introduce a new computational model, VOISeR (Visual Orthographic Input Serial Reader), which receives orthographic inputs in parallel and produces an over-time sequence of phonemes as output. VOISeR produces a similar pattern of results as in the human data, suggesting that positional entropy effects may emerge even when letters are not sampled serially. Finally, we demonstrate that these effects also emerge in human subjects' data from a lexical decision task, illustrating the generalizability of positional entropy effects across visual word recognition paradigms. Taken together, such work suggests that research into orthographic neighbor effects in visual word recognition should also consider differences between letter positions.
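The two positional quantities described above can be made concrete with a short sketch. This is not VOISeR itself; the toy lexicon, fixed word length, and function names are assumptions for illustration: per-position letter entropy (the reader's prior uncertainty about the letter at each slot) and per-position enemy counts (neighbors that mismatch a target at exactly that slot).

```python
# Illustrative sketch of positional entropy and per-position enemy counts
# over a toy fixed-length lexicon (not the authors' stimuli or code).
import math
from collections import Counter

LEXICON = ["cat", "cot", "cut", "bat", "bad", "bag", "rat", "rut"]
LENGTH = 3  # all toy words share the same length

def positional_entropy(lexicon, length):
    """Shannon entropy (bits) of the letter distribution at each position."""
    entropies = []
    for pos in range(length):
        counts = Counter(word[pos] for word in lexicon)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies

def enemies(target, lexicon):
    """Count neighbors mismatching the target at exactly one position, per position."""
    counts = [0] * len(target)
    for word in lexicon:
        if word == target or len(word) != len(target):
            continue
        diffs = [i for i in range(len(target)) if word[i] != target[i]]
        if len(diffs) == 1:          # single-substitution neighbor
            counts[diffs[0]] += 1    # credit the mismatching position
    return counts

print("positional entropy:", positional_entropy(LEXICON, LENGTH))
print("enemies of 'cat':  ", enemies("cat", LEXICON))
```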
44.
45.
A long-standing question in cognitive science is how high-level knowledge is integrated with sensory input. For example, listeners can leverage lexical knowledge to interpret an ambiguous speech sound, but do such effects reflect direct top-down influences on perception or merely postperceptual biases? A critical test case in the domain of spoken word recognition is lexically mediated compensation for coarticulation (LCfC). Previous LCfC studies have shown that a lexically restored context phoneme (e.g., /s/ in Christma#) can alter the perceived place of articulation of a subsequent target phoneme (e.g., the initial phoneme of a stimulus from a tapes-capes continuum), consistent with the influence of an unambiguous context phoneme in the same position. Because this phoneme-to-phoneme compensation for coarticulation is considered sublexical, scientists agree that evidence for LCfC would constitute strong support for top-down interaction. However, results from previous LCfC studies have been inconsistent, and positive effects have often been small. Here, we conducted extensive piloting of stimuli prior to testing for LCfC. Specifically, we ensured that context items elicited robust phoneme restoration (e.g., that the final phoneme of Christma# was reliably identified as /s/) and that unambiguous context-final segments (e.g., a clear /s/ at the end of Christmas) drove reliable compensation for coarticulation for a subsequent target phoneme. We observed robust LCfC in a well-powered, preregistered experiment with these pretested items (N = 40) as well as in a direct replication study (N = 40). These results provide strong evidence in favor of computational models of spoken word recognition that include top-down feedback.
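A minimal analysis sketch, not the authors' pipeline, of how an LCfC effect could be scored: for each listener, compare the proportion of one continuum-endpoint response to ambiguous tapes-capes targets after a context word whose final phoneme is lexically restored as /s/ versus one restored as the competing fricative. The trial data and column names are assumptions.

```python
# Illustrative scoring sketch for an LCfC effect (hypothetical data).
import pandas as pd

# One row per ambiguous-target trial.
trials = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "context": ["s", "sh", "s", "sh", "s", "sh", "s", "sh"],  # lexically restored phoneme
    "resp_k":  [1, 0, 1, 0, 1, 0, 0, 0],                      # 1 = "capes"-type response
})

# Per-subject proportion of "capes" responses in each context condition.
props = trials.groupby(["subject", "context"])["resp_k"].mean().unstack("context")

# The LCfC effect is the within-subject shift between the two contexts; its
# reliability in the direction predicted by unambiguous /s/ and /ʃ/ contexts
# would be assessed with, e.g., a t test or a mixed-effects model.
props["lcfc_shift"] = props["s"] - props["sh"]
print(props)
```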