313 search results in total (search time: 15 ms)
311.
Young children exhibit a video deficit for spatial recall, learning less from on-screen demonstrations than from in-person demonstrations. Some theoretical accounts emphasize memory constraints (e.g., insufficient retrieval cues, competition between memory representations). Such accounts imply that memory representations are graded, yet video deficit studies measuring spatial recall operationalize memory retrieval as dichotomous (success or failure). The current study tested a graded-representation account using a spatial recall task with a continuous search space (i.e., a sandbox) rather than discrete locations. With this more sensitive task, a protracted video deficit for spatial recall was found in children 4–5 years old (n = 51). This deficit may be due to weaker memory representations in the screen condition, as evidenced by higher variability and greater perseverative bias. In general, perseverative bias decreased with repeated trials. The discussion considers how the results support a graded-representation account, potentially explaining why children might exhibit a video deficit in some tasks but not others.

Research Highlights

  • The task used a continuous search space (sandbox), making it more difficult and sensitive than spatial recall tasks used in prior video deficit research.
  • Spatial recall among 4- and 5-year-old children was more variable after watching hiding events on screen via a live video feed than through a window.
  • Children's spatial recall from screens was more susceptible to proactive interference, as evidenced by greater perseverative bias in an A-not-B design.
  • The results demonstrate that memory representations blend experiences accumulated over time, which helps explain why the video deficit may be protracted for more difficult tasks (a toy sketch of these graded measures follows below).
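To make the contrast between dichotomous and graded scoring concrete, here is a minimal Python sketch of how search responses in a continuous space such as a sandbox could be reduced to the two graded measures the abstract mentions: variability and perseverative bias toward a previous hiding location. The function name, the one-dimensional coordinate scheme, and all numbers are hypothetical illustrations, not the authors' materials or analysis code.

```python
import statistics

def graded_scores(searches_cm, target_b_cm, previous_a_cm):
    """Score searches in a continuous space as graded measures rather than hit/miss.

    Returns the signed errors (cm), the mean bias toward the previous
    hiding location A, and the variability (SD) of the errors.
    """
    errors = [s - target_b_cm for s in searches_cm]       # signed error relative to the new location B
    toward_a = 1 if previous_a_cm > target_b_cm else -1   # unit direction pointing from B back toward A
    bias_toward_a = statistics.mean(e * toward_a for e in errors)
    variability = statistics.stdev(errors)
    return errors, bias_toward_a, variability

# Toy numbers: the object is now hidden at 90 cm (B) after earlier trials at 30 cm (A).
searches = [80.0, 70.0, 95.0, 60.0]                       # hypothetical search locations (cm)
errors, bias, sd = graded_scores(searches, target_b_cm=90.0, previous_a_cm=30.0)
print(errors)          # [-10.0, -20.0, 5.0, -30.0]
print(round(bias, 2))  # 13.75 -> searches pulled back toward A (perseverative bias)
print(round(sd, 2))    # 14.93 -> variability of recall in the continuous space
```

On these toy inputs the searches land, on average, between the new location B and the earlier location A, so the bias term is positive; a dichotomous correct/incorrect score would discard exactly this graded information.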
312.
People frequently gesture when a word is on the tip of their tongue (TOT), yet research is mixed as to whether and why gesture aids lexical retrieval. We tested three accounts: the lexical retrieval hypothesis, which predicts that semantically related gestures facilitate successful lexical retrieval; the cognitive load account, which predicts that matching gestures facilitate lexical retrieval only when retrieval is hard, as in the case of a TOT; and the motor movement account, which predicts that any motor movements should support lexical retrieval. In Experiment 1 (a between-subjects study; N = 90), gesture inhibition, but not neck inhibition, affected TOT resolution but not overall lexical retrieval; participants in the gesture-inhibited condition resolved fewer TOTs than participants who were allowed to gesture. When participants could gesture, they produced more representational gestures during resolved than during unresolved TOTs, a pattern not observed for meaningless motor movements (e.g., beats). However, the effect of gesture inhibition on TOT resolution was not uniform; some participants resolved many TOTs, while others struggled. In Experiment 2 (a within-subjects study; N = 34), the effect of gesture inhibition was traced to individual differences in verbal, not spatial, short-term memory (STM) span; those with weaker verbal STM resolved fewer TOTs when unable to gesture. This relationship between verbal STM and TOT resolution was not observed when participants were allowed to gesture. Taken together, these results fit the cognitive load account; when lexical retrieval is hard, gesture effectively reduces the cognitive load of TOT resolution for those who find the task especially taxing.