12.
How do we think about the space of bodies? Several accounts of mental representations of bodies were tested in body part verification tasks. An imagery account predicts shorter times to larger parts (e.g., back < hand). A part distinctiveness account predicts shorter times to more discontinuous parts (e.g., arm < chest). A part significance account predicts shorter times to parts that are perceptually distinct and functionally important (e.g., head < back). Because distinctiveness and significance are correlated, the latter two accounts are difficult to distinguish. Both name-body and body-body comparisons were investigated in four experiments. In all four, larger parts were verified more slowly than smaller ones, eliminating the imagery/size account. Despite the correlation between distinctiveness and significance, the data suggest that when comparisons are perceptual (body-body), part distinctiveness is the best predictor, and when explicit or implicit naming is involved, part significance is the best predictor. Naming seems to activate the functional aspects of bodies.
13.
How do people understand the everyday, yet intricate, behaviors that unfold around them? In the present research, we explored this by presenting viewers with self-paced slideshows of everyday activities and recording looking times, subjective segmentation (breakpoints) into action units, and slide-to-slide physical change. A detailed comparison of the joint time courses of these variables showed that looking time and physical change were locally maximal at breakpoints and greater for higher level action units than for lower level units. Even when slideshows were scrambled, breakpoints were viewed longer and were more physically different from ordinary moments, showing that breakpoints are distinct even out of context. Breakpoints are bridges: from one action to another, from one level to another, and from perception to conception.
14.
Both recognition and recall of pictures improve as picture presentation time increases and as time between pictures increases. Processing of the pictures, rehearsal and/or encoding, continues after the picture has disappeared, just as for verbal material. Both the results and conclusions stand in contrast to those of Shaffer and Shiffrin.
15.
A binary detection task, free from sensory components, is investigated. A deterministic model prescribing a fixed cutoff point is confirmed; a probabilistic model, which generalizes Lee’s micromatching model for externally distributed stimuli, is rejected.
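The contrast between the two models can be stated compactly. The following is our sketch, not the paper's formulation: write \(x\) for the value presented on a trial, \(c\) for a fixed cutoff, and \(A\) for one of the two responses. The deterministic model predicts an all-or-none response rule, while a micromatching-style probabilistic model (on the standard reading of micromatching as conditional probability matching) predicts that response probabilities track the conditional probabilities of the stimulus classes:

\[
P(\text{respond } A \mid x) =
\begin{cases}
1, & x \ge c \\
0, & x < c
\end{cases}
\quad \text{(deterministic fixed cutoff)}
\]

\[
P(\text{respond } A \mid x) = P(A \mid x)
\quad \text{(probabilistic micromatching)}
\]

On this sketch, confirming the deterministic model amounts to finding response proportions near 0 or 1 on either side of a stable cutoff, rather than graded proportions that mirror \(P(A \mid x)\).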
16.
The present research attempted to manipulate the encoding modality, pictorial or verbal, of schematic faces with well-learned names by manipulating S’s expectations of the way the material was to be used. On every trial, a single name or face was presented, followed by another one; the S was asked to respond “same” if the stimuli had the same name and “different” otherwise. In any session, the majority of second stimuli were either names or faces. It was hypothesized that if S had encoded the first stimulus in the modality of the second, his judgment would be faster than if he had not appropriately encoded the first stimulus. Significantly slower reaction times were obtained to stimulus pairs in which the second stimulus modality was infrequent. Further evidence that the first stimulus was encoded in the frequent second-stimulus modality comes from the finding that “different” responses were faster when the stimuli differed on more than one attribute in the encoding (second-stimulus) modality, regardless of the modality of the stimuli. Thus, evidence is presented not only that verbal material can be pictorially encoded (and vice versa), but also that whether verbal or pictorial material is verbally or pictorially encoded depends on S’s anticipation of what he is to do with the material.
17.
In this study we obtained direct and comparative judgments of the dissimilarity between schematic faces varying on three binary attributes. These data were used to test the hypothesis that the overall dissimilarity between faces can be decomposed (in an ordinal sense) into three additive components, one for each attribute. The hypothesis was strongly confirmed by both the direct and the comparative judgments. The study illustrates the possible usefulness of the measurement-theoretic analysis of simple combination rules for psychological dimensions.
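The additive-decomposition hypothesis can be written out explicitly. In our notation (not the paper's): let \(f\) and \(g\) be faces, let \(a_i(f)\) denote the level of binary attribute \(i\) for face \(f\), and let \(d(f,g)\) be the judged dissimilarity. The ordinal claim is that judged dissimilarity is a monotone function of a sum of per-attribute contributions:

\[
d(f,g) = \phi\!\left(\sum_{i=1}^{3} \delta_i\bigl(a_i(f),\, a_i(g)\bigr)\right),
\]

where \(\phi\) is strictly increasing and \(\delta_i\) is the contribution of a match or mismatch on attribute \(i\). Testing this ordinally, in the style of conjoint measurement, requires only the ordering of dissimilarities, not their numerical values.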
18.
Eyewitnesses to traumatic events typically talk about them, and they may do so for different reasons. Of interest was whether qualitatively different retellings would lead to differences in later memory. All participants watched a violent film scene; one third talked about their emotional reactions to the film (as one might do when talking to a friend), one third described the events of the film (as the police might request), and one third did unrelated tasks. Following a delay, all participants were tested on their memories for the clip. Talking about emotions led to better memory for one's emotions, but also led to subjectivity and a greater proportion of major errors in free recall. Differences were minimized on tests providing more retrieval cues, suggesting that retellings' consequences for memory are greater when retellers have to generate their own retrieval structures. Copyright © 2005 John Wiley & Sons, Ltd.