Full-text access type
Paid full text | 356 articles |
Free | 49 articles |
Free (domestic) | 63 articles |
Publication year
2024 | 1 article |
2023 | 8 articles |
2022 | 6 articles |
2021 | 9 articles |
2020 | 24 articles |
2019 | 30 articles |
2018 | 22 articles |
2017 | 24 articles |
2016 | 23 articles |
2015 | 7 articles |
2014 | 19 articles |
2013 | 61 articles |
2012 | 16 articles |
2011 | 16 articles |
2010 | 12 articles |
2009 | 20 articles |
2008 | 14 articles |
2007 | 21 articles |
2006 | 23 articles |
2005 | 19 articles |
2004 | 15 articles |
2003 | 10 articles |
2002 | 14 articles |
2001 | 10 articles |
2000 | 8 articles |
1999 | 1 article |
1998 | 5 articles |
1997 | 3 articles |
1996 | 3 articles |
1995 | 1 article |
1994 | 1 article |
1993 | 3 articles |
1992 | 2 articles |
1991 | 1 article |
1990 | 1 article |
1988 | 1 article |
1986 | 1 article |
1985 | 4 articles |
1983 | 2 articles |
1982 | 2 articles |
1980 | 1 article |
1979 | 2 articles |
1978 | 1 article |
1977 | 1 article |
Sort order: 468 results in total; search took 15 ms
1.
In each of two experiments, 2 pigeons received discrimination training in which food reinforcement for key pecking was conditional upon both spatial and temporal cues. In Experiment 1, food was available for periods of 30 s at each of three locations (pecking keys) during trials that lasted 90 s. In Experiment 2, food was available for periods of 15 min at each of four locations (pecking keys) during a 60-min trial. In both experiments, pigeons' key pecking was jointly controlled by the spatial and temporal cues. These data, and other recent experiments, suggest that animals learn relationships between temporal and spatial cues that predict stable patterns of food availability.
2.
3.
This study investigated the effectiveness of using visual cues to highlight the seams of baseballs to improve the hitting of curveballs. Five undergraduate varsity baseball team candidates served as subjects. Behavior change was assessed through an alternating treatments design involving unmarked balls and two treatment conditions that included baseballs with 1/4-in. and 1/8-in. orange stripes marking the seams of the baseballs. Results indicated that subjects hit a greater percentage of marked than unmarked balls. These results suggest that the addition of visual cues may be a significant and beneficial technique to enhance hitting performance. Further research is suggested regarding the training procedures, effect of feedback, rate of fading cues, generalization to live pitching, and generalization to other types of pitches.
4.
Do memories change as we acquire new information? Recent research on memory distortion using implicit tests along with research using confidence is reviewed and new studies are presented. Two new studies asked misinformed subjects to provide reasons for their answers. In each study 15% to 27% of subjects said they remembered seeing items they had only read about. In another study subjects were asked to identify the source of misleading items they had seen in slides or read in misleading questions. Subjects were more likely to say they had seen in slides something they read about in the questions than they were to confuse information from two nearly identical sets of slides. Recent work shows that, not only is it possible to distort memory for events, it is possible to implant an entire memory for something that never happened. The evidence is now clear that we can become mentally tricked into making large as well as small changes in the way we recall the past.
5.
Amanda C. G. Hall, Daniel G. Evans, Lindsay Higginbotham, Kathleen S. Thompson. Scandinavian Journal of Psychology, 2020, 61(3): 333-347
We investigated whether the previously established effect of mood on episodic memory generalizes to semantic memory and whether mood affects metacognitive judgments associated with the retrieval of semantic information. Sixty-eight participants were induced into a happy or sad mood by viewing and describing IAPS images. Following mood induction, participants saw a total of 200 general knowledge trivia items (50 open-ended and 50 multiple-choice after each of two mood inductions) and were asked to provide a metacognitive judgment about their knowledge for each item before providing a response. A sample trivia item is: Author – – To Kill a Mockingbird. Results indicate that mood affects the retrieval of semantic information, but only when the participant believes they possess the requested semantic information; furthermore, this effect depends upon the presence of retrieval cues. In addition, we found that mood does not affect the likelihood of different metacognitive judgments associated with the retrieval of semantic information, but that, in some cases, having retrieval cues increases the accuracy of these metacognitive judgments. Our results suggest that semantic retrieval processes are minimally susceptible to the influence of affective state, but this does not preclude the possibility that affective state may influence the encoding of semantic information.
6.
7.
Larissa L. Wieczorek, Cyril S. Tata, Lars Penke, Tanja M. Gerlach. Personal Relationships, 2020, 27(1): 176-208
Event history calendars (EHCs) are popular tools for retrospective data collection. Originally conceptualized as face-to-face interviews, EHCs contain various questions about the respondents' autobiography in order to use their experiences as cues to facilitate remembering. For relationship researchers, EHCs are particularly valuable when trying to reconstruct the relational past of individuals. However, although many studies are conducted online nowadays, no freely available online adaptation of the EHC exists yet. In this tutorial, detailed instructions are provided on how to implement an online EHC for the reconstruction of romantic relationship histories within the open-source framework formr. Ways to customize the online EHC are showcased, and a template is provided for researchers to adapt the tool for their own purposes.
8.
Previous research has shown that emotional context influences source memory, but how the valence and arousal of the context affect familiarity and recollection remains controversial. Using ERPs and a multi-key source memory paradigm, this study manipulated the emotional valence and arousal level of background pictures to examine the cognitive and neural mechanisms by which emotional context at encoding influences source memory retrieval. During the study phase, neutral Chinese characters were presented together with emotional pictures (positive high-arousal, positive low-arousal, negative high-arousal, negative low-arousal); during the test phase, only the characters were presented and participants made five-key judgments. Behavioral results showed that correct source judgments were more frequent and faster than incorrect ones; retrieval of positive contexts showed stronger discriminability and shorter reaction times, and retrieval of high-arousal contexts showed shorter reaction times. ERP results revealed FN400 and LPC old/new effects, indexing familiarity and recollection respectively; in the 500-700 ms window, retrieval of positive contexts and of high-arousal contexts elicited significantly more positive ERPs, with no interaction between valence and arousal. Overall, contextual valence and arousal exert independent influences on recollection in source memory, reflected in the facilitation of source retrieval by positive and by high-arousal contexts.
9.
Little is known about the acoustic cues infants might use to selectively attend to one talker in the presence of background noise. This study examined the role of talker familiarity as a possible cue. Infants either heard their own mothers (maternal-voice condition) or a different infant's mother (novel-voice condition) repeating isolated words while a female distracter voice spoke fluently in the background. Subsequently, infants heard passages produced by the target voice containing either the familiarized, target words or novel words. Infants in the maternal-voice condition listened significantly longer to the passages containing familiar words; infants in the novel-voice condition showed no preference. These results suggest that infants are able to separate the simultaneous speech of two women when one of the voices is highly familiar to them. However, infants seem to find separating the simultaneous speech of two unfamiliar women extremely difficult.
10.
Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances sensitivity to acoustic information, decreasing the auditory detection threshold of speech embedded in noise [J. Acoust. Soc. Am. 109 (2001) 2272; J. Acoust. Soc. Am. 108 (2000) 1197]. However, detection is different from comprehension, and it remains to be seen whether improved sensitivity also results in an intelligibility gain in audio-visual speech perception. In this work, we use an original paradigm to show that seeing the speaker's lips enables the listener to hear better and hence to understand better. The audio-visual stimuli used here could not be differentiated by lip reading per se, since they contained exactly the same lip gesture matched with different compatible speech sounds. Nevertheless, the noise-masked stimuli were more intelligible in the audio-visual condition than in the audio-only condition due to the contribution of visual information to the extraction of acoustic cues. Replacing the lip gesture by a non-speech visual input with exactly the same time course, providing the same temporal cues for extraction, removed the intelligibility benefit. This early contribution to audio-visual speech identification is discussed in relation to recent neurophysiological data on audio-visual perception.