122.
Cureton & Mulaik (1975) proposed the Weighted Varimax rotation so that Varimax (Kaiser, 1958) could reach simple solutions when the complexities of the variables in the solution are larger than one. In the present paper the weighting procedure proposed by Cureton & Mulaik (1975) is applied to Direct Oblimin (Clarkson & Jennrich, 1988), and the rotation method obtained is called Weighted Oblimin. It has been tested on artificial complex data and real data, and the results seem to indicate that, even though Direct Oblimin rotation fails when applied to complex data, Weighted Oblimin gives good results if a variable with complexity one can be found for each factor in the pattern. Although the weighting procedure proposed by Cureton & Mulaik is based on Landahl's (1938) expression for orthogonal factors, Weighted Oblimin seems to be adequate even with highly oblique factors. The new rotation method was compared to other rotation methods based on the same weighting procedure and, whenever a variable with complexity one could be found for each factor in the pattern, Weighted Oblimin gave the best results. When rotating a simple empirical loading matrix, Weighted Oblimin seemed to slightly improve on the performance of Direct Oblimin. The author is obliged to Henk A. L. Kiers and three anonymous reviewers for helpful comments on an earlier version of this paper.
123.
We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales. Instead, we assume that an attribute's subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is its rank within the sample. We assume that the sample reflects both the immediate distribution of attribute values from the current decision's context and also the background, real-world distribution of attribute values. DbS accounts for concave utility functions; losses looming larger than gains; hyperbolic temporal discounting; and the overestimation of small probabilities and the underestimation of large probabilities.
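The rank-within-sample construction described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the authors' implementation, and the memory sample values are invented:

```python
def dbs_subjective_value(target, sample):
    """Decision by sampling: the subjective value of an attribute is its
    relative rank among comparison values sampled from memory (0..1).
    Each binary ordinal comparison asks: is target >= a sampled value?"""
    wins = sum(1 for v in sample if target >= v)
    return wins / len(sample)

# Invented memory sample of monetary amounts (decision context plus
# background real-world distribution, which is skewed toward small values):
sample = [1, 2, 5, 10, 20, 50, 100, 500]

# Concave utility emerges from the skewed sample: moving from 10 to 20
# gains more rank than moving from 100 to 500.
print(dbs_subjective_value(20, sample))   # 0.625
print(dbs_subjective_value(500, sample))  # 1.0
```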
124.
This study addresses a central question in the perception of novel figurative language: whether it is interpreted intelligently and figuratively immediately, or only after a literal interpretation fails. Eighty sentence frames that could plausibly end with a literal, truly anomalous, or figurative word were created. After validation for meaningfulness and figurativeness, the 240 sentences were presented to 11 subjects for event-related potential (ERP) recording. The first 200 ms of the ERP are believed to reflect the structuring of the input; the prominence of a dip at around 400 ms (N400) is said to relate inversely to how expected a word is. Results showed no difference between anomalous and metaphoric ERPs in the early window, metaphoric and literal ERPs converging 300-500 ms after the ending, and significant N400s only for anomalous endings. A follow-up study showed that the metaphoric endings were less frequent (in standardized word norms) than the anomalous and literal endings, and that there were significant differences in cloze probabilities (determined from 24 new subjects) among the three ending types: literal > metaphoric > anomalous. It is possible that the low frequency of the metaphoric element and the lower cloze probability of the anomalous one contributed to the processes reflected in the early window, while the incongruity and near-zero cloze probability of the anomalous endings produced an N400 effect in them alone. The structure or parse derived for metaphor during the early window appears to yield a preliminary interpretation suggesting anomaly, while semantic analysis reflected in the later window renders a plausible figurative interpretation.
125.
Girotto, V., & Gonzalez, M. (2002). Cognition, 84(3), 353-359.
Do individuals unfamiliar with probability and statistics need a specific type of data in order to draw correct inferences about uncertain events? Girotto and Gonzalez (Cognition 78 (2001) 247) showed that naive individuals solve frequency as well as probability problems when they reason extensionally, in particular when probabilities are represented by numbers of chances. Hoffrage, Gigerenzer, Krauss, and Martignon (Cognition 84 (2002) 343) argued that numbers of chances are natural frequencies disguised as probabilities, though lacking the properties of true probabilities. They concluded that we failed to demonstrate that naive individuals can deal with true probabilities as opposed to natural frequencies. In this paper, we demonstrate that numbers of chances do represent probabilities, and that naive individuals do not confuse numbers of chances with frequencies. We conclude that there is no evidence for the claim that natural frequencies have a special cognitive status, or for the evolutionary argument that the human mind is unable to deal with probabilities.
126.
In a first study, 10 adults aged 24-44 years solved all 105 subtraction problems of the form M − N = ?, where 0 ≤ M ≤ 13, 0 ≤ N ≤ 13, and N ≤ M. Each participant solved every problem 10 times, yielding 10,500 answers in total. Answers, response latencies and errors were registered. Retrospective verbal reports were also given, indicating how a solution was reached: (1) via a (conscious) reconstructive cognitive process or (2) via an (unconscious) reproductive (retrieval) process. The participants made 291 errors (2.8%) when solving the subtractions in study 1. The rate of self-correction was very high, 92%. In a second study, 27 undergraduate students estimated overall error rates, including self-corrected errors, for the 105 subtraction problems used in the first study. Judged and actual error rates were compared. The participants systematically underestimated error rates for error-prone problems and overestimated error rates for error-free problems. The participants were fairly accurate when predicting which problems were most error prone, with a hit rate of 0.67 for the 18 problems predicted as most error prone. In contrast, predictions of which problems were error free were very poor: only 0.20 of the problems predicted as error free actually had no errors in study 1. The correlation between judged error rates and frequencies of actual errors was 0.69 for answers belonging to reconstructive solutions. In contrast, there was no significant correlation at all between judged and actual error rates for retrieved solutions, possibly reflecting the inaccessibility to consciousness of quick retrieval processes.
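The problem count stated above follows directly from the constraints 0 ≤ N ≤ M ≤ 13; a quick enumeration (a sketch for verification only) confirms the 105 problems and the total of 10,500 answers:

```python
# Enumerate all subtraction problems M - N with 0 <= N <= M <= 13.
problems = [(m, n) for m in range(14) for n in range(m + 1)]
print(len(problems))            # 105

# 10 participants, each solving every problem 10 times:
print(10 * 10 * len(problems))  # 10500
```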
127.
Steele, K. (2007). Synthese, 158(2), 189-205.
I focus my discussion on the well-known Ellsberg paradox. I find good normative reasons for incorporating non-precise belief, as represented by sets of probabilities, in an Ellsberg decision model. This amounts to forgoing the completeness axiom of expected utility theory. Provided that probability sets are interpreted as genuinely indeterminate belief (as opposed to "imprecise" belief), such a model can moreover make the "Ellsberg choices" rationally permissible. Without some further element to the story, however, the model does not explain how an agent may come to have unique preferences for each of the Ellsberg options. Levi (1986, Hard choices: Decision making under unresolved conflict. Cambridge, New York: Cambridge University Press) holds that the extra element amounts to innocuous secondary "risk" or security considerations that are used to break ties when more than one option is rationally permissible. While I think a lexical choice rule of this kind is very plausible, I argue that it involves a greater break with expected utility theory than mere violation of the ordering axiom.
128.
Causal power estimation under trial-by-trial example learning
Under conditions in which causal examples were presented one at a time, this study examined the characteristics of causal power estimation for single causal relations, while testing the associative account, the probabilistic contrast model, the weighted ΔP model, power PC theory, and the pCI rule. In the experiment, 65 undergraduate participants estimated the power of different chemical drugs to induce gene mutations in animals. The results showed that: (1) causal power estimates for generative causes conformed to the weighted ΔP model; (2) causal power estimates for preventive causes mostly conformed to power PC theory; (3) causal power estimation is complex and varied, and is difficult to describe and summarize with a single unified model.
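The abstract above compares models built on two standard quantities, the contingency ΔP and the causal power of power PC theory. A minimal sketch of both follows; the contingency values are invented for illustration, and the weighting scheme of the weighted ΔP model is not reproduced here:

```python
def delta_p(p_e_c, p_e_nc):
    """Contingency: Delta-P = P(e|c) - P(e|not-c), where e is the effect
    and c the candidate cause."""
    return p_e_c - p_e_nc

def power_pc(p_e_c, p_e_nc):
    """Power PC theory (Cheng, 1997): generative causal power is
    Delta-P / (1 - P(e|not-c)); preventive causal power is
    -Delta-P / P(e|not-c)."""
    dp = delta_p(p_e_c, p_e_nc)
    if dp >= 0:
        return dp / (1 - p_e_nc)
    return -dp / p_e_nc

# Invented example: the gene mutates in 80% of exposed animals
# but only 20% of unexposed animals.
print(delta_p(0.8, 0.2))   # 0.6
print(power_pc(0.8, 0.2))  # 0.75
```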
129.
Heuristics are mental shortcuts that aid people in everyday problem-solving and decision-making. Although numerous studies have demonstrated their use in contexts ranging from consumers' shopping decisions to experts' estimations of experimental validity, virtually no published research has addressed heuristics use in problems involving genetic conditions and associated risk probabilities. The present research consists of two studies. In the first study, 220 undergraduates attempted to solve four genetic problems—two common heuristic problems modified to focus on genetic likelihood, and two created to study heuristics and probability rule application. Results revealed that the vast majority of undergraduates used heuristics and also demonstrated a complete misuse of probability rules. In the second study, 156 practicing genetic counselors and 89 genetic counseling students solved slightly modified versions of the genetic problems used in Study 1. Results indicated that a large percentage of both genetic counselors and students used heuristics, but the counselors demonstrated superior problem-solving performance compared to both the genetic counseling students and the undergraduates from Study 1. Research, training, and practice recommendations are presented.
130.
向玲, 王宝玺, 张庆林 (2007). 心理科学, 30(1), 253-255.
A three-factor completely randomized experiment was used to investigate whether subjective probability judgments obey subadditivity. The results showed that: (1) all three factors, namely the unpacking method, the number of unpacked components, and the typicality of the unpacked instances, had significant effects on subjective probability judgments; (2) subadditivity is not a universal phenomenon, and additivity and superadditivity also occur in subjective probability judgment: superadditivity appeared when an event was implicitly unpacked into atypical instances, additivity appeared when an event was implicitly unpacked into typical instances or into typical plus atypical instances, and subadditivity appeared consistently when an event was explicitly unpacked.
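The sub-/super-additivity distinction above compares the judged probability of a packed event with the sum of judged probabilities over its unpacked components. A small illustrative check, with invented judgment values:

```python
def additivity(packed, unpacked):
    """Classify judged probabilities: subadditive if the unpacked
    components sum to more than the judged probability of the packed
    event, superadditive if they sum to less, additive if equal."""
    total = sum(unpacked)
    if total > packed:
        return "subadditive"
    if total < packed:
        return "superadditive"
    return "additive"

# Invented judgments: P(death by natural cause) judged at 0.60, while
# unpacked causes (heart disease, cancer, other natural causes) are
# judged at 0.30, 0.25, and 0.15, summing to 0.70.
print(additivity(0.60, [0.30, 0.25, 0.15]))  # subadditive
```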