1.
The aim of this study was to investigate the role of discrete emotions in lexical processing and memory, focusing on disgust and fear. We compared neutral words to disgust-related and fear-related words in three experiments. In Experiments 1 and 2, participants performed a lexical decision task (LDT), and in Experiment 3 an affective categorisation task. These tasks were followed by an unexpected memory task. The LDT experiments showed slower reaction times for both types of negative words relative to neutral words, as well as a higher percentage of errors, the latter effect being more consistent for fear-related words (Experiments 1 and 2) than for disgust-related words (Experiment 2). Furthermore, only disgusting words showed higher recall accuracy than neutral words in the memory task. This memory advantage for disgusting words disappeared when participants carried out an affective categorisation task during encoding (Experiment 3), suggesting that the memory superiority for disgusting words observed in Experiments 1 and 2 could be due to greater elaborative processing. Taken together, these findings point to the relevance of discrete emotions in explaining the effects of emotional content on lexical processing and memory.
2.
3.
Twelve rats made repeated choices on an adjusting-delay schedule between a smaller reinforcer (A) that was delivered immediately after a response and a larger reinforcer (B) that was delivered after a delay which increased or decreased by 20% depending on the subject's choices in successive blocks of trials. In two phases of the experiment (100 sessions and 40 sessions), reinforcer sizes were selected which enabled theoretical parameters expressing the rate of delay discounting and sensitivity to reinforcer size to be estimated from the ratio of the indifference delays obtained in the two phases. Indifference delays, calculated from adjusting delays in the last 10 sessions of each phase, were shorter when the sizes of A and B were 14 and 25 μl of a 0.6 M sucrose solution than when they were 25 and 100 μl of the same solution. The ratio of the indifference delays was significantly smaller than that predicted on the basis of an assumed linear relation between reinforcer size and instantaneous reinforcer value, consistent with a previous proposal that this relation may be hyperbolic in form. Estimates of the rate of delay discounting based on the ratio of the two indifference delays (mean, 0.08 s⁻¹) were similar to values obtained previously using different intertemporal choice protocols. Estimates of the size-sensitivity parameter (mean 113 μl) were similar to estimates recently derived from performance on progressive-ratio schedules. In both phases of the experiment, adjusting delays in successive blocks of trials were analyzed using the Fourier transform. The power spectrum obtained from individual rats had a dominant frequency that corresponded to a period of oscillation of the adjusting delay between 30 and 100 trial blocks (mean, 78). Power in the dominant frequency band was highest in the early sessions of the first phase and declined with extended training. It is suggested that this experimental protocol may have utility in neurobehavioral studies of intertemporal choice.
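The parameter-estimation logic described above can be sketched numerically. The code below assumes the multiplicative hyperbolic model commonly used with this adjusting-delay protocol, in which instantaneous value is the product of a hyperbolic size term, 1/(1 + Q/q), and a hyperbolic delay term, 1/(1 + K·d); the reinforcer sizes are those given in the abstract, while the parameter values K = 0.08 s⁻¹ and Q = 113 μl are only illustrative placeholders taken from the reported means, not a re-analysis of the data.

```python
import numpy as np

def value(size_ul, delay_s, K=0.08, Q=113.0):
    """Instantaneous value of a reinforcer of a given size, discounted by delay.

    Multiplicative hyperbolic model (assumed): hyperbolic size-value relation
    times hyperbolic delay discounting. K and Q are illustrative placeholders.
    """
    size_term = 1.0 / (1.0 + Q / size_ul)
    delay_term = 1.0 / (1.0 + K * delay_s)
    return size_term * delay_term

def indifference_delay(size_a, size_b, K=0.08, Q=113.0):
    """Delay to reinforcer B at which an immediate A and a delayed B are equally valued."""
    # Solve value(A, 0) == value(B, d) for d.
    va = value(size_a, 0.0, K, Q)
    vb0 = 1.0 / (1.0 + Q / size_b)
    return (vb0 / va - 1.0) / K

# Phase 1: A = 14 ul vs B = 25 ul; Phase 2: A = 25 ul vs B = 100 ul (sizes from the abstract).
d1 = indifference_delay(14, 25)
d2 = indifference_delay(25, 100)
print(d1, d2, d1 / d2)   # the ratio of indifference delays is what constrains K and Q

# The oscillation analysis could likewise be sketched by taking the power spectrum
# of the trial-by-trial adjusting-delay series, e.g.
#   power = np.abs(np.fft.rfft(delays - delays.mean())) ** 2
```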
4.
Patient privacy is jeopardised when medical records and data are spread or shared beyond the protected cloud of institutions: breaches push patients to the point where they begin abstaining from full disclosure of their condition, which harms scientific research, patients, and all other stakeholders. A blockchain-based data sharing system is proposed to tackle this issue; it employs the immutability and autonomy properties of the blockchain to resolve challenges associated with access control and the handling of sensitive data. The proposed system is further supported by a Discrete Wavelet Transform to enhance overall security and by a Genetic Algorithm to optimise the request-queuing scheme. Introducing this cryptographic key generator strengthens the system's immunity and access control, allowing users to be verified securely and quickly. The design also provides accountability, since all participating users are known and the blockchain records a log of their actions. Only when users' cryptographic keys and identities are confirmed does the system allow data to be requested from the shared request queue. In a system evaluation, the achieved execution time per node, confirmation time per node, and robustness index for the given block number were 0.19 s, 0.17 s, and 20, respectively, indicating that the system is robust, efficient, immune, and scalable.
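As a rough, generic illustration of the access-control pattern summarised above (verify a requester's key and identity before releasing data from the shared request queue, and append every action to an immutable log), here is a minimal Python sketch. It is not the authors' implementation: the hash-chained list stands in for the blockchain, the dictionary of registered keys stands in for the cryptographic key generator, and the Discrete Wavelet Transform layer and Genetic Algorithm queue optimisation are omitted. All names and values are hypothetical.

```python
import hashlib
import json
import time

class AccessLog:
    """A toy hash-chained log standing in for the blockchain."""

    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64, "event": "genesis", "ts": time.time()}]

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, event):
        prev = self.chain[-1]
        self.chain.append({"index": prev["index"] + 1,
                           "prev_hash": self._hash(prev),
                           "event": event,
                           "ts": time.time()})

registered_keys = {"alice": "a1b2c3"}                     # hypothetical registered users and keys
queue = [("alice", "record_42", "a1b2c3")]                # hypothetical shared request queue
log = AccessLog()

for user, record, key in queue:
    # Verify the requester's key before releasing data; log every decision.
    action = "granted" if registered_keys.get(user) == key else "denied"
    log.append({"user": user, "record": record, "action": action})

print(log.chain[-1])
```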
5.
We study various axioms of discrete probabilistic choice, measuring how restrictive they are, both alone and in the presence of other axioms, given a specific class of prior distributions over a complete collection of finite choice probabilities. We do this by using Monte Carlo simulation to compute, for a range of prior distributions, probabilities that various simple and compound axioms hold. For example, the probability of the triangle inequality is usually many orders of magnitude higher than the probability of random utility. While neither the triangle inequality nor weak stochastic transitivity imply the other, the conditional probability that one holds given the other holds is greater than the marginal probability, for all priors in the class we consider. The reciprocal of the prior probability that an axiom holds is an upper bound on the Bayes factor in favor of a restricted model, in which the axiom holds, against an unrestricted model. The relatively high prior probability of the triangle inequality limits the degree of support that data from a single decision maker can provide in its favor. The much lower probability of random utility implies that the Bayes factor in favor of it can be much higher, for suitable data.
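The Monte Carlo approach described above can be illustrated for the smallest interesting case of three alternatives. The sketch below assumes a uniform prior on the three binary choice probabilities (one simple member of the class of priors the abstract refers to); it estimates the prior probability of the triangle inequality and of weak stochastic transitivity, the conditional probability of one given the other, and the reciprocal of a prior probability as an upper bound on the Bayes factor. With only three alternatives this illustrates the method, not the reported results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Uniform prior over the binary choice probabilities p(a,b), p(b,c), p(a,c)
# for three alternatives a, b, c (an illustrative prior, not the paper's).
p_ab, p_bc, p_ac = rng.uniform(size=(3, n))

# Triangle inequality for this triple: p(a,b) + p(b,c) - 1 <= p(a,c) <= p(a,b) + p(b,c).
tri = (p_ac <= p_ab + p_bc) & (p_ac >= p_ab + p_bc - 1)

# Weak stochastic transitivity: the majority relation contains no cycle,
# i.e. we never have a > b, b > c and c > a (or the reverse cycle).
cycle_fwd = (p_ab > 0.5) & (p_bc > 0.5) & (p_ac < 0.5)
cycle_rev = (p_ab < 0.5) & (p_bc < 0.5) & (p_ac > 0.5)
wst = ~(cycle_fwd | cycle_rev)

print("P(triangle inequality)         ", tri.mean())
print("P(weak stochastic transitivity)", wst.mean())
print("P(triangle | WST)              ", tri[wst].mean())
print("P(WST | triangle)              ", wst[tri].mean())
# The reciprocal of a prior probability bounds the Bayes factor in favour of the
# restricted model (axiom holds) against an unrestricted model.
print("Bayes-factor bound, triangle   ", 1.0 / tri.mean())
```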
6.
7.
A monitoring bias account is often used to explain speech error patterns that seem to be the result of an interactive language production system, like phonological influences on lexical selection errors. A biased monitor is suggested to detect and covertly correct certain errors more often than others. For instance, this account predicts that errors that are phonologically similar to intended words are harder to detect than those that are phonologically dissimilar. To test this, we tried to elicit phonological errors under the same conditions as those that show other kinds of lexical selection errors. In five experiments, we presented participants with high cloze probability sentence fragments followed by a picture that was semantically related, a homophone of a semantically related word, or phonologically related to the (implicit) last word of the sentence. All experiments elicited semantic completions or homophones of semantic completions, but none elicited phonological completions. This finding is hard to reconcile with a monitoring bias account and is better explained with an interactive production system. Additionally, this finding constrains the amount of bottom-up information flow in interactive models.
8.
Dale R, Duran ND. Cognitive Science, 2011, 35(5): 983-996.
We explored the influence of negation on cognitive dynamics, measured using mouse-movement trajectories, to test the classic notion that negation acts as an operator on linguistic processing. In three experiments, participants verified the truth or falsity of simple statements, and we tracked the computer-mouse trajectories of their responses. Sentences expressing these facts sometimes contained a negation. Such negated statements could be true (e.g., "elephants are not small") or false (e.g., "elephants are not large"). In the first experiment, as predicted by the classic notion of negation, we found that negation caused more discreteness in the mouse trajectory of a response. The second experiment induced a simple context for these statements, yet negation still increased discreteness in trajectories. A third experiment enhanced the pragmatic context of sentences, and the discreteness was substantially diminished, with one primary measure no longer significantly showing increased discreteness at all. Traditional linguistic theories predict rapid shifts in cognitive dynamics occur due to the nature of negation: It is an operator that reverses the truth or falsity of an interpretation. We argue that these results support both propositional and contextual accounts of negation present in the literature, suggesting that contextual factors are crucial for determining the kind of cognitive dynamics displayed. We conclude by drawing broader lessons about theories of cognition from the case of negation.
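For readers unfamiliar with mouse-tracking measures, the sketch below computes two generic trajectory statistics often used to quantify how smooth versus discrete a response trajectory is: the maximum deviation from the straight line between start and end points, and the number of reversals along the horizontal axis (x-flips). These are standard illustrative measures, not necessarily the specific discreteness indices used in the experiments above; the example trajectory is hypothetical.

```python
import numpy as np

def trajectory_stats(x, y):
    """Maximum deviation from the direct start-to-end path, and number of x-flips."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    start, end = np.array([x[0], y[0]]), np.array([x[-1], y[-1]])
    direction = (end - start) / np.linalg.norm(end - start)
    # Perpendicular distance of every sample from the straight start-to-end line.
    rel = np.column_stack([x, y]) - start
    perp = rel[:, 0] * direction[1] - rel[:, 1] * direction[0]
    max_dev = float(np.max(np.abs(perp)))
    # x-flips: sign changes of horizontal movement direction.
    dx = np.diff(x)
    dx = dx[dx != 0]
    x_flips = int(np.sum(np.diff(np.sign(dx)) != 0))
    return max_dev, x_flips

# Hypothetical trajectory from a start button toward a response option in the
# top-left corner, with some horizontal wavering along the way.
t = np.linspace(0, 1, 101)
x = -400 * t + 30 * np.sin(4 * np.pi * t)
y = 600 * t
print(trajectory_stats(x, y))
```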
9.
Barth HC. Cognition, 2008, 109(2): 251-266.
Evidence from human cognitive neuroscience, animal neurophysiology, and behavioral research demonstrates that human adults, infants, and children share a common nonverbal quantity processing system with nonhuman animals. This system appears to represent both discrete and continuous quantity, but the proper characterization of the relationship between judgments of discrete and continuous quantity remains controversial. Some researchers have suggested that both continuous and discrete quantity may be automatically extracted from a scene and represented internally, and that competition between these representations leads to Stroop interference. Here, four experiments provide evidence for a different explanation of adults’ performance on the types of tasks that have been said to demonstrate Stroop interference between representations of discrete and continuous quantity. Our well-established tendency to underestimate individual two-dimensional areas can provide an alternative explanation (introduced here as the “illusory-Stroop” hypothesis). Though these experiments were constructed like Stroop tasks, and they produce patterns of performance that initially appear consistent with Stroop interference, Stroop interference effects are not involved. Implications for models of the construction of cumulative area representations and for theories of discrete and continuous quantity processing in large sets are discussed.
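One way to read the illusory-Stroop hypothesis is as a simple arithmetic consequence of compressive area perception. The sketch below assumes that perceived individual area follows a power function of physical area with an exponent below 1 (0.8 is used purely as an illustrative value) and that perceived cumulative area is the sum of perceived individual areas. Under those assumptions, of two arrays with identical total physical area, the more numerous array has the larger perceived cumulative area, which can mimic a number-area congruity effect without any competition between quantity representations.

```python
# Illustrative-only sketch of the "illusory Stroop" arithmetic: a compressive
# power law for perceived element area (the exponent 0.8 is an assumed value).
BETA = 0.8

def perceived_cumulative_area(element_areas):
    """Sum of compressed (perceived) individual element areas."""
    return sum(a ** BETA for a in element_areas)

total_area = 1000.0                    # same physical cumulative area in both arrays
few_large = [total_area / 10] * 10     # 10 elements of 100 units each
many_small = [total_area / 40] * 40    # 40 elements of 25 units each

print(perceived_cumulative_area(few_large))   # ~10 * 100**0.8 ~ 398
print(perceived_cumulative_area(many_small))  # ~40 * 25**0.8  ~ 525
# Equal physical totals, but the more numerous array "looks" larger, biasing
# area comparisons in the direction of the numerically larger set.
```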
10.
Origins of submovements during pointing movements (total citations: 1; self-citations: 0; citations by others: 1)
Submovements that are frequently observed in the final portion of pointing movements have traditionally been viewed as pointing accuracy adjustments. Here we re-examine this long-standing interpretation by developing evidence that many submovements may be non-corrective fluctuations arising from various sources of motor output variability. In particular, non-corrective submovements may emerge during motion termination and during motion at low speed. The contribution of these factors and of accuracy regulation to submovement production is investigated here by manipulating movement mode (discrete, reciprocal, and passing) and target size (small and large). The three modes provided different temporal combinations of accuracy regulation and motion termination, allowing us to disentangle submovements associated with each factor. The target size manipulations further emphasised the role of accuracy regulation and provided variations in movement speed. Gross and fine submovements were distinguished based on the degree of perturbation of smooth motion. It was found that gross submovements were predominantly related to motion termination and not to pointing accuracy regulation. Although fine submovements were more frequent during movements to small targets than to large targets, other results show that they, too, may not be corrective but may instead be motion fluctuations attributable to the decreases in movement speed that accompany decreases in target size. Together, the findings challenge the traditional interpretation, suggesting that the majority of submovements are fluctuations emerging from mechanical and neural sources of motion variability. The implications of the findings for the mechanisms responsible for accurate target achievement are discussed.
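To make the submovement terminology concrete, the sketch below applies one common operational convention: additional velocity extrema after the main velocity peak (acceleration zero-crossings) mark grosser perturbations of smooth motion, while additional inflections (jerk zero-crossings) mark finer ones. The velocity profile and the gross/fine labels in the code are illustrative assumptions, not the paper's exact detection criteria.

```python
import numpy as np

def count_submovement_markers(v, dt):
    """Counts zero-crossings of acceleration and jerk after the main velocity peak.

    Simplified convention (assumed, not the paper's criteria): acceleration
    zero-crossings flag extra velocity extrema (grosser perturbations), jerk
    zero-crossings flag extra inflections (finer perturbations).
    """
    a = np.gradient(v, dt)                      # acceleration
    j = np.gradient(a, dt)                      # jerk
    tail = slice(int(np.argmax(v)) + 1, len(v))

    def zero_crossings(signal):
        return int(np.sum(np.diff(np.sign(signal[tail])) != 0))

    return {"acceleration_zero_crossings": zero_crossings(a),
            "jerk_zero_crossings": zero_crossings(j)}

# A hypothetical tangential velocity profile: a smooth bell-shaped main movement
# with a small secondary pulse near the end (all numbers are made up).
dt = 0.001
t = np.arange(0.0, 0.8, dt)
main = np.exp(-((t - 0.3) / 0.08) ** 2)
bump = 0.08 * np.exp(-((t - 0.62) / 0.03) ** 2)
print(count_submovement_markers(main + bump, dt))
```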