Sort order: 840 results found (search time: 15 ms)
91.
Error theories about morality often take as their starting point the supposed queerness of morality, and those resisting these arguments often try to argue by analogy that morality is no more queer than other unproblematic subject matters. Here, error theory (as exemplified primarily by the work of Richard Joyce) is resisted first by arguing that it assumes a common, modern, and peculiarly social conception of morality. Error theorists then point out that this social conception requires one to act against one's self-interest while insisting on the categorical, inescapable, or overriding status of moral considerations: in effect, they argue that morality requires magic, and then (rightly) claim that there is no such thing as magic. An alternative eudaimonist conception of morality is introduced, one with an older provenance than the social conception, dating to the ancient Greeks. Eudaimonism answers to the normative requirements of morality, yet does not require magic. Thus, the initial motivation for error theory is removed.
92.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music that closely match the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
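The modelling stage described above can be sketched in minimal form as a regression from per-second psychoacoustic features to continuous listener ratings. The sketch below is an illustrative assumption, not the authors' actual model: the feature names come from the abstract, but the synthetic data, ridge-regression fit, and variable names (`X`, `true_w`, `alpha`) are hypothetical.

```python
import numpy as np

# The seven psychoacoustic features named in the abstract
FEATURES = ["loudness", "tempo", "contour", "spectral_centroid",
            "spectral_flux", "sharpness", "roughness"]

rng = np.random.default_rng(0)
n_seconds = 120                                     # two minutes of stimulus
X = rng.normal(size=(n_seconds, len(FEATURES)))     # per-second feature values
true_w = rng.normal(size=len(FEATURES))
y = X @ true_w + 0.1 * rng.normal(size=n_seconds)   # simulated listener ratings

# Ridge regression via the normal equations: w = (X'X + aI)^-1 X'y
alpha = 1e-2
w = np.linalg.solve(X.T @ X + alpha * np.eye(len(FEATURES)), X.T @ y)
pred = X @ w

# Variance explained (R^2) in the second-by-second ratings
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

In practice the features themselves would be extracted from the audio signal; here the point is only the shape of the mapping from a seven-dimensional feature time series to a one-dimensional rating time series.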
93.
This paper presents methods for second order meta-analysis along with several illustrative applications. A second order meta-analysis is a meta-analysis of a number of statistically independent and methodologically comparable first order meta-analyses examining ostensibly the same relationship in different contexts. First order meta-analysis greatly reduces sampling error variance but does not eliminate it. The residual sampling error is called second order sampling error. The purpose of a second order meta-analysis is to estimate the proportion of the variance in mean meta-analytic effect sizes across multiple first order meta-analyses attributable to second order sampling error and to use this information to improve accuracy of estimation for each first order meta-analytic estimate. We present equations and methods based on the random effects model for second order meta-analysis for three situations and three empirical applications of second order meta-analysis to illustrate the potential value of these methods to the pursuit of cumulative knowledge.
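The core computation described in the abstract can be sketched as follows. This is a simplified illustration under assumed inputs, not the paper's actual equations: the effect sizes, standard errors, and the simple ratio estimator are all hypothetical placeholders for the random-effects machinery the paper develops.

```python
import numpy as np

# Mean effect sizes from m independent first-order meta-analyses,
# and the standard error of each mean (illustrative numbers)
means = np.array([0.21, 0.35, 0.28, 0.40, 0.25])
ses   = np.array([0.05, 0.06, 0.04, 0.07, 0.05])

grand_mean = means.mean()
observed_var = means.var(ddof=1)      # variance across the first-order means
expected_var = np.mean(ses ** 2)      # expected second-order sampling error variance

# Proportion of the observed variance attributable to second-order sampling error
prop_sampling = min(expected_var / observed_var, 1.0)

# Treating (1 - prop_sampling) as a "reliability" of the first-order means,
# regress each mean toward the grand mean to improve individual estimates
rel = 1.0 - prop_sampling
improved = grand_mean + rel * (means - grand_mean)
```

The regression-toward-the-grand-mean step mirrors the abstract's stated goal of using the variance decomposition to sharpen each first-order estimate: the larger the share of variance that is second-order sampling error, the more each mean is pulled toward the grand mean.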
94.
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person’s true feeling state by producing a brief facial blend of emotion, i.e., a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer’s left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer’s left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person’s left ear, which also avoids the social stigma of eye-to-eye contact, one’s ability to decode facial expressions should be enhanced.
95.
The relevance of cognitive-control processes has been frequently discussed and studied in the context of dichotic listening. Experimental and clinical studies indicate that directing attention to either of the two simultaneously presented phonological stimuli, but especially to the left-ear stimulus, increases the requirements for cognitive-control processes. Here, we extend this view by reporting the results of a behavioural and a functional magnetic-resonance imaging (fMRI) experiment designed to analyse the involvement of cognitive-control processes also in a free-report dichotic-listening paradigm. It was hypothesised that dichotically presented pairs of stop–consonant–vowel syllables would place different demands on cognitive-control processes as a function of the spectro-temporal overlap of the two stimuli. Accordingly, in Experiment 1 it was shown that dichotic syllables of high (e.g., /ba/ and /ga/) as opposed to low spectro-temporal overlap (e.g., /ba/ and /ka/) produce significantly faster and more correct answers, and are more often perceived as one syllable. In Experiment 2 it was further shown that pairs of low as compared to high spectro-temporal overlap trigger more pronounced activation predominantly in left-hemispheric, speech-associated brain regions, namely the left posterior inferior sulcus/gyrus, bilaterally in pre-supplementary motor and mid-cingulate cortex, as well as in the inferior parietal lobe. Taken together, behavioural and functional data indicate a stronger involvement of reactive cognitive control in the processing of low-overlap as opposed to high-overlap stimulus pairs. This supports the notion that higher-order, speech-related cognitive-control processes are also involved in a free-report dichotic-listening paradigm.
96.
Cochlear implant (CI) devices provide the opportunity for children who are deaf to perceive sound by electrical stimulation of the auditory nerve, with the goal of optimizing oral communication. One part of oral communication concerns meaning, while another part concerns emotion: affective speech prosody, in the auditory domain, and facial affect, in the visual domain. It is not known whether childhood CI users can identify emotion in speech and faces, so we investigated speech prosody and facial affect in children who had been deaf from infancy and were experienced CI users. METHOD: Study participants were 18 CI users (ages 7–13 years) who received right unilateral CIs and 18 age- and gender-matched controls. Emotion recognition in speech prosody and faces was measured by the Diagnostic Analysis of Nonverbal Accuracy. RESULTS: Compared to controls, children with right CIs could identify facial affect but not affective speech prosody. Age at test and time since CI activation were uncorrelated with overall outcome measures. CONCLUSION: Children with right CIs recognize emotion in faces but have limited perception of affective speech prosody.
97.
Aftereffects of adaptation have revealed both independent and interactive coding of facial signals including identity and expression or gender and age. By contrast, interactive processing of non-linguistic features in voices has rarely been investigated. Here we studied bidirectional cross-categorical aftereffects of adaptation to vocal age and gender. Prolonged exposure to young (~20 yrs) or old (~70 yrs) male or female voices biased perception of subsequent test voices away from the adapting age (Exp. 1) and the adapting gender (Exp. 2). Relative to gender-congruent adaptor-test pairings, vocal age aftereffects (VAAEs) were reduced but remained significant when voice gender changed between adaptation and test. This suggests that the VAAE relies on both gender-specific and gender-independent age representations for male and female voices. By contrast, voice gender aftereffects (VGAEs) were not modulated by age-congruency of adaptor and test voices (Exp. 2). Instead, young voice adaptors generally induced larger VGAEs than old voice adaptors. This suggests that young voices are particularly efficient gender adaptors, likely reflecting more pronounced sexual dimorphism in these voices. In sum, our findings demonstrate how high-level processing of vocal age and gender is partially intertwined.
98.

This study presents a new research paradigm designed to explore the effect of anxiety on semantic information processing. It is based on the premise that the demonstrated effect of anxiety on cognitive performance and apparent inconsistencies reported in the literature might be better understood in terms of linguistic properties of inner speech which underlies analytic (vs. intuitive) thought processes. The study employed several parameters of functional linguistics in order to analyse properties of public speech by high- and low-anxious individuals. Results indicate that anxiety is associated with greater use of associative clauses that take the speaker further away from the original starting point before coming back and concluding (identified as reduced semantic efficiency). This is accompanied by a speech pattern that includes greater amounts of factual information unaccompanied by elaborate argumentation. While these results are considered tentative due to methodological and empirical shortcomings, they suggest the viability of this approach.
99.
ABSTRACT

This study investigated the association between exercise type and inhibition of prepotent responses and error detection. In total, 75 adults (M = 68.88 years) were classified into one of three exercise groups: those who were regular participants in open- or closed-skill forms of exercise, and those who exercised only irregularly. The participants completed Stroop and task-switching tasks with event-related brain potentials (ERPs) recorded. The results revealed that regular exercisers displayed faster reaction times (RTs) in the Stroop task compared with irregular exercisers. The open-skill exercisers exhibited smaller N200 and larger P300a amplitudes in the Stroop task compared with irregular exercisers. Furthermore, the open-skill exercisers showed a tendency toward shorter error-related negativity latencies in the task-switching test. The findings suggest that older adults may gain extra cognitive benefits in areas such as inhibition functioning and error processing from participating in open-skill forms of physical exercise.
100.
ABSTRACT

In this study, older adults monitored for pre-assigned target sounds in a target talker's speech in a quiet (no noise) condition and in a condition with competing-talker noise. The question was to what extent the impact of the competing-talker noise on performance could be predicted from individual hearing loss and from a cognitive measure of inhibitory abilities, i.e., a measure of Stroop interference. The results showed that the non-auditory measure of Stroop interference predicted the impact of distraction on performance, over and above the effect of hearing loss. This suggests that individual differences in inhibitory abilities among older adults relate to susceptibility to distracting speech.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号