11.
Recent studies of naturalistic face‐to‐face communication have demonstrated coordination patterns such as the temporal matching of verbal and non‐verbal behavior, which provides evidence for the proposal that verbal and non‐verbal communicative control derives from one system. In this study, we argue that the observed relationship between verbal and non‐verbal behaviors depends on the level of analysis. In a reanalysis of a corpus of naturalistic multimodal communication (Louwerse, Dale, Bard, & Jeuniaux, 2012), we focus on measuring the temporal patterns of specific communicative behaviors in terms of their burstiness. We examined burstiness estimates across different roles of the speaker and different communicative modalities. We observed more burstiness for verbal versus non‐verbal channels, and for more versus less informative language subchannels. Using this new method for analyzing temporal patterns in communicative behaviors, we show that there is a complex relationship between verbal and non‐verbal channels. We propose a “temporal heterogeneity” hypothesis to explain how the language system adapts to the demands of dialog.
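The abstract above does not define its burstiness statistic; a common choice in the temporal-dynamics literature, and an assumption here rather than the authors' stated method, is the Goh–Barabási coefficient B = (σ − μ)/(σ + μ) computed over inter-event intervals. A minimal sketch:

```python
import numpy as np

def burstiness(event_times):
    """Goh-Barabasi burstiness of a point process:
    B = (sigma - mu) / (sigma + mu) over inter-event intervals.
    B -> -1 for perfectly periodic events, ~0 for a Poisson
    process, and -> +1 for maximally bursty (clustered) events."""
    intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    mu, sigma = intervals.mean(), intervals.std()
    return (sigma - mu) / (sigma + mu)

# A regularly spaced channel is less bursty than a clustered one
periodic = burstiness([0, 1, 2, 3, 4, 5])       # -1.0
clustered = burstiness([0, 0.1, 0.2, 5, 5.1, 10])  # > 0
```

Because B is bounded and unit-free, it allows direct comparison of channels (gesture, speech, gaze) that unfold on very different timescales.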
12.
The present study investigated the relationship between psychometric intelligence and temporal resolution power (TRP) as simultaneously assessed by auditory and visual psychophysical timing tasks. In addition, three different theoretical models of the functional relationship between TRP and psychometric intelligence as assessed by means of the Adaptive Matrices Test (AMT) were developed. To test the validity of these models, structural equation modeling was applied. Empirical data supported a hierarchical model that assumed auditory and visual modality-specific temporal processing at a first level and amodal temporal processing at a second level. This second-order latent variable was substantially correlated with psychometric intelligence. Therefore, the relationship between psychometric intelligence and psychophysical timing performance can be explained best by a hierarchical model of temporal information processing.
13.
In everyday interactions with others, people have to deal with the sight of a face and sound of a voice at the same time. How the perceptual system brings this information together over hundreds of milliseconds to perceive others remains unclear. In 2 studies, we investigated how facial and vocal cues are integrated during real-time social categorization by recording participants' hand movements (via the streaming x, y coordinates of the computer mouse) en route to “male” and “female” responses on the screen. Participants were presented with male and female faces that were accompanied by a same-sex voice morphed to be either sex-typical (e.g., a masculinized male voice) or sex-atypical (e.g., a feminized male voice). Before settling into ultimate sex categorizations of the face, the simultaneous processing of a sex-atypical voice led the hand to be continuously attracted to the opposite sex-category response across construal. This is evidence that ongoing results from voice perception continuously influence face perception across processing. Thus, social categorization involves dynamic updates of gradual integration of the face and voice.
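The "attraction to the opposite sex-category response" reported above is typically quantified in mouse-tracking work by the trajectory's maximum perpendicular deviation from a straight start-to-end path. The sketch below assumes that standard index; the function name and inputs are illustrative, not the authors' code:

```python
import numpy as np

def max_deviation(xs, ys):
    """Maximum perpendicular deviation of a mouse trajectory from the
    straight line joining its start and end points (a standard
    mouse-tracking attraction index; larger values indicate a stronger
    pull toward the competing response)."""
    p = np.column_stack([xs, ys]).astype(float)
    start, end = p[0], p[-1]
    line = end - start
    # z-component of the 2-D cross product gives each sample's signed
    # distance from the start-end line
    dev = (line[0] * (p[:, 1] - start[1])
           - line[1] * (p[:, 0] - start[0])) / np.linalg.norm(line)
    return float(np.abs(dev).max())
```

A perfectly direct movement yields 0; trajectories that arc toward the unselected response yield larger values, which can then be compared across the sex-typical and sex-atypical voice conditions.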
14.
It is well known that patients who have sustained frontal-lobe traumatic brain injury (TBI) are severely impaired on tests of emotion recognition. Indeed, these patients have significant difficulty recognizing facial expressions of emotion, and such deficits are often associated with decreased social functioning and poor quality of life. To date, no studies have examined the response patterns that underlie facial emotion recognition impairment in TBI and that may lend clarity to the interpretation of deficits. Therefore, the present study aimed to characterize response patterns in facial emotion recognition in 14 patients with frontal TBI compared to 22 matched control subjects, using a task which required participants to rate the intensity of each emotion (happiness, sadness, anger, disgust, surprise and fear) in a series of photographs of emotional and neutral faces. Results first confirmed the presence of facial emotion recognition impairment in TBI, and further revealed that patients displayed a liberal bias when rating facial expressions, leading them to assign intense ratings of incorrect emotional labels to sad, disgusted, surprised and fearful facial expressions. These findings are generally in line with prior studies that also report important facial affect recognition deficits in TBI patients, particularly for negative emotions.
15.
It has long been recognised that depression and anxiety share a common core of negative affect, but research on similarities and differences between these two emotions is growing. The focus of the current study was on whether the timing of a triggering event can determine whether the dominant emotional reaction is depression or anxiety. It was hypothesised that aversive events in the past would elicit more depression than anxiety, whereas the same aversive events in the future would elicit more anxiety than depression. We created temporally varied versions of vignettes describing aversive events occurring at either time, and asked participants to rate the extent to which the events would elicit feelings of depression or anxiety. Results indicated that adverse past events elicited much higher ratings of anticipated depression, whereas adverse future events elicited much higher ratings of anticipated anxiety. Implications for understanding these two emotions and depressive and anxiety disorders are discussed.
16.
Although research on language production has developed detailed maps of the brain basis of single word production in both time and space, little is known about the spatiotemporal dynamics of the processes that combine individual words into larger representations during production. Studying composition in production is challenging due to difficulties both in controlling produced utterances and in measuring the associated brain responses. Here, we circumvent both problems using a minimal composition paradigm combined with the high temporal resolution of magnetoencephalography (MEG). With MEG, we measured the planning stages of simple adjective–noun phrases (‘red tree’), matched list controls (‘red, blue’), and individual nouns (‘tree’) and adjectives (‘red’), with results indicating combinatorial processing in the ventro-medial prefrontal cortex (vmPFC) and left anterior temporal lobe (LATL), two regions previously implicated for the comprehension of similar phrases. These effects began relatively quickly (∼180 ms) after the presentation of a production prompt, suggesting that combination commences with initial lexical access. Further, while in comprehension, vmPFC effects have followed LATL effects, in this production paradigm vmPFC effects occurred mostly in parallel with LATL effects, suggesting that a late process in comprehension is an early process in production. Thus, our results provide a novel neural bridge between psycholinguistic models of comprehension and production that posit functionally similar combinatorial mechanisms operating in reversed order.
17.
Body Image, 2014, 11(1): 27-35
This study examined the one-year temporal stability and the predictive and incremental validity of the Body, Eating, and Exercise Comparison Measure (BEECOM) in a sample of 237 college women who completed study measures at two time points about one year apart. One-year temporal stability was high for the BEECOM total and subscale (i.e., Body, Eating, and Exercise Comparison Orientation) scores. Additionally, the BEECOM exhibited predictive validity in that it accounted for variance in body dissatisfaction and eating disorder symptomatology one year later. These findings held even after controlling for body mass index and existing measures of social comparison orientation. However, results regarding the incremental validity of the BEECOM, or its ability to predict change in these constructs over time, were more mixed. Overall, this study demonstrated additional psychometric properties of the BEECOM among college women, further establishing the usefulness of this measure for more comprehensively assessing eating disorder-related social comparison.
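The psychometric quantities named above have simple computational counterparts: one-year temporal stability is a test-retest correlation, and incremental validity is the variance explained that a measure adds once control variables are already in the regression. The sketch below illustrates the latter with ordinary least squares; variable names are illustrative, not the authors' analysis code:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def incremental_r2(controls, predictor, y):
    """R^2 gained by adding the focal predictor after the controls,
    mirroring a hierarchical-regression test of incremental validity."""
    base = r_squared(controls, y)
    full = r_squared(np.column_stack([controls, predictor]), y)
    return full - base

# One-year temporal stability is simply the test-retest correlation:
# stability = np.corrcoef(score_time1, score_time2)[0, 1]
```

In this design, `controls` would hold body mass index and the existing social comparison measures, `predictor` the BEECOM score, and `y` body dissatisfaction or eating disorder symptomatology one year later.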
18.
Previous electrophysiological studies have shown that attentional selection processes are highly sensitive to the temporal order of task-relevant visual events. When two successively presented colour-defined target stimuli are separated by a stimulus onset asynchrony (SOA) of only 10 ms, the onset latencies of N2pc components to these stimuli (which reflect their attentional selection) precisely match their objective temporal separation. We tested whether such small onset differences are accessible to conscious awareness by instructing participants to report the category (letter or digit) of the first of two target-colour items that were separated by an SOA of 10, 20, or 30 ms. Performance was at chance level for the 10 ms SOA, demonstrating that temporal order information which is available to attentional control processes cannot be utilized for conscious temporal order judgments. These results provide new evidence that selective attention and conscious awareness are functionally separable, and support the hypothesis that attention and awareness operate at different stages of cognitive processing.
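Whether report of the first target is "at chance level" for the 10 ms SOA can be checked with an exact two-sided binomial test against p = .5. The abstract does not state which test the authors used, so this is an illustrative sketch of one standard option:

```python
from math import comb

def binom_p_two_sided(k, n, p0=0.5):
    """Exact two-sided binomial test: the probability, under the chance
    rate p0, of any outcome at least as unlikely as observing k correct
    responses in n trials (the 'minimum-likelihood' two-sided method)."""
    pmf = [comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    cutoff = pmf[k] * (1 + 1e-9)  # small tolerance for float comparison
    return sum(q for q in pmf if q <= cutoff)
```

A non-significant p-value here (e.g., 59 correct out of 100 gives p > .05) is consistent with chance-level temporal order judgments, while near-perfect performance at longer SOAs yields very small p-values.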
19.
While many aspects of cognition have been investigated in relation to skilled music training, surprisingly little work has examined the connection between music training and attentional abilities. The present study investigated the performance of skilled musicians on cognitively demanding sustained attention tasks, measuring both temporal and visual discrimination over a prolonged duration. Participants with extensive formal music training were found to have superior performance on a temporal discrimination task, but not a visual discrimination task, compared to participants with no music training. In addition, no differences were found between groups in vigilance decrement in either type of task. Although no differences were evident in vigilance per se, the results indicate that performance in an attention-demanding temporal discrimination task was superior in individuals with extensive music training. We speculate that this basic cognitive ability may contribute to advantages that musicians show in other cognitive measures.
20.
Fifty right-handed patients with focal temporal lobe epilepsy were administered a dichotic listening test with consonant-vowel syllables under non-forced, forced-right and forced-left attention conditions, and a neuropsychological test battery. Dichotic listening performance was compared in subgroups with and without left hemisphere cognitive dysfunction, measured by the test battery, and in subgroups with left and right temporal epileptic focus. Left hemisphere cognitive dysfunction led to more correct responses to left ear stimuli in all three attention conditions, and fewer correct responses to right ear stimuli in the non-forced attention condition. This was probably caused by basic left hemisphere perceptual dysfunction. Dichotic listening was less affected by a left-sided epileptic focus than by left hemisphere cognitive dysfunction. General cognitive functioning influenced dichotic listening performance more strongly in forced than in non-forced attention conditions. Larger cerebral networks were probably involved in the forced attention conditions due to the emphasis on conscious effort.
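Dichotic listening performance of this kind is commonly summarized by an ear-advantage (laterality) index computed from the correct reports for each ear. The formula below is the conventional (R − L)/(R + L) index, included as an assumption for illustration rather than as the authors' stated analysis:

```python
def laterality_index(right_correct, left_correct):
    """Ear-advantage index for dichotic listening:
    (R - L) / (R + L), ranging from -1 to +1.
    Positive values indicate a right-ear advantage, the pattern
    expected with left-hemisphere language dominance."""
    return (right_correct - left_correct) / (right_correct + left_correct)
```

For example, a patient reporting 30 right-ear and 20 left-ear syllables correctly has an index of 0.2; a shift toward negative values under left hemisphere dysfunction would mirror the left-ear gains described above.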