331.
Alexithymia is the inability to recognise and describe emotions in the self. In the present study we used event-related potentials (ERPs) to examine the locus of emotional processing differences in alexithymia. Men scoring high (score > 61) and control men scoring low (score < 51) on the Toronto Alexithymia Scale-20 were tested on an emotional face discrimination task. We assessed three ERP components: P1 (an index of early perceptual processing), N170 (an index of early facial processing) and P3 (an index of late attentional suppression). While controls showed a stronger P3 effect for angry faces relative to happy and neutral faces, alexithymic men showed no significant differences in P3 across emotions. Alexithymic men also showed delayed P1 and N170 responses compared to controls. These results suggest that the processing differences between alexithymic men and controls arise both early in perceptual processing and later in conscious processing.
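As an illustration of how such component measures are commonly quantified, the minimal sketch below averages single-trial voltage within fixed post-stimulus windows. The window boundaries (80–130 ms for P1, 130–200 ms for N170, 300–500 ms for P3), the sampling rate, and the array layout are illustrative assumptions, not values taken from the study above.

```python
# Hedged sketch: mean ERP amplitude within assumed component time windows.
import numpy as np

def mean_amplitude(epochs, times, t_min, t_max):
    """Mean amplitude (per trial, per channel) within [t_min, t_max] seconds.

    epochs : ndarray, shape (n_trials, n_channels, n_samples), in microvolts
    times  : ndarray, shape (n_samples,), sample times in seconds
    """
    mask = (times >= t_min) & (times <= t_max)
    return epochs[:, :, mask].mean(axis=-1)

# Toy data: 40 trials, 3 channels, 500 ms epochs sampled at 500 Hz (assumed)
rng = np.random.default_rng(0)
times = np.arange(0, 0.5, 1 / 500)
epochs = rng.normal(0.0, 5.0, size=(40, 3, times.size))

p1 = mean_amplitude(epochs, times, 0.08, 0.13)    # assumed P1 window
n170 = mean_amplitude(epochs, times, 0.13, 0.20)  # assumed N170 window
p3 = mean_amplitude(epochs, times, 0.30, 0.50)    # assumed P3 window
print(p1.shape, n170.shape, p3.shape)             # each: (40, 3)
```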
332.
In vision, it is well established that the perceptual load of a relevant task determines the extent to which irrelevant distractors are processed. Much less research has addressed the effects of perceptual load within hearing. Here, we provide an extensive test using two different perceptual load manipulations, measuring distractor processing through response competition and awareness report. Across four experiments, we consistently failed to find support for the role of perceptual load in auditory selective attention. We therefore propose that the auditory system – although able to selectively focus processing on a relevant stream of sounds – is likely to have surplus capacity to process auditory information from other streams, regardless of the perceptual load in the attended stream. This accords well with the notion of the auditory modality acting as an ‘early-warning’ system as detection of changes in the auditory scene is crucial even when the perceptual demands of the relevant task are high.
333.
Research on emotion processing in the visual modality suggests a processing advantage for emotionally salient stimuli, even at early sensory stages; however, results concerning the auditory correlates are inconsistent. We present two experiments that employed a gating paradigm to investigate emotional prosody. In Experiment 1, participants heard successively building segments of Jabberwocky “sentences” spoken with happy, angry, or neutral intonation. After each segment, participants indicated the emotion conveyed and rated their confidence in their decision. Participants in Experiment 2 also heard Jabberwocky “sentences” in successive increments, with half discriminating happy from neutral prosody, and half discriminating angry from neutral prosody. Participants in both experiments identified neutral prosody more rapidly and accurately than happy or angry prosody. Confidence ratings were greater for neutral sentences, and error patterns also indicated a bias for recognising neutral prosody. Taken together, results suggest that enhanced processing of emotional content may be constrained by stimulus modality.
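As a concrete illustration of a gating-style presentation, the minimal sketch below slices a single waveform into successively longer onset-aligned segments. The 200 ms increment, the sample rate, and the synthetic stand-in for a recorded sentence are assumptions for illustration, not the parameters of the experiments above.

```python
# Hedged sketch: cumulative "gates" of one utterance, each longer than the last.
import numpy as np

def gates(waveform, sr, increment_s=0.2):
    """Yield cumulative onset-aligned segments: increment_s, 2*increment_s, ...

    A trailing partial increment is dropped in this simplified sketch.
    """
    step = int(round(increment_s * sr))
    for end in range(step, len(waveform) + 1, step):
        yield waveform[:end]

sr = 16000
t = np.arange(0, 1.2, 1 / sr)
utterance_proxy = np.sin(2 * np.pi * 200 * t)  # stand-in for a recorded sentence

for i, segment in enumerate(gates(utterance_proxy, sr), start=1):
    print(f"gate {i}: {len(segment) / sr:.2f} s")
```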
334.
Many auditory skills continue to develop beyond infancy and even into adolescence, but the factors underlying this prolonged development remain poorly understood. Of interest here is the contribution of on-line statistical learning of stimulus repetitions (anchoring) to the development of auditory spectral and temporal discrimination, as well as the potential contributions of auditory attention and working memory. Children, aged 6–13 years, as well as adults (age range: 21–33 years) were tested on auditory frequency and duration discrimination. Each type of discrimination was measured in two conditions (XAB and XXXAB) designed to afford different levels of anchoring by varying the number of repetitions of a standard stimulus (X) prior to the presentation of the test tone (A or B) in each trial. Auditory attention and working memory were also assessed. Whereas duration and frequency discrimination in either condition did not reach adult level prior to 11 years of age, the magnitude of the anchoring effect was similar across ages. These data suggest that perceptual anchoring matures prior to the attainment of adult-like discrimination thresholds. Likewise, neither attention nor working memory could account for the observed developmental trajectories. That auditory discrimination and anchoring follow dissociable developmental trajectories suggests that different factors might contribute to the development of each. We therefore conclude that although anchoring might be necessary for attaining good auditory discrimination, it does not account for the prolonged development of auditory frequency and duration discrimination in school-aged children.
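The contrast between the XAB and XXXAB conditions can be made concrete with the minimal sketch below, which builds the tone list for a single frequency-discrimination trial with either one or three presentations of the standard X before the A and B intervals. The standard frequency, the frequency step, and the response coding are hypothetical placeholders, not the values used in the study.

```python
# Hedged sketch: trial construction for assumed XAB vs XXXAB conditions.
import random

def make_trial(condition, standard_hz=1000.0, delta_hz=30.0):
    """Return the tone sequence for one trial and the interval holding the deviant.

    condition : "XAB" or "XXXAB" (one vs three repetitions of the standard X)
    The deviant tone (standard + delta) is randomly assigned to A or B.
    """
    n_repeats = 1 if condition == "XAB" else 3
    deviant_first = random.random() < 0.5
    a = standard_hz + delta_hz if deviant_first else standard_hz
    b = standard_hz if deviant_first else standard_hz + delta_hz
    tones = [standard_hz] * n_repeats + [a, b]
    correct = "A" if deviant_first else "B"
    return tones, correct

random.seed(1)
for cond in ("XAB", "XXXAB"):
    tones, answer = make_trial(cond)
    print(cond, tones, "deviant in interval", answer)
```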
335.
Using geometric figures as experimental materials, this study recorded event-related potentials (ERPs) from 14 postgraduate students while they completed two analogical reasoning tasks (size change and colour change) and their corresponding baseline tasks, in order to examine the time course of brain activity during analogical reasoning. The waveforms evoked by the two reasoning tasks were essentially the same, whereas the ERP waveforms differed significantly between each reasoning task and its baseline task, and also between the two baseline tasks. Analogical reasoning proceeded in stages, namely encoding, inference, mapping, and drawing a conclusion, further supporting Sternberg's componential theory. Inference and mapping, the two processing stages specific to analogical reasoning, each had corresponding brain mechanisms: the schema-inference stage mainly activated the prefrontal cortex and bilateral parietal cortex, whereas the analogical mapping and adjustment stage mainly activated the left temporal lobe, frontal lobe, and centro-parietal region.
336.
In a new integration, we show that the visual-spatial structuring of time converges with auditory-spatial left-right judgments for time-related words. In Experiment 1, participants placed past- and future-related words to the left and right of the midpoint on a horizontal line, respectively, reproducing earlier findings. In Experiment 2, neutral and time-related words were presented over headphones. Participants were asked to indicate whether words were louder on the left or right channel. On critical experimental trials, words were presented equally loud binaurally. As predicted, participants judged future words to be louder on the right channel more often than past-related words. Furthermore, there was a significant cross-modal overlap between the visual-spatial ordering (Experiment 1) and the auditory judgments (Experiment 2), which were continuously related. These findings provide support for the assumption that space and time have certain invariant properties that share a common structure across modalities.
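A minimal sketch of the headphone manipulation described in Experiment 2 is given below: the same waveform is scaled independently on the left and right channels, and on critical trials both channels carry identical levels. The sinusoidal stand-in for a spoken word, the 6 dB level difference, and the sample rate are assumptions for illustration only.

```python
# Hedged sketch: dichotic presentation with per-channel level differences.
import numpy as np

def dichotic(waveform, level_db_left=0.0, level_db_right=0.0):
    """Return a stereo array (n_samples, 2) with per-channel gains in dB."""
    gain_l = 10 ** (level_db_left / 20.0)
    gain_r = 10 ** (level_db_right / 20.0)
    return np.stack([waveform * gain_l, waveform * gain_r], axis=-1)

sample_rate = 44100
t = np.arange(0, 0.4, 1 / sample_rate)
word_proxy = np.sin(2 * np.pi * 440 * t)       # stand-in for a recorded word

louder_left = dichotic(word_proxy, 0.0, -6.0)  # filler trial: left channel louder
critical = dichotic(word_proxy, 0.0, 0.0)      # critical trial: equally loud
print(louder_left.shape, critical.shape)
```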
337.
We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
338.
The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. The processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research by using complex photographic scenes, three-sentence utterances, and by mentioning four target objects. The main finding was that objects that are mentioned more slowly, placed more evenly, and isolated in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still showed an effect of language-driven eye movements. This supports research using concurrent speech and visual scenes, and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load.
339.
Using a cross-modal sound–character homophony judgment task, with visual lexical decision as a covariate analysis, this experiment investigated the interaction between homophone-specific frequency and homophone density (number of homophones) in the auditory lexical access of Chinese homophonic syllables, with the cumulative frequency and stroke number of the syllables equated. The results revealed a robust word-frequency effect within homophone families. Homophone density and homophone frequency constrained each other, as reflected in the absence of a homophone-density effect; because stimulus materials varying on both factors are difficult to control completely, new confounding factors arose and a reverse inhibition effect emerged. The results indicate that, in the interaction between homophone frequency and homophone density, frequency plays a robust and dominant role.
340.
Patel M, Chait M. Cognition, 2011, 119(1): 125–130
Accurately timing acoustic events in dynamic scenes is fundamental to scene analysis. To detect events in busy scenes, listeners must often identify a change in the pattern of ongoing fluctuation, resulting in many ubiquitous events being detected later than when they occurred. This raises the question of how delayed detection time affects the manner in which such events are perceived relative to other events in the environment. To model these situations, we use sequences of tone-pips with a time–frequency pattern that changes from regular to random (‘REG–RAND’) or vice versa (‘RAND–REG’). REG–RAND transitions are detected rapidly, but the emergence of regularity cannot be established immediately, and thus RAND–REG transitions take significantly longer to detect. Using a temporal order judgment task, and a light-flash as a temporal marker, we demonstrate that listeners do not perceive the onset of RAND–REG transitions at the point of detection (∼530 ms post transition), but automatically re-adjust their estimate ∼300 ms closer to the nominal transition. These results demonstrate that the auditory system possesses mechanisms that survey the proximal history of an ongoing stimulus and automatically adjust perception to compensate for prolonged detection time, allowing listeners to build meaningful representations of the environment.
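For readers who want a concrete picture of such stimuli, the sketch below generates tone-pip sequences whose frequency pattern switches from a repeating cycle to random draws (REG–RAND) or the reverse (RAND–REG). Pip duration, the frequency pool, cycle length, and transition point are illustrative assumptions rather than the parameters reported by Patel and Chait.

```python
# Hedged sketch: REG-RAND / RAND-REG tone-pip sequences with assumed parameters.
import numpy as np

def pip(freq_hz, dur_s=0.05, sr=44100):
    """A single short sinusoidal tone pip."""
    t = np.arange(0, dur_s, 1 / sr)
    return np.sin(2 * np.pi * freq_hz * t)

def tone_sequence(kind, n_pips=40, transition=20, sr=44100, seed=0):
    """kind: 'REG-RAND' or 'RAND-REG'. Returns the concatenated waveform."""
    rng = np.random.default_rng(seed)
    pool = np.logspace(np.log10(500), np.log10(2000), 20)  # candidate frequencies
    cycle = rng.choice(pool, size=5, replace=False)          # repeating pattern

    def regular(i):
        return cycle[i % len(cycle)]

    def rand(_):
        return rng.choice(pool)

    first, second = (regular, rand) if kind == "REG-RAND" else (rand, regular)
    freqs = [first(i) if i < transition else second(i) for i in range(n_pips)]
    return np.concatenate([pip(f, sr=sr) for f in freqs])

reg_rand = tone_sequence("REG-RAND")
rand_reg = tone_sequence("RAND-REG")
print(reg_rand.shape, rand_reg.shape)
```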