21.
Individual vocal recognition behaviors in songbirds provide an excellent framework for the investigation of comparative psychological and neurobiological mechanisms that support the perception and cognition of complex acoustic communication signals. To this end, the complex songs of European starlings have been studied extensively. Yet, several basic parameters of starling individual vocal recognition have not been assessed. Here we investigate the temporal extent of song information acquired by starlings during vocal recognition learning. We trained two groups of starlings using standard operant conditioning techniques to recognize several songs from two conspecific male singers. In the first experiment we tested their ability to maintain accurate recognition when presented with (1) random sequences of 1–12 motifs (stereotyped song components) drawn from the training songs, and (2) 0.1–12-s excerpts of continuous song drawn from the training songs. We found that song recognition improved monotonically as more vocal material was provided. In the second experiment, we systematically substituted continuous, varying-length regions of white noise for portions of the training songs and again examined recognition accuracy. Recognition remained above chance levels for all noise substitutions tested (up to 91% of the training stimulus), although all but the smallest substitutions led to some decrement in song recognition. Overall, above-chance recognition could be obtained with surprisingly few motifs, short excerpts of song, and in the absence of large portions of the training songs. These results suggest that starlings acquire a representation of song during individual vocal recognition learning that is robust to perturbations and distributed broadly over large portions of these complex acoustic sequences.
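The noise-substitution manipulation in the second experiment amounts to a simple waveform operation: excise a stretch of the training song and splice in amplitude-matched white noise. The following is a minimal numpy sketch of that idea, not the authors' actual stimulus-preparation code; the RMS-matching step and all parameter values are illustrative assumptions.

```python
import numpy as np

def substitute_noise(song, start, length, rng=None):
    """Replace a contiguous segment of a waveform with white noise
    matched in RMS amplitude to the segment it replaces (an assumed
    choice; the original stimuli may have been leveled differently)."""
    rng = np.random.default_rng() if rng is None else rng
    out = song.copy()
    segment = out[start:start + length]
    rms = np.sqrt(np.mean(segment ** 2))
    noise = rng.standard_normal(length)
    noise *= rms / np.sqrt(np.mean(noise ** 2))
    out[start:start + length] = noise
    return out

# Illustrative use: replace 91% of a 10-s stand-in "song" at 44.1 kHz
fs = 44100
song = np.random.default_rng(0).standard_normal(fs * 10) * 0.1
masked = substitute_noise(song, start=int(0.045 * fs * 10),
                          length=int(0.91 * fs * 10))
```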
22.
This study aimed to investigate the mechanisms underlying joke comprehension using event-related potentials (ERPs). Fourteen healthy college students were presented with the context of a story without its joke or nonjoke ending, and, when the story ending was then presented, they were asked to make a funny/unfunny judgment about it. The behavioral results showed no significant difference between funny and unfunny items, which indicates that subjects could understand funny items as easily as unfunny ones. However, the ERP results showed that funny items initially elicited a more negative ERP deflection (N350–450) over frontocentral scalp regions. Dipole analysis localized the generators in the left temporal gyrus and the left medial frontal gyrus; it is suggested that these areas might be involved in detecting the incongruent element in joke comprehension. Between 600 and 800 ms, funny items subsequently elicited a more negative ERP deflection (N600–800) over frontocentral scalp regions and a more positive ERP deflection (P600–800) over posterior scalp regions. Dipole analysis localized the generator in the anterior cingulate cortex (ACC), an area involved in the breaking of mental set/expectation and the forming of novel associations. Finally, funny items elicited a more positive ERP deflection (P1250–1400) over anterior and posterior scalp regions. Dipole analysis localized the generators in the middle frontal gyrus and the fusiform gyrus, areas that might be related to the affective-appreciation stage of joke processing. Unlike Coulson and Kutas (2001), the present study might support the hypothesis of a three-stage model of humor processing (humor detection, resolution of incongruity, and humor appreciation).
23.
In this study we explored the temporal origin of processing differences between first and second language production. Forty highly proficient bilinguals named objects of high and low lexical frequency aloud in L1 and L2 separately while event-related brain potentials (ERPs) were recorded. The first electrophysiological differences elicited by response language occurred at the same early P2 peak (∼140–220 ms) where we observed the onset of the lexical frequency effect, but only for those bilinguals who started naming in an L1 context and afterwards switched to an L2 naming context. The bilinguals who named objects in the reverse direction did not display a language effect in the ERPs. Taken together, the data show that the L2 naming disadvantage originates during the onset of lexical access and seems to be driven by both representational strength, which is lower for L2 words, and language control demands, which are higher for L2 words.
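For readers unfamiliar with how effects like this P2 modulation are quantified: a common approach is to average trials into an ERP and take the mean amplitude within the latency window of interest (here ∼140–220 ms). The sketch below is a generic illustration with hypothetical variable names, not the authors' analysis pipeline.

```python
import numpy as np

def mean_window_amplitude(epochs, times, t_start=0.140, t_end=0.220):
    """Mean ERP amplitude in a latency window for one channel.

    epochs : (n_trials, n_samples) array, baseline-corrected, in microvolts
    times  : (n_samples,) array of seconds relative to stimulus onset
    """
    erp = epochs.mean(axis=0)                    # average trials -> ERP
    win = (times >= t_start) & (times <= t_end)  # samples inside the window
    return erp[win].mean()

# Hypothetical comparison of the lexical frequency effect at the P2:
# p2_effect = (mean_window_amplitude(low_freq_epochs, times)
#              - mean_window_amplitude(high_freq_epochs, times))
```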
24.
Congenital amusia is a lifelong disorder characterized by a difficulty in perceiving and producing music despite normal intelligence and hearing. Behavioral data have indicated that it originates from a deficit in fine-grained pitch discrimination and is expressed by the absence of a P3b event-related brain response for pitch differences smaller than a semitone and a larger N2b–P3b brain response for large pitch differences as compared to controls. However, it is still unclear why the amusic brain overreacts to large pitch changes. Furthermore, another electrophysiological study indicates that the amusic brain can respond to changes in melodies as small as a quarter-tone, without awareness, by exhibiting a normal mismatch negativity (MMN) brain response. Here, we re-examine the event-related N2b–P3b components with the aim of clarifying the cause of the larger amplitude observed by Peretz, Brattico, and Tervaniemi (2005), by experimentally matching the number of deviants presented to the controls to the number of deviants detected by amusics. We also re-examine the MMN component as well as the N1 in an acoustical context to investigate further the pitch discrimination deficit underlying congenital amusia. In two separate conditions, namely ignore and attend, we measured the MMN, the N1, the N2b, and the P3b to tones that deviated by an eighth of a tone (25 cents) or a whole tone (200 cents) from a repeated standard tone. The results show a normal MMN, a seemingly normal N1, a normal P3b for the 200-cent pitch deviance, and no P3b for the small 25-cent pitch differences in amusics. These results indicate that the amusic brain responds to small pitch differences at a pre-attentive level of perception but is unable to consciously detect those same pitch deviances at a later attentive level. The results are consistent with previous MRI and fMRI studies indicating that the auditory cortex of amusic individuals is functioning normally.
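To make the deviance sizes concrete: cents are a logarithmic unit of pitch interval, with 100 cents to an equal-tempered semitone, so a deviant's frequency follows from the standard's as f = f0 · 2^(cents/1200). A quick worked example (the 440 Hz standard is assumed for illustration; the abstract does not give the standard tone's frequency):

```python
def cents_to_ratio(cents):
    """Frequency ratio for a pitch interval in cents
    (100 cents = 1 equal-tempered semitone, 1200 cents = 1 octave)."""
    return 2.0 ** (cents / 1200.0)

f0 = 440.0                        # assumed standard tone, Hz
print(f0 * cents_to_ratio(25))    # eighth-tone deviant -> ~446.4 Hz
print(f0 * cents_to_ratio(200))   # whole-tone deviant  -> ~493.9 Hz
```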
25.
Cochlear implant (CI) devices provide the opportunity for children who are deaf to perceive sound by electrical stimulation of the auditory nerve, with the goal of optimizing oral communication. One part of oral communication concerns meaning, while another concerns emotion: affective speech prosody in the auditory domain, and facial affect in the visual domain. It is not known whether childhood CI users can identify emotion in speech and faces, so we investigated speech prosody and facial affect in children who had been deaf from infancy and were experienced CI users. METHOD: Study participants were 18 CI users (ages 7–13 years) who had received right unilateral CIs and 18 age- and gender-matched controls. Emotion recognition in speech prosody and faces was measured by the Diagnostic Analysis of Nonverbal Accuracy. RESULTS: Compared to controls, children with right CIs could identify facial affect but not affective speech prosody. Age at test and time since CI activation were uncorrelated with overall outcome measures. CONCLUSION: Children with right CIs recognize emotion in faces but have limited perception of affective speech prosody.
26.
An important task of perceptual processing is to parse incoming information into distinct units and to keep track of those units over time as the same, persisting representations. Within the study of visual perception, maintaining such persisting object representations is helped by “object files”—episodic representations that store (and update) information about objects' properties and track objects over time and motion via spatiotemporal information. Although object files are typically discussed as visual, here we demonstrate that object–file correspondence can be computed across sensory modalities. An object file can be initially formed with visual input and later accessed with corresponding auditory information, suggesting that object files may be able to operate at a multimodal level of perceptual processing.
27.
The irrelevant sound effect (ISE) and the stimulus suffix effect (SSE) are two qualitatively different phenomena, although in both paradigms irrelevant auditory material is played while a verbal serial recall task is being performed. Jones, Macken, and Nicholls (2004) proposed that the effect of irrelevant speech on auditory serial recall switches from an ISE to an SSE mechanism if the auditory-perceptual similarity of relevant and irrelevant material is maximized. The experiment reported here (n = 36) tested this hypothesis by exploring auditory serial recall performance both under irrelevant-speech and under speech-suffix conditions. These speech materials were spoken either by the same voice as the auditory items to be recalled or by a different voice. The experimental conditions were such that the likelihood of obtaining an SSE was maximized. The results, however, show that irrelevant speech—in contrast to speech suffixes—affects auditory serial recall independently of its perceptual similarity to the items to be recalled, and thus in terms of an ISE mechanism that crucially extends to recency. The ISE thus cannot turn into an SSE.
28.
In a recent study of musicians' sensorimotor synchronization with auditory sequences composed either of beat and subdivision tones differing in pitch or of beat tones only, Repp (2009) found that the phase correction response (PCR) to perturbed beats was inhibited by the presence of subdivisions regardless of whether beats and subdivisions formed integrated or segregated perceptual streams. The present study used a different paradigm in which perturbed subdivisions triggered the PCR. At the slower of two sequence tempi, the PCR was equally large in integrated and segregated conditions, but at the faster tempo stream segregation reduced the PCR substantially. This new finding indicates that although the PCR is strongly resistant to auditory stream segregation, it is not totally immune to it.
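As background, the PCR is commonly modeled in this literature as a first-order linear phase-correction process: each new tap cancels a fraction alpha of the preceding tap-to-beat asynchrony. The sketch below is a deliberately simplified simulation of that textbook model; the noise structure and parameter values are illustrative assumptions, not Repp's analysis.

```python
import numpy as np

def simulate_asynchronies(alpha, n_taps=200, noise_sd=0.015, seed=0):
    """First-order linear phase correction: the next tap-metronome
    asynchrony is the previous one reduced by the fraction alpha,
    plus Gaussian timing noise (a simplified stand-in for the
    timekeeper and motor variance of fuller models)."""
    rng = np.random.default_rng(seed)
    a = np.zeros(n_taps)
    for n in range(n_taps - 1):
        a[n + 1] = (1 - alpha) * a[n] + rng.normal(0.0, noise_sd)
    return a

# A weakened PCR (smaller effective alpha), as under stream segregation
# at the fast tempo, lets asynchronies wander further from zero:
print(np.std(simulate_asynchronies(alpha=0.8)))   # strong correction
print(np.std(simulate_asynchronies(alpha=0.2)))   # weak correction -> larger SD
```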
29.
Using an auditory variant of task switching, we examined the ability to intentionally switch attention in a dichotic-listening task. In our study, participants responded selectively to one of two simultaneously presented auditory number words (spoken by a female and a male voice, one to each ear) by categorizing its numerical magnitude. The mapping of gender (female vs. male) and ear (left vs. right) was unpredictable. The to-be-attended feature, gender or ear, was indicated by a visual selection cue prior to auditory stimulus onset. In Experiment 1, explicitly cued switches of the relevant feature dimension (e.g., from gender to ear) and switches of the relevant feature within a dimension (e.g., from male to female) occurred in an unpredictable manner. We found large performance costs when the relevant feature switched, but switches of the relevant feature dimension incurred only small additional costs. The feature-switch costs were larger in ear-relevant than in gender-relevant trials. In Experiment 2, we replicated these findings using a simplified design (i.e., only within-dimension switches, with dimensions blocked). In Experiment 3, we examined preparation effects by manipulating the cueing interval and found a preparation benefit only when ear was cued. Together, our data suggest that the largest part of attentional switch costs arises from reconfiguration at the level of relevant auditory features (e.g., left vs. right) rather than feature dimensions (ear vs. gender). Additionally, our findings suggest that ear-based target selection benefits more from preparation time (i.e., time to direct attention to one ear) than gender-based target selection.
30.
The study investigates cross-modal simultaneous processing of emotional tone of voice and emotional facial expression by event-related potentials (ERPs), using a wide range of emotions (happiness, sadness, fear, anger, surprise, and disgust). Auditory emotional stimuli (a neutral word pronounced in an affective tone) and visual patterns (emotional facial expressions) were matched in congruous (the same emotion in face and voice) and incongruous (different emotions) pairs. Subjects (N = 31) were required to watch and listen to the stimuli in order to comprehend them. Repeated-measures ANOVAs showed a positive ERP deflection (P2) with a more posterior distribution. This P2 effect may represent a marker of cross-modal integration, modulated as a function of the congruous/incongruous condition: it shows a larger peak in response to congruous stimuli than to incongruous ones. It is suggested that the P2 can be a cognitive marker of multisensory processing, independent of the emotional content.