391.
Hearing loss has been shown to exacerbate the effect of auditory distraction on driving performance in older drivers. This study controlled for the potentially confounding factor of age-related cognitive decrements by applying a simulated hearing loss in young, normal-hearing individuals. Participants drove a simulated road whilst completing auditory tasks under simulated hearing loss or normal hearing conditions. Measures of vehicle control, eye movements and auditory task performance were recorded. Results showed that performing the auditory tasks whilst driving resulted in more stable lateral vehicle control and a reduction in gaze dispersion around the road centre. These trends were not exacerbated by simulated hearing loss, suggesting no effect of hearing loss on vehicle control or eye movement patterns during auditory task engagement. However, a small effect of simulated hearing loss on performance of the most complex auditory task was observed during driving, suggesting that the use of sound-based in-vehicle systems may be problematic for hearing-impaired individuals. Further research incorporating a wider variety of driving scenarios and auditory tasks is required to confirm these findings.
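The abstract does not say how lateral stability and gaze dispersion were quantified; a minimal sketch of two commonly used indices (standard deviation of lateral lane position, and RMS gaze dispersion around the road centre), assuming raw lane-position and gaze-angle samples, might look like this (all variable names and data below are hypothetical, not the authors' pipeline):

```python
import numpy as np

def sdlp(lateral_position_m):
    """Standard deviation of lateral lane position (SDLP), in metres.
    Lower values indicate more stable lateral vehicle control."""
    return float(np.std(lateral_position_m, ddof=1))

def gaze_dispersion(gaze_yaw_deg, gaze_pitch_deg):
    """Dispersion of gaze samples around the road centre (0, 0),
    expressed as root-mean-square angular distance in degrees."""
    return float(np.sqrt(np.mean(np.square(gaze_yaw_deg) + np.square(gaze_pitch_deg))))

# Hypothetical 60 Hz samples from a two-minute drive
lane = np.random.normal(0.0, 0.25, 60 * 120)   # metres from lane centre
yaw = np.random.normal(0.0, 4.0, 60 * 120)     # degrees
pitch = np.random.normal(0.0, 2.0, 60 * 120)   # degrees
print(sdlp(lane), gaze_dispersion(yaw, pitch))
```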
392.
Objectives: We compared the spatial concepts that blind football players, blind non-athletes and sighted individuals assign to the directions of sounds. Method: Participants verbally described the directions of sounds around them using predefined spatial concept labels, under two blocked conditions: 1) facing front; 2) pointing with the hand towards the stimulus. Results: Blind football players categorized the directions more precisely than the other groups (i.e., they used simple labels for the cardinal directions and combined labels for the intermediate ones), and their categorization was less sensitive to the response conditions than that of the blind non-athletes. Sighted participants' categorization resembled previous findings, in which the front and back regions were generally described more precisely than the sides, where simple labels were often used for directions around the absolute left and right. Conclusions: The differences in conceptual categorization of sound directions are a) in sighted individuals, influenced by the representation of visual space, and b) in blind individuals, influenced by the level of expertise in action and locomotion based on non-visual information, which can be increased by the auditory stimulation provided by blind football training.
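To make the simple/combined label distinction concrete, here is a purely illustrative sketch of mapping a sound direction to a spatial concept label; the category boundaries and label names are assumptions for illustration, not the labelling scheme used in the study:

```python
def direction_label(angle_deg):
    """Map a sound direction (0 = front, clockwise degrees) to a label:
    simple labels for cardinal directions, combined labels for
    intermediate ones. Boundaries are illustrative only."""
    simple = {0: "front", 90: "right", 180: "back", 270: "left"}
    combined = {45: "front-right", 135: "back-right",
                225: "back-left", 315: "front-left"}
    nearest = round((angle_deg % 360) / 45) * 45 % 360   # snap to nearest 45 degrees
    return simple.get(nearest, combined.get(nearest))

print([direction_label(a) for a in range(0, 360, 45)])
```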
393.
Objectives: We compared the mental representation of sound directions in blind football players, blind non-athletes and sighted individuals. Design: Standing blindfolded in the middle of a circle of 16 loudspeakers, participants judged whether the directions of two subsequently presented sounds were similar or not. Method: Structure dimensional analysis (SDA) was applied to reveal mean cluster solutions for the groups. Results: Hierarchical cluster analysis via SDA resulted in distinct representation structures of sound directions. The blind football players' mean cluster solution consisted of pairs of neighboring directions. The blind non-athletes also clustered the directions in pairs, but included non-adjacent directions. In the sighted participants' structure, frontal directions were clustered pairwise, the absolute back was singled out, and the side regions accounted for more directions. Conclusions: Our results suggest that the mental representation of egocentric auditory space is influenced by sight and by the level of expertise in auditory-based orientation and navigation.
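The SDA procedure is not detailed in the abstract; as an illustration only, a hedged sketch of how pairwise "similar/not similar" judgments over 16 loudspeaker directions could be converted into a hierarchical cluster solution (using SciPy average-linkage clustering, not necessarily the authors' algorithm; the similarity data are synthetic):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

n_directions = 16
# Hypothetical proportion of "similar" judgments per direction pair,
# generated here from angular proximity purely for illustration.
angles = np.arange(n_directions) * (360.0 / n_directions)
diff = np.abs(angles[:, None] - angles[None, :])
diff = np.minimum(diff, 360.0 - diff)            # wrap-around angular distance
similarity = 1.0 - diff / 180.0                  # 1 = identical, 0 = opposite
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)

# Average-linkage hierarchical clustering on the condensed distance matrix
Z = linkage(squareform(distance, checks=False), method="average")
clusters = fcluster(Z, t=8, criterion="maxclust")  # e.g. cut into 8 clusters (pairs)
print(clusters)
```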
394.
Older adults have difficulty understanding spoken language in the presence of competing voices. Everyday social situations involving multiple simultaneous talkers may become increasingly challenging in later life due to changes in the ability to focus attention. This study examined whether individual differences in cognitive function predict older adults’ ability to access sentence-level meanings in competing speech using a dichotic priming paradigm. Older listeners showed faster responses to words that matched the meaning of spoken sentences presented to the left or right ear, relative to a neutral baseline. However, older adults were more vulnerable than younger adults to interference from competing speech when the competing signal was presented to the right ear. This pattern of performance was strongly correlated with a non-auditory working memory measure, suggesting that cognitive factors play a key role in semantic comprehension in competing speech in healthy aging.
395.
Both filter and resource models of attention suggest an influence of task difficulty on the size of early attention effects. For temporal orienting, however, the idea that early effects are modulated by task difficulty has not yet been tested directly. To fill this empirical gap, the present study used an auditory temporal-orienting task in which two differently pitched pure tones served as targets. To manipulate perceptual difficulty, the pitch difference between the targets was either small or large. Temporal orienting enhanced the N1 component of the auditory event-related potential. This early, sensory effect tended to be larger in the more difficult condition, particularly over the frontal scalp. Notably, increasing task difficulty affected predominantly the processing of attended stimuli. Hence, temporal orienting may operate by increasing processing resources or gain settings for the attended time point, rather than by withdrawing resources from the unattended time point.
396.
Sighted individuals are less accurate and slower at localizing sounds coming from the peripheral space than sounds coming from the frontal space. This bias in favour of the frontal auditory space seems reduced in early blind individuals, who are particularly better than sighted individuals at localizing sounds coming from the peripheral space. Currently, it is not clear to what extent this bias in the auditory space is a general phenomenon or whether it applies only to spatial processing (i.e. sound localization). To address this, we compared the performance of early blind participants with that of sighted subjects during a frequency discrimination task with sounds originating from either frontal or peripheral locations. Results showed that early blind participants discriminated both peripheral and frontal sounds faster than sighted subjects. In addition, sighted subjects were faster at discriminating frontal sounds than peripheral ones, whereas early blind participants showed equal discrimination speed for frontal and peripheral sounds. We conclude that the spatial bias observed in sighted subjects reflects an imbalance in the spatial distribution of auditory attention resources that is induced by visual experience.
397.
The aim of this work was to investigate perceived loudness change in response to melodies that increase (up-ramp) or decrease (down-ramp) in acoustic intensity, and its interaction with other musical factors such as melodic contour, tempo, and tonality (tonal/atonal). A within-subjects design manipulated direction of linear intensity change (up-ramp, down-ramp), melodic contour (ascending, descending), tempo, and tonality, using single-ramp trials and paired-ramp trials, where single up-ramps and down-ramps were assembled to create continuous up-ramp/down-ramp or down-ramp/up-ramp pairs. Twenty-nine (Exp 1) and thirty-six (Exp 2) participants rated loudness continuously in response to trials with monophonic 13-note piano melodies lasting either 6.4 s or 12 s. Linear correlation coefficients > .89 between loudness and time show that time-series loudness responses to dynamic up-ramp and down-ramp melodies are essentially linear across all melodies. Therefore, ‘indirect’ loudness change, derived from the difference in loudness at the beginning and end points of the continuous response, was calculated. Down-ramps were perceived to change significantly more in loudness than up-ramps in both tonalities and at a relatively slow tempo. Loudness change was also greater for down-ramps presented with a congruent descending melodic contour, relative to an incongruent pairing (down-ramp and ascending melodic contour). No differential effect of intensity ramp/melodic contour congruency was observed for up-ramps. In paired-ramp trials assessing the possible impact of ramp context, loudness change in response to up-ramps was significantly greater when preceded by down-ramps than when not preceded by another ramp. Ramp context did not affect down-ramp perception. The contributions to the fields of music perception and psychoacoustics are discussed in the context of real-time perception of music, principles of music composition, and performance of musical dynamics.
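For illustration, a minimal sketch of the "indirect" loudness-change measure and the loudness-time linearity check described above, assuming a continuous loudness rating sampled over one trial (the sampling rate, trace, and variable names are hypothetical):

```python
import numpy as np

def loudness_change_and_linearity(t, loudness):
    """Return the end-minus-start loudness change and the Pearson r
    between loudness and time for one continuous rating trace."""
    change = loudness[-1] - loudness[0]
    r = np.corrcoef(t, loudness)[0, 1]
    return change, r

# Hypothetical 6.4 s up-ramp trial sampled at 10 Hz
t = np.linspace(0.0, 6.4, 64)
rating = 20.0 + 8.0 * t + np.random.normal(0.0, 1.0, t.size)  # roughly linear rise
print(loudness_change_and_linearity(t, rating))
```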
398.
A man, a woman and a child saying the same vowel do so with very different voices. The auditory system solves the complex problem of extracting what the man, woman or child has said despite substantial differences in the acoustic properties of their voices. Much of the acoustic variation between the voices of men and women is due to differences in the underlying anatomical mechanisms for producing speech. If the auditory system knew the sex of the speaker, it could potentially correct for speaker-sex-related acoustic variation, thus facilitating vowel recognition. This study measured the minimum stimulus duration necessary to accurately discriminate whether a brief vowel segment was spoken by a man or a woman, and the minimum stimulus duration necessary to accurately recognise which vowel was spoken. Results showed that reliable vowel recognition precedes reliable speaker-sex discrimination, thus questioning the use of speaker-sex information in compensating for speaker-sex-related acoustic variation in the voice. Furthermore, the pattern of performance across experiments in which the fundamental frequency and formant frequency information of speakers' voices were systematically varied was markedly different depending on whether the task was speaker-sex discrimination or vowel recognition. This argues for there being little relationship between perception of speaker sex (indexical information) and perception of what has been said (linguistic information) at short durations.
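The abstract does not specify how the "minimum duration" was estimated; as a rough sketch only, one common approach is to interpolate across tested durations to find where accuracy first reaches a criterion (say 75% correct). All durations, accuracies, and the criterion below are hypothetical:

```python
import numpy as np

def duration_at_criterion(durations_ms, accuracy, criterion=0.75):
    """Linearly interpolate the shortest duration at which accuracy first
    reaches the criterion. Returns None if the criterion is never reached."""
    if accuracy[0] >= criterion:
        return float(durations_ms[0])
    for i in range(1, len(durations_ms)):
        if accuracy[i] >= criterion:
            d0, d1 = durations_ms[i - 1], durations_ms[i]
            a0, a1 = accuracy[i - 1], accuracy[i]
            return float(d1) if a1 == a0 else float(d0 + (criterion - a0) * (d1 - d0) / (a1 - a0))
    return None

# Hypothetical group accuracies for vowel recognition vs. speaker-sex discrimination
durations = np.array([2, 4, 8, 16, 32, 64])  # ms
vowel_acc = np.array([0.40, 0.55, 0.72, 0.85, 0.93, 0.97])
sex_acc = np.array([0.50, 0.52, 0.60, 0.71, 0.86, 0.95])
print(duration_at_criterion(durations, vowel_acc), duration_at_criterion(durations, sex_acc))
```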
399.
Driving a vehicle comprises multiple tasks (e.g., monitoring the environment around the vehicle, planning the trajectory, and controlling the vehicle) and requires the allocation of capacity-limited attentional resources to visual, cognitive, and action processing; otherwise, the quality of task performance deteriorates, increasing the risk of near-accidents or crashes. The present study proposes that variations in the total amount, as well as the individual amounts, of attentional resources allocated to visual, cognitive, and action processing depending on the driving situation can be objectively estimated by the combined use of three physiological measures: (1) the duration of eye blinks during driving, (2) the size of eye-fixation-related potentials (EFRPs), i.e., event-related potentials (ERPs) time-locked to the offset of saccadic eye movements during driving, and (3) the size of auditory-evoked potentials (AEPs), i.e., ERPs time-locked to the onset of task-unrelated auditory stimuli discretely presented during driving. We implemented these measures while participants (N = 16) drove a vehicle on a slalom course under four driving conditions defined by a combination of two levels of speed requirement (fast and slow) and two levels of path width (narrow and wide). The findings suggested that (1) when driving at fast compared to slow speeds, the total amount of resources allocated to overall processing increased, consisting of an increase in the resources allocated to cognitive (and possibly action) processing and a decrease in the resources allocated to visual processing, and (2) when driving on narrow compared to wide paths, the total amount of resources allocated to overall processing remained almost the same (due to complementary speed reduction), consisting of an increase in the resources allocated to visual processing, a decrease in the resources allocated to cognitive processing, and almost unchanged resources allocated to action processing. The driver’s resource management strategies indicated by these results, as well as the utility and limitations of the proposed method, are discussed.
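Both the EFRP and AEP measures rest on the same epoch-averaging step: segments of EEG are time-locked to an event (saccade offset or tone onset), baseline-corrected, and averaged. A minimal, hypothetical sketch of that step, assuming a single-channel EEG trace and a list of event sample indices (not the authors' actual processing pipeline):

```python
import numpy as np

def average_erp(eeg, event_samples, fs, pre_s=0.1, post_s=0.5):
    """Average EEG epochs time-locked to event onsets, with baseline
    correction over the pre-event window. Returns the mean epoch."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for s in event_samples:
        if s - pre < 0 or s + post > len(eeg):
            continue                         # skip events too close to the edges
        epoch = eeg[s - pre : s + post].copy()
        epoch -= epoch[:pre].mean()          # baseline correction
        epochs.append(epoch)
    return np.mean(epochs, axis=0)

fs = 500                                      # Hz, hypothetical sampling rate
eeg = np.random.normal(0.0, 10.0, fs * 60)    # 1 minute of synthetic single-channel EEG (µV)
tone_onsets = np.arange(fs, fs * 55, 2 * fs)  # task-unrelated tones every 2 s
aep = average_erp(eeg, tone_onsets, fs)
print(aep.shape)
```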