Subscription full text: 241 articles
Free access: 12 articles
Free access (domestic): 5 articles
Total: 258 articles
Articles by publication year:
2024: 1 article
2023: 2 articles
2022: 1 article
2021: 6 articles
2020: 6 articles
2019: 8 articles
2018: 4 articles
2017: 13 articles
2016: 10 articles
2015: 6 articles
2014: 18 articles
2013: 44 articles
2012: 11 articles
2011: 30 articles
2010: 8 articles
2009: 20 articles
2008: 14 articles
2007: 9 articles
2006: 13 articles
2005: 6 articles
2004: 7 articles
2003: 6 articles
2002: 4 articles
2001: 1 article
2000: 1 article
1998: 1 article
1992: 1 article
1988: 1 article
1978: 2 articles
1977: 2 articles
1976: 2 articles
A total of 258 query results (search time: 7 ms)
71.
A classical experiment on auditory stream segregation is revisited, reconceptualising perceptual ambiguity in terms of affordances and musical engagement. Specifically, three experiments are reported that investigate how listeners’ perception of auditory sequences changes dynamically depending on emotional context. The experiments show that listeners adapt their attention to higher or lower pitched streams (Experiments 1 and 2) and the degree of auditory stream integration or segregation (Experiment 3) in accordance with the presented emotional context. Participants with and without formal musical training show this influence, although to differing degrees (Experiment 2). Contributing evidence to the literature on interactions between emotion and cognition, these experiments demonstrate how emotion is an intrinsic part of music perception and not merely a product of the listening experience.
72.
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants’ biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non-linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.
73.
Two experiments investigated participants’ recognition memory for word content while varying vocal characteristics, and for vocal characteristics alone. In Experiment 1, participants performed an auditory recognition task in which they identified whether a spoken word was “new”, “old” (repeated word, repeated voice), or “similar” (repeated word, new voice). Results showed that word recognition accuracy was lower for similar trials than for old trials. In Experiment 2, participants performed an auditory recognition task in which they identified whether a phrase was spoken in an old or a new voice, with repetitions occurring after a variable number of intervening stimuli. Results showed that recognition accuracy was lower when old voices spoke an alternate message than a repeated message, and accuracy decreased as a function of the number of intervening items. Overall, the results suggest that speech recognition is better for lexical content than for vocal characteristics alone.
74.
Understanding and modeling the influence of mobile phone use on pedestrian behaviour is important for several safety and performance evaluations. Mobile phone use affects pedestrian perception of the surrounding traffic environment and reduces situation awareness. This study investigates the effect of distraction due to mobile phone use (i.e., visual and auditory) on pedestrian reaction time to the pedestrian signal. Traffic video data were collected from four crosswalks in Canada and China. A multilevel mixed-effects accelerated failure time (AFT) approach is used to model pedestrian reaction times, with random intercepts capturing cluster-specific (country-level) heterogeneity. Potential factors influencing reaction time were investigated, including pedestrian demographic attributes, distraction characteristics, and environment-related parameters. Results show that pedestrian reaction times were longer in Canada than in China under both the non-distraction and distraction conditions. Auditory and visual distractions increase pedestrian reaction time by 67% and 50% on average, respectively. Pedestrian reactions were slower at road-segment crosswalks compared to intersection crosswalks, for longer distraction durations, and for males aged over 40 compared to other pedestrians. Moreover, pedestrian reactions were faster at higher traffic awareness levels.
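As a rough illustration of the modeling approach described in this abstract, the sketch below fits an accelerated failure time (AFT) model to pedestrian reaction times using the lifelines library in Python. All column names and data values are hypothetical, and the study's multilevel random intercept for country is approximated here by a simple fixed country indicator; this is a minimal sketch of the technique, not the authors' implementation.

```python
# Minimal AFT sketch (hypothetical data, not the study's dataset).
# The paper's multilevel random intercept is approximated by a fixed
# "country_canada" indicator, which the lifelines AFT fitters accept
# as an ordinary covariate.
import pandas as pd
from lifelines import LogNormalAFTFitter

df = pd.DataFrame({
    "reaction_time":  [1.9, 2.1, 2.8, 3.4, 1.7, 1.2, 1.4, 2.0, 2.3, 1.3],  # seconds
    "observed":       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],   # 1 = reaction observed (no censoring)
    "country_canada": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],   # 1 = Canada, 0 = China
    "distracted":     [0, 0, 1, 1, 0, 0, 0, 1, 1, 0],   # phone distraction present
})

aft = LogNormalAFTFitter()
aft.fit(df, duration_col="reaction_time", event_col="observed")
aft.print_summary()
# On the log-time scale, a positive coefficient (exp(coef) > 1) lengthens
# reaction time (e.g., distraction); a negative one shortens it.
```

In a full multilevel analysis, the country effect would enter as a random intercept (for example, a shared frailty term) rather than a fixed covariate as in this simplified sketch.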
75.
Research suggests a relationship between auditory distraction (such as environmental noises or a vocal cell phone conversation) and a decreased ability to detect and localize approaching vehicles. What is unclear is whether auditory vehicle perception is impacted more by distractions reliant on listening or distractions reliant on speaking (analogous to the two components of a vocal cell phone conversation). In two experiments, adult participants listened for approaching vehicle noises while performing listening- and speaking-based secondary tasks. Participants were tasked with identifying when they first detected an approaching vehicle and when they no longer felt safe to cross in front of the approaching vehicle. In both experiments, the speaking task resulted in significantly later detection of approaching vehicles and riskier crossing thresholds than the no-distraction and listening conditions. The listening secondary task significantly differed from the control condition in Experiment 1, but not in Experiment 2. Overall, our results suggest that auditory distractions, particularly those reliant on speaking, negatively impact pedestrian safety in situations where visual information is minimal. The results may provide guidance for future research and policy about the safety impacts of secondary tasks.
76.
Existing driver models mainly account for drivers’ responses to visual cues in manually controlled vehicles. The present study is one of the few attempts to model drivers’ responses to auditory cues in automated vehicles. It developed a mathematical model to quantify the effects of the characteristics of auditory cues on drivers’ responses to takeover requests in automated vehicles. The current study enhanced the queuing network-model human processor (QN-MHP) by modeling the effects of different auditory warnings, including speech, spearcon, and earcon. Different levels of intuitiveness and urgency of each sound were used to estimate psychological parameters such as perceived trust and urgency. The model predictions of takeover time were validated in an experimental driving-simulation study, with a resulting R² of 0.925 and a root-mean-square error of 73 ms. The developed mathematical model can contribute to modeling the effects of auditory cues and to providing design guidelines for standard takeover request warnings in automated vehicles.
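To make the reported fit statistics concrete, the following sketch shows how takeover-time predictions could be compared against observed values using R² and root-mean-square error, the two measures cited in this abstract. The numbers below are hypothetical placeholders, not the study's data.

```python
# Hypothetical example of computing RMSE and R² for predicted vs. observed
# takeover times; values are placeholders, not the study's measurements.
import numpy as np

observed  = np.array([2.10, 2.45, 3.00, 2.80, 3.40, 2.60])  # seconds
predicted = np.array([2.05, 2.50, 2.95, 2.70, 3.50, 2.65])  # model output

residuals = observed - predicted
rmse = np.sqrt(np.mean(residuals ** 2))                      # root-mean-square error
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot                            # coefficient of determination

print(f"RMSE = {rmse * 1000:.0f} ms, R^2 = {r_squared:.3f}")
```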
77.
The human capacity for processing speech is remarkable, especially given that information in speech unfolds over multiple time scales concurrently. Similarly notable is our ability to filter out extraneous sounds and focus our attention on one conversation, epitomized by the ‘Cocktail Party’ effect. Yet the neural mechanisms underlying on-line speech decoding and attentional stream selection are not well understood. We review findings from behavioral and neurophysiological investigations that underscore the importance of the temporal structure of speech for achieving these perceptual feats. We discuss the hypothesis that entrainment of ambient neuronal oscillations to speech’s temporal structure, across multiple time scales, serves to facilitate its decoding and underlies the selection of an attended speech stream over other competing input. In this regard, speech decoding and attentional stream selection are examples of ‘Active Sensing’, emphasizing an interaction between proactive and predictive top-down modulation of neuronal dynamics and bottom-up sensory input.
78.
The present study investigated whether the neural correlates for auditory feedback control of vocal pitch can be shaped by tone language experience. Event-related potentials (P2/N1) were recorded from adult native speakers of Mandarin and Cantonese who heard their voice auditory feedback shifted in pitch by −50, −100, −200, or −500 cents while they sustained the vowel sound /u/. Cantonese speakers produced larger P2 amplitudes to −200 or −500 cents stimuli than Mandarin speakers, but this language effect failed to reach significance in the case of −50 or −100 cents. Moreover, Mandarin speakers produced shorter N1 latencies over the left hemisphere than the right hemisphere, whereas Cantonese speakers did not. These findings demonstrate that neural processing of auditory pitch feedback in vocal motor control is subject to language-dependent neural plasticity, suggesting that cortical mechanisms of auditory-vocal integration can be shaped by tone language experience.
79.
The effect of gender on the N1-P2 auditory complex was examined during listening and speaking with altered auditory feedback. Fifteen normal-hearing adult males and 15 females participated. N1-P2 components were evoked while participants listened to self-produced nonaltered and frequency-shifted /a/ tokens and while they produced /a/ tokens under nonaltered auditory feedback (NAF), frequency-altered feedback (FAF), and delayed auditory feedback (DAF; 50 and 200 ms). During speech production, females exhibited earlier N1 latencies during 50 ms DAF and earlier P2 latencies during 50 ms DAF and FAF. There were no significant differences in N1-P2 amplitudes across conditions. Comparing listening with active speaking, N1 and P2 latencies were earlier for females, during speaking, and under NAF. N1-P2 amplitudes were significantly reduced during speech production. These findings are consistent with the notions that speech production suppresses auditory cortex responsiveness and that males and females process altered auditory feedback differently while speaking.
80.
It is not unusual to find it stated as a fact that the left hemisphere is specialized for the processing of rapid, or temporal, aspects of sound, and that the dominance of the left hemisphere in the perception of speech may be a consequence of this specialization. In this review we explore the history of this claim and assess the weight of evidence behind this assumption. We demonstrate that, instead of a supposed sensitivity of the left temporal lobe to the acoustic properties of speech, it is the right temporal lobe that shows a marked preference for certain properties of sounds, for example longer durations or variations in pitch. We finish by outlining some alternative factors that contribute to the left lateralization of speech perception.