61.
Evidence supports the use of rhythmic external auditory signals to improve gait in PD patients (Arias & Cudeiro, 2008; Kenyon & Thaut, 2000; McIntosh, Rice, & Thaut, 1994; McIntosh et al., 1997; Morris, Iansek, & Matyas, 1994; Thaut, McIntosh, & Rice, 1997; Suteerawattananon, Morris, Etnyre, Jankovic, & Protas, 2004; Willems, Nieuwboer, Chavert, & Desloovere, 2006). However, few prototypes are available for daily use, and to our knowledge, none uses a smartphone application allowing individualized sounds and cadence. We therefore analyzed the effects on gait of Listenmee®, an intelligent glasses system with a portable auditory device, and present its smartphone application, the Listenmee app®, which offers over 100 different sounds and an adjustable metronome to individualize the cueing rate, as well as its smartwatch, whose accelerometer detects the magnitude and direction of proper acceleration and tracks calorie count, sleep patterns, step count, and daily distance. The present study included patients with idiopathic PD who presented gait disturbances, including freezing. Auditory rhythmic cues were delivered through Listenmee®, and performance was analyzed in a motion and gait analysis laboratory. The results revealed significant improvements in three major dependent gait variables: walking speed by 38.1%, cadence by 28.1%, and stride length by 44.5%. Our findings suggest that auditory cueing through Listenmee® may significantly enhance gait performance. Further studies are needed to elucidate the potential role and maximize the benefits of these portable devices.
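To make the cueing mechanism concrete, the sketch below shows how an adjustable metronome of the kind described could schedule auditory cues at an individualized cadence. This is a minimal illustration only: Listenmee®'s actual implementation is not public, and the function names, the print-based playback stand-in, and the +10% cadence increment (a common choice in the rhythmic auditory stimulation literature) are all assumptions.

```python
import time

def cue_interval_s(cadence_steps_per_min: float) -> float:
    """Convert a cueing cadence (steps/min) into the interval between cues."""
    return 60.0 / cadence_steps_per_min

def run_metronome(cadence_steps_per_min: float, duration_s: float, emit=print):
    """Emit one auditory cue per expected step for `duration_s` seconds.

    `emit` stands in for actual sound playback (the app offers over 100
    sounds; here we simply print a tick).
    """
    interval = cue_interval_s(cadence_steps_per_min)
    n_cues = int(duration_s / interval)
    for i in range(n_cues):
        emit(f"tick {i + 1}")
        time.sleep(interval)

# Example: cue at 10% above a measured baseline cadence of 96 steps/min.
if __name__ == "__main__":
    baseline = 96.0
    run_metronome(cadence_steps_per_min=1.10 * baseline, duration_s=5.0)
```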
62.
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, left-hemisphere P2 activation was greater in AP and RP musicians than in NMs, and greater in AP than in RP musicians. The NM group was slower than the musicians in generating compensatory vocal reactions to feedback pitch perturbation, and its members failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and appears to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking and singing.
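The pitch perturbation paradigm rests on a simple relation between a shift expressed in cents and the frequency ratio applied to the voice feedback. The sketch below illustrates that arithmetic; the specific shift magnitude and baseline F0 are illustrative values, not the study's parameters.

```python
def cents_to_ratio(cents: float) -> float:
    """Frequency ratio for a pitch shift in cents (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

def shifted_f0(f0_hz: float, cents: float) -> float:
    """Fundamental frequency heard in the feedback after a shift of `cents`."""
    return f0_hz * cents_to_ratio(cents)

# Example: a +100-cent (one-semitone) upward perturbation of a 220 Hz voice
# is heard as ~233.1 Hz; compensation would drive the produced F0 downward.
print(round(shifted_f0(220.0, +100.0), 1))  # 233.1
```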
63.
In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up, number of syllables, and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched to the real words in these respects. The BALDEY (“biggest auditory lexical decision experiment yet”) data file includes response times and accuracy rates, together with, for each item, morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The publicly available BALDEY database lends itself to many further analyses.
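As an illustration of the second analysis, the sketch below compares how well each of the four frequency measures predicts (log) lexical decision response times with a simple per-measure regression. The file name and column names are placeholders, not the actual BALDEY field names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; the real BALDEY fields may differ.
df = pd.read_csv("baldey.csv")  # one row per word item
df["log_rt"] = np.log(df["mean_rt_ms"])

freq_measures = ["freq_written", "freq_spoken", "freq_subtitle", "freq_rating"]
for measure in freq_measures:
    # log1p guards against zero counts in the corpus-based measures.
    df[f"log_{measure}"] = np.log1p(df[measure])
    model = smf.ols(f"log_rt ~ log_{measure}", data=df).fit()
    print(f"{measure}: R^2 = {model.rsquared:.3f}")
```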
64.
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments; these chimeras mimic the complexity of speech without being speech. We found that, while both groups showed the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants' biases, so that the French group showed no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non-linguistic sounds, and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.
65.
Much research has explored developing sound representations in language, but less work addresses developing representations of other sound patterns. This study examined preschool children's musical representations using two different tasks: discrimination and sound–picture association. Two properties were tested: melodic contour, a musically relevant property, and instrumental timbre, which is arguably less musically relevant. In Experiment 1, children failed to associate cartoon characters with melodies that had maximally different pitch contours, with no advantage for melody preexposure. Experiment 2 also used different-contour melodies and found good discrimination, whereas association was at chance. Experiment 3 replicated Experiment 2 but with a large timbre change instead of a contour change; here, discrimination and association were both excellent. Preschool-aged children may have stronger or more durable representations of timbre than of contour, particularly in more difficult tasks. Reasons for the weaker association of contour than of timbre information are discussed, along with implications for auditory development.
66.
Two experiments investigated participants' recognition memory for word content while varying vocal characteristics, and for vocal characteristics alone. In Experiment 1, participants performed an auditory recognition task in which they identified whether a spoken word was “new”, “old” (repeated word, repeated voice), or “similar” (repeated word, new voice). Results showed that word recognition accuracy was lower for similar trials than for old trials. In Experiment 2, participants performed an auditory recognition task in which they identified whether a phrase was spoken in an old or a new voice, with repetitions occurring after a variable number of intervening stimuli. Results showed that recognition accuracy was lower when old voices spoke an alternate message than when they spoke a repeated message, and that accuracy decreased as a function of the number of intervening items. Overall, the results suggest that speech recognition is better for lexical content than for vocal characteristics alone.
67.
Understanding and modeling the influence of mobile phone use on pedestrian behaviour is important for several safety and performance evaluations. Mobile phone use affects pedestrians' perception of the surrounding traffic environment and reduces situation awareness. This study investigates the effect of distraction due to mobile phone use (visual and auditory) on pedestrian reaction time to the pedestrian signal. Traffic video data were collected from four crosswalks in Canada and China. A multilevel mixed-effects accelerated failure time (AFT) approach is used to model pedestrian reaction times, with random intercepts capturing cluster-specific (country-level) heterogeneity. Potential factors influencing reaction time were investigated, including pedestrian demographic attributes, distraction characteristics, and environment-related parameters. Results show that pedestrian reaction times were longer in Canada than in China under both the non-distraction and distraction conditions. Auditory and visual distractions increased pedestrian reaction time by 67% and 50% on average, respectively. Pedestrian reactions were slower at road-segment crosswalks than at intersection crosswalks, at longer distraction durations, and for males aged over 40 compared with other pedestrians. Moreover, pedestrian reactions were faster at higher traffic awareness levels.
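A fully multilevel mixed-effects AFT model requires specialized software, but the fixed-effects core of the analysis can be sketched with the lifelines package, with a country dummy standing in for the random intercept. The file name and column names below are illustrative, not the study's actual data dictionary.

```python
import pandas as pd
from lifelines import WeibullAFTFitter

# Hypothetical columns: reaction_time_s (duration), observed (event indicator),
# plus covariates from the study (distraction type, country, crossing site type).
df = pd.read_csv("pedestrian_reactions.csv")
df = pd.get_dummies(
    df, columns=["distraction", "country", "site_type"],
    drop_first=True, dtype=float,
)

aft = WeibullAFTFitter()
aft.fit(df, duration_col="reaction_time_s", event_col="observed")
aft.print_summary()

# In an AFT model, exp(coef) is a time ratio: for example, a coefficient of
# 0.51 on an auditory-distraction dummy gives exp(0.51) ~ 1.67, i.e. roughly
# the 67% longer reaction times reported above.
```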
68.
Research suggests a relationship between auditory distraction (such as environmental noise or the vocal component of a cell phone conversation) and a decreased ability to detect and localize approaching vehicles. What is unclear is whether auditory vehicle perception is impacted more by distractions reliant on listening or by distractions reliant on speaking (analogous to the two components of a vocal cell phone conversation). In two experiments, adult participants listened for approaching vehicle noises while performing listening- and speaking-based secondary tasks. Participants were asked to identify when they first detected an approaching vehicle and when they no longer felt safe to cross in front of it. In both experiments, the speaking task resulted in significantly later detection of approaching vehicles and riskier crossing thresholds than the no-distraction and listening conditions. The listening secondary task differed significantly from the control condition in Experiment 1, but not in Experiment 2. Overall, our results suggest that auditory distractions, particularly those reliant on speaking, negatively impact pedestrian safety in situations where visual information is minimal. These results may guide future research and policy on the safety impacts of secondary tasks.
69.
Perceptual grouping is fundamental to many auditory processes. The Iambic–Trochaic Law (ITL) is a default grouping strategy whereby rhythmic alternations of duration are perceived iambically (weak-strong), while alternations of intensity are perceived trochaically (strong-weak). Some argue that the ITL is experience dependent: for instance, French speakers follow the ITL, but not as consistently as German speakers. We hypothesized that learning about prosodic patterns, such as word stress, modulates this rhythmic grouping. We tested this idea by training French adults on a German-like stress contrast. Individuals who showed better phonological learning had more ITL-like grouping, particularly over duration cues. In a non-phonological condition, French adults were trained on identical stimuli but learned to attend to acoustic variation that was not linguistic; here, no learning effects were observed. Results thus suggest that phonological learning can modulate low-level auditory grouping phenomena, but that it is constrained by individuals' ability to learn from short-term training.
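The ITL's two default mappings can be expressed directly. The sketch below generates the two alternating stimulus types used in grouping tasks of this kind; the duration and intensity values are illustrative, as the abstract does not give the study's actual stimulus parameters.

```python
def alternating_sequence(n_tones: int, tone_a: dict, tone_b: dict) -> list:
    """Alternate two tone specs; under the ITL, how listeners group the
    resulting pairs depends on which acoustic cue alternates."""
    return [tone_a if i % 2 == 0 else tone_b for i in range(n_tones)]

# Duration alternation (equal intensity): grouped iambically, weak-strong,
# with the long tone heard as ending each group.
duration_seq = alternating_sequence(
    8, {"dur_ms": 200, "db": 70}, {"dur_ms": 400, "db": 70}
)
# Intensity alternation (equal duration): grouped trochaically, strong-weak,
# with the loud tone heard as starting each group.
intensity_seq = alternating_sequence(
    8, {"dur_ms": 300, "db": 75}, {"dur_ms": 300, "db": 65}
)
```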
70.
Existing driver models mainly account for drivers' responses to visual cues in manually controlled vehicles. The present study is one of the few attempts to model drivers' responses to auditory cues in automated vehicles. It developed a mathematical model to quantify the effects of the characteristics of auditory cues on drivers' responses to takeover requests in automated vehicles. The study enhanced the queuing network-model human processor (QN-MHP) by modeling the effects of different auditory warnings, including speech, spearcon, and earcon. Different levels of intuitiveness and urgency of each sound were used to estimate psychological parameters such as perceived trust and urgency. The model's predictions of takeover time were validated against an experimental driving simulation study, with a resulting R-squared of 0.925 and a root-mean-square error of 73 ms. The developed mathematical model can contribute to modeling the effects of auditory cues and to providing design guidelines for standard takeover request warnings in automated vehicles.
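The reported fit statistics are the standard R-squared and root-mean-square error between observed and model-predicted takeover times. The sketch below computes both from placeholder values (not the study's data), simply to make the validation step explicit.

```python
import numpy as np

def r_squared(observed: np.ndarray, predicted: np.ndarray) -> float:
    """Coefficient of determination between observed and predicted values."""
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(observed: np.ndarray, predicted: np.ndarray) -> float:
    """Root-mean-square error in the same units as the inputs."""
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

# Placeholder takeover times in ms (not the study's data): one pair per
# warning condition, e.g. speech / spearcon / earcon at two urgency levels.
observed = np.array([2310.0, 2150.0, 2480.0, 2040.0, 2620.0, 2230.0])
predicted = np.array([2290.0, 2210.0, 2430.0, 2080.0, 2550.0, 2300.0])
print(f"R^2 = {r_squared(observed, predicted):.3f}, "
      f"RMSE = {rmse(observed, predicted):.0f} ms")
```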