Search results: 281 items (search time: 116 ms)
71.
The integrity of phonological representation/processing in dyslexic children was explored with a gating task in which children listened to successively longer segments (gates) of a word. At each gate, the task was to decide what the entire word was. Responses were scored for overall accuracy as well as the children's sensitivity to coarticulation from the final consonant. As a group, dyslexic children were less able than normally achieving readers to detect coarticulation present in the vowel portion of the word, particularly on the most difficult items, namely those ending in a nasal sound. Hierarchical regression and path analyses indicated that phonological awareness mediated the relation of gating and general language ability to word and pseudoword reading ability.
72.
Processing local elements of hierarchical patterns at a superior level, and independently of an intact global influence, is a well-established characteristic of autistic visual perception. However, whether this confirmed finding has an equivalent in the auditory modality is still unknown. To fill this gap, 18 autistic and 18 typical participants completed a melodic decision task in which global- and local-level information could be congruent or incongruent. While focusing either on the global level (melody) or the local level (groups of notes) of hierarchical auditory stimuli, participants had to decide whether the focused level was rising or falling. Autistic participants showed intact global processing, superior performance when processing local elements, and reduced global-to-local interference compared with typical participants. These results are the first to demonstrate that autistic processing of auditory hierarchical stimuli closely parallels processing of visual hierarchical stimuli. When analyzing complex auditory information, autistic participants present a local bias and a more autonomous local processing, but not to the detriment of global processing.
73.
Evidence supports the use of rhythmic external auditory signals to improve gait in PD patients (Arias & Cudeiro, 2008; Kenyon & Thaut, 2000; McIntosh, Rice, & Thaut, 1994; McIntosh et al., 1997; Morris, Iansek, & Matyas, 1994; Thaut, McIntosh, & Rice, 1997; Suteerawattananon, Morris, Etnyre, Jankovic, & Protas, 2004; Willems, Nieuwboer, Chavert, & Desloovere, 2006). However, few prototypes are available for daily use, and to our knowledge, none utilize a smartphone application allowing individualized sounds and cadence. Therefore, we analyzed the effects on gait of Listenmee®, an intelligent glasses system with a portable auditory device, and present its smartphone application, the Listenmee app®, which offers over 100 different sounds and an adjustable metronome to individualize the cueing rate, as well as its smartwatch with an accelerometer to detect the magnitude and direction of proper acceleration and to track calorie count, sleep patterns, step count, and daily distances. The present study included patients with idiopathic PD who presented gait disturbances, including freezing. Auditory rhythmic cues were delivered through Listenmee®. Performance was analyzed in a motion and gait analysis laboratory. The results revealed significant improvements in gait performance on three major dependent variables: walking speed by 38.1%, cadence by 28.1%, and stride length by 44.5%. Our findings suggest that auditory cueing through Listenmee® may significantly enhance gait performance. Further studies are needed to elucidate the potential role and maximize the benefits of these portable devices.
74.
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, left-hemisphere P2 activation was greater in AP and RP musicians than in NMs, and also greater in AP than in RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and appears to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing.
75.
In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up, in number of syllables, and in stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched to the real words in these respects. The BALDEY (“biggest auditory lexical decision experiment yet”) data file includes response times and accuracy rates, with, for each item, morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
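The "point at which a word has no further neighbours" mentioned above is commonly operationalized as a uniqueness point: the earliest prefix position at which a word diverges from every other entry in the lexicon. A minimal sketch of one such measure, using a tiny illustrative Dutch-like lexicon (the words and the character-per-segment simplification are assumptions, not BALDEY data or its actual measures):

```python
# Hedged sketch: a simple uniqueness-point measure over orthographic prefixes.
# BALDEY's measures are defined over phonemic segmentations; here each character
# stands in for one segment, purely for illustration.

def uniqueness_point(word: str, lexicon: set) -> int:
    """Return the 1-based prefix length at which `word` no longer shares a
    prefix with any other lexicon entry; len(word) if it never diverges."""
    others = lexicon - {word}
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if not any(other.startswith(prefix) for other in others):
            return i
    return len(word)

lexicon = {"kat", "kater", "kamer", "boot"}
print(uniqueness_point("boot", lexicon))   # → 1: diverges at the first segment
print(uniqueness_point("kamer", lexicon))  # → 3: shares "ka" with kat/kater
```

In a response-time analysis, such a measure would typically enter as a predictor of lexical decision latency, the idea being that earlier divergence permits earlier recognition.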
76.
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants' biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non-linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.
77.
Much research has explored developing sound representations in language, but less work addresses developing representations of other sound patterns. This study examined preschool children's musical representations using two different tasks: discrimination and sound-picture association. Melodic contour, a musically relevant property, and instrumental timbre, which is (arguably) less musically relevant, were tested. In Experiment 1, children failed to associate cartoon characters to melodies with maximally different pitch contours, with no advantage for melody preexposure. Experiment 2 also used different-contour melodies and found good discrimination, whereas association was at chance. Experiment 3 replicated Experiment 2, but with a large timbre change instead of a contour change. Here, discrimination and association were both excellent. Preschool-aged children may have stronger or more durable representations of timbre than contour, particularly in more difficult tasks. Reasons for weaker association of contour than timbre information are discussed, along with implications for auditory development.
78.
People report suggested misinformation about a previously witnessed event for manifold reasons, such as social pressure, lack of memory of the original aspect, or a firm belief that they remember the misinformation from the witnessed event. In our experiments (N = 429), which follow Loftus's paradigm, we tried to disentangle the reasons for reporting a central and a peripheral piece of misinformation in a recognition task by examining (a) the impact a warning about possible misinformation has on the error rate, and (b) whether once-reported misinformation was actually attributed to the witnessed event in a later source-monitoring (SM) task. Overall, a misinformation effect was found for both items. The warning strongly reduced the misinformation effect, but only for the central item. In contrast, reports of the peripheral misinformation were correctly attributed to the misinformation source or, at least, ascribed to guesswork much more often than the central ones. As a consequence, after the SM task, the initially higher error rate for the peripheral item was even lower than that of the central item. Results convincingly show that the reasons for reporting misinformation, and correspondingly also the potential to avoid them in legal settings, depend on the centrality of the misinformation.
79.
Two experiments investigated participants' recognition memory for word content, while varying vocal characteristics, and for vocal characteristics alone. In Experiment 1, participants performed an auditory recognition task in which they identified whether a spoken word was "new", "old" (repeated word, repeated voice), or "similar" (repeated word, new voice). Results showed that word recognition accuracy was lower for similar trials than old trials. In Experiment 2, participants performed an auditory recognition task in which they identified whether or not a phrase was spoken in an old or new voice, with repetitions occurring after a variable number of intervening stimuli. Results showed that recognition accuracy was lower when old voices spoke an alternate message than a repeated message, and accuracy decreased as a function of the number of intervening items. Overall, the results suggest that speech recognition is better for lexical content than vocal characteristics alone.
80.
Understanding and modeling the influence of mobile phone use on pedestrian behaviour is important for several safety and performance evaluations. Mobile phone use affects pedestrians' perception of the surrounding traffic environment and reduces situation awareness. This study investigates the effect of distraction due to mobile phone use (i.e., visual and auditory) on pedestrian reaction time to the pedestrian signal. Traffic video data were collected from four crosswalks in Canada and China. A multilevel mixed-effects accelerated failure time (AFT) approach is used to model pedestrian reaction times, with random intercepts capturing cluster-specific (country-level) heterogeneity. Potential factors influencing reaction time were investigated, including pedestrian demographic attributes, distraction characteristics, and environment-related parameters. Results show that pedestrian reaction times were longer in Canada than in China under both the non-distraction and distraction conditions. Auditory and visual distractions increase pedestrian reaction time by 67% and 50% on average, respectively. Pedestrian reactions were slower at road-segment crosswalks compared with intersection crosswalks, at longer distraction durations, and for males aged over 40 compared with other pedestrians. Moreover, pedestrian reactions were faster at higher traffic awareness levels.
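In an AFT model, covariates act multiplicatively on the time scale, which is why the distraction effects above are reported as percentage increases in reaction time. A minimal sketch of that multiplicative structure, assuming a lognormal AFT with no censoring (which reduces to ordinary least squares on log-duration) and using simulated data whose built-in multipliers mirror the ~67% and ~50% increases reported above; this is not the study's multilevel model, the data, or its random-intercept structure:

```python
# Hedged sketch: multiplicative covariate effects in an AFT-style model.
# With lognormal durations and no censoring, regressing log(reaction time)
# on the covariates gives coefficients whose exponentials are the
# multiplicative effects on reaction time (e.g. exp(beta) ~ 1.67 = +67%).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
auditory = rng.integers(0, 2, n)          # 1 = auditory distraction
visual = rng.integers(0, 2, n)            # 1 = visual distraction
log_rt = (np.log(0.9)                     # assumed ~0.9 s baseline reaction time
          + np.log(1.67) * auditory       # +67% under auditory distraction
          + np.log(1.50) * visual         # +50% under visual distraction
          + rng.normal(0.0, 0.2, n))      # lognormal noise

# Fit the log-linear model: [intercept, auditory, visual]
X = np.column_stack([np.ones(n), auditory, visual])
beta, *_ = np.linalg.lstsq(X, log_rt, rcond=None)
print(np.round(np.exp(beta), 2))          # multiplicative effects on reaction time
```

The study's multilevel version would add a random intercept per country on the log-time scale, shifting each country's baseline while leaving the multiplicative distraction effects shared.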
Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)