Search results: 147 articles in total (141 subscription full-text, 6 free access); search time 15 ms.
Articles by year: 2023 (1), 2022 (1), 2021 (5), 2020 (4), 2019 (3), 2018 (4), 2017 (4), 2016 (13), 2015 (3), 2014 (11), 2013 (23), 2012 (3), 2011 (12), 2010 (3), 2009 (10), 2008 (13), 2007 (13), 2006 (7), 2005 (2), 2004 (6), 2003 (1), 2002 (2), 2001 (1), 1999 (1), 1997 (1).
71.
The automobile is currently the most popular and most frequently reported setting for listening to music. Yet little is known about the effects of music on driving performance, and only a handful of studies report that music-evoked arousal generated by loudness degrades driving performance. Music tempo can also increase driving risk by competing for attentional resources: the greater number of temporal events that must be processed, and the frequent temporal changes that require larger memory storage, distract the driver and impair optimal driving capacity. The current study explored the effects of music tempo on PC-controlled simulated driving. It was hypothesized that driving the simulator while listening to fast-paced music would increase heart rate (HR), decrease simulated lap time, and increase virtual traffic violations. The study found that music tempo consistently affected both simulated driving speed and perceived speed estimates: as the tempo of the background music increased, so did simulated driving speed and speed estimates. Further, the tempo of the background music consistently affected the frequency of virtual traffic violations: disregarded red traffic lights (RLs), lane crossings (LNs), and collisions (ACs) were most frequent with fast-paced music. No reliable statistics exist on the number of music-related automobile accidents and fatalities, and police investigators, drivers, and traffic researchers themselves are largely unaware of the risks associated with listening to music while driving. The findings point to a need for driver education courses to raise public awareness of the effects of music during driving.
72.
Sensorimotor synchronization with adaptively timed sequences
Most studies of human sensorimotor synchronization require participants to coordinate actions with computer-controlled event sequences that are unresponsive to their behavior. In the present research, the computer was programmed to carry out phase and/or period correction in response to asynchronies between taps and tones, and thereby to modulate adaptively the timing of the auditory sequence that human participants were synchronizing with, as a human partner might do. In five experiments the computer's error correction parameters were varied over a wide range, including "uncooperative" settings that a human synchronization partner could not (or would not normally) adopt. Musically trained participants were able to maintain synchrony in all these situations, but their behavior varied systematically as a function of the computer's parameter settings. Computer simulations were conducted to infer the human participants' error correction parameters from statistical properties of their behavior (means, standard deviations, auto- and cross-correlations). The results suggest that participants maintained a fixed gain of phase correction as long as the computer was cooperative, but changed their error correction strategies adaptively when faced with an uncooperative computer.
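The kind of model such simulations typically rest on is a linear phase/period-correction scheme in which each partner shifts its next onset by a fraction of the last asynchrony. The Python sketch below is a minimal illustration of that idea, with an adaptive metronome and a noisy tapper; all gains and noise values (alpha_human, alpha_comp, beta_comp, the noise SDs) are invented for illustration and are not the parameters estimated in the study.

    import numpy as np

    def simulate_sync(n_taps=200, period=500.0,
                      alpha_human=0.3,    # tapper's phase-correction gain (assumed)
                      alpha_comp=0.25,    # metronome's phase-correction gain (assumed)
                      beta_comp=0.0,      # metronome's period-correction gain (assumed)
                      timekeeper_sd=20.0, motor_sd=10.0, seed=0):
        """Simulate tapping with an adaptively timed metronome (times in ms).
        Both partners correct a fraction of the current tap-tone asynchrony."""
        rng = np.random.default_rng(seed)
        tone_t = 0.0
        tap_t = rng.normal(0.0, timekeeper_sd)
        comp_period = period
        asynchronies = []
        for _ in range(n_taps):
            asyn = tap_t - tone_t                      # tap onset minus tone onset
            asynchronies.append(asyn)
            # tapper: next interval = period minus a fraction of the asynchrony, plus noise
            tap_t += period - alpha_human * asyn \
                     + rng.normal(0.0, timekeeper_sd) + rng.normal(0.0, motor_sd)
            # adaptive metronome: shifts its phase toward the tap and may adjust its period
            comp_period += beta_comp * asyn
            tone_t += comp_period + alpha_comp * asyn
        return np.array(asynchronies)

    asyns = simulate_sync()
    print("mean asynchrony:", round(float(asyns.mean()), 1), "ms, SD:", round(float(asyns.std()), 1), "ms")
    print("lag-1 autocorrelation:", round(float(np.corrcoef(asyns[:-1], asyns[1:])[0, 1]), 2))

In a simulation-based fit of the kind the abstract describes, the human gains would be adjusted until the simulated means, standard deviations and (cross-)correlations match the observed ones.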
73.
A critical issue in perception is the manner in which top-down expectancies guide lower-level perceptual processes. In speech, a common paradigm is to construct continua ranging between two phonetic endpoints and to determine how higher-level lexical context influences the perceived boundary. We applied this approach to music, presenting participants with major/minor triad continua after brief musical contexts. Two experiments yielded results that differed from classic results in speech perception. In speech, context generally expands the category of the expected stimuli. We found the opposite in music: the major/minor boundary shifted toward the expected category, contracting it. Together, these experiments support the hypothesis that musical expectancy can feed back to affect lower-level perceptual processes. However, it may do so in a way that differs fundamentally from what has been seen in other domains.
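For readers unfamiliar with the paradigm, the boundary along such a continuum is usually estimated by fitting a psychometric (logistic) function to the proportion of "major" responses at each step; a context-induced shift then shows up as a change in the fitted midpoint. A minimal sketch, with invented response proportions that merely mimic the reported direction of the shift (the boundary moving toward the primed category):

    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(x, boundary, slope):
        """Logistic psychometric function: P('major' response) at continuum step x."""
        return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

    steps = np.arange(1, 8)  # 1 = clearly minor ... 7 = clearly major (hypothetical 7-step continuum)
    # Invented proportions of 'major' responses, for illustration only:
    p_major_after_minor_context = np.array([0.03, 0.10, 0.30, 0.60, 0.86, 0.96, 0.99])
    p_major_after_major_context = np.array([0.01, 0.04, 0.12, 0.35, 0.70, 0.90, 0.98])

    popt_minor, _ = curve_fit(psychometric, steps, p_major_after_minor_context, p0=[4.0, 1.0])
    popt_major, _ = curve_fit(psychometric, steps, p_major_after_major_context, p0=[4.0, 1.0])
    print("boundary after a minor context:", round(float(popt_minor[0]), 2))
    print("boundary after a major context:", round(float(popt_major[0]), 2))
    # A boundary that is higher after a major context means the 'major' category has
    # contracted toward its endpoint, i.e. the direction of shift reported in the abstract.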
74.
Battling over the public sphere: Islamic reactions to the music of today
This article analyses discussions about music in the new public sphere of the Arab world. First, it focuses on what states do to control musical expression and what functions religious actors have in that control, examining four cases: Saudi Arabia, Egypt, Lebanon and Palestine. The article then discusses theological arguments about music in the public sphere. The theologians are divided into three positions: moderates, hard-liners and liberals. It is argued that structural changes in the public sphere, especially with regard to new media and consumer culture, have caused a heated debate about music and morality. While hard-liners and moderates engage in a discussion about what is permitted and what is forbidden in Islam, liberals stress the importance of allowing competing norms. Examples of extremist violence against musicians are discussed and contextualised.
75.
Research on music preferences has yielded consistent results on the relationships among music preference, biographical variables, and personality. This study replicates some of these findings in a German-speaking sample (N = 1329). We conducted an online study using self-report assessments. We confirmed the five-factor structure of music genre preference and the three-factor structure of music attribute preference using exploratory and confirmatory factor analysis (EFA and CFA). Going beyond previous research, we showed that the three-factor structure of music attribute preference also replicates in self-report assessments. We then examined the relationships between personality and music preferences using structural equation modeling (SEM). This study contributes to the overall picture of music preference research and provides additional insight into the little-examined relationship between music attribute preference and personality.
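As an illustration of the exploratory step, the sketch below runs a five-factor EFA on a matrix of genre-preference ratings using scikit-learn; the data here are random placeholders standing in for real ratings, and the five-factor count simply mirrors the structure the abstract refers to.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Placeholder data: 1000 respondents x 20 genre-preference items on a 1-7 scale.
    # Random numbers are used only to make the example runnable; real survey
    # responses would be loaded here instead.
    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 8, size=(1000, 20)).astype(float)

    fa = FactorAnalysis(n_components=5, rotation="varimax")  # five assumed genre factors
    fa.fit(ratings)

    loadings = fa.components_.T          # items x factors loading matrix
    print(np.round(loadings[:5], 2))     # inspect the loadings of the first few items

The confirmatory model (CFA) and the structural model (SEM) would typically be fit in a dedicated package such as lavaan or semopy rather than scikit-learn.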
76.
77.
78.
This study investigated the interaction between sampling behavior and preference formation underlying subjective like and dislike decisions. Two-alternative forced-choice tasks were used with closely matched musical excerpts, and participants were free to listen and re-listen, i.e., to sample and resample each excerpt, until they reached a decision. We predicted that for decisions involving resampling, a sampling bias would be observed before the moment of conscious decision for like decisions only. The results indeed showed a gradually increasing sampling bias favouring the eventual choice (73%) before the moment of overt response for like decisions. Such a bias was absent for dislike decisions. Furthermore, participants reported stronger relative preferences for like decisions than for dislike decisions. This study thus demonstrates distinct differences in preference formation between like and dislike decisions, both in the implicit orienting/sampling processes prior to the conscious decision and in the subjective evaluation afterwards.
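The sampling-bias measure described here boils down to the fraction of listening devoted to the eventually chosen excerpt in successive bins leading up to the response; a minimal sketch of that computation, with a made-up single-trial listening record:

    import numpy as np

    def sampling_bias(samples, chosen, n_bins=5):
        """Fraction of samples devoted to the eventually chosen excerpt in
        successive time bins, the last bin ending at the moment of response.
        `samples` lists which excerpt (0 or 1) was played at each step."""
        bins = np.array_split(np.asarray(samples), n_bins)
        return [float(np.mean(b == chosen)) for b in bins]

    # Made-up listening record for one trial: the participant increasingly
    # resamples excerpt 1 before finally choosing it.
    print(sampling_bias([0, 1, 0, 1, 1, 0, 1, 1, 1, 1], chosen=1))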
79.
Electrophysiological studies investigating similarities between music and language perception have relied exclusively on the signal-averaging technique, which does not adequately represent oscillatory aspects of electrical brain activity that are relevant for higher cognition. The current study investigated patterns of brain oscillations during simultaneous processing of music and language, using visually presented sentences and auditorily presented chord sequences. Music-syntactically regular or irregular chord functions were presented in sync with syntactically or semantically correct or incorrect words. Irregular chord functions (presented simultaneously with a syntactically correct word) produced an early (150-250 ms) spectral power decrease over anterior frontal regions in the theta band (5-7 Hz) and a late (350-700 ms) power increase in both the delta and the theta band (2-7 Hz) over parietal regions. Syntactically incorrect words (presented simultaneously with a regular chord) elicited a similar late power increase in the delta-theta band over parietal sites, but no early effect. Interestingly, the late effect was significantly diminished when the language-syntactic and music-syntactic irregularities occurred at the same time. Further, a semantic violation occurring simultaneously with regular chords produced a significant increase in later delta-theta power over posterior regions; this effect was marginally decreased when the identical semantic violation occurred simultaneously with a music-syntactic violation. Altogether, these results show that low-frequency oscillatory networks are activated during the syntactic processing of both music and language and, further, that these networks may be shared.
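The band-limited power values reported here (theta at 150-250 ms, delta-theta at 350-700 ms) can be approximated from single-trial data by band-pass filtering and taking the Hilbert envelope; the sketch below is a simplified stand-in for the wavelet-based time-frequency analysis such studies normally use, with an assumed sampling rate and random numbers in place of real EEG.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_power(eeg, fs, band, window):
        """Mean power of a single-channel, single-trial signal in a frequency
        band (Hz) over a time window (seconds from the start of the epoch)."""
        b, a = butter(4, band, btype="bandpass", fs=fs)
        envelope = np.abs(hilbert(filtfilt(b, a, eeg)))
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        return float(np.mean(envelope[i0:i1] ** 2))

    fs = 500                                              # assumed sampling rate (Hz)
    epoch = np.random.default_rng(1).standard_normal(fs)  # 1 s of placeholder "EEG"
    theta_early = band_power(epoch, fs, (5, 7), (0.150, 0.250))       # early theta window
    delta_theta_late = band_power(epoch, fs, (2, 7), (0.350, 0.700))  # late delta-theta window
    print(theta_early, delta_theta_late)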
80.
Rhythm and metrical regularities are fundamental properties of music and poetry, and both are used in the interaction between infants and their parents. Music and rhythm perception have been shown to support auditory and language skills. Here we compare, for the first time within a single study, newborn infants' learning from a song, a nursery rhyme, and normal speech. Infants' electrophysiological brain responses revealed that the nursery rhyme condition facilitated learning from auditory input and thus led to successful detection of deviations. These findings suggest that the coincidence of prosodic cue patterns with the to-be-learned items is more important than the format of the input. Overall, the present results support the view that rhythm is likely to create a template for future events, which allows the auditory system to predict upcoming input and thus facilitates language development.