261.
Immediate serial recall of visually presented verbal stimuli is impaired by the presence of irrelevant auditory background speech, the so-called irrelevant speech effect. Two of the three main accounts of this effect place restrictions on when it will be observed, limiting its occurrence either to items processed by the phonological loop (the phonological loop hypothesis) or to items that are not too dissimilar from the irrelevant speech (the feature model). A third, the object-oriented episodic record (O-OER) model, requires only that the memory task involves seriation. The present studies test these three accounts by examining whether irrelevant auditory speech will interfere with a task that does not involve the phonological loop, does not use stimuli that are compatible with those to be remembered, but does require seriation. Two experiments found that irrelevant speech led to lower levels of performance in a visual statistical learning task, offering more support for the O-OER model and posing a challenge for the other two accounts.
262.
People often talk to themselves, yet very little is known about the functions of this self-directed speech. We explore effects of self-directed speech on visual processing by using a visual search task. According to the label feedback hypothesis (Lupyan, 2007a), verbal labels can change ongoing perceptual processing—for example, actually hearing “chair” compared to simply thinking about a chair can temporarily make the visual system a better “chair detector”. Participants searched for common objects while sometimes being asked to speak the target's name aloud. Speaking facilitated search, particularly when there was a strong association between the name and the visual target. As the discrepancy between the name and the target increased, speaking began to impair performance. Together, these results speak to the power of words to modulate ongoing visual processing.
263.
There is an ongoing debate about whether semantic interference effects in language production reflect competitive processes at the level of lexical selection or a post-lexical bottleneck, occupied in particular by response-relevant distractor words. To disentangle item-inherent categorical relatedness and task-related response relevance effects, we combined the picture–word interference (PWI) task with the conditional naming paradigm in an orthogonal design, varying categorical relatedness and task-related response relevance independently of each other. Participants were instructed to name only objects that are typically seen in or on the water (e.g. canoe) and refrain from naming objects that are typically located outside the water (e.g. bike), and vice versa. Semantic relatedness and the response relevance of superimposed distractor words were manipulated orthogonally. The pattern of results revealed no evidence for response relevance as a major source of semantic interference effects in the PWI paradigm. In contrast, our data demonstrate that semantic similarity beyond categorical relations is critical for interference effects to be observed. Together, these findings provide support for the assumption that lexical selection is competitive and that semantic interference effects in the PWI paradigm reflect this competition.
264.
Recent investigations of timing in motor control have been interpreted as support for the concept of brain modularity. According to this concept, the brain is organized into functional modules that contain mechanisms responsible for general processes. Keele and colleagues (Keele & Hawkins, 1982; Keele & Ivry, 1987; Keele, Ivry, & Pokorny, 1987; Keele, Pokorny, Corcos, & Ivry, 1985) demonstrated that the within-subject variability in cycle duration of repetitive movements is correlated across finger, forearm, and foot movements, providing evidence in support of a general timing module. The present study examines the notion of timing modularity of speech and nonspeech movements of the oral motor system as well as the manual motor system. Subjects produced repetitive movements with the finger, forearm, and jaw. In addition, a fourth task involved the repetition of a syllable. All tasks were to be produced with a 400-ms cycle duration; target duration was established with a pacing tone, which then was removed. For each task, the within-subject variability of the cycle duration was computed for the unpaced movements over 20 trials. Significant correlations were found between each pair of effectors and tasks. The present results provide evidence that common timing processes are involved not only in movements of the limbs, but also in speech and nonspeech movements of oral structures.
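The core analysis described in this abstract — computing each subject's within-subject variability per task and then correlating that variability across effectors — can be sketched as follows. This is a minimal illustration on simulated data, not the study's actual analysis; the data generation, variable names, and the 20-subject sample size are assumptions.

```python
import random
import statistics

def within_subject_sd(cycle_durations_ms):
    """Within-subject variability: SD of cycle durations over one trial set."""
    return statistics.stdev(cycle_durations_ms)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Simulate subjects who share a single "timing noise" level that drives
# variability in both a finger-tapping task and a syllable-repetition task,
# each targeting a 400-ms cycle duration over 20 unpaced cycles.
random.seed(0)
finger_sd, syllable_sd = [], []
for _ in range(20):  # 20 hypothetical subjects
    noise_ms = random.uniform(5, 25)  # shared timing noise for this subject
    finger = [400 + random.gauss(0, noise_ms) for _ in range(20)]
    syllable = [400 + random.gauss(0, noise_ms) for _ in range(20)]
    finger_sd.append(within_subject_sd(finger))
    syllable_sd.append(within_subject_sd(syllable))

r = pearson_r(finger_sd, syllable_sd)
print(f"correlation of timing variability across effectors: r = {r:.2f}")
```

Under a shared-timer model like the one the abstract tests, the correlation comes out positive because both tasks inherit the same per-subject noise; independent timers would drive it toward zero.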
265.
Models of motor control have highlighted the role of temporal predictive mechanisms in sensorimotor processing of speech and limb movement timing. We investigated how these mechanisms are affected in Parkinson’s disease (PD) while patients performed speech and hand motor reaction time tasks in response to sensory stimuli with predictable and unpredictable temporal dynamics. Results showed slower motor reaction times in PD vs. control in response to temporally predictable, but not unpredictable, stimuli. This effect was driven by faster motor responses to predictable stimuli in control subjects; no such speed-up was observed in the PD group. These findings point to a relationship between PD pathology and sensorimotor deficits in the temporal predictive mechanisms of timing during speech production and hand movement.
266.
To better understand how infants process complex auditory input, this study investigated whether 11-month-old infants perceive the pitch (melodic) or the phonetic (lyric) components within songs as more salient, and whether melody facilitates phonetic recognition. In a preferential looking paradigm, infants were tested with uni-dimensional and multi-dimensional songs in which either the pitch order or the syllable order of the stimuli varied. As a group, infants detected a change in pitch order in a 4-note sequence when the syllables were redundant (experiment 1), but did not detect the identical pitch change with variegated syllables (experiment 2). Infants were better able to detect a change in syllable order in a sung sequence (experiment 2) than the identical syllable change in a spoken sequence (experiment 1). These results suggest that by 11 months, infants cannot “ignore” phonetic information in the context of perceptually salient pitch variation. Moreover, the increased phonetic recognition in song contexts mirrors findings that demonstrate advantages of infant-directed speech. Findings are discussed in terms of how stimulus complexity interacts with the perception of sung speech in infancy.
267.
This study reports the occurrence of ‘tonal synchrony’ as a new dimension of early mother–infant interaction synchrony. The findings are based on a tonal and temporal analysis of vocal interactions between 15 mothers and their 3-month-old infants during 5 min of free-play in a laboratory setting. In total, 558 vocal exchanges were identified and analysed, of which 84% reflected harmonic or pentatonic series. Another 10% of the exchanges contained absolute and/or relative pitch and/or interval imitations. The total durations of dyads being in tonal synchrony were normally distributed (M = 3.71, SD = 2.44). Vocalisations based on harmonic series appeared organised around the major triad, containing significantly more simple frequency ratios (octave, fifth and third) than complex ones (non-major triad tones). Tonal synchrony and its characteristics are discussed in relation to infant-directed speech, communicative musicality, pre-reflective communication and its impact on the quality of early mother–infant interaction and child's development.
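The simple-versus-complex ratio distinction in this abstract can be made concrete with a small sketch that classifies the interval between two pitches against the major-triad ratios (octave 2:1, fifth 3:2, major third 5:4). The tolerance value and function name here are assumptions for illustration, not parameters from the study.

```python
# Simple frequency ratios associated with the major triad.
SIMPLE_RATIOS = {"octave": 2 / 1, "fifth": 3 / 2, "major third": 5 / 4}

def classify_interval(f1_hz, f2_hz, tolerance=0.03):
    """Return the name of the simple ratio this interval approximates, or None.

    The interval is taken as the ratio of the higher to the lower frequency,
    and matched against each simple ratio within a relative tolerance.
    """
    ratio = max(f1_hz, f2_hz) / min(f1_hz, f2_hz)
    for name, target in SIMPLE_RATIOS.items():
        if abs(ratio - target) / target <= tolerance:
            return name
    return None

print(classify_interval(220.0, 440.0))  # ratio 2.0  -> octave
print(classify_interval(220.0, 330.0))  # ratio 1.5  -> fifth
print(classify_interval(220.0, 260.0))  # ratio 1.18 -> no simple match (None)
```

Counting how many exchanges classify as one of the simple ratios versus `None` gives a tally of the kind the abstract reports (simple major-triad ratios versus complex, non-triad intervals).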
268.
It has been argued that natural language, in the form of inner speech, plays a central role in self-consciousness. However, it is not quite clear why. In this paper, we present a novel answer to the why question. According to the thesis presented in this paper, the brain as a physical system is limited in observing itself and relies on the mediation of natural language for the reconstruction of its phase space trajectory. Drawing on knowledge gathered on the measurement of dynamical systems, we detail the unique properties of natural language that may support this reconstruction.
269.
Spoken language perception may be constrained by a listener's cognitive resources, including verbal working memory (WM) capacity and basic auditory perception mechanisms. For Japanese listeners, it is unknown how, or even if, these resources are involved in the processing of pitch accent at the word level. The present study examined the extent to which native Japanese speakers could make correctness judgments on and categorize spoken Japanese words by pitch accent pattern, and how verbal WM capacity and acoustic pitch sensitivity related to perception ability. Results showed that Japanese listeners were highly accurate at judging pitch accent correctness (M = 93%), but that the more cognitively demanding accent categorization task yielded notably lower performance (M = 61%). Of chief interest was the finding that acoustic pitch sensitivity significantly predicted accuracy scores on both perception tasks, while verbal WM had a predictive role only for the categorization of a specific accent pattern. These results indicate first, that task demands greatly influence accuracy and second, that basic cognitive capacities continue to support perception of lexical prosody even in adult listeners.
270.
Purpose: Adults who stutter speak more fluently during choral speech contexts than they do during solo speech contexts. The underlying mechanisms for this effect remain unclear, however. In this study, we examined the extent to which the choral speech effect depended on presentation of intact temporal speech cues. We also examined whether speakers who stutter followed choral signals more closely than typical speakers did.
Method: 8 adults who stuttered and 8 adults who did not stutter read 60 sentences aloud during a solo speaking condition and three choral speaking conditions (240 total sentences), two of which featured either temporally altered or indeterminate word-duration patterns. Effects of these manipulations on speech fluency, rate, and temporal entrainment with the choral speech signal were assessed.
Results: Adults who stutter spoke more fluently in all choral speaking conditions than they did when speaking solo. They also spoke more slowly and exhibited closer temporal entrainment with the choral signal during the mid- to late stages of sentence production than the adults who did not stutter. Both groups entrained more closely with unaltered choral signals than they did with altered choral signals.
Conclusions: Findings suggest that adults who stutter make greater use of speech-related information in choral signals when talking than adults with typical fluency do. The presence of fluency facilitation during temporally altered choral speech and conversation babble, however, suggests that temporal/gestural cueing alone cannot account for fluency facilitation in speakers who stutter. Other potential fluency-enhancing mechanisms are discussed.
Educational Objectives: The reader will be able to (a) summarize competing views on stuttering as a speech timing disorder, (b) describe the extent to which adults who stutter depend on an accurate rendering of temporal information in order to benefit from choral speech, and (c) discuss possible explanations for fluency facilitation in the presence of inaccurate or indeterminate temporal cues.
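One simple way to quantify temporal entrainment of the kind this abstract measures is the mean absolute asynchrony between a speaker's word onsets and the choral signal's word onsets. This sketch and its onset values are illustrative assumptions, not the study's actual measure or data.

```python
def mean_abs_asynchrony_ms(speaker_onsets, choral_onsets):
    """Mean absolute onset difference (ms) between paired word onsets.

    Smaller values indicate closer temporal entrainment with the choral signal.
    Assumes the two lists pair up word-for-word.
    """
    diffs = [abs(s - c) for s, c in zip(speaker_onsets, choral_onsets)]
    return sum(diffs) / len(diffs)

# Hypothetical word-onset times (ms) for one five-word sentence.
choral = [0, 310, 640, 980, 1350]
close_follower = [5, 320, 650, 985, 1360]    # entrains closely
loose_follower = [40, 260, 720, 1080, 1250]  # drifts from the signal

print(mean_abs_asynchrony_ms(close_follower, choral))  # 8.0 ms
print(mean_abs_asynchrony_ms(loose_follower, choral))  # 74.0 ms
```

Comparing this value across solo, unaltered-choral, and altered-choral conditions is one way the group differences reported above could be expressed numerically.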