131.
The method of vibrotactile magnitude production scaling was used to determine the tactile sensory-perceptual integrity of the dorsum of the tongue and the thenar eminence of the right hand in 10 fluent speakers and 10 stutterers. Both groups performed the task in a similar manner for the thenar eminence of the hand (a nonoral structure) but in a dissimilar manner for the tongue (an oral structure). These data suggest that stutterers may maintain a different internal sensory-perceptual process for the tactile system involved in speech. The possibility exists that stuttering, for some, may be an “internal disorder” of the tactile-proprioceptive feedback mechanism that is directly involved in speech production.
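Magnitude production data of this kind are conventionally summarized with Stevens' power law, response = k · stimulus^b, fit as a straight line in log-log coordinates; the exponent b is then compared across structures and groups. Below is a minimal sketch of that fit; the function name and all amplitudes are hypothetical, not values from the study.

```python
import numpy as np

def power_law_fit(stimulus, produced):
    """Fit Stevens' power law, produced = k * stimulus**b,
    by linear regression in log-log coordinates."""
    b, log_k = np.polyfit(np.log(stimulus), np.log(produced), 1)
    return np.exp(log_k), b

# Hypothetical vibrotactile data: drive amplitudes and the
# intensities a participant produced to match each numeric target.
stimulus = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
produced = np.array([3.1, 5.2, 8.9, 15.4, 26.0])

k, b = power_law_fit(stimulus, produced)
print(f"gain k = {k:.2f}, exponent b = {b:.2f}")
```

A group difference in b for the tongue but not the hand would be the pattern of dissimilarity the abstract describes.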
132.
The ultimate test of the adequacy of linguistic models of fluency breakdown is the degree to which they can account for patterns of dysfluency in the speech of a stutterer who is fluent in more than one language. We present the case of an adult bilingual stutterer (Spanish-English) whose spontaneous speech in both Spanish and English was structurally analyzed to assess the relationship of phonological and syntactic structure to the frequency and location of fluency breakdown. Our findings suggest that syntax is probably a greater determinant of stuttered moments than phonology. Additionally, similarities and differences between English and Spanish sentence structure were associated with similarities and differences in the loci of dysfluencies across the two languages. The need for crosslinguistic research using monolingual and bilingual speakers of languages other than English is emphasized.
133.
During much of the past century, it was widely believed that phonemes—the human speech sounds that constitute words—have no inherent semantic meaning, and that the relationship between a combination of phonemes (a word) and its referent is simply arbitrary. Although recent work has challenged this picture by revealing psychological associations between certain phonemes and particular semantic contents, the precise mechanisms underlying these associations have not been fully elucidated. Here we provide novel evidence that certain phonemes have an inherent, non-arbitrary emotional quality. Moreover, we show that the perceived emotional valence of certain phoneme combinations depends on a specific acoustic feature—namely, the dynamic shift within the phonemes' first two frequency components. These data suggest a phoneme-relevant acoustic property influencing the communication of emotion in humans, and provide further evidence against previously held assumptions regarding the structure of human language. This finding has potential applications for a variety of social, educational, clinical, and marketing contexts.
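The acoustic feature in question, the dynamic shift within a phoneme's first two frequency components, can be summarized as the slope of each component's track over time. A minimal sketch under that assumption, with invented tracks (the paper's actual feature extraction is not described here):

```python
import numpy as np

def track_slope(track_hz, frame_s=0.01):
    """Least-squares slope of a frequency track, in Hz/s."""
    t = np.arange(len(track_hz)) * frame_s
    return np.polyfit(t, track_hz, 1)[0]

# Hypothetical second-formant tracks over 300 ms:
rising = np.linspace(1200, 1500, 30)
falling = np.linspace(1500, 1200, 30)
print(f"rising shift:  {track_slope(rising):+.0f} Hz/s")
print(f"falling shift: {track_slope(falling):+.0f} Hz/s")
```

Relating such shift values to listeners' valence ratings is the kind of analysis the finding implies.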
134.
Existing driver models mainly account for drivers’ responses to visual cues in manually controlled vehicles. The present study is one of the few attempts to model drivers’ responses to auditory cues in automated vehicles: it develops a mathematical model quantifying how the characteristics of auditory cues affect drivers’ responses to takeover requests. The study enhanced the queuing network-model human processor (QN-MHP) by modeling the effects of different auditory warnings, including speech, spearcons, and earcons. Different levels of intuitiveness and urgency of each sound were used to estimate psychological parameters such as perceived trust and urgency. The model's predictions of takeover time were validated against a driving-simulation experiment, with a resulting R-squared of 0.925 and a root-mean-square error of 73 ms. The developed mathematical model can help quantify the effects of auditory cues and inform design guidelines for standard takeover-request warnings in automated vehicles.
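The two validation statistics reported, R-squared and root-mean-square error between observed and predicted takeover times, are straightforward to compute. A minimal sketch with invented takeover times (these numbers will not reproduce the reported 0.925 and 73 ms):

```python
import numpy as np

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(observed, predicted):
    """Root-mean-square error, in the units of the data (here ms)."""
    return np.sqrt(np.mean((observed - predicted) ** 2))

# Hypothetical mean takeover times (ms) per warning condition:
observed  = np.array([2310.0, 2150.0, 2480.0, 2620.0, 2050.0])
predicted = np.array([2380.0, 2090.0, 2420.0, 2700.0, 2110.0])

print(f"R^2  = {r_squared(observed, predicted):.3f}")
print(f"RMSE = {rmse(observed, predicted):.0f} ms")
```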
135.
Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant–ant for big–small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.
136.
Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such “quantized” views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.
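The continuous-versus-categorical distinction can be illustrated with a toy mixture: responses on a one-dimensional formant-like axis are treated as draws either from a Gaussian centered on the true target (continuous storage) or from a Gaussian centered on a vowel prototype (categorical storage), and EM recovers the mixing weight. Everything below, including the component parameters, is hypothetical rather than the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target vowel at 0.0, nearest prototype at 1.0 (arbitrary units).
# Half the simulated trials come from noisy continuous storage,
# half from categorical storage clustered on the prototype.
target, prototype = 0.0, 1.0
responses = np.concatenate([
    rng.normal(target, 0.4, 100),      # continuous component
    rng.normal(prototype, 0.25, 100),  # categorical component
])

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# EM for the mixing weight w; component means and SDs are held
# fixed here for brevity.
w = 0.5
for _ in range(200):
    p_cont = w * gauss(responses, target, 0.4)
    p_cat = (1 - w) * gauss(responses, prototype, 0.25)
    w = (p_cont / (p_cont + p_cat)).mean()  # mean responsibility

print(f"estimated proportion of continuous responses: {w:.2f}")
```

A load-dependent drop in this weight would correspond to the reported shift from continuous to categorical memory.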
137.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions were identified poorly from voice-only singing, yet accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, alongside differences in perception and acoustic-motor production.
138.
The efficiency of linguistic education based on the code model of language is questioned. The view of written language as a representation of speech ignores the important difference between speech and writing as experientially distinct cognitive domains, which affect human cognitive development by establishing an extended ecology of languaging. As a consequence, functional illiteracy becomes a real threat even in societies with established literate cultures. It can be avoided once it is understood that language is a kind of socially driven behavior which contributes, in a quite definitive way, to the rich context of the human ecological niche, including texts, and cannot be understood apart from that niche.
139.
Orienting biases for speech may provide a foundation for language development. Although human infants show a bias for listening to speech from birth, the relation of this speech bias to later language development has not been established. Here, we examine whether infants' attention to speech directly predicts expressive vocabulary. Infants listened to speech or non-speech in a preferential listening procedure. Results show that infants' attention to speech at 12 months significantly predicted expressive vocabulary at 18 months, while indices of general development did not. No predictive relationships were found for infants' attention to non-speech, or for overall attention to sounds, suggesting that the relationship between speech and expressive vocabulary was not a function of infants' general attentiveness. Potentially ancient evolutionary perceptual capacities, such as biases for conspecific vocalizations, may provide a foundation for proficiency in formal systems such as language, much as the approximate number sense may provide a foundation for formal mathematics.
140.
Prior research has demonstrated that the late-term fetus is capable of learning and then remembering a passage of speech for several days, but there are no data describing the earliest emergence of such learning, and thus how long that learning could be remembered before birth. This study investigated these questions. Pregnant women began reciting or speaking a passage out loud (either Rhyme A or Rhyme B) when their fetuses were 28 weeks gestational age (GA) and continued until their fetuses reached 34 weeks, at which time the recitations stopped. Fetuses’ learning and memory of their rhyme were assessed at 28, 32, 33, 34, 36, and 38 weeks. The criterion for learning and memory was the occurrence of a stimulus-elicited heart rate deceleration following the onset of a recording of the passage spoken by a female stranger. A sustained heart rate deceleration began to emerge by 34 weeks GA and was statistically evident at 38 weeks GA. Thus, fetuses begin to show evidence of learning by 34 weeks GA and, without any further exposure, are capable of remembering until just prior to birth. Further study using dose–response curves is needed to understand more fully how ongoing experience, in the context of ongoing development in the last trimester of pregnancy, affects learning and memory.
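The learning criterion amounts to a simple detection rule: compare post-onset heart rate against a pre-onset baseline and require the drop to persist. A minimal sketch; the 1-Hz sampling, the 5-bpm drop, and the 10-s persistence window are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def sustained_deceleration(hr, onset, baseline_s=15,
                           min_drop_bpm=5.0, min_duration_s=10):
    """True if heart rate stays at least min_drop_bpm below the
    pre-onset baseline for min_duration_s consecutive samples
    (1 sample per second assumed)."""
    baseline = hr[onset - baseline_s:onset].mean()
    below = hr[onset:] < baseline - min_drop_bpm
    run = 0
    for flag in below:
        run = run + 1 if flag else 0
        if run >= min_duration_s:
            return True
    return False

rng = np.random.default_rng(1)
hr = 140 + rng.normal(0, 1.5, 60)  # hypothetical trace, in bpm
hr[30:45] -= 8.0                   # simulated deceleration at onset
print(sustained_deceleration(hr, onset=30))  # True
```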