71.
Cochlear implant (CI) devices provide the opportunity for children who are deaf to perceive sound by electrical stimulation of the auditory nerve, with the goal of optimizing oral communication. One part of oral communication concerns meaning, while another part concerns emotion: affective speech prosody in the auditory domain and facial affect in the visual domain. Because it is not known whether childhood CI users can identify emotion in speech and faces, we investigated speech prosody and facial affect in children who had been deaf from infancy and were experienced CI users. METHOD: Study participants were 18 CI users (ages 7–13 years) who had received right unilateral CIs and 18 age- and gender-matched controls. Emotion recognition in speech prosody and faces was measured by the Diagnostic Analysis of Nonverbal Accuracy. RESULTS: Compared to controls, children with right CIs could identify facial affect but not affective speech prosody. Age at test and time since CI activation were uncorrelated with overall outcome measures. CONCLUSION: Children with right CIs recognize emotion in faces but have limited perception of affective speech prosody.
72.

This study presents a new research paradigm designed to explore the effect of anxiety on semantic information processing. It is based on the premise that the demonstrated effect of anxiety on cognitive performance, and apparent inconsistencies reported in the literature, might be better understood in terms of linguistic properties of inner speech, which underlies analytic (vs. intuitive) thought processes. The study employed several parameters of functional linguistics to analyse properties of public speech by high- and low-anxious individuals. Results indicate that anxiety is associated with greater use of associative clauses that take the speaker further away from the original starting point before coming back and concluding (identified as reduced semantic efficiency). This is accompanied by a speech pattern that includes greater amounts of factual information unaccompanied by elaborate argumentation. While these results are considered tentative due to methodological and empirical shortcomings, they suggest the viability of this approach.
73.
In this study, older adults monitored for pre-assigned target sounds in a target talker's speech in a quiet (no-noise) condition and in a condition with competing-talker noise. The question was to what extent the impact of the competing-talker noise on performance could be predicted from individual hearing loss and from a cognitive measure of inhibitory abilities, i.e., a measure of Stroop interference. The results showed that the non-auditory measure of Stroop interference predicted the impact of distraction on performance, over and above the effect of hearing loss. This suggests that individual differences in inhibitory abilities among older adults relate to susceptibility to distracting speech.
74.
A monitoring bias account is often used to explain speech error patterns that seem to be the result of an interactive language production system, such as phonological influences on lexical selection errors. A biased monitor is suggested to detect and covertly correct certain errors more often than others. For instance, this account predicts that errors that are phonologically similar to intended words are harder to detect than those that are phonologically dissimilar. To test this, we tried to elicit phonological errors under the same conditions as those that show other kinds of lexical selection errors. In five experiments, we presented participants with high-cloze-probability sentence fragments followed by a picture that was semantically related, a homophone of a semantically related word, or phonologically related to the (implicit) last word of the sentence. All experiments elicited semantic completions or homophones of semantic completions, but none elicited phonological completions. This finding is hard to reconcile with a monitoring bias account and is better explained by an interactive production system. Additionally, this finding constrains the amount of bottom-up information flow in interactive models.
75.
Emotional expression, and how it is lateralized across the two sides of the face, may influence how we detect audiovisual speech. To investigate how these components interact, we conducted experiments comparing the perception of sentences expressed with happy, sad, and neutral emotions. In addition, we isolated the facial asymmetries for affective and speech processing by independently testing the two sides of a talker's face. These asymmetrical differences were exaggerated using dynamic facial chimeras in which left- or right-face halves were paired with their mirror image during speech production. Results suggest that there are facial asymmetries in audiovisual speech such that the right side of the face and right-facial chimeras supported better speech perception than their left-face counterparts. Affective information was also critical in that happy expressions tended to improve speech performance on both sides of the face relative to all other emotions, whereas sad emotions generally inhibited visual speech information, particularly from the left side of the face. The results suggest that approach information may facilitate visual and auditory speech detection.
76.
The poor performance of autistic individuals on a test of homograph reading is widely interpreted as evidence for a reduction in sensitivity to context termed “weak central coherence”. To better understand the cognitive processes involved in completing the homograph-reading task, we monitored the eye movements of nonautistic adults as they completed the task. Using single trial analysis, we determined that the time between fixating and producing the homograph (eye-to-voice span) increased significantly across the experiment and predicted accuracy of homograph pronunciation, suggesting that participants adapted their reading strategy to minimize pronunciation errors. Additionally, we found evidence for interference from previous trials involving the same homograph. This progressively reduced the initial advantage for dominant homograph pronunciations as the experiment progressed. Our results identify several additional factors that contribute to performance on the homograph reading task and may help to reconcile the findings of poor performance on the test with contradictory findings from other studies using different measures of context sensitivity in autism. The results also undermine some of the broader theoretical inferences that have been drawn from studies of autism using the homograph task. Finally, we suggest that this approach to task deconstruction might have wider applications in experimental psychology.
77.
Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners’ ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults’ ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners’ verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners’ immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.
78.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners did. However, only native but not non-native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gestures more because it may be more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
79.
Emotional inferences from speech require the integration of verbal and vocal emotional expressions. We asked whether this integration is comparable when listeners are exposed to their native language and when they listen to a language learned later in life. To this end, we presented native and non-native listeners with positive, neutral and negative words that were spoken with a happy, neutral or sad tone of voice. In two separate tasks, participants judged word valence and ignored tone of voice or judged emotional tone of voice and ignored word valence. While native listeners outperformed non-native listeners in the word valence task, performance was comparable in the voice task. More importantly, both native and non-native listeners responded faster and more accurately when verbal and vocal emotional expressions were congruent as compared to when they were incongruent. Given that the size of the latter effect did not differ as a function of language proficiency, one can conclude that the integration of verbal and vocal emotional expressions occurs as readily in one's second language as it does in one's native language.