171.
In the current study, we examined the developmental course of the perception of a non‐native tonal contrast. We tested 4‐, 6‐, and 12‐month‐old Dutch infants on their discrimination of the Chinese low‐rising and low‐dipping tones using the visual fixation paradigm. The infants were tested in two conditions that differed in degree of variability. The 4‐month‐olds did not show a discrimination effect in either condition. The 6‐ and 12‐month‐olds, however, discriminated the tones in both conditions. This improvement in perception may result from cognitive development carried over from learning the native phonology: over the first year of life, infants become better listeners in general and better equipped cognitively to deal with variable speech input. Copyright © 2015 John Wiley & Sons, Ltd.
172.
Two experiments investigated participants’ recognition memory for word content, while varying vocal characteristics, and for vocal characteristics alone. In Experiment 1, participants performed an auditory recognition task in which they identified whether a spoken word was “new”, “old” (repeated word, repeated voice), or “similar” (repeated word, new voice). Word recognition accuracy was lower for similar trials than for old trials. In Experiment 2, participants performed an auditory recognition task in which they identified whether a phrase was spoken in an old or a new voice, with repetitions occurring after a variable number of intervening stimuli. Recognition accuracy was lower when old voices spoke an alternate message than when they repeated the original message, and accuracy decreased as a function of the number of intervening items. Overall, the results suggest that speech recognition is better for lexical content than for vocal characteristics alone.
173.
Young children have an overall preference for child‐directed speech (CDS) over adult‐directed speech (ADS), and its structural features are thought to facilitate language learning. Many studies have supported these findings, but less is known about the processing of CDS at short, sub‐second timescales. How do the moment‐to‐moment dynamics of CDS influence young children's attention and learning? In Study 1, we used hierarchical clustering to characterize patterns of pitch variability in a natural CDS corpus, which uncovered four main word‐level contour shapes: ‘fall’, ‘rise’, ‘hill’, and ‘valley’. In Study 2, we adapted a measure from adult attention research, pupil size synchrony, to quantify real‐time attention to speech across participants, and found that toddlers showed higher synchrony to the dynamics of CDS than to ADS. Importantly, there were consistent differences in toddlers’ attention when listening to the four word‐level contour types. In Study 3, we found that pupil size synchrony during exposure to novel words predicted toddlers’ learning at test. This suggests that the dynamics of pitch in CDS not only shape toddlers’ attention but also guide their learning of new words. By revealing a physiological response to the real‐time dynamics of CDS, this investigation yields a new sub‐second framework for understanding young children's engagement with one of the most important signals in their environment.
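The abstract does not describe the clustering pipeline in detail. As a rough, hypothetical illustration of how word-level pitch contours might be grouped into shape classes such as ‘fall’, ‘rise’, ‘hill’, and ‘valley’, the sketch below applies agglomerative (Ward) clustering to z-scored f0 contours resampled to a common length. The function name, contour length, and synthetic data are assumptions made for the example, not details taken from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_pitch_contours(contours, n_clusters=4):
    """Group word-level f0 contours (n_words x n_samples) into shape classes."""
    # z-score each contour so clustering reflects shape (rise/fall/hill/valley)
    # rather than a speaker's overall pitch level or range
    z = (contours - contours.mean(axis=1, keepdims=True)) / contours.std(axis=1, keepdims=True)

    # agglomerative clustering with Ward linkage on the normalized contours
    tree = linkage(z, method="ward")
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")
    return labels

# toy usage: 200 synthetic contours, each resampled to 30 points
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)
contours = np.stack([np.sin(rng.uniform(1, 3) * np.pi * t) + rng.normal(0, 0.1, 30)
                     for _ in range(200)])
print(cluster_pitch_contours(contours)[:10])
```

Normalizing each contour before clustering removes differences in overall pitch level and range, so the resulting clusters are driven by contour shape rather than by individual speakers' voices.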
174.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding the integration and synchronization of gestures and speech. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants’ speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains, thereby extending the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants’ task performance. Our study illustrates how combining multiple perspectives originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology) provides novel understanding of cognitive concepts in general and of gesture–speech synchronization and task difficulty in particular.
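The complexity-matching analysis mentioned in the abstract relies on multifractal detrended fluctuation analysis (MFDFA); the full multifractal procedure computes fluctuation functions across a range of q orders, which is beyond a short sketch. The simplified, monofractal DFA below is meant only to illustrate the core detrend-and-scale idea on a single time series; the scale range, polynomial order, and function name are assumptions, and this is not the authors' analysis code.

```python
import numpy as np

def dfa_exponent(x, scales=None, order=1):
    """Simplified (monofractal) detrended fluctuation analysis.

    Returns the scaling exponent alpha from the log-log fit of the
    fluctuation function F(s) against window size s.
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())  # integrated, mean-centred series
    if scales is None:
        scales = np.unique(np.floor(
            np.logspace(np.log10(8), np.log10(len(x) // 4), 15)).astype(int))

    fluctuations = []
    for s in scales:
        n_windows = len(profile) // s
        segments = profile[:n_windows * s].reshape(n_windows, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, order)        # local polynomial trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluctuations.append(np.mean(rms))             # mean fluctuation at scale s

    alpha = np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]
    return alpha

# white noise should yield alpha close to 0.5
print(dfa_exponent(np.random.default_rng(1).normal(size=4000)))
```

In MFDFA, the same windowed detrending is repeated while weighting the residual variances by different powers q, and the width of the resulting spectrum of scaling exponents indexes multifractality; complexity matching is then assessed by comparing the spectra obtained for the two coupled streams (here, gestures and speech).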
175.
Anna Cho, Dialog, 2021, 60(1): 14–21
COVID‐19 is changing everyday life, and it is also changing the look of the church. The church is a community of people who gather for worship, fellowship, and sharing. Because of the coronavirus, however, the church is no longer able to gather and worship together. Moreover, social distancing with as little face‐to‐face contact as possible has been recommended worldwide. If this situation is prolonged, the church's community interactions will have difficulty surviving. This article therefore seeks ways to maintain and strengthen the church community during and after the coronavirus era through the insights of speech act theory.
176.
This study evaluated the efficacy of (a) remote video-based behavioral skills training (BST) with added speech outlines for teaching public speaking behaviors and (b) remote video-based awareness training (AT) for reducing speech-disfluency rates. A multiple-baseline design across speech behaviors was used to evaluate the training. Remote video-based BST and AT were effective at teaching public speaking behaviors and reducing speech disfluencies, respectively, for both participants. In addition, performance generalized to an increased audience size. Although expert ratings of perceived public speaking effectiveness improved following BST, the ratings did not improve, and some worsened, following AT. Both participants reported satisfaction with video-based BST and AT. One participant reported greater comfort, confidence, and overall ability, and less anxiety, as a public speaker following BST; both participants reported greater improvements in those categories following AT. Our results suggest that public speaking behaviors can be taught using remote video-based BST and that speech disfluencies can be reduced using remote video-based AT.
177.
Most people are left-hemisphere dominant for language. However, the neuroanatomy of language lateralization is not fully understood. By combining functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), we studied whether language lateralization is associated with cerebral white-matter (WM) microstructure. Sixteen healthy, left-handed women aged 20–25 were included in the study; left-handers were targeted to increase the chances of including subjects with atypical language lateralization. Language lateralization was determined by fMRI using a verbal fluency paradigm. Tract-based spatial statistics analysis of the DTI data was applied to test for WM microstructural correlates of language lateralization across the whole brain, using fractional anisotropy and mean diffusivity as indicators of WM microstructural organization. Right-hemispheric language dominance was associated with reduced microstructural integrity of the left superior longitudinal fasciculus and left-sided parietal lobe WM. In left-handed women, reduced integrity of left-sided language-related tracts may thus be closely linked to the development of right-hemispheric language dominance. Our results may offer new insights into language lateralization and structure–function relationships in the human language system.
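Fractional anisotropy and mean diffusivity are standard scalar summaries of the diffusion tensor: MD is the mean of its three eigenvalues, and FA quantifies how strongly those eigenvalues deviate from their mean, scaled to lie between 0 (isotropic diffusion) and 1 (diffusion along a single axis). The minimal sketch below shows only these textbook definitions, with a hypothetical eigenvalue triple as input; it does not reproduce the tract-based spatial statistics pipeline used in the study.

```python
import numpy as np

def fa_md(eigenvalues):
    """Fractional anisotropy (FA) and mean diffusivity (MD) from the three
    eigenvalues of a diffusion tensor (standard definitions)."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                                            # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md

# e.g. a strongly anisotropic, white-matter-like tensor (units: mm^2/s)
print(fa_md([1.7e-3, 0.3e-3, 0.3e-3]))
```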
178.
179.
Recently, it has been suggested that speech and manual timing tasks share a common central process (Franz, Zelaznik, & Smith, 1992). Because stuttering is thought to be related to deficits in motoric processes such as timing, stutterers (n = 15) were compared with a set of age-, education-, and sex-matched nonstutterers on timing and isometric force-production tasks. In the timing tasks, subjects flexed and extended the right index finger at the metacarpophalangeal joint at cycle durations of 600, 500, 400, 300, and 200 ms. In the force-production tasks, subjects generated isometric forces to match target force levels displayed on a cathode-ray tube (CRT) screen. There were five levels of force, ranging from 0.11 to 7.85 newtons. Overall, there were no differences in timing and force-production performance between stutterers and nonstutterers. These results are similar to those obtained recently by Hulstijn, Summers, van Lieshout, and Peters (1992). We suggest that stuttering is not characterized by a general deficit in rhythmic timing; instead, the motor deficit associated with stuttering should be viewed as speech specific.
180.
Relations between children’s imaginary companion status and their engagement in private speech during free play were investigated in a socially diverse sample of 5-year-olds (N = 148). Controlling for socioeconomic status, receptive verbal ability, total number of utterances, and duration of observation, there was a main effect of imaginary companion status on type of private speech. Children who had imaginary companions were more likely to engage in covert private speech compared with their peers who did not have imaginary companions. These results suggest that the private speech of children with imaginary companions is more internalized than that of their peers who do not have imaginary companions and that social engagement with imaginary beings may fulfill a similar role to social engagement with real-life partners in the developmental progression of private speech.