41.
Carol Schersten LaHurd 《Dialog》2018,57(1):23-30
The 2016 election of Donald Trump as president and the first year of his administration have been accompanied by intensified social and political divides in the United States. A comparison of today's polarization with that during the Vietnam War and the civil rights movement of the 1960s suggests strategies for bridging the divides, and in particular an expanded role for faith communities.
42.
The Swedish Hayling task, and its relation to working memory, verbal ability, and speech‐recognition‐in‐noise
Victoria Stenbäck, Mathias Hällgren, Björn Lyxell, Birgitta Larsby 《Scandinavian journal of psychology》2015,56(3):264-272
Cognitive functions and speech‐recognition‐in‐noise were evaluated with a cognitive test battery assessing response inhibition (the Hayling task), working memory capacity (WMC), and verbal information processing, together with an auditory test of speech recognition. The cognitive tests were performed in silence, whereas the speech recognition task was presented in noise. Thirty young normally‐hearing individuals participated in the study. The aim was to investigate one executive function, response inhibition; whether it is related to individual WMC; and how speech‐recognition‐in‐noise relates to WMC and inhibitory control. The results showed a significant difference between initiation and response inhibition, suggesting that the Hayling task taps cognitive activity responsible for executive control. High verbal ability was associated with better performance on the Hayling task, and individuals who performed well on tasks involving response inhibition and WMC also performed well on a speech‐in‐noise task. Our findings indicate that the capacity to resist semantic interference can be used to predict performance on speech‐in‐noise tasks.
43.
The goal of the study was to examine whether the ‘noun-bias’ phenomenon, which exists in the lexicon of Hebrew-speaking children, also exists in Hebrew child-directed speech (CDS) and in Hebrew adult-directed speech (ADS). In addition, we aimed to describe the use of the different classes of content words in the speech of Hebrew-speaking parents to their children at different ages compared to their speech to adults. Thirty infants (age range 8:5–33 months) were divided into three stages according to age: pre-lexical, single-word, and early grammar. The ADS corpus included 18 Hebrew-speaking parents of children at the same three stages of language development as in the CDS corpus. The CDS corpus was collected from parent–child dyads during naturalistic activities at home: mealtime, bathing, and play. The ADS corpus was collected from parent–experimenter interactions in which the parent watched a video and was then interviewed by the experimenter. Two hundred utterances from each sample were transcribed, coded for types and tokens, and analyzed quantitatively and qualitatively. Results show that in CDS, when speaking to infants of all ages, parents used types and tokens of verbs and nouns at similar rates, and significantly more often than adjectives or adverbs. In ADS, however, verbs were the main lexical category used by Hebrew-speaking parents in both types and tokens. It seems that both the properties of the input language (e.g. the pro-drop parameter) and the interactional styles of the caregivers are important factors that may influence the high presence of verbs in Hebrew-speaking parents' ADS and CDS. The negative correlation between the widespread use of verbs in the speech of parents to their infants and the ‘noun-bias’ phenomenon in the Hebrew child lexicon is discussed in detail.
44.
Lewis Kirshner 《The International journal of psycho-analysis》2015,96(1):65-81
The translational metaphor in psychoanalysis refers to the traditional method of interpreting or restating the meaning of verbal and behavioral acts of a patient in other, presumably more accurate terms that specify the forces and conflicts underlying symptoms. The analyst translates the clinical phenomenology to explain its true meaning and origin. This model of analytic process has been challenged from different vantage points by authors presenting alternative conceptions of therapeutic action. Although the temptation to find and make interpretations of clinical material is difficult to resist, behaving in this way places the analyst in the position of a teacher or diagnostician, seeking a specific etiology, which has not proven fruitful. Despite its historical appeal, I argue that the translational model is a misleading and anachronistic version of what actually occurs in psychoanalysis. I emphasize instead the capacity of analysis to promote the emergence of new forms of representation, or figuration, from the unconscious, using the work of Lacan, Laplanche, and Modell to exemplify this reformulation, and provide clinical illustrations of how it looks in practice.
45.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement (i.e., a relative reduction in reaction‐time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gestures more because it may be more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.
46.
Developments in smart-device interfaces have led to voice-interactive systems. A further step in this direction is to enable devices to recognize the speaker, which is a challenging task because the interaction involves short-duration speech utterances. Traditional Gaussian mixture model (GMM) based systems have achieved satisfactory speaker-recognition results only when the speech samples are sufficiently long. The current state-of-the-art method uses an i-vector approach built on a GMM-based universal background model (GMM-UBM): it derives an i-vector speaker model from a speaker's enrollment data and uses it to recognize any new test speech. In this work, we propose a multi-model i-vector system for short speech lengths. We use the open THUYG-20 database for the analysis and development of a short-speech speaker verification and identification system. Using an optimized set of mel-frequency cepstral coefficient (MFCC) features, we achieve an equal error rate (EER) of 3.21%, compared to the previous benchmark of 4.01% EER on the THUYG-20 database. Experiments are conducted for speech lengths as short as 0.25 s, and the results are presented. The proposed method improves on the current i-vector approach for shorter speech lengths, achieving around 28% improvement even for 0.25 s speech samples. We also prepared and tested the proposed approach on our own database of 2,500 English-language recordings of actual short speech commands used in voice-interactive systems.
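The equal error rate (EER) cited in this abstract is the operating point at which a verifier's false-acceptance rate (FAR) equals its false-rejection rate (FRR). A minimal sketch of how an EER can be estimated from verification trial scores by sweeping a decision threshold (the function and the example scores below are illustrative, not taken from the THUYG-20 setup):

```python
def compute_eer(genuine, impostor):
    """Estimate the equal error rate from two lists of trial scores.

    genuine:  scores for trials where the claimed identity is true.
    impostor: scores for trials where the claimed identity is false.
    Sweeps every observed score as a threshold, measures FAR (impostor
    scores accepted) and FRR (genuine scores rejected), and returns the
    mean of the two rates at the threshold where they are closest.
    """
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

# Toy example: four genuine and four impostor trial scores.
eer = compute_eer([0.9, 0.8, 0.7, 0.6], [0.5, 0.4, 0.3, 0.65])
```

With the overlapping impostor score 0.65, one genuine trial is rejected and one impostor accepted at the crossover threshold, giving an EER of 0.25 here. Real systems interpolate the ROC curve between thresholds rather than taking the nearest crossing, but the threshold-sweep idea is the same.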
47.
Primates, including humans, communicate using facial expressions, vocalizations, and often a combination of the two modalities. For humans, such bimodal integration is best exemplified by speech-reading: humans readily use facial cues to enhance speech comprehension, particularly in noisy environments. Studies of the eye movement patterns of human speech-readers have revealed, unexpectedly, that they predominantly fixate on the eye region of the face as opposed to the mouth. Here, we tested the evolutionary basis for such a behavioral strategy by examining the eye movements of rhesus monkey observers as they viewed vocalizing conspecifics. Under a variety of listening conditions, we found that rhesus monkeys predominantly focused on the eye region versus the mouth and that fixations on the mouth were tightly correlated with the onset of mouth movements. These eye movement patterns of rhesus monkeys are strikingly similar to those reported for humans observing the visual components of speech. The data therefore suggest that the sensorimotor strategies underlying bimodal speech perception may have a homologous counterpart in a closely related primate ancestor.
48.
Multisensory integration and crossmodal attention have a large impact on how we perceive the world. Therefore, it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and in what brain areas these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review focuses on the question of whether audiovisual interactions, and crossmodal attention in particular, are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework suggesting that both multisensory integration and attentional processes take place and can interact at multiple stages in the brain.
49.
Sources of variability in children’s language growth
Janellen Huttenlocher, Heidi Waterfall, Marina Vasilyeva, Jack Vevea, Larry V. Hedges 《Cognitive psychology》2010,61(4):343-365
The present longitudinal study examines the role of caregiver speech in language development, especially syntactic development, using 47 parent–child pairs of diverse SES background from 14 to 46 months. We assess the diversity (variety) of words and syntactic structures produced by caregivers and children. We use lagged correlations to examine language growth and its relation to caregiver speech. Results show substantial individual differences among children, and indicate that diversity of earlier caregiver speech significantly predicts corresponding diversity in later child speech. For vocabulary, earlier child speech also predicts later caregiver speech, suggesting mutual influence. However, for syntax, earlier child speech does not significantly predict later caregiver speech, suggesting a causal flow from caregiver to child. Finally, demographic factors, notably SES, are related to language growth, and are, at least partially, mediated by differences in caregiver speech, showing the pervasive influence of caregiver speech on language growth.
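The lagged-correlation logic in this abstract pairs a caregiver measure at one session with the child measure at a later session (and vice versa to test the reverse direction). A minimal sketch with a plain Pearson correlation and made-up diversity scores (the study itself uses a more elaborate longitudinal model; the function names and data here are illustrative):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def lagged_correlation(predictor, outcome, lag=1):
    """Correlate predictor at session t with outcome at session t + lag.

    Drops the last `lag` predictor sessions and the first `lag` outcome
    sessions so the two series line up with the desired time offset.
    """
    return pearson(predictor[:-lag], outcome[lag:])

# Toy diversity scores across four sessions: caregiver diversity at
# session t perfectly tracks child diversity one session later.
r = lagged_correlation([1, 2, 3, 4], [0, 1, 2, 3], lag=1)
```

Running the forward test (caregiver predicting later child speech) and the reverse test (child predicting later caregiver speech) on the same data is what lets the authors argue about the direction of influence for syntax versus vocabulary.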
50.