171.
Communication is a multimodal phenomenon. The cognitive mechanisms supporting it are still understudied. We explored a natural dataset of academic lectures to determine how communication modalities are used and coordinated during the presentation of complex information. Using automated and semi‐automated techniques, we extracted and analyzed, from the videos of 30 speakers, measures capturing the dynamics of their body movement, their slide change rate, and various aspects of their speech (speech rate, articulation rate, fundamental frequency, and intensity). There were consistent but statistically subtle patterns in the use of speech rate, articulation rate, intensity, and body motion across the presentation. Principal component analysis also revealed patterns of system‐like covariation among modalities. These findings, although tentative, suggest that the cognitive system integrates body, slides, and speech in a coordinated manner during natural language use. Further research is needed to clarify the specific coordination patterns that occur between the different modalities.
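The principal component analysis step described in this abstract can be sketched as follows. This is a minimal illustration on simulated stand-in data (a speaker-by-measure matrix with one injected correlation to mimic "system-like" covariation), not the study's lecture measurements.

```python
import numpy as np

# Simulated stand-in: 30 speakers x 5 standardized measures (e.g. speech
# rate, articulation rate, intensity, body motion, slide change rate).
rng = np.random.default_rng(0)
n_speakers, n_measures = 30, 5
X = rng.standard_normal((n_speakers, n_measures))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]  # make two measures covary

Xc = X - X.mean(axis=0)                   # center each measure
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)     # variance ratio per component

print(explained)   # components ordered by explained variance
print(Vt[0])       # loadings of the first component
```

Covarying measures load together on the leading component, which is the kind of pattern the abstract reports.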
172.
Olivia Afonso Paz Suárez‐Coalla Fernando Cuetos Agustín Ibáñez Lucas Sedeño Adolfo M. García 《Cognitive Science》2019,43(7)
Several studies have illuminated how processing manual action verbs (MaVs) affects the programming or execution of concurrent hand movements. Here, to circumvent key confounds in extant designs, we conducted the first assessment of motor–language integration during handwriting—a task in which linguistic and motoric processes are co‐substantiated. Participants copied MaVs, non‐manual action verbs, and non‐action verbs as we collected measures of motor programming and motor execution. Programming latencies were similar across conditions, but execution was faster for MaVs than for the other categories, regardless of whether word meanings were accessed implicitly or explicitly. In line with the Hand‐Action‐Network Dynamic Language Embodiment (HANDLE) model, such findings suggest that effector‐congruent verbs can prime manual movements even during highly automatized tasks in which motoric and verbal processes are naturally intertwined. Our paradigm opens new avenues for fine‐grained explorations of embodied language processes.
173.
To understand how individuals adapt to and anticipate each other in joint tasks, we employ a bidirectional delay–coupled dynamical system that allows for mutual adaptation and anticipation. In delay–coupled systems, anticipation is achieved when one system compares its own time‐delayed behavior, which implicitly includes past information about the other system’s behavior, with the other system’s instantaneous behavior. Applied to joint music performance, the model allows each system to adapt its behavior to the dynamics of the other. Model predictions of asynchrony between two simultaneously produced musical voices were compared with duet pianists’ behavior; each partner performed one voice while auditory feedback perturbations occurred at unpredictable times during live performance. As the model predicted, when auditory feedback from one musical voice was removed, the asynchrony changed: The pianist’s voice that was removed anticipated (preceded) the actions of their partner. When the auditory feedback returned and both musicians could hear each other, they rapidly returned to baseline levels of asynchrony. To understand how the pianists anticipated each other, their performances were fitted by the model to examine change in model parameters (coupling strength, time‐delay). When auditory feedback for one or both voices was removed, the fits showed the expected decrease in coupling strength and time‐delay between the systems. When feedback about the voice(s) returned, the coupling strength and time‐delay returned to baseline. These findings support the idea that when people perform actions together, they do so as a coupled bidirectional anticipatory system.
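The delay-coupled mechanism described here can be illustrated with two phase oscillators, each comparing its own time-delayed state with the partner's instantaneous state. The model form and parameter names (`k`, `tau`, `omega`) are illustrative assumptions for a sketch, not the authors' fitted model.

```python
import numpy as np

def simulate(k=0.5, tau=0.1, omega=(2.0, 2.1), dt=0.001, T=10.0, seed=0):
    """Euler integration of two bidirectionally delay-coupled phase
    oscillators: each unit is driven by the difference between the
    partner's current phase and its own phase tau seconds ago."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    d = int(tau / dt)                 # delay expressed in steps
    th = np.zeros((n, 2))
    th[0] = rng.uniform(0, 2 * np.pi, 2)
    for t in range(n - 1):
        lag = max(t - d, 0)           # index of own delayed state
        dth0 = omega[0] + k * np.sin(th[t, 1] - th[lag, 0])
        dth1 = omega[1] + k * np.sin(th[t, 0] - th[lag, 1])
        th[t + 1, 0] = th[t, 0] + dt * dth0
        th[t + 1, 1] = th[t, 1] + dt * dth1
    return th

th = simulate()
# Asynchrony: mean absolute (wrapped) phase difference over the final second.
async_tail = np.mean(np.abs(np.angle(np.exp(1j * (th[-1000:, 0] - th[-1000:, 1])))))
print(async_tail)
```

Rerunning with `k=0` mimics removing auditory feedback: the coupling term vanishes and the two voices drift apart, analogous to the asynchrony changes the study reports.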
174.
Human communication is thoroughly context bound. We present two experiments investigating the importance of the shared context, that is, the amount of knowledge two interlocutors have in common, for the successful emergence and use of novel conventions. Using a referential communication task where black‐and‐white pictorial symbols are used to convey colors, pairs of participants build shared conventions peculiar to their dyad without experimenter feedback, relying purely on ostensive‐inferential communication. Both experiments demonstrate that access to the visual context promotes more successful communication. Importantly, success improves cumulatively, supporting the view that pairs establish conventional ways of using the symbols to communicate. Furthermore, Experiment 2 suggests that dyads with access to the visual context successfully adapt the conventions built for one color space to another color space, unlike dyads lacking it. In linking experimental pragmatics with language evolution, the study illustrates the benefits of exploring the emergence of linguistic conventions using an ostensive‐inferential model of communication.
175.
Human connectome studies suggest that the brain has a modular small-world network structure with a rich-club effect. Such structure emerges spontaneously in simple model neural networks (e.g., coupled maps) through adaptive rewiring according to the dynamic functional connectivity. The utility of adaptive rewiring has so far been demonstrated exclusively for unweighted networks; it is anything but guaranteed to work as well for weighted networks. We investigate adaptive rewiring in weighted networks, comparing various right-skewed, symmetrical, and left-skewed fixed weight distributions. We examine how network clustering, path length, modularity, and rich-club coefficients develop for weakly, intermediately, and strongly coupled networks. At low coupling strength, the weight distribution, as well as episodes of functional synchrony, has a significant effect on network evolution. With increased coupling strengths, all weighted networks robustly develop architectures similar to the unweighted ones. Adaptive rewiring appears relatively ineffective in networks with (biologically implausibly) extreme right-skewed weight distributions but performs most economically with biologically plausible log-normal distributions.
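One adaptive-rewiring step can be sketched as follows. This is a simplified unweighted version of the idea (drop a node's functionally weakest structural link, add a link to its functionally strongest non-neighbor), with a random stand-in for the functional connectivity matrix; it is not the paper's weighted algorithm.

```python
import numpy as np

def adaptive_rewire_step(A, sync, rng):
    """One rewiring step on an undirected adjacency matrix A, guided by a
    functional-connectivity matrix sync. Preserves the total edge count."""
    n = A.shape[0]
    i = rng.integers(n)
    neigh = np.flatnonzero(A[i])
    non = np.flatnonzero((A[i] == 0) & (np.arange(n) != i))
    if len(neigh) == 0 or len(non) == 0:
        return A
    j = neigh[np.argmin(sync[i, neigh])]   # weakest functional neighbor
    k = non[np.argmax(sync[i, non])]       # strongest functional non-neighbor
    A[i, j] = A[j, i] = 0                  # cut the weak structural link
    A[i, k] = A[k, i] = 1                  # wire up the strong functional one
    return A

rng = np.random.default_rng(1)
n = 20
A = (rng.random((n, n)) < 0.2).astype(int)
A = np.triu(A, 1)
A = A + A.T                                # symmetric, no self-loops
sync = np.abs(rng.standard_normal((n, n))) # stand-in functional connectivity
edges_before = A.sum() // 2
for _ in range(50):
    A = adaptive_rewire_step(A, sync, rng)
```

Iterating such steps while re-deriving `sync` from the network dynamics is what lets modular, small-world structure emerge in the coupled-map setting the abstract refers to.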
176.
Developments in smart-device interfaces have led to voice-interactive systems. An additional step in this direction is to enable the devices to recognize the speaker, which is challenging because the interaction involves short-duration speech utterances. Traditional Gaussian mixture model (GMM) based systems have achieved satisfactory speaker-recognition results only when the speech samples are sufficiently long. The current state-of-the-art method is the i-vector approach built on a GMM-based universal background model (GMM-UBM): it derives an i-vector speaker model from a speaker's enrollment data and uses it to recognize any new test speech. In this work, we propose a multi-model i-vector system for short speech lengths. We use the open database THUYG-20 for the analysis and development of a short-speech speaker verification and identification system. Using an optimum set of mel-frequency cepstral coefficient (MFCC) features, we achieve an equal error rate (EER) of 3.21%, compared with the previous benchmark EER of 4.01% on the THUYG-20 database. Experiments are conducted for speech lengths as short as 0.25 s, and the results are presented. The proposed method improves on the current i-vector approach for shorter speech lengths, achieving around 28% improvement even for 0.25 s speech samples. We also prepared and tested the proposed approach on our own database of 2,500 English-language recordings of actual short speech commands used in voice-interactive systems.
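The EER figures quoted in this abstract come from finding the verification threshold where the false-accept and false-reject rates meet. A hedged sketch of that standard computation on simulated genuine/impostor scores (not THUYG-20 trials):

```python
import numpy as np

# Simulated verification scores: genuine (target-speaker) trials score
# higher on average than impostor (non-target) trials.
rng = np.random.default_rng(2)
genuine = rng.normal(2.0, 1.0, 1000)
impostor = rng.normal(0.0, 1.0, 1000)

# Sweep every observed score as a candidate decision threshold.
thresholds = np.sort(np.concatenate([genuine, impostor]))
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects

# EER: operating point where the two error rates are (nearly) equal.
i = np.argmin(np.abs(far - frr))
eer = (far[i] + frr[i]) / 2
print(f"EER = {eer:.2%}")
```

The same procedure applied to i-vector scoring on short utterances yields the 3.21% vs. 4.01% comparison reported above.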
177.
《British journal of psychology (London, England : 1953)》2017,108(1):28-30
Research on how language acquisition begins has been fragmented both in terms of scientific communities and in terms of the phenomena that are taken to characterize developmental progress. In her article, Marilyn Vihman argues for an integrative approach that takes the child's efforts at speech production as primary, and notes that infants' knowledge of how words sound may accrue over a protracted period developmentally. Here, I briefly discuss how reconceptualization of the process can help integrate perspectives previously at odds.
178.
179.
This paper is concerned with detecting the presence of switching behavior in experimentally obtained posturographic data sets by means of a novel algorithm that is based on a combination of wavelet analysis and Hilbert transform. As a test-bed for the algorithm, we first use a switched model of human balance control during quiet standing with known switching behavior in four distinct configurations. We obtain a time–frequency representation of a signal generated by our model system. We are then able to detect manifestations of discontinuities (switchings) in the signal as spiking behavior. The frequency of switchings, measured by means of our algorithm and detected in our model systems, agrees with the frequency of spiking behavior found in the experimentally obtained posturographic data.
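The core idea, that a discontinuity shows up as a spike after a Hilbert-style transform, can be illustrated on a synthetic signal. This numpy-only sketch (an FFT-based stand-in for `scipy.signal.hilbert`) is not the authors' wavelet-based algorithm, and the switched test signal is invented for illustration.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT; numpy-only stand-in for scipy.signal.hilbert."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Synthetic signal that switches frequency and phase abruptly at t = 1 s.
x = np.where(t < 1.0, np.sin(2 * np.pi * 5 * t),
             np.sin(2 * np.pi * 12 * t + np.pi))

phase = np.unwrap(np.angle(analytic_signal(x)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency

# The switching manifests as a spike in the frequency change;
# trim the edges to ignore FFT boundary artifacts.
jump = np.abs(np.diff(inst_freq))
switch_time = t[50 + np.argmax(jump[50:-50])]
print(switch_time)   # estimated switch time
```

Locating such spikes over time is what lets the frequency of switching events be counted in a posturographic record.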
180.