41.
The use of conversation-related skills by youthful offenders can influence their social interactions with adults, and these behaviors are also likely to be useful to adolescents after release from a treatment program (Journal of Applied Behavior Analysis, 1972, 5, 343–372). Four girls, aged 13 to 15 yr, residing at Achievement Place for Girls in Lawrence, Kansas, received training on conversation-related behaviors, using a multiple-baseline design across youths and across behaviors. Answer-volunteering in response to questions and three nonverbal components (“hand on face”, “hand at rest”, and “facial orientation”) were measured during daily 10-min sessions with a simulated guest in the group home's living room. Answer-volunteering was scored each session as the percentage of 13 “secondary” questions that the simulated guest did not have to ask following 10 “primary” questions. The three nonverbal components were scored for occurrence during 10-sec intervals, and the resulting scores were averaged per session into an overall appropriate-nonverbal score. The girls individually earned points within the home's token economy for participating in each session, and additional points were awarded after training if preselected behavioral criteria were met for each of the two behavior categories per girl. Some training sessions were led by a “teaching-parent” (a specially trained houseparent) and others by individual girls; point consequences were administered both by the teaching-parent and by the “peer-trainers”. Average answer-volunteering during pretraining sessions was 30% for S1, 30% for S2, 23% for S3, and 68% for S4; during posttraining sessions it was 92%, 89%, 90%, and 98%, respectively. Average nonverbal scores during pretraining sessions were 82% for S1, 53% for S2, 60% for S3, and 82% for S4; during posttraining sessions they were 98%, 98%, 98%, and 100%, respectively. Videotapes of the sessions were shown in random sequence to four adults (probation officer, social worker, etc.) who represented “significant others” for the youths' future success in the community. On average, the adults judged the posttraining tapes as more appropriate 100% of the time for S1, 100% for S2, 90% for S3, and 70% for S4. The study demonstrated that training of conversation-related skills is feasible with predelinquent girls, that the girls can help train each other, and that social validation of the training results is possible.
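A minimal sketch of the session-scoring arithmetic described above, assuming a hypothetical data layout (the function names, component keys, and example values are illustrative, not the study's materials):

```python
# Illustrative scoring for one session; names and data are hypothetical.

COMPONENTS = ("hand_on_face", "hand_at_rest", "facial_orientation")

def answer_volunteering(secondary_asked: int, total_secondary: int = 13) -> float:
    """Percent of the 13 'secondary' questions the guest did NOT have to ask."""
    return 100.0 * (total_secondary - secondary_asked) / total_secondary

def nonverbal_score(intervals: list[dict]) -> float:
    """Average per-interval proportion of components scored as appropriate.

    Each interval maps a component name to True if that component was
    scored as appropriate during the 10-sec interval.
    """
    per_interval = [sum(iv[c] for c in COMPONENTS) / len(COMPONENTS)
                    for iv in intervals]
    return 100.0 * sum(per_interval) / len(per_interval)

# Example session: the guest had to ask 2 secondary questions; 3 intervals scored.
print(answer_volunteering(2))  # -> ~84.6
print(nonverbal_score([
    {"hand_on_face": True, "hand_at_rest": True, "facial_orientation": True},
    {"hand_on_face": False, "hand_at_rest": True, "facial_orientation": True},
    {"hand_on_face": True, "hand_at_rest": False, "facial_orientation": True},
]))  # -> ~77.8
```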
42.
Developments in the interfaces of smart devices have led to voice-interactive systems. An additional step in this direction is to enable the devices to recognize the speaker, but this is a challenging task because the interaction involves short-duration speech utterances. Traditional Gaussian mixture model (GMM) based systems have achieved satisfactory speaker-recognition results only when the speech segments are sufficiently long. The current state-of-the-art method uses an i-vector approach built on a GMM-based universal background model (GMM-UBM): an i-vector speaker model is prepared from a speaker's enrollment data and used to recognize any new test speech. In this work, we propose a multi-model i-vector system for short speech lengths. We use the open database THUYG-20 for the analysis and development of a short-speech speaker verification and identification system. Using an optimized set of mel-frequency cepstral coefficient (MFCC) features, we achieve an equal error rate (EER) of 3.21%, compared with the previous benchmark EER of 4.01% on the THUYG-20 database. Experiments are conducted for speech lengths as short as 0.25 s, and the results are presented. The proposed method improves on the current i-vector approach for shorter speech lengths, achieving around 28% improvement even for 0.25-s speech samples. We also prepared and tested the proposed approach on our own database of 2500 English-language recordings consisting of actual short speech commands used in voice-interactive systems.
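The EER reported above is the standard operating point where the false-accept and false-reject rates are equal. A minimal sketch of how it can be computed from verification scores, assuming synthetic score distributions (this is an illustration, not the authors' evaluation code):

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Return the EER: the threshold point where FAR and FRR meet.

    genuine  - similarity scores for same-speaker trials
    impostor - similarity scores for different-speaker trials
    """
    eer, best_gap = 1.0, np.inf
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # false-accept rate at threshold t
        frr = np.mean(genuine < t)     # false-reject rate at threshold t
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Synthetic scores for illustration only:
rng = np.random.default_rng(0)
print(equal_error_rate(rng.normal(1.0, 0.5, 1000), rng.normal(0.0, 0.5, 1000)))
```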
43.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (a relative reduction in reaction-time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed at gestures more often than native listeners did. However, only native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners may gaze at gestures more because it is more challenging for them to resolve the degraded auditory cues and to couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared with native listeners. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
44.
Advances in robotics, automation, and artificial intelligence increasingly enable firms to replace human labor with technology, thereby fundamentally transforming how goods and services are produced. From both managerial and societal points of view, it is therefore important to understand demand-side incentives for firms to employ human labor. We begin to address this question by examining for which products and services consumers are more likely to favor human (vs. robotic) labor. In six studies, we demonstrate that consumers prefer human (vs. robotic) labor more for products with higher (vs. lower) symbolic value (e.g., when expressing something about one's beliefs and personality is of greater importance). We theorize that this is because consumers have stronger uniqueness motives in more (vs. less) symbolic consumption contexts (and associate human labor more strongly with product uniqueness). In line with this account, we demonstrate that individual differences in need for uniqueness moderate the interaction between production mode and symbolic motives and that a measure of uniqueness motives mediates the effect of consumption context on preferences for human (vs. robotic) production.
45.
Localisation and navigation of autonomous vehicles (AVs) in static environments are now solved problems, but how to control their interactions with other road users in mixed traffic environments, especially with pedestrians, remains an open question. Recent work has begun to apply game theory to model and control AV-pedestrian interactions as the two parties compete for space on the road while trying to avoid collisions, but this game-theoretic model has so far been developed only in unrealistic laboratory environments. To improve realism, this study empirically examines pedestrian road-crossing behaviour in the presence of approaching autonomous vehicles in more realistic virtual reality (VR) environments. The autonomous vehicles are controlled using game theory, and this study seeks the parameter settings that produce the most comfortable interactions for pedestrians. In a first experiment, participants' trajectories reveal more cautious crossing behaviour in VR than in previous laboratory experiments. In two further experiments, a gradient descent approach is used to investigate participants' preferences for AV driving style. The results show that the majority of participants did not expect the AV to stop in some scenarios, and that their crossing behaviour did not change between the two environments or between car models suggestive of a conventional car and a last-mile-style vehicle. These results provide initial estimates for the game-theoretic parameters needed by future AVs in their pedestrian interactions and, more generally, show how such parameters can be inferred from virtual reality experiments.
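The parameter search described above can be illustrated with a toy gradient-descent loop. The loss function, the meaning of the single driving-style parameter, and all values below are assumptions for illustration only, not the study's actual procedure:

```python
# Hypothetical sketch: fit one driving-style parameter theta (e.g., the AV's
# assertiveness in a game-theoretic controller) by gradient descent on a
# participant-discomfort loss.

def discomfort(theta: float, preferred: float = 0.6) -> float:
    """Toy loss: squared distance from a participant's preferred style."""
    return (theta - preferred) ** 2

def fit_driving_style(theta0: float = 0.0, lr: float = 0.1,
                      steps: int = 50, eps: float = 1e-4) -> float:
    theta = theta0
    for _ in range(steps):
        # Finite-difference gradient, as one might estimate from noisy ratings.
        grad = (discomfort(theta + eps) - discomfort(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

print(fit_driving_style())  # converges toward the preferred style, ~0.6
```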
46.
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross-modal word-learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning (‘same modality’ condition: auditory test after auditory learning, visual test after visual learning) or in the other modality (‘cross-modality’ condition: visual test after auditory learning, auditory test after visual learning). The results revealed that, like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross-modal recognition of words following auditory learning. Interestingly, in contrast to monolinguals, they also demonstrate cross-modal recognition of words following visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross-modal representation of visually learned words.
47.
Individuals with developmental disabilities often do not develop vocal repertoires and thus require augmentative devices. Teaching caregivers to conduct communication training with their children may be one way to foster communication with the device in the natural environment. This study replicates Rosales, Stone, and Rehfeldt (2009), but with an augmentative device. Behavioral skills training was used to teach caregivers to implement mand-training procedures. Caregivers quickly learned to implement mand training with their children, and independent mands increased from pretraining to posttraining observations for two of the three children.
48.
Endre Begby, Ratio, 2020, 33(4), 295–306
This paper aims to show that the Knowledge Norm of Assertion (KNA) can lead to trouble in certain dialectical contexts. Suppose a person knows that p but does not know that they know that p. They assert p in compliance with the KNA. Their interlocutor responds: ‘but do you know that p?’ It will be shown that the KNA blocks the original asserter from providing any good response to this perfectly natural follow-up question, effectively forcing them to retract p from the conversational scoreboard. This finding is not simply of theoretical interest: I will argue that the KNA would allow the retort ‘but do you know that p?’ to be weaponized in strategic communication, serving as a tool for silencing speakers without having to challenge their testimonial contributions on their own merits. Our analysis can thereby provide a new dimension to the study of epistemic injustice, as well as underscoring the importance of considering the norms governing speech acts also from the point of view of non-ideal social contexts.
49.
Medicalization is the process by which conditions (for example, intellectual disability, hyperactivity in children, and posttraumatic stress disorder) become understood as medical disorders. During this process, the medical community often collectively assigns a label to a condition and, consequently, to those who would be said to have the disorder. We argue that there are at least two previously overlooked ways in which this linguistic practice may be wrongful, and sometimes unjust: first, when the initial introduction of a medical label is done without the participation of those individuals who are being labelled, and second, when attempts by those individuals to renegotiate the labels are thwarted or otherwise rendered ineffective. In both cases, we argue, individuals are unfairly excluded from a linguistic practice that would be valuable for them to participate in. Furthermore, we argue that their exclusion depends in part on the authority of the medical institution to ignore their demands for participation. In making this case, we propose the more general claim that participating in the linguistic processes of determining and renegotiating the words that will be used to describe oneself is an exercise of linguistic agency, a capacity that has both instrumental and intrinsic value.
50.
Despite the lack-of-invariance problem (the many-to-many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side-stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real-world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it difficult to use them to provide direct insights into mechanisms that may support HSR. In this brief article, we report preliminary results from a two-layer network that borrows one element from ASR, long short-term memory nodes, which provide dynamic memory for a range of temporal spans. This allows the model to learn to map real speech from multiple talkers to semantic targets with high accuracy, with a human-like timecourse of lexical access and phonological competition. Internal representations emerge that resemble phonetically organized responses in human superior temporal gyrus, suggesting that the model develops a distributed phonological code despite no explicit training on phonetic or phonemic targets. The ability to work with real speech is a major advance for cognitive models of HSR.
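A minimal sketch of the kind of architecture described above, in PyTorch; the layer sizes, feature dimensionality, and semantic-vector dimension are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SpeechToSemantics(nn.Module):
    """Two-layer LSTM mapping acoustic feature frames to semantic vectors."""

    def __init__(self, n_features: int = 40, hidden: int = 256, sem_dim: int = 300):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.readout = nn.Linear(hidden, sem_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, n_features). Emitting one semantic vector per
        # frame lets lexical activation be tracked over time (the timecourse
        # of lexical access mentioned above).
        hidden_states, _ = self.lstm(frames)
        return self.readout(hidden_states)

model = SpeechToSemantics()
dummy = torch.randn(8, 100, 40)   # 8 utterances, 100 frames, 40 features each
print(model(dummy).shape)         # torch.Size([8, 100, 300])
```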