41.
Speech alignment is the tendency for interlocutors to unconsciously imitate one another’s speaking style. Alignment also occurs when a talker is asked to shadow recorded words (e.g., Shockley, Sabadini, & Fowler, 2004). In two experiments, we examined whether alignment could be induced with visual (lipread) speech and with auditory speech. In Experiment 1, we asked subjects to lipread and shadow out loud a model silently uttering words. The results indicate that shadowed utterances sounded more similar to the model’s utterances than did subjects’ nonshadowed read utterances. This suggests that speech alignment can be based on visual speech. In Experiment 2, we tested whether raters could perceive alignment across modalities. Raters were asked to judge the relative similarity between a model’s visual (silent video) utterance and subjects’ audio utterances. The subjects’ shadowed utterances were again judged as more similar to the model’s than were read utterances, suggesting that raters are sensitive to cross-modal similarity between aligned words.
42.
Rosenblum, Miller, and Sanchez (Psychological Science, 18, 392-396, 2007) found that subjects first trained to lip-read a particular talker were then better able to perceive the auditory speech of that same talker, as compared with that of a novel talker. This suggests that the talker experience a perceiver gains in one sensory modality can be transferred to another modality to make that speech easier to perceive. An experiment was conducted to examine whether this cross-sensory transfer of talker experience could occur (1) from auditory to lip-read speech, (2) with subjects not screened for adequate lipreading skill, (3) when both a familiar and an unfamiliar talker are presented during lipreading, and (4) for both old (presentation set) and new words. Subjects were first asked to identify a set of words from a talker. They were then asked to perform a lipreading task from two faces, one of which was of the same talker they heard in the first phase of the experiment. Results revealed that subjects who lip-read from the same talker they had heard performed better than those who lip-read a different talker, regardless of whether the words were old or new. These results add further evidence that learning of amodal talker information can facilitate speech perception across modalities and also suggest that this information is not restricted to previously heard words.
43.
The goal of this study is to compare the handwriting behaviour of true and false writing. Based on the cognitive load and dis-automaticity known to be experienced while communicating a deceptive message, we hypothesized differences in temporal, spatial, and pressure measures, as well as in peak velocities, between the handwriting of true and false messages. Thirty-four participants wrote true and false sentences on a digitizer that is part of a new system called the Computerized Penmanship Evaluation Tool (ComPET). The ComPET evaluates brain-hand performance as manifested through handwriting behaviour and has been found to be a valid measure for detecting the dis-automaticity indicative of certain diseases in the clinical field. Differences were found in mean pressure and in spatial measures (mean stroke length and mean stroke height), but not in temporal measures or in the number of velocity peaks. The use of ComPET in lie detection is discussed.
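The abstract names ComPET's measure families only at a high level. As a rough illustration of how such temporal, spatial, pressure, and velocity-peak measures could be derived from raw digitizer samples (not the actual ComPET implementation), the following Python sketch computes per-stroke aggregates; the data layout, field names, and the simple local-maximum peak rule are all assumptions made for this example.

import numpy as np

def stroke_metrics(strokes):
    # Each stroke is one pen-down segment: a dict with equal-length lists
    # 'x', 'y' (position), 'p' (axial pen pressure), and 't' (time in seconds).
    pressures, lengths, heights, durations, peak_counts = [], [], [], [], []
    for s in strokes:
        x = np.asarray(s["x"], dtype=float)
        y = np.asarray(s["y"], dtype=float)
        p = np.asarray(s["p"], dtype=float)
        t = np.asarray(s["t"], dtype=float)
        pressures.append(p.mean())                 # pressure measure
        step = np.hypot(np.diff(x), np.diff(y))    # point-to-point path increments
        lengths.append(step.sum())                 # spatial: stroke length
        heights.append(y.max() - y.min())          # spatial: stroke height
        durations.append(t[-1] - t[0])             # temporal: stroke duration
        dt = np.maximum(np.diff(t), 1e-6)          # guard against repeated timestamps
        v = step / dt                              # tangential velocity profile
        # Count local maxima of the velocity profile as "peak velocities".
        peaks = int(np.sum((v[1:-1] > v[:-2]) & (v[1:-1] > v[2:])))
        peak_counts.append(peaks)
    return {
        "mean_pressure": float(np.mean(pressures)),
        "mean_stroke_length": float(np.mean(lengths)),
        "mean_stroke_height": float(np.mean(heights)),
        "mean_stroke_duration": float(np.mean(durations)),
        "mean_velocity_peaks": float(np.mean(peak_counts)),
    }

A per-sentence comparison between the true and false writing conditions could then be run on these aggregates; again, this is only a sketch of the kinds of measures named in the abstract, not the validated ComPET pipeline.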
44.
Protein synthesis is required for the expression of enduring memories and long-lasting synaptic plasticity. During cellular proliferation and growth, S6 kinases (S6Ks) are activated and coordinate the synthesis of de novo proteins. We hypothesized that protein synthesis mediated by S6Ks is critical for the manifestation of learning, memory, and synaptic plasticity. We have tested this hypothesis with genetically engineered mice deficient for either S6K1 or S6K2. We have found that S6K1-deficient mice express an early-onset contextual fear memory deficit within one hour of training, a deficit in conditioned taste aversion (CTA), impaired Morris water maze acquisition, and hypoactive exploratory behavior. In contrast, S6K2-deficient mice exhibit decreased contextual fear memory seven days after training, a reduction in latent inhibition of CTA, and normal spatial learning in the Morris water maze. Surprisingly, neither S6K1- nor S6K2-deficient mice exhibited alterations in protein synthesis-dependent late-phase long-term potentiation (L-LTP). However, removal of S6K1, but not S6K2, compromised early-phase LTP expression. Furthermore, we observed that S6K1-deficient mice have elevated basal levels of Akt phosphorylation, which is further elevated following induction of L-LTP. Taken together, our findings demonstrate that removal of S6K1 leads to a distinct array of behavioral and synaptic plasticity phenotypes that are not mirrored by the removal of S6K2. Our observations suggest that neither gene by itself is required for L-LTP but instead may be required for other types of synaptic plasticity required for cognitive processing.
45.
46.
The perceptual brain is designed around multisensory input. Areas once thought dedicated to a single sense are now known to work with multiple senses. It has been argued that the multisensory nature of the brain reflects a cortical architecture for which task, rather than sensory system, is the primary design principle. This supramodal thesis is supported by recent research on human echolocation and multisensory speech perception. In this review, we discuss the behavioural implications of a supramodal architecture, especially as they pertain to auditory perception. We suggest that the architecture implies a degree of perceptual parity between the senses and that cross-sensory integration occurs early and completely. We also argue that a supramodal architecture implies that perceptual experience can be shared across modalities and that this sharing should occur even without bimodal experience. We finish by briefly suggesting areas of future research.
47.
Three individually housed bonnet macaques had long-term experience performing a joystick task in which they could choose between two rewards: viewing color video of a single bonnet group or obtaining a banana-flavored food treat. When these monkeys chose to view social video, they were presented with images of a new group. The change produced absolute increases in responding for social video and an enhanced preference for viewing social video relative to obtaining the food treat, supporting the view that the monkeys were strongly attending to the social content of the videos.
48.
49.
mRNA translation, or protein synthesis, is a major component of the transformation of the genetic code into any cellular activity. This complicated, multistep process is divided into three phases: initiation, elongation, and termination. Initiation is the step at which the ribosome is recruited to the mRNA and is regarded as the major rate-limiting step in translation, while elongation extends the polypeptide chain; both steps are frequent targets for regulation, defined as a change in the rate at which an mRNA is translated per unit time. In the normal brain, control of translation is a key mechanism for regulating memory and synaptic plasticity consolidation, i.e., the off-line processing of acquired information. These regulatory processes may differ between brain structures or neuronal populations. Moreover, dysregulation of translation leads to pathological brain function such as memory impairment. Both normal and abnormal function of the translation machinery is believed to lead to translational up-regulation or down-regulation of a subset of mRNAs. However, identifying these newly synthesized proteins and determining the rates of protein synthesis or degradation taking place in different neuronal types and compartments at different time points in the brain demand new proteomic methods and systems biology approaches. Here, we discuss in detail the relationship between translation regulation and memory or synaptic plasticity consolidation while focusing on a model of a cortical-dependent taste learning task and hippocampal-dependent plasticity. In addition, we describe a novel systems biology perspective to better describe consolidation.
50.
Speech alignment, or the tendency of individuals to subtly imitate each other’s speaking styles, is often assessed by comparing a subject’s baseline and shadowed utterances to a model’s utterances, often through perceptual ratings. These types of comparisons provide information about the occurrence of a change in subject’s speech, but they do not indicate that this change is toward the specific shadowed model. In three experiments, we investigated whether alignment is specific to a shadowed model. Experiment 1 involved the classic baseline-to-shadowed comparison, to confirm that subjects did, in fact, sound more like their model when they shadowed, relative to any preexisting similarities between a subject and a model. Experiment 2 tested whether subjects’ utterances sounded more similar to the model whom they had shadowed or to another, unshadowed model. In Experiment 3, we examined whether subjects’ utterances sounded more similar to the model whom they had shadowed or to another subject who had shadowed a different model. The results of all experiments revealed that subjects sounded more similar to the model whom they had shadowed. This suggests that shadowing-based speech alignment is not just a change, but a change in the direction of the shadowed model, specifically.