31.
Book reviews     
Narmour, E. (1992). The analysis and cognition of melodic complexity: The implication-realization model. Chicago, IL: University of Chicago Press. Pp. xii + 444. ISBN 0-226-56842-3. £39.95 (Hbk).

Gabriel, M., & Moore, J. (Eds.) (1990). Learning and computational neuroscience: Foundations of adaptive networks. Cambridge, MA: MIT Press. Pp. xv + 13. ISBN 0-262-07102-9. £44.95.

Brady, S.A., & Shankweiler, D.P. (Eds.) (1991). Phonological processes in literacy: A tribute to Isabelle Y. Liberman. Hillsdale, NJ: Lawrence Erlbaum Associates Inc. Pp. xxv + 266. ISBN 0-8058-0501-X. £46.95 (Hbk).

Tohkura, Y., Vatikiotis-Bateson, E., & Sagisaka, Y. (Eds.) (1992). Speech perception, production and linguistic structure. Amsterdam: IOS Press. Pp. 463. ISBN 90-5199-084-7. £70.00.
32.
Norris, D., McQueen, J. M., & Cutler, A. (2000). Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences, 23(3), 299-325; discussion 325-370.
Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. The modular Race model (Cutler & Norris 1979) is likewise challenged by some recent results, however. We therefore present a new modular model of phonemic decision making, the Merge model. In Merge, information flows from prelexical processes to the lexicon without feedback. Because phonemic decisions are based on the merging of prelexical and lexical information, Merge correctly predicts lexical involvement in phonemic decisions in both words and nonwords. Computer simulations show how Merge is able to account for the data through a process of competition between lexical hypotheses. We discuss the issue of feedback in other areas of language processing and conclude that modular models are particularly well suited to the problems and constraints of speech recognition.
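To make the architecture concrete, here is a minimal Python sketch of a Merge-style decision stage. Everything in it (the two-word lexicon, the activation values, the update rule) is invented for illustration; it is not the authors' actual simulation.

```python
# Minimal sketch of a Merge-style decision stage. Illustrative only:
# the lexicon, activation values, and update rule are invented, not
# Norris, McQueen, and Cutler's simulation parameters.
WORDS = {"job": ["j", "o", "b"], "jog": ["j", "o", "g"]}

prelexical = {"b": 0.7, "g": 0.3}     # ambiguous final phoneme, /b/-biased
lexical = {w: 0.0 for w in WORDS}

for _ in range(20):                   # let lexical competition settle
    updated = {}
    for word, phones in WORDS.items():
        bottom_up = sum(prelexical.get(p, 1.0) for p in phones)
        inhibition = sum(lexical[rival] for rival in WORDS if rival != word)
        updated[word] = max(0.0, lexical[word]
                            + 0.1 * (bottom_up - inhibition - lexical[word]))
    lexical = updated

# Phonemic decision nodes merge prelexical and lexical evidence;
# crucially, nothing is ever written back into the prelexical level.
decision = {p: prelexical[p]
            + sum(act for w, act in lexical.items() if WORDS[w][-1] == p)
            for p in ("b", "g")}
print(lexical)
print(decision)   # lexical support for "job" boosts the /b/ decision node
```

The point of the sketch is the direction of information flow: lexical activation contributes to the phonemic decision nodes, but the prelexical level receives no feedback.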
33.
The Journal of Ethics - This paper establishes what constitutes a sexual interaction between two or more people. It does this by first defining a sexual activity as one in which the agent intends...
34.
The development of reading-related phonological processing abilities (PPA) represents an important developmental milestone in the process of learning to read. In this cross-sectional study, confirmatory factor analysis was used to examine the structure of PPA in 129 younger preschoolers (M = 40.88 months, SD = 4.65) and 304 older preschoolers (M = 56.49 months, SD = 5.31). A 2-factor model in which phonological awareness and phonological memory were represented by one factor and lexical access was represented by a second factor provided the best fit for both samples and was largely invariant across samples. Measures of vocabulary, cognitive abilities, and print knowledge were significantly correlated with both factors, but phonological awareness/memory had unique relations with word reading. Despite significant development of PPA across the preschool years and into kindergarten, these results show that the structure of these skills remains invariant.
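As an illustration of the modeling step, a two-factor CFA of this general shape can be written in lavaan-style syntax, for example with the semopy package in Python. The indicator names and the data file below are invented placeholders, not the study's actual measures:

```python
# Hypothetical two-factor CFA sketch in semopy (lavaan-style syntax).
# Indicator names and the file name are placeholders, not the study's
# actual battery.
import pandas as pd
import semopy

MODEL_DESC = """
AwareMem =~ blending + elision + digit_span + nonword_rep
LexAccess =~ ran_objects + ran_colors
AwareMem ~~ LexAccess
"""
# AwareMem: phonological awareness + phonological memory on one factor
# LexAccess: lexical access (rapid naming) on a second factor
# The ~~ line allows the two factors to correlate.

data = pd.read_csv("preschool_ppa.csv")   # placeholder data file
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())   # loadings, factor covariance, standard errors
```

Invariance across the two age samples would then be probed by fitting the same specification to each group and comparing constrained and unconstrained solutions.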
35.
Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., "strawberry"), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing "strawberry"? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual-conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
36.
Four eyetracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed word displays and used during language-mediated visual search. Participants listened to sentences containing target words that were similar semantically or in shape to concepts invoked by concurrently displayed printed words. In Experiment 1, the displays contained semantic and shape competitors of the targets along with two unrelated words. There were significant shifts in eye gaze as targets were heard toward semantic but not toward shape competitors. In Experiments 2–4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eyetracking task. In all cases, there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2,500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases toward particular modes of processing during language-mediated visual search.
37.
A series of eye-tracking and categorization experiments investigated the use of speaking-rate information in the segmentation of Dutch ambiguous-word sequences. Juncture phonemes with ambiguous durations (e.g., [s] in 'eens (s)peer,' "once (s)pear," [t] in 'nooit (t)rap,' "never staircase/quick") were perceived as longer and hence more often as word-initial when following a fast than a slow context sentence. Listeners used speaking-rate information as soon as it became available. Rate information from a context proximal to the juncture phoneme and from a more distal context was used during on-line word recognition, as reflected in listeners' eye movements. Stronger effects of distal context, however, were observed in the categorization task, which measures the off-line results of the word-recognition process. In categorization, the amount of rate context had the greatest influence on the use of rate information, but in eye tracking, the rate information's proximal location was the most important. These findings constrain accounts of how speaking rate modulates the interpretation of durational cues during word recognition by suggesting that rate estimates are used to evaluate upcoming phonetic information continuously during prelexical speech processing.
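The core computation can be caricatured as rate normalization: the same absolute [s] duration counts as long after a fast context and short after a slow one. A toy Python sketch, with invented durations and threshold (not the paper's model or stimulus values):

```python
# Toy rate-normalization sketch (durations and threshold are invented):
# a juncture phoneme's duration is judged relative to the speaking rate
# of the preceding context sentence.
def segment(phoneme_ms: float, context_syllable_ms: float) -> str:
    """context_syllable_ms: mean syllable duration of the context."""
    relative = phoneme_ms / context_syllable_ms
    # A relatively long [s] is parsed as word-initial ("eens speer");
    # a relatively short [s] closes the preceding word ("eens peer").
    return "word-initial (speer)" if relative > 0.45 else "word-final (peer)"

for rate in (150.0, 250.0):          # fast vs. slow context
    print(rate, segment(90.0, rate))
# The same 90 ms [s] sounds long, hence word-initial, only after
# the fast context.
```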
38.
Recent evidence shows that listeners use abstract prelexical units in speech perception. Using the phenomenon of lexical retuning in speech processing, we ask whether those units are necessarily phonemic. Dutch listeners were exposed to a Dutch speaker producing ambiguous phones between the Dutch syllable-final allophones approximant [r] and dark [l]. These ambiguous phones replaced either final /r/ or final /l/ in words in a lexical-decision task. This differential exposure affected perception of ambiguous stimuli on the same allophone continuum in a subsequent phonetic-categorization test: Listeners exposed to ambiguous phones in /r/-final words were more likely to perceive test stimuli as /r/ than listeners with exposure in /l/-final words. This effect was not found for test stimuli on continua using other allophones of /r/ and /l/. These results confirm that listeners use phonological abstraction in speech perception. They also show that context-sensitive allophones can play a role in this process, and hence that context-insensitive phonemes are not necessary. We suggest there may be no one unit of perception.
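The retuning effect itself amounts to a shift of a category boundary after biased exposure. A toy sketch, with an invented continuum, boundary, and shift size (for illustration only):

```python
# Toy lexical-retuning sketch; the continuum coding, boundary, and
# shift size are invented values, not the paper's stimuli or model.
def categorize(stimulus: float, boundary: float) -> str:
    """stimulus: position on an [r]-[l] allophone continuum
    (0.0 = clear approximant [r], 1.0 = clear dark [l])."""
    return "/r/" if stimulus < boundary else "/l/"

pre_boundary = 0.50
# Hearing ambiguous phones in /r/-final words teaches the listener that
# this talker's /r/ extends further toward [l]: the boundary shifts up.
post_boundary = pre_boundary + 0.10

ambiguous = 0.55
print(categorize(ambiguous, pre_boundary))    # before exposure: /l/
print(categorize(ambiguous, post_boundary))   # after /r/-biased exposure: /r/
```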
39.
We propose that speech comprehension involves the activation of token representations of the phonological forms of current lexical hypotheses, separately from the ongoing construction of a conceptual interpretation of the current utterance. In a series of cross-modal priming experiments, facilitation of lexical decision responses to visual target words (e.g., time) was found for targets that were semantic associates of auditory prime words (e.g., date) when the primes were isolated words, but not when the same primes appeared in sentence contexts. Identity priming (e.g., faster lexical decisions to visual date after spoken date than after an unrelated prime) appeared, however, both with isolated primes and with primes in prosodically neutral sentences. Associative priming in sentence contexts only emerged when sentence prosody involved contrastive accents, or when sentences were terminated immediately after the prime. Associative priming is therefore not an automatic consequence of speech processing. In no experiment was there associative priming from embedded words (e.g., sedate-time), but there was inhibitory identity priming (e.g., sedate-date) from embedded primes in sentence contexts. Speech comprehension therefore appears to involve separate distinct activation both of token phonological word representations and of conceptual word representations. Furthermore, both of these types of representation are distinct from the long-term memory representations of word form and meaning.
40.
An eye-tracking study examined the involvement of prosodic knowledge (specifically, the knowledge that monosyllabic words tend to have longer durations than the first syllables of polysyllabic words) in the recognition of newly learned words. Participants learned new spoken words (by associating them to novel shapes): bisyllables and onset-embedded monosyllabic competitors (e.g., baptoe and bap). In the learning phase, the duration of the ambiguous sequence (e.g., bap) was held constant. In the test phase, its duration was longer than, shorter than, or equal to its learning-phase duration. Listeners' fixations indicated that short syllables tended to be interpreted as the first syllables of the bisyllables, whereas long syllables generated more monosyllabic-word interpretations. Recognition of newly acquired words is influenced by prior prosodic knowledge and is therefore not determined solely on the basis of stored episodes of those words.