Similar Articles
Found 20 similar articles (search time: 0 ms)
1.
2.
While there is ample evidence that children treat words as mutually exclusive, the cognitive basis of this bias is widely debated. We focus on the distinction between pragmatic and lexical constraints accounts. High-functioning children with autism spectrum disorders (ASD) offer a unique perspective on this debate, as they acquire substantial vocabularies despite impoverished social-pragmatic skills. We tested children and adolescents with ASD in a paradigm examining mutual exclusivity for words and facts. Words were interpreted contrastively more often than facts. Word performance was associated with vocabulary size; fact performance was associated with social-communication skills. Thus mutual exclusivity does not appear to be driven by pragmatics, suggesting that it is either a lexical constraint or a reflection of domain-general learning processes.

3.
4.
The statement "Some elephants have trunks" is logically true but pragmatically infelicitous. Whilst some is logically consistent with all, it is often pragmatically interpreted as precluding all. In Experiments 1 and 2, we show that with pragmatically impoverished materials, sensitivity to the pragmatic implicature associated with some is apparent earlier in development than has previously been found. Amongst 8-year-old children, we observed much greater sensitivity to the implicature in pragmatically enriched contexts. Finally, in Experiment 3, we found that amongst adults, logical responses to infelicitous some statements take longer to produce than do logical responses to felicitous some statements, and that working memory capacity predicts the tendency to give logical responses to the former kind of statement. These results suggest that some adults develop the ability to inhibit a pragmatic response in favour of a logical answer. We discuss the implications of these findings for theories of pragmatic inference.

5.
We explore mutual information (MI) as a means of characterizing linguistic statistical structure. The MI between two linguistic tokens x and y is the degree to which seeing x helps us anticipate the occurrence of y. We computed MI between words in 595 samples of written text in 25 languages. Our analyses indicate that MI dependencies do not extend beyond a range of five words. Moreover, the similarity between MI profiles of different languages was used to cluster the languages. These results are discussed in terms of a putative link between short-term memory and linguistic structure and the further utility of MI in terms of characterizing the latter.
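As an illustrative sketch (not the study's actual pipeline), the distance-limited MI described in this abstract can be estimated from raw token counts; the function name `mi_at_distance` and the toy text are assumptions for demonstration only:

```python
import math
from collections import Counter

def mi_at_distance(tokens, d):
    """Mutual information (bits) between words separated by exactly d positions.

    I(X; Y) = sum over (x, y) of p(x, y) * log2( p(x, y) / (p(x) * p(y)) ),
    where (x, y) ranges over the token pairs (tokens[i], tokens[i + d]).
    """
    pairs = Counter(zip(tokens, tokens[d:]))
    total = sum(pairs.values())
    # Marginal counts taken over the same pair sample, so the sums match.
    left = Counter(x for x, _ in pairs.elements())
    right = Counter(y for _, y in pairs.elements())
    mi = 0.0
    for (x, y), n in pairs.items():
        p_xy = n / total
        mi += p_xy * math.log2(p_xy / ((left[x] / total) * (right[y] / total)))
    return mi

text = "the cat sat on the mat the cat ate the rat".split()
print(mi_at_distance(text, 1))
```

Computing this for d = 1, 2, 3, ... over a corpus yields the kind of decaying MI profile the abstract reports flattening out beyond about five words.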

6.
Recent research has considered the phonological specificity of children's word representations, but few studies have examined the flexibility of those representations. Tolerating acoustic-phonetic deviations has been viewed as a negative in terms of discriminating minimally different word forms, but may be a positive in an increasingly multicultural society where children encounter speakers with variable accents. To explore children's on-line processing of accented speech, preschoolers heard atypically pronounced words (e.g. 'fesh', from fish) and selected pictures from a four-item display as eye movements were tracked. Children recognized similarity between typical and accented variants, selecting the fish overwhelmingly when hearing 'fesh' (Experiment 1), even when a novel-picture alternative was present (Experiment 2). However, eye movements indicated slowed on-line recognition of accented relative to typical variants. Novel-picture selections increased with feature distance from familiar forms, but were similarly sensitive to vowel, onset, and coda changes (Experiment 3). Implications for child accent processing and mutual exclusivity are discussed.

7.
Booth AE, Waxman SR. Cognition, 2002, 84(1): B11-B22
We examined electrophysiological correlates of conscious change detection versus change blindness for equivalent displays. Observers had to detect any changes, across a visual interruption, between a pair of successive displays. Each display comprised grey circles on a background of alternate black and white stripes. Foreground changes arose when light-grey circles turned dark-grey and vice-versa. Physically stronger background changes arose when all black stripes turned white and vice-versa. Despite their physical strength, background changes were undetected unless attention was directed to them, whereas foreground changes were invariably seen. Event-related potentials revealed that the P300 component was suppressed for unseen background changes, as compared with the same changes when seen. This effect arose first over frontal sites, and then spread to parietal sites. These results extend recent fMRI findings that fronto-parietal activation is associated with conscious visual change detection, to reveal the timing of these neural correlates.

8.
Five experiments were designed to examine whether subjects attend to different aspects of meaning for familiar and unfamiliar words. In Experiments 1–3, subjects gave free associations to high- and low-familiarity words from the same taxonomic category (e.g., seltzer:sarsparilla; Experiment 1), from the same noun synonym set (e.g., baby:neonate; Experiment 2), and from the same verb synonym set (e.g., abscond:escape; Experiment 3). In Experiments 4 and 5, subjects first read a context sentence containing the stimulus word and then gave associations; stimuli were novel words or either high- or low-familiarity nouns. Low-familiarity and novel words elicited more nonsemantically based responses (e.g., engram:graham) than did high-familiarity words. Of the responses semantically related to the stimulus, low-familiarity and novel words elicited a higher proportion of definitional responses [category (e.g., sarsparilla:soda), synonym (e.g., neonate:newborn), and coordinate (e.g., armoire:dresser)], whereas high-familiarity stimuli elicited a higher proportion of event-based responses [thematic (e.g., seltzer:glass) and noun:verb (e.g., baby:cry)]. Unfamiliar words appear to elicit a shift of attentional resources from relations useful in understanding the message to relations useful in understanding the meaning of the unfamiliar word.

9.
This study investigated spreading activation for words presented to the left and right hemispheres using an automatic semantic priming paradigm. Three types of semantic relations were used: similar-only (Deer-Pony), associated-only (Bee-Honey), and similar + associated (Doctor-Nurse). Priming of lexical decisions was symmetrical over visual fields for all semantic relations when prime words were centrally presented. However, when primes and targets were lateralized to the same visual field, similar-only priming was greater in the LVF than in the RVF, no priming was obtained for associated-only words, and priming was equivalent over visual fields for similar + associated words. Similar results were found using a naming task. These findings suggest that it is important to lateralize both prime and target information to assess hemisphere-specific spreading activation processes. Further, while spreading activation occurs in either hemisphere for the most highly related words (those related by category membership and association), our findings suggest that automatic access to semantic category relatedness occurs primarily in the right cerebral hemisphere. These results imply a unique role for the right hemisphere in the processing of word meanings. We relate our results to our previous proposal (Burgess & Simpson, 1988a; Chiarello, 1988c) that there is rapid selection of one meaning and suppression of other candidates in the left hemisphere, while activation spreads more diffusely in the right hemisphere. We also outline a new proposal that activation spreads in a different manner for associated words than for words related by semantic similarity.

10.
A well-established finding in the simulation literature is that participants simulate the positive argument of negation soon after reading a negative sentence, prior to simulating a scene consistent with the negated sentence (Kaup, Lüdtke, & Zwaan, 2006; Kaup, Yaxley, Madden, Zwaan, & Lüdtke, 2007). One interpretation of this finding is that negation requires two steps to process: first represent what is being negated, then "reject" that in favour of a representation of a negation-consistent state of affairs (Kaup et al., 2007). In this paper we argue that this finding with negative sentences could be a by-product of the dynamic way that language is interpreted relative to a common ground and not the way that negation is represented. We present a study based on Kaup et al. (2007) that tests the competing accounts. Our results suggest that some negative sentences are not processed in two steps, but provide support for the alternative, dynamic account.

12.
In a delayed matching task, the influence of spatial congruence between study and test on visual short-term memory for geometric figures and words was investigated. Subjects processed series of pictures which showed three words or three geometric figures arranged as rows or as triangular configurations. At test, the elements were presented in the identical or in the alternative configuration as at study. In the non-matching case, one of the studied elements was exchanged. The delay was 5 s. Subjects judged whether the elements were the same as during study, independent of their configuration. In Exp. 1, pictures of figures and words were mixed within one list. For both modalities, the response times were longer if the configuration at test was incongruent to the one at study. This contradicts the results of Santa, who observed effects of spatial congruency for figures, but not for words. In Exp. 2 we therefore presented the same material as in Exp. 1, but now the lists were modality-pure, as in the experiment of Santa – i.e., words and figures were shown in different lists. This time, spatial incongruency impaired recognition of the figures, but not recognition of the words. These results show that in a non-verbal context, isolated visually presented words are spatially encoded as non-verbal stimuli (figures) are. However, the word stimuli are encoded differently if the task is a pure verbal one. In the latter case, spatial information is discarded. Received: 9 September 1997 / Accepted: 30 March 1998

13.
Computational models of lexical semantics, such as latent semantic analysis, can automatically generate semantic similarity measures between words from statistical redundancies in text. These measures are useful for experimental stimulus selection and for evaluating a model's cognitive plausibility as a mechanism that people might use to organize meaning in memory. Although humans are exposed to enormous quantities of speech, practical constraints limit the amount of data that many current computational models can learn from. We follow up on previous work evaluating a simple metric of pointwise mutual information. Controlling for confounds in previous work, we demonstrate that this metric benefits from training on extremely large amounts of data and correlates more closely with human semantic similarity ratings than do publicly available implementations of several more complex models. We also present a tool for building simple, scalable models from large corpora quickly and efficiently.
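The pointwise mutual information metric this abstract evaluates can be sketched in a few lines; the following is a minimal illustration with an assumed window-based co-occurrence count, not the authors' implementation:

```python
import math
from collections import Counter

def pmi_table(tokens, window=2):
    """Pointwise mutual information for co-occurring word pairs.

    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ), with p(x, y) estimated
    from co-occurrence counts within +/-window positions of each token.
    """
    unigrams = Counter(tokens)
    n = len(tokens)
    cooc = Counter()
    for i, x in enumerate(tokens):
        for j in range(max(0, i - window), min(n, i + window + 1)):
            if j != i:
                cooc[(x, tokens[j])] += 1
    total = sum(cooc.values())
    return {
        (x, y): math.log2((c / total) / ((unigrams[x] / n) * (unigrams[y] / n)))
        for (x, y), c in cooc.items()
    }

pmi = pmi_table("strong tea strong tea weak coffee weak coffee".split(), window=1)
# Collocates score higher than chance pairings:
print(pmi[("strong", "tea")] > pmi[("tea", "weak")])  # → True
```

Because the counts are simple sums, this scheme scales to arbitrarily large corpora by streaming, which is the property the abstract highlights.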

14.
15.
Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating 'perceptual narrowing' in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms. Here I review evidence from production and then consider how the implicit and explicit learning mechanisms assumed by the complementary memory systems model might be understood as reconciling the two approaches.

16.
17.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is in speech perception: specifically, learning the sound system of one's native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.
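The "statistical analysis of auditory input" account that this study argues is incomplete can be caricatured as unsupervised clustering of a bimodal acoustic distribution. The sketch below is a toy illustration under that assumption (two-cluster k-means over made-up voice-onset-time values), not anything from the article itself:

```python
def kmeans_1d(values, iters=20):
    """Split 1-D values into two clusters; return the sorted cluster centers."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # Assign each value to the nearer center (True indexes group 1).
            groups[abs(v - centers[1]) < abs(v - centers[0])].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

# Hypothetical bimodal VOT distribution (ms): short-lag /b/-like tokens
# versus long-lag /p/-like tokens.
vots = [5, 8, 10, 12, 15, 55, 60, 62, 65, 70]
print(kmeans_1d(vots))  # → [10.0, 62.4]
```

A purely distributional learner of this kind needs no referents; the experiments above suggest infants additionally exploit consistent object-sound pairings, which this sketch has no way to represent.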

18.
19.
In two experiments (Ns = 144 and 192), second, fourth, and sixth graders learned pairs of pictures or of words, and were tested for both item learning (either with pictures or words) and associative learning. By analysis of false recognition errors, it was determined that implicit verbal labeling occurred only among the older two groups. However, there was no evidence that this labeling affected paired-associate learning.

20.
Changes to our everyday activities mean that adult language users need to learn new meanings for previously unambiguous words. For example, we need to learn that a "tweet" is not only the sound a bird makes, but also a short message on a social networking site. In these experiments, adult participants learned new fictional meanings for words with a single dominant meaning (e.g., "ant") by reading paragraphs that described these novel meanings. Explicit recall of these meanings was significantly better when there was a strong semantic relationship between the novel meaning and the existing meaning. This relatedness effect emerged after relatively brief exposure to the meanings (Experiment 1), but it persisted when training was extended across 7 days (Experiment 2) and when semantically demanding tasks were used during this extended training (Experiment 3). A lexical decision task was used to assess the impact of learning on online recognition. In Experiment 3, participants responded more quickly to words whose new meaning was semantically related than to those with an unrelated meaning. This result is consistent with earlier studies showing an effect of meaning relatedness on lexical decision, and it indicates that these newly acquired meanings become integrated with participants' preexisting knowledge about the meanings of words.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号