Similar Articles
20 similar articles found
1.
It is commonly held that implicit learning is based largely on familiarity. It is also commonly held that familiarity is not affected by intentions. It follows that people should not be able to use familiarity to distinguish strings from two different implicitly learned grammars. In two experiments, subjects were trained on two grammars and then asked to endorse strings from only one of the grammars. Subjects also rated how familiar each string felt and reported whether or not they used familiarity to make their grammaticality judgment. We found subjects could endorse the strings of just one grammar and ignore the strings from the other. Importantly, when subjects said they were using familiarity, the rated familiarity for test strings consistent with their chosen grammar was greater than that for strings from the other grammar. Familiarity, subjectively defined, is sensitive to intentions and can play a key role in strategic control.
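The artificial grammars in studies like this are typically small finite-state machines that generate letter strings. A minimal sketch of the setup, using two made-up toy grammars (the states, letters, and transitions below are illustrative assumptions, not the grammars actually used in the experiments):

```python
import random

# Two toy finite-state grammars, each a map from state to allowed
# (letter, next_state) transitions. State 0 is the start; a state with
# no outgoing transitions is accepting. Illustrative only.
GRAMMAR_A = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("V", 2), ("X", 3)],
    3: [("V", 4)],
    4: [],  # accepting state
}
GRAMMAR_B = {
    0: [("M", 1), ("R", 2)],
    1: [("W", 1), ("Z", 3)],
    2: [("K", 2), ("Z", 3)],
    3: [("K", 4)],
    4: [],
}

def generate(grammar, rng):
    """Produce a training string by a random walk from the start state."""
    state, letters = 0, []
    while grammar[state]:
        letter, state = rng.choice(grammar[state])
        letters.append(letter)
    return "".join(letters)

def grammatical(string, grammar):
    """True if the string traces some path from the start to an accepting state."""
    states = {0}
    for ch in string:
        states = {nxt for s in states
                  for (letter, nxt) in grammar[s] if letter == ch}
        if not states:
            return False
    return any(not grammar[s] for s in states)
```

Because the two grammars use disjoint letter sets, a string generated from one grammar is never grammatical under the other, which is what makes selective endorsement of "only grammar A" strings measurable.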

2.
We address Jacoby’s (1991) proposal that strategic control over knowledge requires conscious awareness of that knowledge. In a two-grammar artificial grammar learning experiment all participants were trained on two grammars, consisting of a regularity in letter sequences, while two other dimensions (colours and fonts) varied randomly. Strategic control was measured as the ability to selectively apply the grammars during classification. For each classification, participants also made a combined judgement of (a) decision strategy and (b) relevant stimulus dimension. Strategic control was found for all types of decision strategy, including trials where participants claimed to lack conscious structural knowledge. However, strong evidence of strategic control only occurred when participants knew or guessed that the letter dimension was relevant, suggesting that strategic control might be associated with – or even causally require – global awareness of the nature of the rules even though it does not require detailed knowledge of their content.

3.
Cognitive processes are often attributed to statistical or symbolic general-purpose mechanisms. Here we show that some spontaneous generalizations are driven by specialized, highly constrained symbolic operations. We explore how two types of artificial grammars are acquired, one based on repetitions and the other on characteristic relations between tones ("ordinal" grammars). Whereas participants readily acquire repetition-based grammars, displaying early electrophysiological responses to grammar violations, they perform poorly with ordinal grammars, displaying no such electrophysiological responses. This outcome is problematic for both general symbolic and statistical models, which predict that both types of grammars should be processed equally easily. This suggests that some simple grammars are acquired using perceptual primitives rather than general-purpose mechanisms; such primitives may be elements of a "toolbox" of specialized computational heuristics, which may ultimately allow constructing a psychological theory of symbol manipulation.

4.
In this paper we present learning algorithms for classes of categorial grammars restricted by negative constraints. We modify learning functions of Kanazawa [10] and apply them to these classes of grammars. We also prove the learnability of the intersection of the class of minimal grammars with the class of k-valued grammars. Presented by Wojciech Buszkowski

5.
In Norman, Price, and Jones (2011), we argued that the ability to apply two sets of grammar rules flexibly from trial to trial on a “mixed-block” AGL classification task indicated strategic control over knowledge that was less than fully explicit. Jiménez (2011) suggested that our results do not in themselves prove that participants learned – and strategically controlled – complex properties of the structures of the grammars, but that they may be accounted for by learning of simple letter frequencies. We first explain why our main conclusions regarding strategic control and conscious awareness are separable from this criticism. We then report additional data which show that our participants’ ability to discriminate between the two grammars was not attributable to differences in simple letter frequencies.

6.
A recent study by Fitch and Hauser reported that finite-state grammars can be learned by non-human primates, whereas phrase-structure grammars cannot. Humans, by contrast, learn both grammars easily. This species difference is taken as the critical juncture in the evolution of the human language faculty. Given the far-reaching relevance of this conclusion, the question arises as to whether the distinction between these two types of grammars finds its reflection in different neural systems within the human brain.

7.
Many believe that the grammatical sentences of a natural language are a recursive set. In this paper I argue that the commonly adduced grounds for this belief are inconclusive, if not simply unsound. Neither the native speaker's ability to classify sentences nor his ability to comprehend them requires it. Nor is there at present any reason to think that decidability has any bearing on first-language acquisition. I conclude that there are at present no compelling theoretical grounds for requiring that transformational grammars enumerate only recursive sets. Hence, the fact that proposed transformational grammars do not satisfy this requirement does not, as some have claimed, represent a shortcoming in current theory.

8.
A recent hypothesis in empirical brain research on language is that the fundamental difference between animal and human communication systems is captured by the distinction between finite-state and more complex phrase-structure grammars, such as context-free and context-sensitive grammars. However, the relevance of this distinction for the study of language as a neurobiological system has been questioned and it has been suggested that a more relevant and partly analogous distinction is that between non-adjacent and adjacent dependencies. Online memory resources are central to the processing of non-adjacent dependencies as information has to be maintained across intervening material. One proposal is that an external memory device in the form of a limited push-down stack is used to process non-adjacent dependencies. We tested this hypothesis in an artificial grammar learning paradigm where subjects acquired non-adjacent dependencies implicitly. Generally, we found no qualitative differences between the acquisition of non-adjacent dependencies and adjacent dependencies. This suggests that although the acquisition of non-adjacent dependencies requires more exposure to the acquisition material, it utilizes the same mechanisms used for acquiring adjacent dependencies. We challenge the push-down stack model further by testing its processing predictions for nested and crossed multiple non-adjacent dependencies. The push-down stack model is partly supported by the results, and we suggest that stack-like properties are some among many natural properties characterizing the underlying neurophysiological mechanisms that implement the online memory resources used in language and structured sequence processing.
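The push-down stack prediction can be made concrete with a toy checker. A last-in-first-out stack naturally verifies nested (centre-embedded) dependencies, while crossed (cross-serial) dependencies instead fall out of a first-in-first-out queue — which is why the two patterns discriminate between memory models. The A1/B1 pairings below are hypothetical stand-ins for dependent elements, not the study's actual stimuli:

```python
from collections import deque

# Each opener Ai predicts a later dependent Bi (hypothetical pairing).
PAIRS = {"A1": "B1", "A2": "B2", "A3": "B3"}

def check(sequence, memory="stack"):
    """Return True if every dependent Bi matches the prediction made by
    the chosen memory discipline (LIFO stack vs. FIFO queue)."""
    store = deque()
    for token in sequence:
        if token in PAIRS:                # an opener: store its predicted dependent
            store.append(PAIRS[token])
        else:                             # a dependent: must match the prediction
            if not store:
                return False
            expected = store.pop() if memory == "stack" else store.popleft()
            if token != expected:
                return False
    return not store                      # all predictions must be discharged

nested  = ["A1", "A2", "A3", "B3", "B2", "B1"]  # centre-embedded
crossed = ["A1", "A2", "A3", "B1", "B2", "B3"]  # cross-serial
```

A pure stack accepts the nested sequence but rejects the crossed one; a queue does the reverse. Behavioural data showing comparable learning of both patterns is therefore awkward for a strict push-down account.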

9.
In response to concerns with existing procedures for measuring strategic control over implicit knowledge in artificial grammar learning (AGL), we introduce a more stringent measurement procedure. After two separate training blocks which each consisted of letter strings derived from a different grammar, participants either judged the grammaticality of novel letter strings with respect to only one of these two grammars (pure-block condition), or had the target grammar varying randomly from trial to trial (novel mixed-block condition) which required a higher degree of conscious flexible control. Random variation in the colour and font of letters was introduced to disguise the nature of the rule and reduce explicit learning. Strategic control was observed both in the pure-block and mixed-block conditions, and even among participants who did not realise the rule was based on letter identity. This indicated detailed strategic control in the absence of explicit learning.

10.
In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word-order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross-linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners' inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization—Greenberg's Universal 18—which bans a particular word-order pattern relating nouns, adjectives, and numerals.

11.
Previous research has established that people can implicitly learn chunks, which (in terms of formal language theory) do not require a memory buffer to process. The present study explores the implicit learning of nonlocal dependencies generated by higher than finite-state grammars, specifically, Chinese tonal retrogrades (i.e. centre embeddings generated from a context-free grammar) and inversions (i.e. cross-serial dependencies generated from a mildly context-sensitive grammar), which do require buffers (for example, last in-first out and first in-first out, respectively). People were asked to listen to and memorize artificial poetry instantiating one of the two grammars; after this training phase, people were informed of the existence of rules and asked to classify new poems, while providing attributions of the basis of their judgments. People acquired unconscious structural knowledge of both tonal retrogrades and inversions. Moreover, inversions were implicitly learnt more easily than retrogrades, constraining the nature of the memory buffer in computational models of implicit learning.

12.
Parsing to Learn     
Learning a language by parameter setting is almost certainly less onerous than composing a grammar from scratch. But recent computational modeling of how parameters are set has shown that it is not at all the simple mechanical process sometimes imagined. Sentences must be parsed to discover the properties that select between parameter values. But the sentences that drive learning cannot be parsed with the learner's current grammar. And there is not much point in parsing them with just one new grammar. They must apparently be parsed with all possible grammars, in order to find out which one is most successful at licensing the language. The research task is to reconcile this with the fact that the human sentence parsing mechanism, even in adults, has only very limited parallel parsing capacity. I have proposed that all possible grammars can be folded into one, if parameter values are fragments of sentential tree structures that the parser can make use of where necessary to assign a structure to an input sentence. However, the problem of capacity limitations remains. The combined grammar will afford multiple analyses for some sentences, too many to be computed on-line. I propose that the parser computes only one analysis per sentence but can detect ambiguity, and that the learner makes use of unambiguous input only. This provides secure information but relatively little of it, particularly at early stages of learning where few grammars have been excluded and ambiguity is rife. I consider three solutions: improving the parser's ability to extract unambiguous information from partially ambiguous sentences; assuming default parameter values to temporarily eliminate ambiguity; and reconfiguring the parameters so that some are subordinate to others and do not present themselves to the learner until the others have been set.
A more radical alternative is to give up the quest for error-free learning and permit parameters to be set without regard for whether the parser may have overlooked an alternative analysis of the sentence. If it can be assumed that the human parser keeps a running tally of the parameter values it has accessed, then the learner would do nothing other than parse sentences for comprehension, as adults do. The most useful parameter values would become more and more easily accessed; the noncontributors would drop out of the running. There would be no learning mechanism at all, over and above the parser. But how accurate this system would be remains to be established.

13.
14.
We investigated whether tufted capuchin monkeys (Cebus apella) learn from others' mistakes. We prepared three kinds of transparent containers having the same appearance: one that could be opened by the lid, one that could be opened from the bottom, and one that could be opened either way. Using each of the first two one-way-open type containers, the monkeys were trained to copy the human demonstrator's action to open the container and obtain a piece of sweet potato contained therein. After this training, the demonstrator showed the monkeys an action that would open or fail to open the third, two-way-open type container. None of the monkeys reliably opened the container by spontaneously compensating for the demonstrator's failure (Experiment 1). In Experiment 2, the same subjects were trained to correct their own mistakes immediately after failure, before we introduced the same test as in Experiment 1. This experience did not result in subjects using the demonstrator's failure to produce a successful action. In Experiment 3, we placed two monkeys face to face. In this situation, the second monkey was presented with the container after the first monkey failed to open it. As a result, two capuchin monkeys capitalized on the partner's failure to correctly guide his/her behavior. Thus, the monkeys monitored not only the outcome of the others' action, but also that action per se. This result suggests that not only humans and apes, but also monkeys may understand the meaning of others' actions in social learning.

15.
Studies of implicit learning often examine people's sensitivity to sequential structure. Computational accounts have evolved to reflect this bias. An experiment conducted by Neil and Higham [Neil, G. J., & Higham, P. A. (2012). Implicit learning of conjunctive rule sets: An alternative to artificial grammars. Consciousness and Cognition, 21, 1393–1400] points to limitations in the sequential approach. In the experiment, participants studied words selected according to a conjunctive rule. At test, participants discriminated rule-consistent from rule-violating words but could not verbalize the rule. Although the data elude explanation by sequential models, an exemplar model of implicit learning can explain them. To make the case, we simulate the full pattern of results by incorporating vector representations for the words used in the experiment, derived from the large-scale semantic space models LSA and BEAGLE, into an exemplar model of memory, MINERVA 2. We show that basic memory processes in a classic model of memory capture implicit learning of non-sequential rules, provided that stimuli are appropriately represented.
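The core of the MINERVA 2 account is an echo-intensity computation: a probe is compared against every stored trace, similarities are cubed (preserving sign while amplifying close matches), and summed. A minimal sketch of that computation, using tiny hand-made feature vectors as stand-ins for the LSA/BEAGLE representations actually used in the simulations:

```python
# Minimal MINERVA 2 echo-intensity sketch. Traces and probes are
# feature vectors with values -1, 0, or +1; the vectors below are
# illustrative stand-ins, not real LSA/BEAGLE word representations.
def similarity(probe, trace):
    """Dot product normalized by the number of features that are
    nonzero in either vector (MINERVA 2's relevant-feature count)."""
    n_relevant = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    dot = sum(p * t for p, t in zip(probe, trace))
    return dot / n_relevant if n_relevant else 0.0

def echo_intensity(probe, memory):
    """Sum of cubed similarities across all stored traces; used as a
    familiarity signal for old/new or grammatical/ungrammatical calls."""
    return sum(similarity(probe, trace) ** 3 for trace in memory)

# Study phase: traces sharing two rule-relevant features (the first
# two components), mimicking items built from a conjunctive rule.
memory = [
    [1, 1, -1, 1],
    [1, 1, -1, -1],
    [1, 1, 1, 1],
]
consistent = [1, 1, -1, 1]   # preserves the rule-relevant features
violating  = [-1, -1, 1, 1]  # flips them
```

Rule-consistent probes resonate with the whole study list and so yield higher echo intensity than rule-violating probes, which is enough to drive above-chance discrimination without any explicit representation of the rule.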

16.
This study investigated the effect of semantic information on artificial grammar learning (AGL). Recursive grammars of different complexity levels (regular language, mirror language, copy language) were investigated in a series of AGL experiments. In the with-semantics condition, participants acquired semantic information prior to the AGL experiment; in the without-semantics control condition, participants did not receive semantic information. It was hypothesized that semantics would generally facilitate grammar acquisition and that the learning benefit in the with-semantics conditions would increase with increasing grammar complexity. Experiment 1 showed learning effects for all grammars but no performance difference between conditions. Experiment 2 replicated the absence of a semantic benefit for all grammars even though semantic information was more prominent during grammar acquisition as compared to Experiment 1. Thus, we did not find evidence for the idea that semantics facilitates grammar acquisition, which seems to support the view of an independent syntactic processing component.

17.
Penke, M. (2001). Brain and Language, 77(3), 351–363.
In both language acquisition research and the study of language impairments in Broca's aphasia there is an ongoing debate about whether phrase-structure representations contain the Complementizer Phrase (CP) layer. To shed some light on this debate, I will provide data on German child language and on German agrammatic Broca's aphasia. Analyses of subordinate clauses, wh-questions, and verb placement indicate that early child grammars do not generate the CP layer yet, whereas the ability to project the CP layer is retained in agrammatism.

18.
Four different kinds of grammars that can define crossing dependencies in human language are compared here: (i) context sensitive rewrite grammars with rules that depend on context, (ii) matching grammars with constraints that filter the generative structure of the language, (iii) copying grammars which can copy structures of unbounded size, and (iv) generating grammars in which crossing dependencies are generated from a finite lexical basis. Context sensitive rewrite grammars are syntactically, semantically and computationally unattractive. Generating grammars have a collection of nice properties that ensure they define only “mildly context sensitive” languages, and Joshi has proposed that human languages have those properties too. But for certain distinctive kinds of crossing dependencies in human languages, copying or matching analyses predominate. Some results relevant to the viability of mildly context sensitive analyses and some open questions are reviewed.

19.
The approach to language evolution suggested here focuses on three questions: How did the human brain evolve so that humans can develop, use, and acquire languages? How can the evolutionary quest be informed by studying brain, behavior, and social interaction in monkeys, apes, and humans? How can computational modeling advance these studies? I hypothesize that the brain is language ready in that the earliest humans had protolanguages but not languages (i.e., communication systems endowed with rich and open-ended lexicons and grammars supporting a compositional semantics), and that it took cultural evolution to yield societies (a cultural constructed niche) in which language-ready brains could become language-using brains. The mirror system hypothesis is a well-developed example of this approach, but I offer it here not as a closed theory but as an evolving framework for the development and analysis of conflicting subhypotheses in the hope of their eventual integration. I also stress that computational modeling helps us understand the evolving role of mirror neurons, not in and of themselves, but only in their interaction with systems “beyond the mirror.” Because a theory of evolution needs a clear characterization of what it is that evolved, I also outline ideas for research in neurolinguistics to complement studies of the evolution of the language-ready brain. A clear challenge is to go beyond models of speech comprehension to include sign language and models of production, and to link language to visuomotor interaction with the physical and social world.

20.
In trying to make clear whether understanding is a mental state Wittgenstein asks a series of questions about the timing and duration of understanding. These questions are awkward, and they have posed a great challenge for commentators. In this paper I review the interpretations by Mole and by Baker and Hacker, and point out their problems. I then offer a new interpretation which shows (1) that a “mental state” in this context means a state of consciousness, (2) that Wittgenstein's questions are intended to elicit the grammars of the words “understand” and a “state of consciousness”, (3) that, in this way, he clearly shows that understanding is not a state of consciousness and (4) that he also provides a therapy to dissolve the problem.
