Similar Documents (20 results)
1.
Four different kinds of grammars that can define crossing dependencies in human language are compared here: (i) context sensitive rewrite grammars with rules that depend on context, (ii) matching grammars with constraints that filter the generative structure of the language, (iii) copying grammars which can copy structures of unbounded size, and (iv) generating grammars in which crossing dependencies are generated from a finite lexical basis. Context sensitive rewrite grammars are syntactically, semantically and computationally unattractive. Generating grammars have a collection of nice properties that ensure they define only “mildly context sensitive” languages, and Joshi has proposed that human languages have those properties too. But for certain distinctive kinds of crossing dependencies in human languages, copying or matching analyses predominate. Some results relevant to the viability of mildly context sensitive analyses and some open questions are reviewed.
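The crossing/nesting contrast the abstract turns on can be made concrete with a toy membership test (an illustrative sketch, not any of the four formalisms above): the copy language {ww} forces crossing dependencies and is not context-free, while its mirror-image cousin nests.

```python
def is_copy(s: str) -> bool:
    """Membership test for the copy language {ww}: each symbol in the
    first half depends on the symbol at the same offset in the second
    half, so the dependencies cross rather than nest."""
    n = len(s)
    return n % 2 == 0 and s[: n // 2] == s[n // 2:]

assert is_copy("abab")       # crossing dependencies: a-a and b-b interleave
assert not is_copy("abba")   # nested (mirror) dependencies -- context-free territory
```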

2.
A central goal of modern generative grammar has been to discover invariant properties of human languages that reflect "the innate schematism of mind that is applied to the data of experience" and that "might reasonably be attributed to the organism itself as its contribution to the task of the acquisition of knowledge" (Chomsky, 1971). Candidates for such invariances include the structure dependence of grammatical rules, and in particular, certain constraints on question formation. Various "poverty of stimulus" (POS) arguments suggest that these invariances reflect an innate human endowment, as opposed to common experience: Such experience warrants selection of the grammars acquired only if humans assume, a priori, that selectable grammars respect substantive constraints. Recently, several researchers have tried to rebut these POS arguments. In response, we illustrate why POS arguments remain an important source of support for appeal to a priori structure-dependent constraints on the grammars that humans naturally acquire.

3.
A number of criticisms of a recent paper are made. (1) In attempting to assess the observational adequacy of story grammars, the authors state that a context-free grammar cannot handle discontinuous elements; however, they do not show that such elements occur in the domain to which the grammars apply. Further, they do not present adequate evidence for their claim that there are acceptable stories not accounted for by existing grammars and that the grammars will accept nonstories such as procedures. (2) They state that it has been proven that under natural conditions children cannot learn transformational grammars, which is a misrepresentation of the learnability proofs which have been offered. (3) Most important, they take an unduly narrow approach to story understanding by claiming that people only understand story content and do not have knowledge of story structure which is useful in comprehension or memory. Counterevidence from the literature is cited which indicates that such knowledge is both useful and used, and a number of methods for assessing the psychological adequacy of structural models are discussed.

4.
In this paper we present learning algorithms for classes of categorial grammars restricted by negative constraints. We modify learning functions of Kanazawa [10] and apply them to these classes of grammars. We also prove the learnability of the intersection of the class of minimal grammars with the class of k-valued grammars. Presented by Wojciech Buszkowski

5.
Anne Preller, Studia Logica, 2007, 87(2-3): 171-197
Pregroup grammars have a cubic recognition algorithm. Here, we define a correct and complete recognition and parsing algorithm and give sufficient conditions for the algorithm to run in linear time. These conditions are satisfied by a large class of pregroup grammars, including grammars that handle coordinate structures and distant constituents.
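To illustrate the flavor of such a linear-time procedure (a hypothetical sketch, not Preller's actual algorithm): in a free pregroup, a simple type can be written as a base with an adjoint degree (left adjoint ^l = -1, plain = 0, right adjoint ^r = +1), and a greedy one-pass stack reducer applies the contraction (a, z)(a, z+1) → 1. Greedy contraction is not complete for arbitrary type strings, which is exactly why sufficient conditions on the grammar matter.

```python
def reduce_types(types):
    """Greedy one-pass reduction over simple pregroup types, each
    given as (base, adjoint_degree): contract (a, z)(a, z+1) -> 1.
    Illustrative only -- greedy choice is not complete in general."""
    stack = []
    for base, z in types:
        if stack and stack[-1] == (base, z - 1):
            stack.pop()  # contraction fires
        else:
            stack.append((base, z))
    return stack

# "John sleeps": noun type n, verb type n^r s; n (n^r s) reduces to s
assert reduce_types([("n", 0), ("n", 1), ("s", 0)]) == [("s", 0)]
```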

6.
The equivalence of (classical) categorial grammars and context-free grammars, proved by Gaifman [4], is a very basic result of the theory of formal grammars (an essentially equivalent result is known as the Greibach normal form theorem [1], [14]). We analyse the contents of Gaifman's theorem within the framework of structure and type transformations. We give a new proof of this theorem which relies on the algebra of phrase structures and exhibit a possibility to justify the key construction used in Gaifman's proof by means of the Lambek calculus of syntactic types [15].
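The classical categorial grammars in Gaifman's theorem derive strings by function application alone. A minimal sketch of the reduction step (forward application only, argument categories assumed slash-free; not the proof construction itself):

```python
def ab_reduce(cats):
    """One-pass forward-application reducer for a classical (AB)
    categorial grammar: a category 'A/B' followed by 'B' reduces
    to 'A'. Handles only slash-free argument categories."""
    stack = []
    for c in cats:
        # keep applying while the stack top is a functor looking for c
        while stack and stack[-1].endswith("/" + c):
            c = stack.pop()[: -(len(c) + 1)]
        stack.append(c)
    return stack

# hypothetical lexicon: the -> NP/N, dog -> N, with sentence type S/NP above
assert ab_reduce(["S/NP", "NP/N", "N"]) == ["S"]
```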

7.
Cognitive processes are often attributed to statistical or symbolic general-purpose mechanisms. Here we show that some spontaneous generalizations are driven by specialized, highly constrained symbolic operations. We explore how two types of artificial grammars are acquired, one based on repetitions and the other on characteristic relations between tones ("ordinal" grammars). Whereas participants readily acquire repetition-based grammars, displaying early electrophysiological responses to grammar violations, they perform poorly with ordinal grammars, displaying no such electrophysiological responses. This outcome is problematic for both general symbolic and statistical models, which predict that both types of grammars should be processed equally easily. This suggests that some simple grammars are acquired using perceptual primitives rather than general-purpose mechanisms; such primitives may be elements of a "toolbox" of specialized computational heuristics, which may ultimately allow constructing a psychological theory of symbol manipulation.

8.
Infant rule learning facilitated by speech
Sequences of speech sounds play a central role in human cognitive life, and the principles that govern such sequences are crucial in determining the syntax and semantics of natural languages. Infants are capable of extracting both simple transitional probabilities and simple algebraic rules from sequences of speech, as demonstrated by studies using ABB grammars (la ta ta, gai mu mu, etc.). Here, we report a striking finding: Infants are better able to extract rules from sequences of nonspeech--such as sequences of musical tones, animal sounds, or varying timbres--if they first hear those rules instantiated in sequences of speech.
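The ABB regularity in these studies is just an identity relation between the last two items of a triple; as a sketch:

```python
def follows_abb(triple):
    """True iff a three-syllable sequence instantiates the ABB rule:
    an identity relation between the last two items (e.g. 'la ta ta'),
    with the first item distinct."""
    a, b, c = triple
    return a != b and b == c

assert follows_abb(("la", "ta", "ta"))
assert not follows_abb(("la", "ta", "la"))  # ABA, a different rule
```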

9.
A recent hypothesis in empirical brain research on language is that the fundamental difference between animal and human communication systems is captured by the distinction between finite-state and more complex phrase-structure grammars, such as context-free and context-sensitive grammars. However, the relevance of this distinction for the study of language as a neurobiological system has been questioned and it has been suggested that a more relevant and partly analogous distinction is that between non-adjacent and adjacent dependencies. Online memory resources are central to the processing of non-adjacent dependencies as information has to be maintained across intervening material. One proposal is that an external memory device in the form of a limited push-down stack is used to process non-adjacent dependencies. We tested this hypothesis in an artificial grammar learning paradigm where subjects acquired non-adjacent dependencies implicitly. Generally, we found no qualitative differences between the acquisition of non-adjacent dependencies and adjacent dependencies. This suggests that although the acquisition of non-adjacent dependencies requires more exposure to the acquisition material, it utilizes the same mechanisms used for acquiring adjacent dependencies. We challenge the push-down stack model further by testing its processing predictions for nested and crossed multiple non-adjacent dependencies. The push-down stack model is partly supported by the results, and we suggest that stack-like properties are some among many natural properties characterizing the underlying neurophysiological mechanisms that implement the online memory resources used in language and structured sequence processing.
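The nested-versus-crossed contrast being tested can be sketched directly: last-in-first-out order matches nested (mirror) dependencies, while crossed (copy-like) dependencies come out in first-in-first-out order, so a pure stack no longer suffices (an illustrative toy, not the authors' experimental materials):

```python
from collections import deque

def nested_ok(first, second, dep):
    """A1 A2 B2 B1: dependents arrive last-in, first-out,
    so a push-down stack verifies them."""
    stack = list(first)
    return all(stack and dep[stack.pop()] == b for b in second)

def crossed_ok(first, second, dep):
    """A1 A2 B1 B2: dependents arrive first-in, first-out,
    which needs a queue rather than a stack."""
    queue = deque(first)
    return all(queue and dep[queue.popleft()] == b for b in second)

dep = {"a1": "b1", "a2": "b2"}
assert nested_ok(["a1", "a2"], ["b2", "b1"], dep)
assert crossed_ok(["a1", "a2"], ["b1", "b2"], dep)
```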

10.
Eero Hyvönen, Synthese, 1986, 66(1): 177-190
In this paper a logical interpretation of semantic nets and graph grammars is proposed for modelling natural language understanding and creating language understanding computer systems. An example of parsing a Finnish question by graph grammars and inferring the answer to it by a semantic net representation is provided.

11.
In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word-order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross-linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners' inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization—Greenberg's Universal 18—which bans a particular word-order pattern relating nouns, adjectives, and numerals.
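A minimal conjugate sketch (not the paper's hierarchical model; all numbers hypothetical) of the core idea that a prior bias pulls a learned mixture away from the input proportions:

```python
def posterior_mean(k, n, alpha, beta):
    """Posterior mean of a Beta(alpha, beta) prior on the mixture
    weight after observing k uses of one word-order pattern out of
    n utterances (Beta-Binomial conjugacy)."""
    return (k + alpha) / (n + alpha + beta)

# an unbiased learner tracks the 70% input closely...
unbiased = posterior_mean(7, 10, 1, 1)   # (7+1)/(10+2) ~ 0.67
# ...a learner with a prior favoring that pattern overshoots it
biased = posterior_mean(7, 10, 8, 2)     # (7+8)/(10+10) = 0.75
assert biased > unbiased
```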

12.
This study investigated the effect of semantic information on artificial grammar learning (AGL). Recursive grammars of different complexity levels (regular language, mirror language, copy language) were investigated in a series of AGL experiments. In the with-semantics condition, participants acquired semantic information prior to the AGL experiment; in the without-semantics control condition, participants did not receive semantic information. It was hypothesized that semantics would generally facilitate grammar acquisition and that the learning benefit in the with-semantics conditions would increase with increasing grammar complexity. Experiment 1 showed learning effects for all grammars but no performance difference between conditions. Experiment 2 replicated the absence of a semantic benefit for all grammars even though semantic information was more prominent during grammar acquisition as compared to Experiment 1. Thus, we did not find evidence for the idea that semantics facilitates grammar acquisition, which seems to support the view of an independent syntactic processing component.

13.
Black and Wilensky (1979) have made serious methodological errors in analyzing story grammars, and in the process they have committed additional errors in applying formal language theory. Our arguments involve clarifying certain aspects of knowledge representation crucial to a proper treatment of story understanding. Particular criticisms focus on the following shortcomings of their presentation: 1) an erroneous statement from formal language theory, 2) misapplication of formal language theory to story grammars, 3) unsubstantiated and doubtful analogies with English grammar, 4) various non sequiturs concerning the generation of non-stories, 5) a false claim based on the artificial distinction between syntax and semantics, and 6) misinterpretation of the role of story grammars in story understanding. We conclude by suggesting appropriate criteria for the evaluation of story grammars.

14.
This study reports on functional morpheme (I, D, and C) production in the spontaneous speech of five pairs of children who have undergone hemispherectomy, matching each pair for etiology and age at symptom onset, surgery, and testing. Our results show that following left hemispherectomy (LH), children evidence a greater error rate in the use of functional category elements than their right hemispherectomy (RH) counterparts. Nevertheless, error rates are surprisingly low and comparable across groups. We interpret these results as (a) weak empirical evidence for a left hemisphere advantage in acquisition of functional structure, (b) strong support that functional structure is a property of all human grammars, and (c) strong support that each isolated developing hemisphere has the potential to acquire a grammar embodying and constrained by highly specific structural principles defining human language.

15.
It is commonly held that implicit learning is based largely on familiarity. It is also commonly held that familiarity is not affected by intentions. It follows that people should not be able to use familiarity to distinguish strings from two different implicitly learned grammars. In two experiments, subjects were trained on two grammars and then asked to endorse strings from only one of the grammars. Subjects also rated how familiar each string felt and reported whether or not they used familiarity to make their grammaticality judgment. We found subjects could endorse the strings of just one grammar and ignore the strings from the other. Importantly, when subjects said they were using familiarity, the rated familiarity for test strings consistent with their chosen grammar was greater than that for strings from the other grammar. Familiarity, subjectively defined, is sensitive to intentions and can play a key role in strategic control.

16.
Denis Béchet, Studia Logica, 2007, 87(2-3): 199-224
The paper presents a way to transform pregroup grammars into context-free grammars using functional composition. The same technique can also be used for the proof-nets of multiplicative cyclic linear logic and for Lambek calculus allowing empty premises.

17.
In [2], Bar-Hillel, Gaifman, and Shamir prove that the simple phrase structure grammars (SPGs) defined by Chomsky are equivalent in a certain sense to Bar-Hillel's bidirectional categorial grammars (BCGs). On the other hand, Cohen [3] proves the equivalence of the latter ones to what he calls free categorial grammars (FCGs). They are closely related to Lambek's syntactic calculus which, in turn, is based on the idea due to Ajdukiewicz [1]. For the reasons which will be discussed in the last section, Cohen's proof seems to be at least incomplete. This paper yields a direct proof of the equivalence of FCGs and SPGs. Allatum est die 5 Martii 1976

18.
We evaluate the “story grammar” approach to story understanding from three perspectives. We first examine the formal properties of the grammars and find only one to be formally adequate. We next evaluate the grammars empirically by asking whether they generate all simple stories and whether they generate only stories. We find many stories that they do not generate and one major class of nonstory that they do generate. We also evaluate the grammars' potential as comprehension models and find that they would add nothing to semantic models that focus on the story content. Hence we advocate a story content oriented approach to studying story understanding instead of the structural story grammar approach.

19.
The study described the abilities of a group of 10 aphasics and 10 normals to produce narrative and procedural discourse. The experimental tasks included telling stories, producing summaries, giving morals to the stories, and producing procedures. The variables examined in the investigation included features of sentential grammars, such as amount of embedding, and features of discourse grammars, such as occurrence of elements of superstructure in narrative. Additionally, raters assessed the content and clarity of the discourses. The results showed that aphasics produced well-structured narrative and procedural discourse. Aphasics' discourse errors differed only in degree, not qualitatively, from those of normals. The language of the aphasics' discourses was reduced in both complexity and amount. It was found that the aphasics had difficulties in producing summaries and giving morals for the stories when compared with the normals. Both the content and clarity of the discourses produced by the aphasics were rated lower than those produced by the normals.

20.
Parsing to Learn     
Learning a language by parameter setting is almost certainly less onerous than composing a grammar from scratch. But recent computational modeling of how parameters are set has shown that it is not at all the simple mechanical process sometimes imagined. Sentences must be parsed to discover the properties that select between parameter values. But the sentences that drive learning cannot be parsed with the learner's current grammar. And there is not much point in parsing them with just one new grammar. They must apparently be parsed with all possible grammars, in order to find out which one is most successful at licensing the language. The research task is to reconcile this with the fact that the human sentence parsing mechanism, even in adults, has only very limited parallel parsing capacity. I have proposed that all possible grammars can be folded into one, if parameter values are fragments of sentential tree structures that the parser can make use of where necessary to assign a structure to an input sentence. However, the problem of capacity limitations remains. The combined grammar will afford multiple analyses for some sentences, too many to be computed on-line. I propose that the parser computes only one analysis per sentence but can detect ambiguity, and that the learner makes use of unambiguous input only. This provides secure information but relatively little of it, particularly at early stages of learning where few grammars have been excluded and ambiguity is rife. I consider three solutions: improving the parser's ability to extract unambiguous information from partially ambiguous sentences, assuming default parameter values to temporarily eliminate ambiguity, reconfiguring the parameters so that some are subordinate to others and do not present themselves to the learner until the others have been set. 
A more radical alternative is to give up the quest for error-free learning and permit parameters to be set without regard for whether the parser may have overlooked an alternative analysis of the sentence. If it can be assumed that the human parser keeps a running tally of the parameter values it has accessed, then the learner would do nothing other than parse sentences for comprehension, as adults do. The most useful parameter values would become more and more easily accessed; the noncontributors would drop out of the running. There would be no learning mechanism at all, over and above the parser. But how accurate this system would be remains to be established.
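The running-tally idea can be sketched as bookkeeping over parameter-value pairs (a hypothetical toy; the parameter names and the success test are invented for illustration):

```python
from collections import Counter

def update_tallies(tallies, successful_grammars):
    """Credit every parameter value belonging to a grammar that
    yielded a successful parse of the current input sentence.
    Values that keep contributing accumulate high tallies;
    noncontributors fall behind and 'drop out of the running'."""
    for grammar in successful_grammars:
        for param, value in grammar.items():
            tallies[(param, value)] += 1
    return tallies

# after one sentence parsed only by a head-final, wh-in-situ grammar:
t = update_tallies(Counter(), [{"head": "final", "wh": "in-situ"}])
assert t[("head", "final")] == 1
assert t[("head", "initial")] == 0
```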
