Similar documents
 20 similar documents found (search time: 62 ms)
1.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co‐speech gesture), not without speech (silent gesture). We ask whether the cross‐linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three‐dimensional motion scenes. We found an effect of language on co‐speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.

2.
People gesture a great deal when speaking, and research has shown that listeners can interpret the information contained in gesture. The current research examines whether learners can also use co‐speech gesture to inform language learning. Specifically, we examine whether listeners can use information contained in an iconic gesture to assign meaning to a novel verb form. Two experiments demonstrate that adults and 2‐, 3‐, and 4‐year‐old children can infer the meaning of novel intransitive verbs from gestures when no other source of information is present. The findings support the idea that gesture might be a source of input available to language learners.

3.
A key assumption in language comprehension is that biases in behavioral data, such as the tendency to interpret "John said that Mary left yesterday" to mean that "yesterday" modifies the syntactically local verb "left", not the distant verb "said", reflect inherent biases in the language comprehension system. In the present article, an alternative production-distribution-comprehension (PDC) account is pursued; this account states that comprehension biases emerge from different interpretation frequencies in the language, which themselves emerge from pressures on the language production system to produce some structures more than others. In two corpus analyses and two self-paced reading experiments, we investigated these claims for verb modification ambiguities, for which phrase length is hypothesized to shape production. The results support claims that tendencies to produce short phrases before long ones create distributional regularities for modification ambiguities in the language and that learning over these regularities shapes comprehenders’ interpretations of modification ambiguities. Implications for the PDC and other accounts are discussed.

4.
Across languages of the world, some grammatical patterns have been argued to be more common than expected by chance. These are sometimes referred to as (statistical) language universals. One such universal is the correlation between constituent order freedom and the presence of a case system in a language. Here, we explore whether this correlation can be explained by a bias to balance production effort and informativity of cues to grammatical function. Two groups of learners were presented with miniature artificial languages containing optional case marking and either flexible or fixed constituent order. Learners of the flexible order language used case marking significantly more often. This result parallels the typological correlation between constituent order flexibility and the presence of case marking in a language and provides a possible explanation for the historical development of Old English to Modern English, from flexible constituent order with case marking to relatively fixed order without case marking. In addition, learners of the flexible order language conditioned case marking on constituent order, using more case marking with the cross‐linguistically less frequent order, again mirroring typological data. These results suggest that some cross‐linguistic generalizations originate in functionally motivated biases operating during language learning.

5.
We report on a study investigating 3–5‐year‐old children's use of gesture to resolve lexical ambiguity. Children were told three short stories that contained two homonym senses; for example, bat (flying mammal) and bat (sports equipment). They were then asked to re‐tell these stories to a second experimenter. The data were coded for the means that children used during attempts at disambiguation: speech, gesture, or a combination of the two. The results indicated that the 3‐year‐old children rarely disambiguated the two senses, mainly using deictic pointing gestures during attempts at disambiguation. In contrast, the 4‐year‐old children attempted to disambiguate the two senses more often, using a larger proportion of iconic gestures than the other children. The 5‐year‐old children used fewer iconic gestures than the 4‐year‐olds, but unlike the 3‐year‐olds, were able to disambiguate the senses through the verbal channel. The results highlight the value of gesture to the development of children's language and communication skills.

6.
Interpretation biases, in which ambiguous information is interpreted negatively, have been hypothesized to place adolescent females at greater risk of developing anxiety and mood disorders than same‐aged males. We tested the hypothesis that adolescent girls interpret ambiguous scenarios more negatively, and/or less positively, than same‐aged males using the Adolescent Interpretation and Belief Questionnaire (N = 67, 11–15 years old). We also tested whether adolescent girls and boys differed in judging positive or negative interpretations to be more believable and whether the scenario content (social vs. non‐social) affected any sex difference in interpretation bias. The results showed that girls had higher average negative interpretation scores than boys, with no sex differences in positive interpretation scores. Girls and boys did not differ on which interpretation they found to be most believable. Both sexes reported that positive interpretations were less likely to come to mind, and were less believable, for social than for non‐social scenarios. These results provide preliminary evidence for sex differences in interpretation biases in adolescence and support the hypothesis that social scenarios are a specific source of anxiety to this age group. A greater understanding of the aetiology of interpretation biases will potentially enhance sex‐ and age‐specific interventions for anxiety and mood disorders.

7.
People with aphasia use gestures not only to communicate relevant content but also to compensate for their verbal limitations. The Sketch Model (De Ruiter, 2000) assumes a flexible relationship between gesture and speech with the possibility of a compensatory use of the two modalities. In the successor of the Sketch Model, the AR-Sketch Model (De Ruiter, 2017), the relationship between iconic gestures and speech is no longer assumed to be flexible and compensatory, but instead iconic gestures are assumed to express information that is redundant to speech. In this study, we evaluated the contradictory predictions of the Sketch Model and the AR-Sketch Model using data collected from people with aphasia as well as a group of people without language impairment. We only found compensatory use of gesture in the people with aphasia, whereas the people without language impairments made very little compensatory use of gestures. Hence, the people with aphasia gestured according to the prediction of the Sketch Model, whereas the people without language impairment did not. We conclude that aphasia fundamentally changes the relationship of gesture and speech.

8.
A well‐known typological observation is the dominance of subject‐initial word orders, SOV and SVO, across the world's languages. Recent findings from gestural language creation paradigms offer possible explanations for the prevalence of SOV. When asked to gesture transitive events with an animate agent and inanimate patient, gesturers tend to produce SOV order, regardless of their native language biases. Interestingly, when the patient is animate, gesturers shift away from SOV to use of other orders, like SVO and OSV. Two competing hypotheses have been proposed for this switch: the noisy channel account (Gibson et al., 2013) and the role conflict account (Hall, Mayberry, & Ferreira, 2013). We set out to distinguish between these two hypotheses, disentangling event reversibility and patient animacy, by looking at gestural sequences for events with two inanimate participants (inanimate‐inanimate, reversible). We replicated the previous findings of a preference for SOV order when describing animate‐inanimate, irreversible events as well as a decrease in the use of SOV when presented with animate‐animate, reversible events. Accompanying the drop in SOV, in a novel condition we observed an increase in the use of SVO and OSV orders when describing events involving two animate entities. In sum, we find that the observed avoidance of SOV order in gestural language creation paradigms when the event includes an animate agent and patient is driven by the animacy of the participants rather than the reversibility of the event. We suggest that findings from gestural creation paradigms are not automatically linkable to spoken language typology.

9.
Discussions of biblical interpretation often proceed under one of two assumptions. Readers’ interpretations are primarily formed (1) inductively, according to the Bible's objective content, or (2) through the lens of preformed ideologies and biases. We assessed the influence of these two factors using two survey experiments with undergraduates. In study 1 (N = 214), participants were randomly assigned one of two nearly identical translations of Ephesians 5:22-28 (a famous passage describing gendered marital submission), with the only difference being that one translation included verse 21 in which Christians are told to “submit to one another.” Participants did not perceive a different message about gendered submission between translations, nor were they more likely to interpret either as misogynistic. However, gender ideology and religious importance did predict interpretation. Study 2 (N = 217) essentially replicated study 1 (using different translations of Ephesians 5:21-28), but one version replaced all “subjection” language with “commitment” language. Participants were significantly more likely to perceive a complementarian message from the translation that referenced “subjection” and they were also more likely to perceive it as misogynistic. Again, gender ideology and religious characteristics predicted interpretation. Findings suggest bias shapes interpretation, but more extreme content modifications (e.g., removing/changing key terms) can also influence interpretation.

10.
We examined how people use their knowledge of events to recover thematic role structure during the interpretation of noun-noun phrases. All phrases included one noun that was a good-agent/poor-patient (prosecutor) in a particular event (accuse), and the other noun was a good-patient/poor-agent (defendant) for the same event. If people interpret the noun-noun phrases by inverting the nouns and applying a thematic relation (see Downing, 1977; Levi, 1978), phrases should be interpreted more easily when the head nouns typically are good agents and the modifiers are good patients for specific events. Two experiments supported these predictions. Furthermore, the results indicated that in the less preferred thematic order (agent-patient), people often generated interpretations in which the modifiers became the focus of the interpretations. This finding suggests that violating thematic role preferences is one constraint on when the inversion process occurs during noun-noun interpretation.

11.
Children's overextensions of spatial language are often taken to reveal spatial biases. However, it is unclear whether extension patterns should be attributed to children's overly general spatial concepts or to a narrower notion of conceptual similarity allowing metaphor‐like extensions. We describe a previously unnoticed extension of spatial expressions and use a novel method to determine its origins. English‐ and Greek‐speaking 4‐ and 5‐year‐olds used containment expressions (e.g., English into, Greek mesa) for events where an object moved into another object but extended such expressions to events where the object moved behind or under another object. The pattern emerged in adult speakers of both languages and also in speakers of 10 additional languages. We conclude that learners do not have an overly general concept of Containment. Nevertheless, children (and adults) perceive similarities across Containment and other types of spatial scenes, even when these similarities are obscured by the conventional forms of the language.

12.
How might a human communication system be bootstrapped in the absence of conventional language? We argue that motivated signs play an important role (i.e., signs that are linked to meaning by structural resemblance or by natural association). An experimental study is then reported in which participants try to communicate a range of pre‐specified items to a partner using repeated non‐linguistic vocalization, repeated gesture, or repeated non‐linguistic vocalization plus gesture (but without using their existing language system). Gesture proved more effective (measured by communication success) and more efficient (measured by the time taken to communicate) than non‐linguistic vocalization across a range of item categories (emotion, object, and action). Combining gesture and vocalization did not improve performance beyond gesture alone. We experimentally demonstrate that gesture is a more effective means of bootstrapping a human communication system. We argue that gesture outperforms non‐linguistic vocalization because it lends itself more naturally to the production of motivated signs.

13.
Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, which have many of the properties of natural language—the so-called resilient properties of language. We explored the resilience of structure built around the predicate—in particular, how manner and path are mapped onto the verb—in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also used the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children's gestures. Although co‐speech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language.

14.
Young children engage in essentialist reasoning about natural kinds, believing that many traits are innately determined. This study investigated whether personal experience with second language acquisition could alter children's essentialist biases. In a switched‐at‐birth paradigm, 5‐ and 6‐year‐old monolingual and simultaneous bilingual children expected that a baby's native language, an animal's vocalizations, and an animal's physical traits would match those of a birth rather than of an adoptive parent. We predicted that sequential bilingual children, who had been exposed to a new language after age 3, would show greater understanding that languages are learned. Surprisingly, sequential bilinguals showed reduced essentialist beliefs about all traits: they were significantly more likely than other children to believe that human language, animal vocalizations, and animal physical traits would be learned through experience rather than innately endowed. These findings suggest that bilingualism in the preschool years can profoundly change children's essentialist biases.

15.
In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language‐learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners’ input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word‐order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross‐linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners’ inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization—Greenberg's Universal 18—which bans a particular word‐order pattern relating nouns, adjectives, and numerals.
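The key idea in this abstract, a prior bias pulling a learner's inferred grammar mixture in a particular direction, can be illustrated with a minimal Beta-Binomial sketch. The function, the numbers, and the two-grammar setup below are illustrative assumptions only, not the authors' hierarchical model, which is far richer:

```python
def posterior_mean(k, n, alpha, beta):
    """Posterior mean of the mixture weight for 'order A' under a
    Beta(alpha, beta) prior, after observing k 'order A' utterances
    out of n (standard Beta-Binomial conjugacy)."""
    return (alpha + k) / (alpha + beta + n)

# A learner hears 60 utterances, evenly split between two word orders.
k, n = 30, 60

print(posterior_mean(k, n, 1, 1))   # flat prior: stays at 0.5
print(posterior_mean(k, n, 8, 2))   # prior biased toward order A: shifts above 0.5
```

Under this toy setup, a learner exposed to a 50/50 mixture but equipped with a skewed prior regularizes toward the favored order, which is the qualitative pattern the model attributes to its participants.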

16.
Non‐anxious college students first performed a semantic‐judgement task that was designed to train either threat‐related or threat‐unrelated interpretations of threat‐ambiguous homographs (e.g. mug). Next they performed an ostensibly separate transfer task of constructing personal mental images for single words, in a series that included new, threat‐ambiguous homographs. In two experiments, the number of threat‐related interpretations in the transfer task significantly increased following threat‐related experience during the training phase, compared to other training conditions. We conclude that interpretive biases typically shown by anxious people can be established in non‐anxious students in ways that generalize to novel tasks and materials. Copyright © 2003 John Wiley & Sons, Ltd.

17.
Children's gesture production precedes and predicts language development, but the pathways linking these domains are unclear. It is possible that gesture production assists in children's developing word comprehension, which in turn supports expressive vocabulary acquisition. The present study examines this mediation pathway in a population with variability in early communicative abilities—the younger siblings of children with autism spectrum disorder (ASD; high‐risk infants, HR). Participants included 92 HR infants and 28 infants at low risk (LR) for ASD. A primary caregiver completed the MacArthur‐Bates Communicative Development Inventory (Fenson et al., 1993) at 12, 14, and 18 months, and HR infants received a diagnostic evaluation for ASD at 36 months. Word comprehension at 14 months mediated the relationship between 12‐month gesture and 18‐month word production in LR and HR infants (ab = 0.263; p < 0.01). For LR infants and HR infants with no diagnosis or language delay, gesture was strongly associated with word comprehension (as = 0.666; 0.646; 0.561; ps < 0.01). However, this relationship did not hold for infants later diagnosed with ASD (a = 0.073; p = 0.840). This finding adds to a growing literature suggesting that children with ASD learn language differently. Furthermore, this study provides an initial step toward testing the developmental pathways by which infants transition from early actions and gestures to expressive language.
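The mediation logic behind the ab coefficients reported above (an indirect effect computed as the product of path a, predictor to mediator, and path b, mediator to outcome controlling for the predictor) can be sketched on simulated toy data. The variable names and coefficients below are illustrative assumptions, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulated data: gesture score (X) -> word comprehension (M)
# -> word production (Y). Coefficients are made up for illustration.
n = 2000
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(size=n)             # true a-path = 0.6
y = 0.5 * m + 0.1 * x + rng.normal(size=n)   # true b-path = 0.5, direct effect = 0.1

# Path a: slope from regressing the mediator M on X.
a = np.linalg.lstsq(np.column_stack([np.ones(n), x]), m, rcond=None)[0][1]

# Path b: coefficient on M when Y is regressed on both X and M.
b = np.linalg.lstsq(np.column_stack([np.ones(n), x, m]), y, rcond=None)[0][2]

# The indirect (mediated) effect is the product of the two paths.
print(f"a = {a:.2f}, b = {b:.2f}, indirect effect ab = {a * b:.2f}")
```

With enough data the regression estimates roughly recover the generating paths, so the printed ab is close to 0.6 × 0.5 = 0.3; formal mediation analyses additionally test whether ab differs reliably from zero (e.g., via bootstrap).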

18.
Two sentence processing experiments on a dative NP ambiguity in Korean demonstrate effects of phrase length on overt and implicit prosody. Both experiments controlled non-prosodic length factors by using long versus short proper names that occurred before the syntactically critical material. Experiment 1 found that long phrases induce different prosodic phrasing than short phrases in a read-aloud task and change the preferred interpretation of globally ambiguous sentences. It also showed that speakers who have been told of the ambiguity can provide significantly different prosody for the two interpretations, for both lengths. Experiment 2 verified that prosodic patterns found in first-pass pronunciations predict self-paced reading patterns for silent reading. The results extend the coverage of the Implicit Prosody Hypothesis [Fodor, J Psycholinguist Res 27:285–319, 1998; Prosodic disambiguation in silent reading. In M. Hirotani (Ed.), NELS 32 (pp. 113–132). Amherst, MA: GLSA Publications, 2002] to another construction and to Korean. They further indicate that strong syntactic biases can have rapid effects on the formulation of implicit prosody.

19.
Purpose: The aim of this study was to examine the relationship between frequency of gesture use and language, with a consideration for the effect of age and setting on frequency of gesture use in prelinguistic typically developing children. Method: Participants included a total of 54 typically developing infants and toddlers between the ages of 9 months and 15 months, separated into two age ranges, 9-12 months and 12-15 months. All participants were administered the Mullen's Scale of Early Learning, and two gesture samples were obtained: one sample in a structured setting and the other in an unstructured setting. Gesture samples were coded by research assistants blind to the purpose of the research study, and total frequency and frequencies for the following gesture types were calculated: behavior regulation, social interaction, and joint attention (Bruner, 1983). Results: Results indicated that both age and setting have a significant effect on frequency of gesture use and that frequency of gesture is correlated with receptive and expressive language abilities; however, these relationships are dependent upon the gesture type examined. Conclusions: These findings further our understanding of the relationship between gesture use and language and support the concept that frequency of gesture is related to language abilities. This is meaningful because gestures are one of the first forms of intentional communication, allowing for early identification of language abilities at a young age.

20.
The gestures children produce predict the early stages of spoken language development. Here we ask whether gesture is a global predictor of language learning, or whether particular gestures predict particular language outcomes. We observed 52 children interacting with their caregivers at home, and found that gesture use at 18 months selectively predicted lexical versus syntactic skills at 42 months, even with early child speech controlled. Specifically, number of different meanings conveyed in gesture at 18 months predicted vocabulary at 42 months, but number of gesture+speech combinations did not. In contrast, number of gesture+speech combinations, particularly those conveying sentence‐like ideas, produced at 18 months predicted sentence complexity at 42 months, but meanings conveyed in gesture did not. We can thus predict particular milestones in vocabulary and sentence complexity at 42 months by watching how children move their hands two years earlier.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)