Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
Cognition and Instruction, 2013, 31(3): 201-219
Is the information that gesture provides about a child's understanding of a task accessible not only to experimenters who are trained in coding gesture but also to untrained observers? Twenty adults were asked to describe the reasoning of 12 different children, each videotaped responding to a Piagetian conservation task. Six of the children on the videotape produced gestures that conveyed the same information as their nonconserving spoken explanations, and 6 produced gestures that conveyed different information from their nonconserving spoken explanations. The adult observers displayed more uncertainty in their appraisals of children who produced different information in gesture and speech than in their appraisals of children who produced the same information in gesture and speech. Moreover, the adults were able to incorporate the information conveyed in the children's gestures into their own spoken appraisals of the children's reasoning. These data suggest that, even without training, adults form impressions of children's knowledge based not only on what children say with their mouths but also on what they say with their hands.

3.
When asked to explain their solutions to a problem, children often gesture and, at times, these gestures convey information that is different from the information conveyed in speech. Children who produce these gesture‐speech “mismatches” on a particular task have been found to profit from instruction on that task. We have recently found that some children produce gesture‐speech mismatches when identifying numbers at the cusp of their knowledge, for example, a child incorrectly labels a set of two objects with the word “three” and simultaneously holds up two fingers. These mismatches differ from previously studied mismatches (where the information conveyed in gesture has the potential to be integrated with the information conveyed in speech) in that the gestured response contradicts the spoken response. Here, we ask whether these contradictory number mismatches predict which learners will profit from number‐word instruction. We used the Give‐a‐Number task to measure number knowledge in 47 children (mean age = 4.1 years, SD = 0.58), and used the What's on this Card task to assess whether children produced gesture‐speech mismatches above their knower level. Children who were early in their number learning trajectories (“one‐knowers” and “two‐knowers”) were then randomly assigned, within knower level, to one of two training conditions: a Counting condition in which children practiced counting objects; or an Enriched Number Talk condition containing counting, labeling set sizes, spatial alignment of neighboring sets, and comparison of these sets. Controlling for counting ability, we found that children were more likely to learn the meaning of new number words in the Enriched Number Talk condition than in the Counting condition, but only if they had produced gesture‐speech mismatches at pretest. The findings suggest that numerical gesture‐speech mismatches are a reliable signal that a child is ready to profit from rich number instruction and provide evidence, for the first time, that cardinal number gestures have a role to play in number‐learning.

4.
Shintel H, Nusbaum HC. Cognition, 2007, 105(3): 681-690
Language is generally viewed as conveying information through symbols whose form is arbitrarily related to their meaning. This arbitrary relation is often assumed to also characterize the mental representations underlying language comprehension. We explore the idea that visuo-spatial information can be analogically conveyed through acoustic properties of speech and that such information is integrated into an analog perceptual representation as a natural part of comprehension. Listeners heard sentences describing objects, spoken at varying speaking rates. After each sentence, participants saw a picture of an object and judged whether it had been mentioned in the sentence. Participants were faster to recognize the object when motion implied by speaking rate matched the motion implied by the picture. Results suggest that visuo-spatial referential information can be analogically conveyed and represented.

5.
Huettig F, Altmann GT. Cognition, 2005, 96(1): B23-B32
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632-1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word ‘piano’ when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c), both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word ‘piano’ unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.

6.
Previous psycholinguistic experiments have suggested that probabilistic phonotactic information can signal the locations of word boundaries in continuous speech, offering one answer to the empirical question of how we recognize and segment individual spoken words. In the present study we investigated this issue using Cantonese as a test case. In a word-spotting task, listeners were instructed to spot any Cantonese word embedded in a series of nonsense sound sequences. Native Cantonese listeners found it easier to spot the target word in sequences containing high-transitional-probability phoneme combinations than in sequences containing low-transitional-probability combinations. These results indicate that native Cantonese listeners make use of transitional probability information when recognizing spoken words in speech.
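For readers unfamiliar with the measure, the sketch below illustrates what "transitional probability" between adjacent phonemes means. It is only an illustration with a hypothetical toy corpus, not the authors' Cantonese stimuli or analysis code.

```python
# Minimal sketch (hypothetical toy corpus, not the authors' stimuli):
# transitional probability P(b | a) = count(a followed by b) / count(a).
# High-TP phoneme pairs are the combinations hypothesized to help listeners
# locate word boundaries in continuous speech.
from collections import Counter

corpus = [
    ["s", "i", "k"], ["s", "i", "n"], ["s", "a", "m"],   # made-up phoneme strings
    ["k", "a", "u"], ["k", "a", "m"], ["t", "i", "m"],
]

pair_counts = Counter()
first_counts = Counter()
for word in corpus:
    for a, b in zip(word, word[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1

def transitional_probability(a: str, b: str) -> float:
    """P(b | a): how predictable phoneme b is given the preceding phoneme a."""
    if first_counts[a] == 0:
        return 0.0
    return pair_counts[(a, b)] / first_counts[a]

print(transitional_probability("s", "i"))  # relatively high within this toy corpus
print(transitional_probability("s", "u"))  # zero: never observed
```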

7.
When asked to explain their solutions to a problem, both adults and children gesture as they talk. These gestures at times convey information that is not conveyed in speech and thus reveal thoughts that are distinct from those revealed in speech. In this study, we use the classic Tower of Hanoi puzzle to validate the claim that gesture and speech taken together can reflect the activation of two cognitive strategies within a single response. The Tower of Hanoi is a well‐studied puzzle, known to be most efficiently solved by activating subroutines at theoretically defined choice points. When asked to explain how they solved the Tower of Hanoi puzzle, both adults and children produced significantly more gesture‐speech mismatches—explanations in which speech conveyed one path and gesture another—at these theoretically defined choice points than they produced at non‐choice points. Even when the participants did not solve the problem efficiently, gesture could be used to indicate where the participants were deciding between alternative paths. Gesture can, thus, serve as a useful adjunct to speech when attempting to discover cognitive processes in problem‐solving.
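The subroutine structure the abstract alludes to is the standard recursive solution to the Tower of Hanoi. The sketch below shows that structure; it is not the authors' coding scheme, only the textbook optimal strategy.

```python
# Minimal sketch of the standard optimal Tower of Hanoi solution (not the
# authors' coding scheme). Each recursive call is a "subroutine": move the
# n-1 smaller disks out of the way, move the largest disk, then rebuild the
# n-1 disks on the target peg. The points where a new subroutine must be
# initiated correspond to the theoretically defined choice points.
def hanoi(n: int, source: str, spare: str, target: str, moves: list) -> None:
    if n == 0:
        return
    hanoi(n - 1, source, target, spare, moves)   # subroutine: clear the way
    moves.append((n, source, target))            # move the largest remaining disk
    hanoi(n - 1, spare, source, target, moves)   # subroutine: rebuild on target

moves = []
hanoi(3, "A", "B", "C", moves)
print(len(moves))  # 2**3 - 1 = 7 moves, the optimal solution length
for disk, src, dst in moves:
    print(f"move disk {disk} from {src} to {dst}")
```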

8.
Knowing a word affects the fundamental perception of the sounds within it
Understanding spoken language is an exceptional computational achievement of the human cognitive apparatus. Theories of how humans recognize spoken words fall into two categories: Some theories assume a fully bottom-up flow of information, in which successively more abstract representations are computed. Other theories, in contrast, assert that activation of a more abstract representation (e.g., a word) can affect the activation of smaller units (e.g., phonemes or syllables). The two experimental conditions reported here demonstrate the top-down influence of word representations on the activation of smaller perceptual units. The results show that perceptual processes are not strictly bottom-up: Computations at logically lower levels of processing are affected by computations at logically more abstract levels. These results constrain and inform theories of the architecture of human perceptual processing of speech.

9.
Teachers gesture when they teach, and those gestures do not always convey the same information as their speech. Gesture thus offers learners a second message. To determine whether learners take advantage of this offer, we gave 160 children in the third and fourth grades instruction in mathematical equivalence. Children were taught either one or two problem-solving strategies in speech accompanied by no gesture, gesture conveying the same strategy, or gesture conveying a different strategy. The children were likely to profit from instruction with gesture, but only when it conveyed a different strategy than speech did. Moreover, two strategies were effective in promoting learning only when the second strategy was taught in gesture, not speech. Gesture thus has an active hand in learning.

10.
A critical property of the perception of spoken words is the transient ambiguity of the speech signal. In localist models of speech perception this ambiguity is captured by allowing the parallel activation of multiple lexical representations. This paper examines how a distributed model of speech perception can accommodate this property. Statistical analyses of vector spaces show that coactivation of multiple distributed representations is inherently noisy, and depends on parameters such as sparseness and dimensionality. Furthermore, the characteristics of coactivation vary considerably, depending on the organization of distributed representations within the mental lexicon. This view of lexical access is supported by analyses of phonological and semantic word representations, which provide an explanation of a recent set of experiments on coactivation in speech perception (Gaskell & Marslen-Wilson, 1999).
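The claim that coactivating several distributed representations is "inherently noisy" can be illustrated numerically. The sketch below, with arbitrary illustrative parameters rather than the paper's analyses, superposes random binary word patterns and shows how faithfully the blend matches each component as dimensionality, sparseness, and the number of coactive words vary.

```python
# Minimal sketch (an illustration, not the paper's analyses): coactivation as
# superposition of distributed word representations. The blend's mean cosine
# similarity to its components drops as more words are coactivated, and the
# drop depends on the dimensionality and sparseness of the patterns.
import numpy as np

rng = np.random.default_rng(0)

def random_patterns(n_words: int, dim: int, sparseness: float) -> np.ndarray:
    """Random binary distributed representations; sparseness = proportion of active units."""
    return (rng.random((n_words, dim)) < sparseness).astype(float)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

for dim, sparseness in [(50, 0.5), (500, 0.5), (500, 0.1)]:
    words = random_patterns(n_words=8, dim=dim, sparseness=sparseness)
    for n_active in (1, 2, 4, 8):
        blend = words[:n_active].mean(axis=0)          # coactivate by averaging
        match = np.mean([cosine(blend, w) for w in words[:n_active]])
        print(f"dim={dim} sparseness={sparseness} coactive={n_active} "
              f"mean match to components={match:.2f}")
```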

11.
Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects (“cat”). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the same door-opening role in word learning for children with autism spectrum disorder (ASD) and Down syndrome (DS), who show delayed vocabulary development and who differ in the strength of gesture production. To answer this question, we observed 23 18-month-old TD children, 23 30-month-old children with ASD, and 23 30-month-old children with DS 5 times over a year during parent–child interactions. Children in all 3 groups initially expressed a greater proportion of referents uniquely in gesture than in speech. Many of these unique gestures subsequently entered children’s spoken vocabularies within a year—a pattern that was slightly less robust for children with DS, whose word production was the most markedly delayed. These results indicate that gesture is as fundamental to vocabulary development for children with developmental disorders as it is for TD children.

12.
This article examines two issues: the role of gesture in the communication of spatial information and the relation between communication and mental representation. Children (8-10 years) and adults walked through a space to learn the locations of six hidden toy animals and then explained the space to another person. In Study 1, older children and adults typically gestured when describing the space and rarely provided spatial information in speech without also providing the information in gesture. However, few 8-year-olds communicated spatial information in speech or gesture. Studies 2 and 3 showed that 8-year-olds did understand the spatial arrangement of the animals and could communicate spatial information if prompted to use their hands. Taken together, these results indicate that gesture is important for conveying spatial relations at all ages and, as such, provides us with a more complete picture of what children do and do not know about communicating spatial relations.

13.
When people talk, they gesture. We show that gesture introduces action information into speakers' mental representations, which, in turn, affect subsequent performance. In Experiment 1, participants solved the Tower of Hanoi task (TOH1), explained (with gesture) how they solved it, and solved it again (TOH2). For all participants, the smallest disk in TOH1 was the lightest and could be lifted with one hand. For some participants (no-switch group), the disks in TOH2 were identical to those in TOH1. For others (switch group), the disk weights in TOH2 were reversed (so that the smallest disk was the heaviest and could not be lifted with one hand). The more the switch group's gestures depicted moving the smallest disk one-handed, the worse they performed on TOH2. This was not true for the no-switch group, nor for the switch group in Experiment 2, who skipped the explanation step and did not gesture. Gesturing grounds people's mental representations in action. When gestures are no longer compatible with the action constraints of a task, problem solving suffers.

14.
The present study investigates whether knowledge about the intentional relationship between gesture and speech influences controlled processes when integrating the two modalities at comprehension. Thirty-five adults watched short videos of gesture and speech that conveyed semantically congruous and incongruous information. In half of the videos, participants were told that the two modalities were intentionally coupled (i.e., produced by the same communicator), and in the other half, they were told that the two modalities were not intentionally coupled (i.e., produced by different communicators). When participants knew that the same communicator produced the speech and gesture, there was a larger bilateral frontal and central N400 effect to words that were semantically incongruous versus congruous with gesture. However, when participants knew that different communicators produced the speech and gesture (that is, when gesture and speech were not intentionally meant to go together), the N400 effect was present only in right-hemisphere frontal regions. The results demonstrate that pragmatic knowledge about the intentional relationship between gesture and speech modulates controlled neural processes during the integration of the two modalities.

15.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.

16.
People frequently gesture when problem‐solving, particularly on tasks that require spatial transformation. Gesture often facilitates task performance by interacting with internal mental representations, but how this process works is not well understood. We investigated this question by exploring the case of mental abacus (MA), a technique in which users not only imagine moving beads on an abacus to compute sums, but also produce movements in gestures that accompany the calculations. Because the content of MA is transparent and readily manipulated, the task offers a unique window onto how gestures interface with mental representations. We find that the size and number of MA gestures reflect the length and difficulty of math problems. Also, by selectively interfering with aspects of gesture, we find that participants perform significantly worse on MA under motor interference, but that perceptual feedback is not critical for success on the task. We conclude that premotor processes involved in the planning of gestures are critical to mental representation in MA.

17.
The gestures children produce predict the early stages of spoken language development. Here we ask whether gesture is a global predictor of language learning, or whether particular gestures predict particular language outcomes. We observed 52 children interacting with their caregivers at home, and found that gesture use at 18 months selectively predicted lexical versus syntactic skills at 42 months, even with early child speech controlled. Specifically, number of different meanings conveyed in gesture at 18 months predicted vocabulary at 42 months, but number of gesture+speech combinations did not. In contrast, number of gesture+speech combinations, particularly those conveying sentence‐like ideas, produced at 18 months predicted sentence complexity at 42 months, but meanings conveyed in gesture did not. We can thus predict particular milestones in vocabulary and sentence complexity at 42 months by watching how children move their hands two years earlier.

18.
Previous findings have suggested that number processing involves a mental representation of numerical magnitude. Other research has shown that sensory experiences are part and parcel of the mental representation (or “simulation”) that individuals construct during reading. We aimed at exploring whether arithmetic word-problem solving entails the construction of a mental simulation based on a representation of numerical magnitude. Participants were required to solve word problems and to perform an intermediate figure discrimination task that matched or mismatched, in terms of magnitude comparison, the mental representations that individuals constructed during problem solving. Our results showed that participants were faster in the discrimination task and performed better in the solving task when the figures matched the mental representations. These findings provide evidence that an analog magnitude-based mental representation is routinely activated during word-problem solving, and they add to a growing body of literature that emphasizes the experiential view of language comprehension.

19.
We present a novel subliminal priming technique that operates in the auditory modality. Masking is achieved by hiding a spoken word within a stream of time-compressed speechlike sounds with similar spectral characteristics. Participants were unable to consciously identify the hidden words, yet reliable repetition priming was found. This effect was unaffected by a change in the speaker's voice and remained restricted to lexical processing. The results show that the speech modality, like the written modality, involves the automatic extraction of abstract word-form representations that do not include nonlinguistic details. In both cases, priming operates at the level of discrete and abstract lexical entries and is little influenced by overlap in form or semantics.

20.
The gestures that spontaneously occur in communicative contexts have been shown to offer insight into a child’s thoughts. The information gesture conveys about what is on a child’s mind will, of course, only be accessible to a communication partner if that partner can interpret gesture. Adults were asked to observe a series of children who participated ‘live’ in a set of conservation tasks and gestured spontaneously while performing the tasks. Adults were able to glean substantive information from the children’s gestures, information that was not found anywhere in their speech. ‘Gesture-reading’ did, however, have a cost – if gesture conveyed different information from speech, it hindered the listener’s ability to identify the message in speech. Thus, ordinary listeners can and do extract information from a child’s gestures, even gestures that are unedited and fleeting.
