Similar Documents
20 similar documents found (search time: 46 ms)
1.
2.
Khemlani et al. (2018) mischaracterize logic in the course of seeking to show that mental model theory (MMT) can accommodate a form of inference (, let us label it) they find in a high percentage of their subjects. We reveal their mischaracterization and, in so doing, lay a landscape for future modeling by cognitive scientists who may wonder whether human reasoning is consistent with, or perhaps even capturable by, reasoning in a logic or family thereof. Along the way, we note that the properties touted by Khemlani et al. as innovative aspects of MMT-based modeling (e.g., nonmonotonicity) have for decades been, in logic, acknowledged and rigorously specified by families of (implemented) logics. Khemlani et al. (2018) further declare that is “invalid in any modal logic.” We demonstrate this to be false by our introduction (Appendix A) of a new propositional modal logic (within a family of such logics) in which is provably valid, and by the implementation of this logic. A second appendix, B, partially answers the two-part question, “What is a formal logic, and what is it for one to capture empirical phenomena?”

3.
In numerous experimental contexts, gesturing has been shown to lighten a speaker's cognitive load. However, in all of these experimental paradigms, the gestures have been directed to items in the "here-and-now." This study attempts to generalize gesture's ability to lighten cognitive load. We demonstrate here that gesturing continues to confer cognitive benefits when speakers talk about objects that are not present, and therefore cannot be directly indexed by gesture. These findings suggest that gesturing confers its benefits by more than simply tying abstract speech to the objects directly visible in the environment. Moreover, we show that the cognitive benefit conferred by gesturing is greater when novice learners produce gestures that add to the information expressed in speech than when they produce gestures that convey the same information as speech, suggesting that it is gesture's meaningfulness that gives it the ability to affect working memory load.

4.
Children achieve increasingly complex language milestones initially in gesture or in gesture+speech combinations before they do so in speech, from first words to first sentences. In this study, we ask whether gesture continues to be part of the language-learning process as children begin to develop more complex language skills, namely narratives. A key aspect of narrative development is tracking story referents, specifying who did what to whom. Adults track referents primarily in speech by introducing a story character with a noun phrase and then following the same referent with a pronoun—a strategy that presents challenges for young children. We ask whether young children can track story referents initially in communications that combine gesture and speech by using character viewpoint in gesture to introduce new story characters, before they are able to do so exclusively in speech using nouns followed by pronouns. Our analysis of 4- to 6-year-old children showed that children introduced new characters in gesture+speech combinations with character viewpoint gestures at an earlier age than conveying the same referents exclusively in speech with the use of nominal phrases followed by pronouns. Results show that children rely on viewpoint in gesture to convey who did what to whom as they take their first steps into narratives.

5.
When asked to explain their solutions to a problem, both adults and children gesture as they talk. These gestures at times convey information that is not conveyed in speech and thus reveal thoughts that are distinct from those revealed in speech. In this study, we use the classic Tower of Hanoi puzzle to validate the claim that gesture and speech taken together can reflect the activation of two cognitive strategies within a single response. The Tower of Hanoi is a well-studied puzzle, known to be most efficiently solved by activating subroutines at theoretically defined choice points. When asked to explain how they solved the Tower of Hanoi puzzle, both adults and children produced significantly more gesture-speech mismatches—explanations in which speech conveyed one path and gesture another—at these theoretically defined choice points than they produced at non-choice points. Even when the participants did not solve the problem efficiently, gesture could be used to indicate where the participants were deciding between alternative paths. Gesture can thus serve as a useful adjunct to speech when attempting to discover cognitive processes in problem solving.
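For readers unfamiliar with the puzzle, the "subroutines" the abstract refers to fall out of the standard recursive decomposition. A minimal solver sketch (illustrative only; function and peg names are my own, and the study's theoretically defined choice points are not marked in this code):

```python
def hanoi(n, source, target, spare, moves=None):
    """Recursively solve Tower of Hanoi: move n disks from source to target."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # subroutine: clear the way
    moves.append((source, target))               # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # subroutine: rebuild on target
    return moves

# The optimal solution for n disks takes 2**n - 1 moves.
print(len(hanoi(3, "A", "C", "B")))  # 7
```

Each recursive call corresponds to a subgoal; the moments where a solver must commit to one subgoal ordering over another are the kind of decision points at which the mismatches were observed.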

6.
This study examined the effect of a prior bout of exercise on implicit cognition. Specifically, we examined whether a prior bout of moderate intensity exercise affected performance on a statistical learning task in healthy adults. A total of 42 participants were allocated to one of three conditions—a control group, a group that exercised for 15 min prior to the statistical learning task, and a group that exercised for 30 min prior to the statistical learning task. The participants in the exercise groups cycled at 60% of their respective VO2max. Each group demonstrated significant statistical learning, with similar levels of learning among the three groups. Contrary to previous research that has shown that a prior bout of exercise can affect performance on explicit cognitive tasks, the results of the current study suggest that the physiological stress induced by moderate-intensity exercise does not affect implicit cognition as measured by statistical learning.
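As a rough illustration of what a statistical learning task measures, the sketch below builds a Saffran-style syllable stream in which within-"word" transitional probabilities are perfect and between-word transitions are weaker; learning is inferred from sensitivity to that difference. The syllables, word inventory, and probabilities here are invented for illustration and are not the study's materials:

```python
import random

# Hypothetical inventory: three "words" of three syllables each.
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]

def make_stream(n_words, seed=0):
    """Concatenate randomly chosen words (no immediate repeats) into one stream."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_words):
        w = rng.choice([x for x in words if x != prev])
        stream.extend(w)
        prev = w
    return stream

def transitional_prob(stream, a, b):
    """Estimate P(b | a) from adjacent syllable pairs in the stream."""
    pairs = list(zip(stream, stream[1:]))
    after_a = [y for x, y in pairs if x == a]
    return after_a.count(b) / len(after_a) if after_a else 0.0

stream = make_stream(300)
print(transitional_prob(stream, "tu", "pi"))  # within-word: 1.0
```

Between-word transitions (e.g., "ro" followed by "go") occur with probability well below 1.0, which is the statistical signal learners are thought to exploit.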

7.
Mathematical cognition research has largely emphasized concepts that can be directly perceived or grounded in visuospatial referents. These include concrete number systems like natural numbers, integers, and rational numbers. Here, we investigate how a more abstract number system, the irrationals denoted by radical expressions like , is understood across three tasks. Performance on a magnitude comparison task suggests that people interpret irrational numbers (specifically, the radicands of radical expressions) as natural numbers. Strategy self-reports during a number line estimation task reveal that the spatial locations of irrationals are determined by referencing neighboring perfect squares. Finally, perfect squares facilitate the evaluation of arithmetic expressions. These converging results align with a constellation of related phenomena spanning tasks and number systems of varying complexity. Accordingly, we propose that the task-specific recruitment of more concrete representations to make sense of more abstract concepts (referential processing) is an important mechanism for teaching and learning mathematics.
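The "referencing neighboring perfect squares" strategy reported for the number line task can be sketched numerically: bracket the radicand between adjacent perfect squares and interpolate. This is an illustrative reconstruction of the reported strategy, not the authors' materials:

```python
import math

def estimate_sqrt(n):
    """Estimate sqrt(n) by referencing neighboring perfect squares:
    find k with k**2 <= n < (k+1)**2, then interpolate linearly."""
    k = math.isqrt(n)
    lo, hi = k * k, (k + 1) ** 2
    return k + (n - lo) / (hi - lo)

# sqrt(17) lies between sqrt(16) = 4 and sqrt(25) = 5, much closer to 4.
print(round(estimate_sqrt(17), 3))  # 4.111
print(round(math.sqrt(17), 3))      # 4.123
```

The linear interpolation slightly overestimates small offsets and underestimates large ones, but it places the radical in roughly the right region of the number line, consistent with the self-reported strategy.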

8.
Co-thought gestures are understudied compared to co-speech gestures, yet they may provide insight into cognitive functions of gestures that are independent of speech processes. A recent study with adults showed that co-thought gesticulation occurred spontaneously during mental preparation of problem solving. Moreover, co-thought gesturing (either spontaneous or instructed) during mental preparation was effective for subsequent solving of the Tower of Hanoi under conditions of high cognitive load (i.e., when visual working memory capacity was limited and when the task was more difficult). In this preregistered study ( https://osf.io/dreks/ ), we investigated whether co-thought gestures would also spontaneously occur and would aid problem-solving processes in children (N = 74; 8–12 years old) under high load conditions. Although children also spontaneously used co-thought gestures during mental problem solving, this did not aid their subsequent performance when physically solving the problem. If these null results are on track, co-thought gesture effects may be different in adults and children.

9.
The gestures that accompany speech are more than just arbitrary hand movements or communicative devices. They are simulated actions that can both prime and facilitate speech and cognition. This study measured participants' reaction times for naming degraded images of objects while they simultaneously adopted a gesture that was either congruent or incongruent with the target object, or while they made no hand gesture at all. A within-subjects design was used, with participants (N = 122) naming 10 objects under each condition. Participants named the objects significantly faster when adopting a congruent gesture than when not gesturing at all. Adopting an incongruent gesture resulted in significantly slower naming times. The findings are discussed in the context of the intrapersonal cognitive and facilitatory effects of gestures and underline the relatedness between language, action, and cognition.

10.
When asked to explain their solutions to a problem, children often gesture and, at times, these gestures convey information that is different from the information conveyed in speech. Children who produce these gesture‐speech “mismatches” on a particular task have been found to profit from instruction on that task. We have recently found that some children produce gesture‐speech mismatches when identifying numbers at the cusp of their knowledge, for example, a child incorrectly labels a set of two objects with the word “three” and simultaneously holds up two fingers. These mismatches differ from previously studied mismatches (where the information conveyed in gesture has the potential to be integrated with the information conveyed in speech) in that the gestured response contradicts the spoken response. Here, we ask whether these contradictory number mismatches predict which learners will profit from number‐word instruction. We used the Give‐a‐Number task to measure number knowledge in 47 children (Mage = 4.1 years, SD = 0.58), and used the What's on this Card task to assess whether children produced gesture‐speech mismatches above their knower level. Children who were early in their number learning trajectories (“one‐knowers” and “two‐knowers”) were then randomly assigned, within knower level, to one of two training conditions: a Counting condition in which children practiced counting objects; or an Enriched Number Talk condition containing counting, labeling set sizes, spatial alignment of neighboring sets, and comparison of these sets. Controlling for counting ability, we found that children were more likely to learn the meaning of new number words in the Enriched Number Talk condition than in the Counting condition, but only if they had produced gesture‐speech mismatches at pretest. 
The findings suggest that numerical gesture-speech mismatches are a reliable signal that a child is ready to profit from rich number instruction and provide evidence, for the first time, that cardinal number gestures have a role to play in number learning.

11.
Gesture and early bilingual development
The relationship between speech and gestural proficiency was investigated longitudinally (from 2 years to 3 years 6 months, at 6-month intervals) in 5 French-English bilingual boys with varying proficiency in their 2 languages. Because of their different levels of proficiency in the 2 languages at the same age, these children's data were used to examine the relative contribution of language and cognitive development to gestural development. In terms of rate of gesture production, rate of gesture production with speech, and meaning of gesture and speech, the children used gestures much like adults from 2 years on. In contrast, the use of iconic and beat gestures showed differential development in the children's 2 languages as a function of mean length of utterance. These data suggest that the development of these kinds of gestures may be more closely linked to language development than other kinds (such as points). Reasons why this might be so are discussed.

12.
Gesture, Speech, and Lexical Access
Abstract: In a within-subjects design that varied whether speakers were allowed to gesture and the difficulty of lexical access, speakers were videotaped as they described animated action cartoons to a listener. When speakers were permitted to gesture, they gestured more often during phrases with spatial content than during phrases with other content. Speech with spatial content was less fluent when speakers could not gesture than when they could; speech with nonspatial content was not affected by gesture condition. Preventing gesturing increased the relative frequency of nonjuncture filled pauses in speech with spatial content, but not in speech with other content. Overall, the effects of preventing speakers from gesturing resembled those of increasing the difficulty of lexical access by other means, except that the effects of gesture restriction were specific to speech with spatial content. The findings support the hypothesis that gestural accompaniments to spontaneous speech can facilitate access to the mental lexicon.

13.
Explaining Math: Gesturing Lightens the Load
Why is it that people cannot keep their hands still when they talk? One reason may be that gesturing actually lightens cognitive load while a person is thinking of what to say. We asked adults and children to remember a list of letters or words while explaining how they solved a math problem. Both groups remembered significantly more items when they gestured during their math explanations than when they did not gesture. Gesturing appeared to save the speakers' cognitive resources on the explanation task, permitting the speakers to allocate more resources to the memory task. It is widely accepted that gesturing reflects a speaker's cognitive state, but our observations suggest that, by reducing cognitive load, gesturing may also play a role in shaping that state.

14.
Teachers gesture when they teach, and those gestures do not always convey the same information as their speech. Gesture thus offers learners a second message. To determine whether learners take advantage of this offer, we gave 160 children in the third and fourth grades instruction in mathematical equivalence. Children were taught either one or two problem-solving strategies in speech accompanied by no gesture, gesture conveying the same strategy, or gesture conveying a different strategy. The children were likely to profit from instruction with gesture, but only when it conveyed a different strategy than speech did. Moreover, two strategies were effective in promoting learning only when the second strategy was taught in gesture, not speech. Gesture thus has an active hand in learning.

15.
Let be the knowledge space derived from an attribution function σ on Q. Under an assumption for σ, this paper gives some necessary and sufficient conditions such that is discriminative. It also discusses the resolubility of σ when Q is an infinite set. More precisely, this paper proves that σ is not resoluble if Q is uncountable, and gives a necessary and sufficient condition such that σ is resoluble when is -well-graded. By way of applications of these results, discriminativeness and resolubility are discussed around the merge of skill multimaps and the meshing of the delineated knowledge spaces.
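For readers outside knowledge space theory, the sketch below shows, on a tiny finite example, how an attribution function delineates a knowledge space and what "discriminative" means (no two items belong to exactly the same states). The items, skills, and helper names are hypothetical; a finite example can only gesture at the paper's infinite-Q results:

```python
from itertools import chain, combinations

# Hypothetical attribution function on items Q = {a, b, c}: sigma maps each
# item to its competencies (skill subsets), any one of which suffices for it.
sigma = {
    "a": [{"s1"}],
    "b": [{"s1", "s2"}],
    "c": [{"s2"}, {"s3"}],
}
skills = set().union(*chain.from_iterable(sigma.values()))

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Delineated knowledge space: each skill subset T yields the state of items
# for which some competency is contained in T.
states = {frozenset(q for q, comps in sigma.items()
                    if any(c <= set(T) for c in comps))
          for T in powerset(skills)}

def trace(q):
    """The set of states containing item q."""
    return frozenset(K for K in states if q in K)

# Discriminative: distinct items always have distinct traces.
discriminative = all(trace(p) != trace(q)
                     for p in sigma for q in sigma if p != q)
print(sorted(len(K) for K in states), discriminative)  # [0, 1, 1, 2, 3] True
```

Here the delineated space has five states, and since items a, b, and c each appear in a different collection of states, the space is discriminative.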

16.
Previous research has shown a strong positive association between right-handed gesturing and vocabulary development. However, the causal nature of this relationship remains unclear. In the current study, we tested whether gesturing with the right hand enhances linguistic processing in the left hemisphere, which is contralateral to the right hand. We manipulated the gesture hand children used in pointing tasks to test whether it would affect their performance. In either a linguistic task (verb learning) or a non-linguistic control task (memory), 131 typically developing right-handed 3-year-olds were encouraged to use either their right hand or left hand to respond. While encouraging children to use a specific hand to indicate their responses had no effect on memory performance, encouraging children to use the right hand to respond, compared to the left hand, significantly improved their verb learning performance. This study is the first to show that manipulating the hand with which children are encouraged to gesture gives them a linguistic advantage. Language lateralization in healthy right-handed children typically involves a dominant left hemisphere. Producing right-handed gestures may therefore lead to increased activation in the left hemisphere which may, in turn, facilitate forming and accessing lexical representations. It is important to note that this study manipulated gesture handedness among right-handers and therefore does not support the practice of encouraging children to become right-handed in manual activities.

Research Highlights

  • Right-handed 3-year-olds were instructed to point to indicate their answers exclusively with their right or left hand in either a memory or verb learning task.
  • Right-handed pointing was associated with improved verb generalization performance, but not improved memory performance.
  • Thus, gesturing with the right hand, compared to the left hand, gives right-handed 3-year-olds an advantage in a linguistic but not a non-linguistic task.
  • Right-handed pointing might lead to increased activation in the left hemisphere and facilitate forming and accessing lexical representations.

17.
Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old children and adults listening to stories accompanied by hand movements, either meaningful co-speech gestures or meaningless self-adaptors. When listening to stories accompanied by both types of hand movement, both children and adults recruited inferior frontal, inferior parietal, and posterior temporal brain regions known to be involved in processing language not accompanied by hand movements. There were, however, age-related differences in activity in posterior superior temporal sulcus (STSp), inferior frontal gyrus, pars triangularis (IFGTr), and posterior middle temporal gyrus (MTGp) regions previously implicated in processing gesture. Both children and adults showed sensitivity to the meaning of hand movements in IFGTr and MTGp, but in different ways. Finally, we found that hand movement meaning modulates interactions between STSp and other posterior temporal and inferior parietal regions for adults, but not for children. These results shed light on the developing neural substrate for understanding meaning contributed by co-speech gesture.

18.
We studied how gesture use changes with culture, age, and increased spoken language competence. A picture-naming task was presented to British (N = 80) and Finnish (N = 41) typically developing children aged 2–5 years. British children were found to gesture more than Finnish children and, in both cultures, gesture production decreased after the age of two. Two-year-olds used proportionally more deictic than iconic gestures compared with older children, and gestured more before the onset of speech, rather than simultaneously with or after speech. The British 3- and 5-year-olds gestured significantly more when naming praxic (manipulable) items than non-praxic items. Our results support the view that gesture serves a communicative and intrapersonal function, and the relative function may change with age. Speech and language therapists and psychologists observe the development of children's gestures and make predictions on the basis of their frequency and type. To prevent drawing erroneous conclusions about children's linguistic development, it is important to understand developmental and cultural variations in gesture use.

19.
This study investigated whether gesturing classes (baby sign) affected parental frustration and stress, as advertised by many commercial products. The participants were 178 mother–infant dyads, divided into a gesture group (n = 89) and a non-gesture group (n = 89), based on whether they had attended baby sign classes or not. Mothers completed a background demographics questionnaire and the Parenting Stress Index. Gesturing mothers had higher total stress scores, with higher scores on the child domain, despite having similar backgrounds to non-gesturing mothers. There was no relationship between the frequency or duration of gesture use and stress scores. It is suggested that gesturing mothers had higher pre-existing stress and were attracted to gesture classes because of the promoted benefits, which include stress reduction, although class attendance did not alleviate their stress. The possibility that attending gesturing classes made mothers view their infant in a more negative way, due to their heightened expectations not being met, is also discussed. Copyright © 2010 John Wiley & Sons, Ltd.

20.
Children who produce one word at a time often use gesture to supplement their speech, turning a single word into an utterance that conveys a sentence-like meaning ('eat' + point at cookie). Interestingly, the age at which children first produce supplementary gesture-speech combinations of this sort reliably predicts the age at which they first produce two-word utterances. Gesture thus serves as a signal that a child will soon be ready to begin producing multi-word sentences. The question is what happens next. Gesture could continue to expand a child's communicative repertoire over development, combining with words to convey increasingly complex ideas. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. We addressed this question in a sample of 40 typically developing children, each observed at 14, 18, and 22 months. The number of supplementary gesture-speech combinations the children produced increased significantly from 14 to 22 months. More importantly, the types of supplementary combinations the children produced changed over time and presaged changes in their speech. Children produced three distinct constructions across the two modalities several months before these same constructions appeared entirely within speech. Gesture thus continues to be at the cutting edge of early language development, providing stepping-stones to increasingly complex linguistic constructions.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)