Similar Articles
Found 16 similar articles (search time: 15 ms)
3.
Jie Gao, Ratio, 2021, 34(1): 20–32
Self‐deception is typically considered epistemically irrational, for it involves holding certain doxastic attitudes against strong counter‐evidence. Pragmatic encroachment about epistemic rationality says that whether it is epistemically rational to believe, withhold belief or disbelieve something can depend on perceived practical factors of one's situation. In this paper I argue that some cases of self‐deception satisfy what pragmatic encroachment considers sufficient conditions for epistemic rationality. As a result, we face the following dilemma: either we revise the received view about self‐deception or we deny pragmatic encroachment on epistemic rationality. I suggest that the dilemma can be solved if we pay close attention to the distinction between ideal and bounded rationality. I argue that the problematic cases fail to meet standards of ideal rationality but exemplify bounded rationality. The solution preserves pragmatic encroachment on bounded rationality, but denies it on ideal rationality.

4.
Infants employ sophisticated mechanisms to acquire their first language, including some that rely on taking the perspective of adults as speakers or listeners. When do infants first show awareness of what other people understand? We tested 14‐month‐old infants in two experiments measuring event‐related potentials. In Experiment 1, we established that infants produce the N400 effect, a brain signature of semantic violations, in a live object naming paradigm in the presence of an adult observer. In Experiment 2, we induced false beliefs about the labeled objects in the adult observer to test whether infants keep track of the other person's comprehension. The results revealed that infants reacted to the semantic incongruity heard by the other as if they encountered it themselves: they exhibited an N400‐like response, even though labels were congruous from their perspective. This finding demonstrates that infants track the linguistic understanding of social partners. A video abstract of this article can be viewed at https://youtu.be/pQUv8yFhnbk.

5.
Gaze is considered a crucial component of early communication between an infant and her caregiver. When communicatively addressed, infants respond aptly to others' gaze by following its direction. However, experience with face‐to‐face contact varies across cultures, raising the question of whether infants' competencies in receiving others' communicative gaze signals are universal or culturally specific. We used eye‐tracking to assess gaze‐following responses of 5‐ to 7‐month‐olds in Vanuatu, where face‐to‐face parent–infant interactions are less prevalent than in Western populations. We found that—just like Western 6‐month‐olds studied previously—5‐ to 7‐month‐olds living in Vanuatu followed gaze only when communicatively addressed. That is, they followed gaze if presented gaze shifts were preceded by infant‐directed speech, but not if they were preceded by adult‐directed speech. These results are consistent with the notion that early infant gaze following is tied to infants' early emerging communicative competencies and rooted in universal mechanisms rather than being dependent on cultural specificities of early socialization.

7.
Audio‐visual associative learning – at least when linguistic stimuli are employed – is known to rely on core linguistic skills such as phonological awareness. Here we ask whether this would also be the case in a task that does not manipulate linguistic information. Another question of interest is whether executive skills, often found to support learning, may play a larger role in a non‐linguistic audio‐visual associative task compared to a linguistic one. We present a new task that measures learning when having to associate non‐linguistic auditory signals with novel visual shapes. Importantly, our novel task shares with linguistic processes such as reading acquisition the need to associate sounds with arbitrary shapes. Yet, rather than phonemes or syllables, it uses novel environmental sounds – therefore limiting direct reliance on linguistic abilities. Five‐year‐old French‐speaking children (N = 76, 39 girls) were assessed individually in our novel audio‐visual associative task, as well as in a number of other cognitive tasks evaluating linguistic abilities and executive functions. We found phonological awareness and language comprehension to be related to scores in the audio‐visual associative task, while no correlation with executive functions was observed. These results underscore a key relation between foundational language competencies and audio‐visual associative learning, even in the absence of linguistic input in the associative task.

8.
Sensitivity to facial and vocal emotion is fundamental to children's social competence. Previous research has focused on children's facial emotion recognition, and few studies have investigated non‐linguistic vocal emotion processing in childhood. We compared facial and vocal emotion recognition and processing biases in 4‐ to 11‐year‐olds and adults. Eighty‐eight 4‐ to 11‐year‐olds and 21 adults participated. Participants viewed/listened to faces and voices (angry, happy, and sad) at three intensity levels (50%, 75%, and 100%). Non‐linguistic tones were used. For each modality, participants completed an emotion identification task. Accuracy and bias for each emotion and modality were compared across 4‐ to 5‐, 6‐ to 9‐ and 10‐ to 11‐year‐olds and adults. The results showed that children's emotion recognition improved with age; preschoolers were less accurate than other groups. Facial emotion recognition reached adult levels by 11 years, whereas vocal emotion recognition continued to develop in late childhood. Response bias decreased with age. For both modalities, sadness recognition was delayed across development relative to anger and happiness. The results demonstrate that developmental trajectories of emotion processing differ as a function of emotion type and stimulus modality. In addition, vocal emotion processing showed a more protracted developmental trajectory, compared to facial emotion processing. The results have important implications for programmes aiming to improve children's socio‐emotional competence.

10.
This study addresses the interdependencies among phonological awareness, verbal working memory components, and early numerical skills in children 1 year before school entry. Early numerical skills were conceptualized as quantity‐number competencies (QNC) at both basic (QNC Level 1) and advanced (QNC Level 2) levels. In a sample of 1,343 children aged 5 and 6, structural equation modelling provided support for the isolated number words hypothesis (Krajewski & Schneider, 2009, J. Exp. Child Psychol., 103, 516–531). This hypothesis claims that phonological awareness contributes to the acquisition of QNC Level 1, such as learning the number word sequence, but not of QNC Level 2, which requires the linkage of number words to quantities. In addition, phonological awareness relied on verbal working memory, especially with regard to the phonological loop, central executive, and episodic buffer. The results were congruent with the idea that phonological awareness mediates the impact of verbal working memory on QNCs. The relationships between verbal working memory, phonological awareness, and QNCs were comparable in monolingual and bilingual children.

12.
People tend to underestimate subtraction and overestimate addition outcomes and to associate subtraction with the left side and addition with the right side. These two phenomena are collectively labeled 'operational momentum' (OM) and thought to have their origins in the same mechanism of 'moving attention along the mental number line'. OM in arithmetic has never been tested in children at the preschool age, which is critical for numerical development. In this study, 3‐ to 5‐year‐olds were tested with non‐symbolic addition and subtraction tasks. Their level of understanding of counting principles (CP) was assessed using the give‐a‐number task. When the second operand's cardinality was 5 or 6 (Experiment 1), the child's reaction time was shorter in addition/subtraction tasks after cuing attention appropriately to the right/left. Adding/subtracting one element (Experiment 2) revealed a more complex developmental pattern. Before acquiring CP, the children showed a generalized overestimation bias. Underestimation in addition and overestimation in subtraction emerged only after mastering CP. No clear spatial‐directional OM pattern was found; however, the response time to rightward/leftward cues in addition/subtraction again depended on the stage of mastering CP. Although the results support the hypothesis about engagement of spatial attention in early numerical processing, they point to at least partial independence of the spatial‐directional and magnitude OM. This undermines the canonical version of the number line‐based hypothesis. Mapping numerical magnitudes to space may be a complex process that undergoes reorganization during the period of acquisition of symbolic representations of numbers. Some hypotheses concerning the role of spatial‐numerical associations in numerical development are proposed.

13.
There is considerable evidence that labeling supports infants' object categorization. Yet in daily life, most of the category exemplars that infants encounter will remain unlabeled. Inspired by recent evidence from machine learning, we propose that infants successfully exploit this sparsely labeled input through "semi‐supervised learning." Providing only a few labeled exemplars leads infants to initiate the process of categorization, after which they can integrate all subsequent exemplars, labeled or unlabeled, into their evolving category representations. Using a classic novelty preference task, we introduced 2‐year‐old infants (n = 96) to a novel object category, varying whether and when its exemplars were labeled. Infants were equally successful whether all exemplars were labeled (fully supervised condition) or only the first two exemplars were labeled (semi‐supervised condition), but they failed when no exemplars were labeled (unsupervised condition). Furthermore, the timing of the labeling mattered: when the labeled exemplars were provided at the end, rather than the beginning, of familiarization (reversed semi‐supervised condition), infants failed to learn the category. This provides the first evidence of semi‐supervised learning in infancy, revealing that infants excel at learning from exactly the kind of input that they typically receive in acquiring real‐world categories and their names.

14.
Topographically similar verbal responses may be functionally independent forms of operant behavior. For example, saying yes or no may have different functions based on the environmental conditions in effect. The present study extends previous research on both the assessment and acquisition of yes and no responses across contexts in children with language deficits and further examines the functional independence of topographically similar responses. All participants in the present study acquired yes and no responses within verbal operants (e.g., mands). However, generalization of the responses across novel verbal operants (e.g., tacts to intraverbals) did not occur without additional training, thus supporting Skinner's (1957) assertion of functional independence of verbal operants.

15.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co‐speech gesture), not without speech (silent gesture). We ask whether the cross‐linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three‐dimensional motion scenes. We found an effect of language on co‐speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号