Similar Documents
20 similar documents were retrieved.
1.
荆伟  方俊明  赵微 《心理学报》2014,46(3):385-395
Using eye-tracking, this study examined the relative roles of perceptual cues and social cues in word learning by children with autism spectrum disorder under three experimental conditions: baseline, congruent, and conflict. The behavioral data showed that in the conflict condition these children chose the boring object as the referent of the novel word, indicating that social cues dominate over perceptual cues, whereas in the baseline and congruent conditions they chose the interesting object as the referent, and word-learning performance was better in the congruent condition than at baseline, indicating that social cues facilitate learning relative to perceptual cues alone. The eye-movement data showed that these children differed from typically developing children in their face-fixation patterns and gaze-following behavior. Thus, although social cues play the same relative role in word learning for these children as for typically developing children, the way they extract social information differs from that of typically developing children.

2.
At 14 months, children appear to struggle to apply their fairly well-developed speech perception abilities to learning similar sounding words (e.g., bih/dih; Stager & Werker, 1997). However, variability in nonphonetic aspects of the training stimuli seems to aid word learning at this age. Extant theories of early word learning cannot account for this benefit of variability. We offer a simple explanation for this range of effects based on associative learning. Simulations suggest that if infants encode both noncontrastive information (e.g., cues to speaker voice) and meaningful linguistic cues (e.g., place of articulation or voicing), then associative learning mechanisms predict these variability effects in early word learning. Crucially, this means that despite the importance of task variables in predicting performance, this body of work shows that phonological categories are still developing at this age, and that the structure of noninformative cues has critical influences on word learning abilities.
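As a way to make the associative account above concrete, here is a minimal delta-rule (Rescorla-Wagner-style) sketch of why speaker variability could help 14-month-olds separate similar sounding labels such as bih/dih. It is not the authors' simulation: the cue coding, the number of voices, the learning rate and the trial counts are all illustrative assumptions.

```python
# Minimal delta-rule sketch: does speaker variability help a learner rely on the
# contrastive phonetic cue when mapping "bih"/"dih" onto two objects?
# Illustrative assumptions only, not the published simulations.
import numpy as np

rng = np.random.default_rng(0)
N_SPEAKERS = 4
N_TRIALS = 400
LR = 0.1

def encode(word, speaker):
    """One trial = [contrastive phonetic cue, one-hot noncontrastive voice cues]."""
    phonetic = np.array([1.0 if word == "bih" else -1.0])
    voice = np.eye(N_SPEAKERS)[speaker]
    return np.concatenate([phonetic, voice])

def train(variable_voices):
    """Delta-rule learner mapping cues onto object labels (+1 = object A, -1 = object B)."""
    w = np.zeros(1 + N_SPEAKERS)
    for _ in range(N_TRIALS):
        word = rng.choice(["bih", "dih"])
        if variable_voices:
            speaker = rng.integers(N_SPEAKERS)        # voices uncorrelated with the word
        else:
            speaker = 0 if word == "bih" else 1       # voice confounded with the word
        x = encode(word, speaker)
        target = 1.0 if word == "bih" else -1.0
        w += LR * (target - w @ x) * x                # error-driven (delta-rule) update
    return w

for variable in (False, True):
    w = train(variable)
    # Probe reliance on the phonetic cue alone (neutral voice = all-zero voice cues).
    bih = np.concatenate([[1.0], np.zeros(N_SPEAKERS)])
    dih = np.concatenate([[-1.0], np.zeros(N_SPEAKERS)])
    print(f"variable voices={variable}: bih-dih separation = {w @ bih - w @ dih:.2f}")
```

In this toy setup a confounded voice cue has to share associative strength with the phonetic cue, so the learned bih/dih separation is smaller than when voices vary across trials and only the phonetic cue remains consistently predictive.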

3.
This study used eye-tracking to examine the ability of 18 children with autism to learn words from social attention cues. The results showed that: (1) children with autism are able to use another person's gaze to learn words; (2) different gaze cues affect their word learning differently; and (3) the mechanism underlying gaze following differs between children with autism and typically developing children: gaze following in children with autism may be a consciously elicited behavior, whereas in typically developing children it is a reflexive, spontaneous behavior.

4.
Adults and adolescents form negative first impressions of ASD adults and children. We examined the first impression ratings of primary school children (6–9 years) of their ASD peers. 146 school children rated either silent videos, speech, or transcribed speech from 14 actors (7 ASD, 7 TD). The ASD actors were rated more negatively than the typically developing actors on all three stimulus types. Children with ASD are likely to be judged more negatively than their peers at the very start of their formal education. Contrary to previous research, for primary school children, the content of the speech was judged as negatively as the delivery of the speech.

5.
于文勃  梁丹丹 《心理科学进展》2018,26(10):1765-1774
Words are the basic structural units of language, and segmenting the speech stream into words is an essential step in language processing. Segmentation cues in fluent speech come from three sources: phonology, semantics, and syntax. Phonological cues include probabilistic information, phonotactic rules, and prosodic information; prosodic information in turn includes word stress, duration, and pitch. The use of these cues is gradually mastered early in language exposure and shows some specificity across language backgrounds. Syntactic and semantic cues are higher-level mechanisms that operate mainly in the later stages of word segmentation. Future research should examine word segmentation cues in spoken language processing from the perspectives of lifespan language development and language specificity.

6.
Twenty‐seven 6‐ to 15‐year‐old children with autism spectrum disorder (ASD) and 32 typically developing (TD) children were questioned about their participation in a set of activities after a 2‐week delay and again after a 2‐month delay using a best practice interview protocol. Interviews were coded for completeness with respect to the gist of the event, the number of narrative details provided, and accuracy. Results indicated that children with ASD did not differ from TD peers on any dimensions of memory after both delays. Specifically, both groups of children provided equivalently complete accounts on both occasions. However, children in both groups provided significantly fewer narrative details about the event in the second interview, and the accuracy rates were lower. The findings indicate that, like TD children, children with ASD can provide meaningful and reliable testimony about an event they personally experienced, but several aspects of their memory reports deteriorate over time.

7.
Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects (“cat”). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the same door-opening role in word learning for children with autism spectrum disorder (ASD) and Down syndrome (DS), who show delayed vocabulary development and who differ in the strength of gesture production. To answer this question, we observed 23 18-month-old TD children, 23 30-month-old children with ASD, and 23 30-month-old children with DS 5 times over a year during parent–child interactions. Children in all 3 groups initially expressed a greater proportion of referents uniquely in gesture than in speech. Many of these unique gestures subsequently entered children’s spoken vocabularies within a year—a pattern that was slightly less robust for children with DS, whose word production was the most markedly delayed. These results indicate that gesture is as fundamental to vocabulary development for children with developmental disorders as it is for TD children.

8.
Computational modeling and eye‐tracking were used to investigate how phonological and semantic information interact to influence the time course of spoken word recognition. We extended our recent models (Chen & Mirman, 2012; Mirman, Britt, & Chen, 2013) to account for new evidence that competition among phonological neighbors influences activation of semantically related concepts during spoken word recognition (Apfelbaum, Blumstein, & McMurray, 2011). The model made a novel prediction: Semantic input modulates the effect of phonological neighbors on target word processing, producing an approximately inverted‐U‐shaped pattern with a high phonological density advantage at an intermediate level of semantic input—in contrast to the typical disadvantage for high phonological density words in spoken word recognition. This prediction was confirmed with a new analysis of the Apfelbaum et al. data and in a visual world paradigm experiment with preview duration serving as a manipulation of strength of semantic input. These results are consistent with our previous claim that strongly active neighbors produce net inhibitory effects and weakly active neighbors produce net facilitative effects.  相似文献   
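The final claim (weakly active neighbors net-facilitate, strongly active neighbors net-inhibit) can be illustrated with a toy calculation. This is not the Chen and Mirman interactive-activation model; the functional forms and constants below are assumptions chosen only to show how such a trade-off can yield an inverted-U-shaped density effect.

```python
# Toy arithmetic illustration: per-neighbor facilitation grows roughly linearly with
# neighbor activation while lateral inhibition grows faster, so a dense phonological
# neighborhood helps most at intermediate activation levels and hurts at high ones.
# Functional forms and constants are illustrative assumptions, not model parameters.

def net_neighbor_effect(activation, facilitation_gain=1.0, inhibition_gain=1.4):
    """Net contribution of one neighbor to the target word's activation."""
    return facilitation_gain * activation - inhibition_gain * activation ** 2

def target_support(n_neighbors, neighbor_activation):
    """Total support the target receives from its phonological neighborhood."""
    return n_neighbors * net_neighbor_effect(neighbor_activation)

# Treat the semantic-input manipulation as something that modulates how active the
# phonological neighbors become during recognition.
for neighbor_activation in (0.1, 0.3, 0.8):
    sparse = target_support(n_neighbors=2, neighbor_activation=neighbor_activation)
    dense = target_support(n_neighbors=12, neighbor_activation=neighbor_activation)
    advantage = dense - sparse
    print(f"neighbor activation {neighbor_activation:.1f}: "
          f"high-density advantage = {advantage:+.2f}")
```

With these toy settings the high-density advantage is small at low neighbor activation, largest at the intermediate level, and turns into a disadvantage when neighbors are strongly active, which mirrors the qualitative pattern described in the abstract.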

9.
Over 30 years ago, it was suggested that difficulties in the ‘auditory organization’ of word forms in the mental lexicon might cause reading difficulties. It was proposed that children used parameters such as rhyme and alliteration to organize word forms in the mental lexicon by acoustic similarity, and that such organization was impaired in developmental dyslexia. This literature was based on an ‘oddity’ measure of children's sensitivity to rhyme (e.g. wood, book, good) and alliteration (e.g. sun, sock, rag). The ‘oddity’ task revealed that children with dyslexia were significantly poorer at identifying the ‘odd word out’ than younger children without reading difficulties. Here we apply a novel modelling approach drawn from auditory neuroscience to study the possible sensory basis of the auditory organization of rhyming and non‐rhyming words by children. We utilize a novel Spectral‐Amplitude Modulation Phase Hierarchy (S‐AMPH) approach to analysing the spectro‐temporal structure of rhyming and non‐rhyming words, aiming to illuminate the potential acoustic cues used by children as a basis for phonological organization. The S‐AMPH model assumes that speech encoding depends on neuronal oscillatory entrainment to the amplitude modulation (AM) hierarchy in speech. Our results suggest that phonological similarity between rhyming words in the oddity task depends crucially on slow (delta band) modulations in the speech envelope. Contrary to linguistic assumptions, therefore, auditory organization by children may not depend on phonemic information for this task. Linguistically, it is assumed that ‘book’ does not rhyme with ‘wood’ and ‘good’ because the final phoneme differs. However, our auditory analysis suggests that the acoustic cues to this phonological dissimilarity depend primarily on the slower amplitude modulations in the speech envelope, thought to carry prosodic information. Therefore, the oddity task may help in detecting reading difficulties because phonological similarity judgements about rhyme reflect sensitivity to slow amplitude modulation patterns. Slower amplitude modulations are known to be detected less efficiently by children with dyslexia.
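For readers who want a concrete picture of the kind of analysis involved, the sketch below extracts a word's amplitude envelope and isolates its delta-band (roughly 0.5 to 4 Hz) modulations, then compares envelopes across word-like stimuli. The published S-AMPH pipeline is considerably more elaborate; the filter settings, the synthetic stimuli, and the correlation-based similarity measure here are illustrative assumptions only.

```python
# Sketch: delta-band amplitude-envelope extraction and a crude envelope-similarity
# measure, applied to synthetic amplitude-modulated noise standing in for recorded words.
import numpy as np
from scipy.signal import butter, hilbert, resample_poly, sosfiltfilt

FS = 16000      # audio sample rate (Hz)
ENV_FS = 100    # envelope sample rate after downsampling (Hz)

def delta_band_envelope(signal):
    """Amplitude envelope (Hilbert magnitude), downsampled and band-passed to ~0.5-4 Hz."""
    envelope = np.abs(hilbert(signal))
    envelope = resample_poly(envelope, up=1, down=FS // ENV_FS)
    sos = butter(2, [0.5, 4.0], btype="band", fs=ENV_FS, output="sos")
    return sosfiltfilt(sos, envelope)

def synthetic_word(duration_s, mod_rate_hz, seed):
    """Amplitude-modulated noise; the modulation rate stands in for the word's slow
    (prosodic/syllabic) envelope structure."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * FS)) / FS
    modulation = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_rate_hz * t))
    return modulation * rng.standard_normal(t.size)

def envelope_similarity(a, b):
    """Crude similarity: Pearson correlation of the two delta-band envelopes."""
    ea, eb = delta_band_envelope(a), delta_band_envelope(b)
    n = min(ea.size, eb.size)
    return float(np.corrcoef(ea[:n], eb[:n])[0, 1])

wood = synthetic_word(1.5, mod_rate_hz=2.0, seed=1)
good = synthetic_word(1.5, mod_rate_hz=2.0, seed=2)   # same slow modulation rate
rag = synthetic_word(1.5, mod_rate_hz=3.5, seed=3)    # different slow modulation rate
print("wood vs good delta-envelope similarity:", round(envelope_similarity(wood, good), 2))
print("wood vs rag  delta-envelope similarity:", round(envelope_similarity(wood, rag), 2))
```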

10.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.

11.
While many studies point to a positive relationship between phonological skills and reading in English, little is known about these relationships for children learning to read in Arabic. Arabic orthography is considered deep if it is not vowelized but shallow if it is vowelized. The aim of this study was to examine the relationships among reading ability, phonological, semantic, orthographic and syntactic skills in Arabic. The participants were 143 Arab children, aged 8‐11, in Arab villages of central Israel. They were administered working memory, visual, oral cloze, phonological, word recognition, spelling, orthographic, and word attack tests. The results showed that the word recognition test was highly correlated with phonological skills, semantic processing, syntactic knowledge and short‐term memory. Poor readers showed a significant lag in the development of these skills, the problems being most significant at the phonological and semantic levels and less so at the visual level. The similarities and differences between the acquisition of reading skills in Arabic and English are discussed.

12.
The self‐teaching hypothesis describes how children progress toward skilled sight‐word reading. It proposes that children do this via phonological recoding with assistance from contextual cues, to identify the target pronunciation for a novel letter string, and in so doing create an opportunity to self‐teach new orthographic knowledge. We present a new computational implementation of self‐teaching within the dual‐route cascaded (DRC) model of reading aloud, and we explore how decoding and contextual cues can work together to enable accurate self‐teaching under a variety of circumstances. The new model (ST‐DRC) uses DRC’s sublexical route and the interactivity between the lexical and sublexical routes to simulate phonological recoding. Known spoken words are activated in response to novel printed words, triggering an opportunity for orthographic learning, which is the basis for skilled sight‐word reading. ST‐DRC also includes new computational mechanisms for simulating how contextual information aids word identification, and it demonstrates how partial decoding and ambiguous context interact to achieve irregular‐word learning. Beyond modeling orthographic learning and self‐teaching, ST‐DRC’s performance suggests new avenues for empirical research on how difficult word classes such as homographs and potentiophones are learned.
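The self-teaching logic described here, in which decoding and context jointly identify the spoken word for a novel spelling and the resulting pairing is stored as new orthographic knowledge, can be sketched as follows. This toy is not the ST-DRC model itself; the grapheme-phoneme rules, the ad hoc phoneme symbols, the candidate lexicon and the plausibility weights are all hypothetical.

```python
# Toy self-teaching sketch: rule-based decoding proposes a (regularized) pronunciation
# for a novel printed word, contextual plausibility re-ranks known spoken words, and
# the winning spelling-to-pronunciation pairing is stored as new orthographic knowledge.
from difflib import SequenceMatcher

# Hypothetical grapheme-phoneme rules with ad hoc phoneme symbols; "yacht" is
# irregular, so decoding with these regular rules comes out wrong ("jaCt", not "jQt").
GPC_RULES = [("ch", "C"), ("y", "j"), ("a", "a"), ("t", "t")]

def decode(spelling):
    """Greedy left-to-right sublexical decoding using regular rules only."""
    out, i = [], 0
    while i < len(spelling):
        for grapheme, phoneme in GPC_RULES:
            if spelling.startswith(grapheme, i):
                out.append(phoneme)
                i += len(grapheme)
                break
        else:
            i += 1                      # skip letters the rules do not cover
    return "".join(out)

def self_teach(spelling, context_plausibility, orthographic_lexicon):
    """Combine the decoded form with contextual plausibility over known spoken words,
    then store the new orthographic entry for the winning pronunciation."""
    decoded = decode(spelling)
    def score(pronunciation):
        phon_match = SequenceMatcher(None, decoded, pronunciation).ratio()
        return phon_match * context_plausibility[pronunciation]
    best = max(context_plausibility, key=score)
    orthographic_lexicon[spelling] = best   # orthographic learning: spelling -> pronunciation
    return decoded, best

# Known spoken words with hypothetical plausibility in a story about boats:
# "jQt" stands for spoken "yacht"; "jat" and "kat" are other known spoken forms.
context = {"jQt": 0.9, "jat": 0.1, "kat": 0.05}
orthographic_lexicon = {}
decoded, learned = self_teach("yacht", context, orthographic_lexicon)
print("decoded form:", decoded)             # regularized, only partially correct
print("word identified via context:", learned)
print("orthographic lexicon:", orthographic_lexicon)
```

Even though rule-based decoding regularizes the irregular spelling, the contextually plausible candidate that best matches the decoded form still wins, which is the sense in which partial decoding and context jointly support irregular-word learning in this sketch.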

13.
Children (4 to 6 years of age) were taught to associate printed 3- or 4-letter abbreviations, or cues, with spoken words (e.g., bfr for beaver). All but 1 of the letters in the cue corresponded to phonemes in the spoken target word. Two types of cues were constructed: phonetic cues, in which the medial letter was phonetically similar to the target word, and control cues, in which the central phoneme was phonetically dissimilar. In Experiment 1, children learned the phonetic cues better than the control cues, and learning correlated with measures of phonological skill and knowledge of the meanings of the words taught. In Experiment 2, the target words differed on a semantic variable (imageability), and learning was influenced by both the phonetic properties of the cue and the imageability of the words used.

14.
The present study was designed to teach conversational speech using text‐message prompts to children with autism spectrum disorder (ASD) in home play settings with siblings and peers. A multiple baseline design across children was used. Children learned conversational speech through the text‐message prompts, and the behavior generalized across peers and settings. Maintenance of treatment gains was seen at 1‐month follow‐up probes. Social validity measures indicated that parents of typically developing children viewed the participants' conversational speech as much improved after the intervention. Results are discussed in terms of the efficacy of text‐message prompts as a promising way to improve conversational speech for children with ASD.

15.
Sleep is known to support the neocortical consolidation of declarative memory, including the acquisition of new language. Autism spectrum disorder (ASD) is often characterized by both sleep and language learning difficulties, but few studies have explored a potential connection between the two. Here, 54 children with and without ASD (matched on age, nonverbal ability and vocabulary) were taught nine rare animal names (e.g., pipa). Memory was assessed via definitions, naming and speeded semantic decision tasks immediately after learning (pre‐sleep), the next day (post‐sleep, with a night of polysomnography between pre‐ and post‐sleep tests) and roughly 1 month later (follow‐up). Both groups showed comparable performance at pre‐test and similar levels of overnight change on all tasks; but at follow‐up children with ASD showed significantly greater forgetting of the unique features of the new animals (e.g., pipa is a flat frog). Children with ASD had significantly lower central non‐rapid eye movement (NREM) sigma power. Associations between spindle properties and overnight changes in speeded semantic decisions differed by group. For the TD group, spindle duration predicted overnight changes in responses to novel animals but not familiar animals, reinforcing a role for sleep in the stabilization of new semantic knowledge. For the ASD group, sigma power and spindle duration were associated with improvements in responses to novel and particularly familiar animals, perhaps reflecting more general sleep‐associated improvements in task performance. Plausibly, microstructural sleep atypicalities in children with ASD and differences in how information is prioritized for consolidation may lead to cumulative consolidation difficulties, compromising the quality of newly formed semantic representations in long‐term memory.

16.
Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words and/or integrating them with existing knowledge. Nineteen boys with ASD and 19 typically developing (TD) boys matched on age and vocabulary knowledge showed similar improvements in recognition and recall of novel words (e.g. ‘biscal’) 24 hours after training, suggesting an intact ability to consolidate explicit knowledge of new spoken word forms. TD children showed competition effects for existing neighbors (e.g. ‘biscuit’) after 24 hours, suggesting that the new words had been integrated with existing knowledge over time. In contrast, children with ASD showed immediate competition effects that were not significant after 24 hours, suggesting a qualitative difference in the time course of lexical integration. These results are considered from the perspective of the dual‐memory systems framework.

17.
侯文文  苏怡 《心理科学进展》2022,30(11):2558-2569
One manifestation of language impairment in autism is delayed vocabulary development, which may be related to impairments in attention and memory. Current findings indicate that when learning words, children with autism have difficulty using the useful information provided by social attention, and their attention is easily captured by irrelevant stimuli. This may make the object-word associations they form unstable, which in turn affects their ability to integrate these associations into the mental lexicon and retain them in memory. Future research should investigate the developmental trajectory and mechanisms by which joint attention affects word learning in children with autism, the influence of children's vocabulary knowledge on their memory for words, and the word-learning process and individual differences of children with autism in naturalistic settings.

18.
The nature and generality of the developmental association between phonological short‐term memory and vocabulary knowledge was explored in two studies. Study 1 investigated whether the link between vocabulary and verbal memory arises from the requirement to articulate memory items at recall or from earlier processes involved in the encoding and storage of the verbal material. Four‐year‐old children were tested on immediate memory measures which required either spoken recall (nonword repetition and digit span) or recognition of a sequence of nonwords. The phonological memory–vocabulary association was found to be as strong for the serial recognition as recall‐based measures, favouring the view that it is phonological short‐term memory capacity rather than speech output skills which constrain word learning. In Study 2, the association between phonological memory skills and vocabulary knowledge was found to be strong in teenaged as well as younger children, indicating that phonological memory constraints on word learning remain significant throughout childhood. Copyright © 1999 John Wiley & Sons, Ltd.

19.
Research into emotional responsiveness in Autism Spectrum Disorder (ASD) has yielded mixed findings. Some studies report uniform, flat and emotionless expressions in ASD; others describe highly variable expressions that are as or even more intense than those of typically developing (TD) individuals. Variability in findings is likely due to differences in study design: some studies have examined posed (i.e., not spontaneous) expressions and others have examined spontaneous expressions in social contexts, during which individuals with ASD—by nature of the disorder—are likely to behave differently than their TD peers. To determine whether (and how) spontaneous facial expressions and other emotional responses are different from TD individuals, we video-recorded the spontaneous responses of children and adolescents with and without ASD (between the ages of 10 and 17 years) as they watched emotionally evocative videos in a non-social context. Researchers coded facial expressions for intensity, and noted the presence of laughter and other responsive vocalizations. Adolescents with ASD displayed more intense, frequent and varied spontaneous facial expressions than their TD peers. They also produced significantly more emotional vocalizations, including laughter. Individuals with ASD may display their emotions more frequently and more intensely than TD individuals when they are unencumbered by social pressure. Differences in the interpretation of the social setting and/or understanding of emotional display rules may also contribute to differences in emotional behaviors between groups.

20.
Children tend to produce words earlier when they are connected to a variety of other words along the phonological and semantic dimensions. Though these semantic and phonological connectivity effects have been extensively documented, little is known about their underlying developmental mechanism. One possibility is that learning is driven by lexical network growth where highly connected words in the child's early lexicon enable learning of similar words. Another possibility is that learning is driven by highly connected words in the external learning environment, instead of highly connected words in the early internal lexicon. The present study tests both scenarios systematically in both the phonological and semantic domains across 10 languages. We show that phonological and semantic connectivity in the learning environment drives growth in both production- and comprehension-based vocabularies, even controlling for word frequency and length. This pattern of findings suggests a word learning process where children harness their statistical learning abilities to detect and learn highly connected words in the learning environment.
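A minimal sketch of the phonological side of this connectivity measure: count, for each word form in a toy stand-in for the child's learning environment, how many other forms lie within an edit distance of one. The actual analyses were run on transcribed child-directed speech in 10 languages and controlled for word frequency and length; the word forms below are hypothetical, written roughly one character per phoneme.

```python
# Sketch: phonological connectivity (neighborhood degree) of word forms in a toy
# learning environment, using Levenshtein distance 1 as the neighbor criterion.
from itertools import combinations

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

# Hypothetical phonemic forms from child-directed speech.
word_forms = ["kat", "hat", "bat", "bal", "dOg", "kar", "kap", "mIlk"]

connectivity = {w: 0 for w in word_forms}
for w1, w2 in combinations(word_forms, 2):
    if edit_distance(w1, w2) == 1:      # phonological neighbors
        connectivity[w1] += 1
        connectivity[w2] += 1

# Forms with more neighbors in the input are the ones the study reports are learned earlier.
for word, degree in sorted(connectivity.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {degree} phonological neighbors")
```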
