Similar articles
20 similar articles found.
1.
The top 5 favorite boys' and girls' names from each state of the USA in 2000 and 2003 were analyzed in terms of the emotional associations of their component sounds and sound pronounceability. These were significantly and variously correlated with a historical factor (year), geographic factors (compass directions), and a political factor (percentage of the popular vote cast for President Bush in 2004). The expected stereotypical sex differences were observed: girls' names were longer, more pleasant, less active, and easier to pronounce (p < .01). It was possible to predict emotional associations and pronounceability (R² = .27-.48, p < .01) on the basis of historical, geographical, and political variables.
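As an illustration only (not the authors' analysis), the kind of prediction reported above can be sketched as a multiple regression in Python; the column names and simulated data below are entirely hypothetical stand-ins for the historical, geographic, and political predictors.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one row per state/year cell (all values are made up).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "year": rng.choice([2000, 2003], n),
    "latitude": rng.uniform(25, 49, n),        # rough north-south position
    "longitude": rng.uniform(-125, -67, n),    # rough east-west position
    "bush_vote_pct": rng.uniform(30, 75, n),   # 2004 popular-vote share
})
# Toy outcome with some dependence on the predictors.
df["pronounceability"] = (
    0.01 * (df["year"] - 2000) + 0.02 * df["latitude"]
    - 0.01 * df["bush_vote_pct"] + rng.normal(0, 1, n)
)

# Predict a name property from historical, geographic, and political variables.
model = smf.ols("pronounceability ~ year + latitude + longitude + bush_vote_pct",
                data=df).fit()
print(model.rsquared)   # analogous in spirit to the reported R² of .27-.48
print(model.params)     # regression coefficients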

2.
The 10 most popular boys' and girls' names for most years of the 20th century were studied by Whissell in terms of the emotional associations of their sounds and their pronounceability. A set of historical and socioeconomic variables (war, depression, the advent of the birth control pill, inflation, and year) predicted component scores for name length, emotionality, and pronounceability. There were significant low-to-medium strength correlations among predictors and criteria, and prediction was significant in four of the six models. For example, the inclusion of Positive Emotional sounds in women's names was predicted with R² = .73 from a formula emphasizing the advent of the pill (beta = 1.58) and year (beta = -1.56).
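Taking the reported betas as standardized regression weights (an assumption about the original scaling), the quoted formula for Positive Emotional sounds in women's names amounts roughly to: predicted z(Positive Emotional sounds) = 1.58 × z(pill) - 1.56 × z(year), accounting for about 73% of the variance (R² = .73).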

3.
Three experiments were carried out to examine the cues that are used in learning to read and spell new words. In a reading task (Experiment 1), even preschoolers who could not read simple real words were able to benefit from print-sound relationships that were based on letter names. They found it easier to learn that the made-up word TM was pronounced as "team" (name condition) than that TM was pronounced as "tame" (sound condition) or as "wide" (visual condition). The letter-name strategy persisted among college students (Experiment 2). In a spelling task (Experiment 3), prereaders and novice readers again did better in the name condition than in the sound condition. The ability to use relationships based on letter sounds emerged later than the ability to use relationships based on letter names. However, sound-based relationships were used to a greater extent in spelling than in reading.

4.
Two experiments tested the common assumption that knowing the letter names helps children learn basic letter-sound (grapheme-phoneme) relations because most names contain the relevant sounds. In Experiment 1 (n=45), children in an experimental group learned English letter names for letter-like symbols. Some of these names contained the corresponding letter sounds, whereas others did not. Following training, children were taught the sounds of these same "letters." Control children learned the same six letters, but with meaningful real-word labels unrelated to the sounds learned in the criterion letter-sound phase. Differences between children in the experimental and control groups indicated that letter-name knowledge had a significant impact on letter-sound learning. Furthermore, letters with names containing the relevant sound facilitated letter-sound learning, but not letters with unrelated names. The benefit of letter-name knowledge was found to depend, in part, on skill at isolating phonemes in spoken syllables. A second experiment (n=20) replicated the name-to-sound facilitation effect with a new sample of kindergarteners who participated in a fully within-subject design in which all children learned meaningless pseudoword names for letters and with phoneme class equated across related and unrelated conditions.

5.
This study investigated knowledge of letter names and letter sounds, their learning, and their contributions to word recognition. Of 123 preschoolers examined on letter knowledge, 65 underwent training on both letter names and letter sounds in a counterbalanced order. Prior to training, children were more advanced in associating letters with their names than with their sounds and could provide the sound of a letter only if they could name it. However, children learned more easily to associate letters with sounds than with names. Training just on names improved performance on sounds, but the sounds produced were extended (CV) rather than phonemic. Learning sounds facilitated later learning of the same letters' names, but not vice versa. Training either on names or on sounds improved word recognition and explanation of printed words. Results are discussed with reference to cognitive and societal factors affecting letter knowledge acquisition, features of the Hebrew alphabet and orthography, and educational implications.

6.
Learning about letters is an important foundation for literacy development. Should children be taught to label letters by conventional names, such as /bi/ for b, or by sounds, such as /b/? We queried parents and teachers, finding that those in the United States stress letter names with young children, whereas those in England begin with sounds. Looking at 5- to 7-year-olds in the two countries, we found that U.S. children were better at providing the names of letters than were English children. English children outperformed U.S. children on letter-sound tasks, and differences between children in the two countries declined with age. We further found that children use the first-learned set of labels to inform the learning of the second set. As a result, English and U.S. children made different types of errors in letter-name and letter-sound tasks. The children's invented spellings also differed in ways reflecting the labels they used for letters.

7.
Preschool-age children (N = 58) were randomly assigned to receive instruction in letter names and sounds, letter sounds only, or numbers (control). Multilevel modeling was used to examine letter name and sound learning as a function of instructional condition and characteristics of both letters and children. Specifically, learning was examined in light of letter name structure, whether letter names included cues to their respective sounds, and children's phonological processing skills. Consistent with past research, children receiving letter name and sound instruction were most likely to learn the sounds of letters whose names included cues to their sounds regardless of phonological processing skills. Only children with higher phonological skills showed a similar effect in the control condition. Practical implications are discussed.
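As a rough illustration (not the authors' analysis), the design can be approximated with an ordinary logistic regression in Python. The paper's multilevel models would additionally include random effects for children and letters, which this toy sketch omits; all names and data below are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for child in range(58):
    condition = ["name_and_sound", "sound_only", "control"][child % 3]
    phon_skill = rng.normal(0, 1)
    # Letters whose names contain their sounds (b, d, k, p, t) vs. letters
    # whose names do not (h, w, y); assignment here is for illustration only.
    for letter, cues in [("b", 1), ("d", 1), ("k", 1), ("p", 1), ("t", 1),
                         ("h", 0), ("w", 0), ("y", 0)]:
        # Toy probability: a boost when name-and-sound instruction meets a
        # letter whose name cues its sound, plus a small phonological-skill effect.
        p = 0.3 + 0.25 * cues * (condition == "name_and_sound") + 0.08 * phon_skill
        rows.append((child, condition, phon_skill, letter, cues,
                     int(rng.random() < np.clip(p, 0.05, 0.95))))

df = pd.DataFrame(rows, columns=["child", "condition", "phon_skill",
                                 "letter", "name_cues_sound", "learned"])

# Fixed-effects approximation of the condition x name-cue interaction.
fit = smf.logit("learned ~ C(condition) * name_cues_sound + phon_skill",
                data=df).fit(disp=0)
print(fit.summary())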

8.
English-speaking children spell letters correctly more often when the letters' names are heard in the word (e.g., B in beach vs. bone). Hebrew letter names have been claimed to be less useful in this regard. In Study 1, kindergartners were asked to report and spell initial and final letters in Hebrew words that included full (CVC), partial (CV), and phonemic (C) cues derived from these letter names (e.g., kaftor, kartis, kibεl, spelled with /kaf/). Correct and biased responses increased with length of congruent and incongruent cues, respectively. In Study 2, preschoolers and kindergartners were asked to report initial letters with monosyllabic or disyllabic names (e.g., /kaf/ or /samεx/, respectively) that included the cues described above. Correct responses increased with cue length; the effect was stronger with monosyllabic letter names than with disyllabic letter names, probably because the cue covered a larger ratio of the letter name. Phonological awareness was linked to use of letter names.

9.
10.
The authors conducted 4 experiments to test the decision-bound, prototype, and distribution theories for the categorization of sounds. They used as stimuli sounds varying in either resonance frequency or duration. They created different experimental conditions by varying the variance and overlap of 2 stimulus distributions used in a training phase and varying the size of the stimulus continuum used in the subsequent test phase. When resonance frequency was the stimulus dimension, the pattern of categorization-function slopes was in accordance with the decision-bound theory. When duration was the stimulus dimension, however, the slope pattern gave partial support for the decision-bound and distribution theories. The authors introduce a new categorization model combining aspects of decision-bound and distribution theories that gives a superior account of the slope patterns across the 2 stimulus dimensions.
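For orientation only, here is a minimal Python sketch (with invented parameter values, not the authors' model) contrasting a decision-bound categorization function with a distribution-based, likelihood-ratio one on a single duration-like dimension; the slope of such functions near the boundary is the kind of quantity the experiments compare.

import numpy as np
from scipy.stats import norm

x = np.linspace(100, 300, 201)        # hypothetical duration continuum (ms)

# Training distributions for categories A and B (means/SDs are illustrative).
mu_a, mu_b, sd = 160.0, 240.0, 30.0

# Decision-bound model: a criterion halfway between the categories, with
# Gaussian noise around the boundary determining response probabilities.
criterion, noise = (mu_a + mu_b) / 2, 20.0
p_b_bound = norm.cdf((x - criterion) / noise)

# Distribution model: probability of responding "B" from the relative
# likelihood of x under the two training distributions.
like_a, like_b = norm.pdf(x, mu_a, sd), norm.pdf(x, mu_b, sd)
p_b_dist = like_b / (like_a + like_b)

# Compare the categorization-function slopes at the category boundary.
mid = len(x) // 2
print("decision-bound slope:", np.gradient(p_b_bound, x)[mid])
print("distribution slope:  ", np.gradient(p_b_dist, x)[mid])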

11.
Nonlinguistic signals in the voice and musical instruments play a critical role in communicating emotion. Although previous research suggests a common mechanism for emotion processing in music and speech, the precise relationship between the two domains is unclear due to the paucity of direct evidence. By applying the adaptation paradigm developed by Bestelmeyer, Rouger, DeBruine, and Belin [2010. Auditory adaptation in vocal affect perception. Cognition, 117(2), 217–223. doi:10.1016/j.cognition.2010.08.008], this study shows cross-domain aftereffects from vocal to musical sounds. Participants heard an angry or fearful sound four times, followed by a test sound and judged whether the test sound was angry or fearful. Results show cross-domain aftereffects in one direction, from vocal utterances to musical sounds, not vice versa. This effect occurred primarily for angry vocal sounds. It is argued that there is a unidirectional relationship between vocal and musical sounds where emotion processing of vocal sounds encompasses musical sounds but not vice versa.

12.
Hemispheric asymmetry in the perception of emotional sounds
Three experiments were conducted to investigate hemispheric asymmetry for the perception of emotional sounds. Pairs of human nonspeech sounds were presented dichotically in a forced choice recognition task. Under divided attention conditions (Experiments 1 and 2) a left ear advantage (LEA) emerged during the second block of trials. Performance accuracy for the left and right ears was equal during the first block of trials. Under selective attention conditions (Experiment 3) an LEA emerged during the first block of trials. The results suggest that attention influences the rate of development of the laterality effect but not the direction of the effect.

13.
14.
In everyday life we often listen to one sound, such as someone's voice, in a background of competing sounds. To do this, we must assign simultaneously occurring frequency components to the correct source, and organize sounds appropriately over time. The physical cues that we exploit to do so are well-established; more recent research has focussed on the underlying neural bases, where most progress has been made in the study of a form of sequential organization known as "auditory streaming". Listeners' sensitivity to streaming cues can be captured in the responses of neurons in the primary auditory cortex, and in EEG wave components with a short latency (< 200 ms). However, streaming can be strongly affected by attention, suggesting that this early processing either receives input from non-auditory areas, or feeds into processes that do.

15.
Embodied emotion: a new paradigm for emotion research
Under the influence of the computer metaphor, cognitive-processing theories of emotion became the main pillar of modern emotion theory. After the computer metaphor ran into difficulties, the embodied cognition view gradually gained acceptance. Its application to emotion research, embodied emotion, can free the field from its traditional limitations and from the dilemma of treating emotion as either mind or body, thereby opening a new paradigm for emotion research.

16.
17.
18.
In four experiments, participants named target pictures that were accompanied by distractor pictures with phonologically related or unrelated names. Across experiments, the type of phonological relationship between the targets and the related distractors was varied: They were homophones (e.g., bat [animal/baseball]), or they shared word-initial segments (e.g., dog-doll) or word-final segments (e.g., ball-wall). The participants either named the objects after an extensive familiarization and practice phase or without any familiarization or practice. In all of the experiments, the mean target-naming latency was shorter in the related than in the unrelated condition, demonstrating that the phonological form of the name of the distractor picture became activated. These results are best explained within a cascaded model of lexical access, that is, under the assumption that the recognition of an object leads to the activation of its name.
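As a toy illustration of the cascaded-activation idea (not the authors' model), shared phonological segments between the distractor's and the target's names can be thought of as pre-activating the target's word form and shortening its naming latency; every number and the overlap rule below are invented.

# Toy cascaded-activation sketch: positionally shared segments between the
# target's and distractor's names shorten the simulated naming latency.
def naming_latency(target, distractor, base=600.0, boost_per_segment=15.0):
    """Return a toy naming latency in ms; shared segments speed the target."""
    shared = sum(t == d for t, d in zip(target, distractor))
    return base - boost_per_segment * shared

print(naming_latency("dog", "doll"))  # word-initial overlap -> shorter latency
print(naming_latency("dog", "cup"))   # unrelated -> baseline latency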

19.
20.