Similar Articles
20 similar articles found (search time: 15 ms)
1.
A naturalistic in-home investigation of maternal teaching strategies while viewing an educational program (Sesame Street) and an entertainment program (a situation comedy) was conducted. Fourteen mothers and their preschool children tape-recorded their conversations while jointly viewing the two television programs. Mothers took the role of teacher during both types of programs, but they talked more and asked proportionally more questions about the content of Sesame Street than they did about the content of the situation comedy. Likewise, during Sesame Street, mothers talked more about educationally relevant concepts such as size, color, and number. Children, while watching Sesame Street, engaged in more labeling of educationally relevant concepts than they did while viewing the situation comedy. During the situation comedy, mothers explained why television characters performed particular behaviors and assigned traits and emotions to characters more frequently than they did during Sesame Street. The findings suggest that the parent may play an important role in helping children maximally utilize television as a teacher.

2.
The current study investigated the impact of different television programming on the social interactions and toy play of preschool children. Same-sex pairs of young children were observed during three types of television programs: cartoons, Sesame Street, and situation comedies. Children were also observed when the television screen was black. Children visually attended to the cartoons the most, Sesame Street less often, and the situation comedy the least. Cartoons dramatically depressed social interaction. Sesame Street elicited the most verbal imitation. Both Sesame Street and the situation comedy allowed the children to divide activity among their peers, the toys, and the television program. Girls verbally imitated program content more than boys. This pattern of findings remained after the children's visual attention to the television was statistically controlled. Several developmental trends were detected. The image of children “mesmerized” in front of the television set, forsaking social interactions and active involvement with their object environment, held true for only one type of programming, namely, cartoons. During the other programs, the children remained active and socially involved.

3.
Preschool children's visual attention to nearly three hours of a heterogeneous sample of children's programming was examined in relation to the presence of 37 simple visual and auditory attributes of television programs. A factor analysis of the attributes indicated that they were largely independent, with the exception of two factors, which were labeled “women and children” and “puppets.” Attributes and factors that were positively related to attention were the puppet factor, women and children factor, auditory changes, peculiar voices, movement, cuts, sound effects, laughter, and applause. Attributes that were negatively related to attention were adult male voices, extended zooms and pans, eye contact, and still shots. Other attributes had both positive and negative effects on attention depending on whether the child was looking at the TV at the time the attribute occurred. It is suggested that attributes are positive or negative to the degree to which they signal informative comprehensible content.
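The attribute–attention relationship described above reduces to correlating time-coded attribute presence with time-coded looking. A minimal stdlib sketch with hypothetical interval codings (the data values below are invented for illustration, not the study's):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical interval coding: 1 = attribute present / child looking,
# 0 = attribute absent / child not looking.
peculiar_voice = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
adult_male_voice = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1]
looking = [1, 1, 0, 1, 1, 0, 1, 1, 0, 0]

print(pearson(peculiar_voice, looking))    # positive, as for "peculiar voices"
print(pearson(adult_male_voice, looking))  # negative, as for "adult male voices"
```

A positive coefficient plays the role of "positively related to attention" in the abstract; the real analysis also factor-analyzed the 37 attributes, which this sketch omits.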

4.
A review of previous studies on children's comprehension of visual formal features did not warrant predictions about children's understanding of the formal features as used in three items from Sesamstraat, the Dutch version of Sesame Street. Therefore, a study was designed in which 45 children in the age range of 4–6 years watched the items and were interviewed. In the first item, a split screen was used to visualize simultaneity of actions. The second item used a subjective camera to suggest the construction of a home video by one of the characters. Version 1 of this item did not show the character while making the home video, whereas Version 2 did show the character while filming. The third item contained a dissolve (Version 1) or a cut (Version 2) to indicate the end of a dream. A general conclusion was that although older children (mean age = 5.9 years) understood the visual formal features better than younger children (mean age = 4.4 years), the extent to which children in both age groups understood the visual formal features appeared to vary between items and versions. Most children in both age groups understood the split screen in the first item. The younger age group did not understand the subjective camera if the making of the home video was not shown. Both the younger and the older age group understood the version with the cut better than the original version with the dissolve. In the discussion, the question of what the findings of this study and previous studies teach us about children's understanding of visual formal features in general was addressed.

5.
A variety of perceptual correspondences between auditory and visual features have been reported, but few studies have investigated how rhythm, an auditory feature defined purely by dynamics relevant to speech and music, interacts with visual features. Here, we demonstrate a novel crossmodal association between auditory rhythm and visual clutter. Participants were shown a variety of visual scenes from diverse categories and asked to report the auditory rhythm that perceptually matched each scene by adjusting the rate of amplitude modulation (AM) of a sound. Participants matched each scene to a specific AM rate with surprising consistency. A spatial-frequency analysis showed that scenes with greater contrast energy in midrange spatial frequencies were matched to faster AM rates. Bandpass-filtering the scenes indicated that greater contrast energy in this spatial-frequency range was associated with an abundance of object boundaries and contours, suggesting that participants matched more cluttered scenes to faster AM rates. Consistent with this hypothesis, AM-rate matches were strongly correlated with perceived clutter. Additional results indicated that both AM-rate matches and perceived clutter depend on object-based (cycles per object) rather than retinal (cycles per degree of visual angle) spatial frequency. Taken together, these results suggest a systematic crossmodal association between auditory rhythm, representing density in the temporal domain, and visual clutter, representing object-based density in the spatial domain. This association may allow for the use of auditory rhythm to influence how visual clutter is perceived and attended.
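The spatial-frequency analysis can be illustrated in one dimension: a cluttered luminance profile (many abrupt light/dark transitions) carries more contrast energy in a mid-range frequency band than a smooth one. A stdlib sketch with toy 1-D "scenes" and an arbitrarily chosen band (the signals and bin range are invented for illustration):

```python
import cmath
import math

def band_energy(signal, lo, hi):
    """Sum of DFT amplitudes over frequency bins lo..hi (naive O(n^2) DFT)."""
    n = len(signal)
    total = 0.0
    for k in range(lo, hi + 1):
        coef = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                   for t in range(n))
        total += abs(coef)
    return total

n = 64
# Smooth scene: one slow luminance undulation (few object boundaries).
smooth = [0.5 + 0.5 * math.sin(2 * math.pi * t / n) for t in range(n)]
# Cluttered scene: a light/dark edge every 4 samples (many contours).
cluttered = [(t // 4) % 2 for t in range(n)]

mid_band = (4, 16)  # illustrative "mid-range" bins
print(band_energy(cluttered, *mid_band) > band_energy(smooth, *mid_band))  # True
```

The cluttered profile's fundamental (bin 8) falls inside the band, while the smooth profile's energy sits entirely at bin 1, mirroring the abstract's link between contours and mid-frequency contrast energy.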

6.
The McGurk effect, where an incongruent visual syllable influences identification of an auditory syllable, does not always occur, suggesting that perceivers sometimes fail to use relevant visual phonetic information. We tested whether another visual phonetic effect, which involves the influence of visual speaking rate on perceived voicing (Green & Miller, 1985), would occur in instances when the McGurk effect does not. In Experiment 1, we established this visual rate effect using auditory and visual stimuli matching in place of articulation, finding a shift in the voicing boundary along an auditory voice-onset-time continuum with fast versus slow visual speech tokens. In Experiment 2, we used auditory and visual stimuli differing in place of articulation and found a shift in the voicing boundary due to visual rate when the McGurk effect occurred and, more critically, when it did not. The latter finding indicates that phonetically relevant visual information is used in speech perception even when the McGurk effect does not occur, suggesting that the incidence of the McGurk effect underestimates the extent of audio-visual integration.
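A voicing boundary of the kind measured here is the voice-onset time (VOT) at which "voiced" responses cross 50%, often read off by interpolating between continuum steps. A minimal sketch with hypothetical identification proportions (the data below are invented, not the published results):

```python
def boundary(vots_ms, p_voiced):
    """VOT (ms) where the falling proportion of 'voiced' responses crosses 0.5,
    linearly interpolated between adjacent continuum steps."""
    for i in range(len(vots_ms) - 1):
        p0, p1 = p_voiced[i], p_voiced[i + 1]
        if p0 >= 0.5 >= p1:
            frac = (p0 - 0.5) / (p0 - p1)
            return vots_ms[i] + frac * (vots_ms[i + 1] - vots_ms[i])
    raise ValueError("no 0.5 crossing in the continuum")

vot = [10, 20, 30, 40, 50]  # ms, continuum steps
# Hypothetical data: fast visual speech shifts the boundary toward
# shorter VOTs relative to slow visual speech.
fast = [0.95, 0.70, 0.40, 0.15, 0.05]
slow = [0.98, 0.85, 0.60, 0.30, 0.10]

print(boundary(vot, fast))  # ~26.7 ms
print(boundary(vot, slow))  # ~33.3 ms
```

The difference between the two boundaries is the "shift due to visual rate" the abstract reports; the real studies would fit a psychometric function rather than interpolate linearly.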

7.
Media Psychology, 2013, 16(2), 165-190
Thirty years after its broadcast premiere, Sesame Street continues to pursue its mission of entertaining and educating children around the world. This article assesses the impact of the U.S. Sesame Street and several international Sesame Street coproductions, through a review of research on the series' effects on children's academic skills and social behavior. Consistent patterns of data collected over 30 years indicate that Sesame Street holds significant positive effects for its viewers across a broad range of subject areas. Measurable effects can endure for as long as 10 to 12 years, and many have been found to be consistent across countries and cultures as well.

8.
The effects of talker variability on visual speech perception were tested by having subjects speechread sentences from either single-talker or mixed-talker sentence lists. Results revealed that changes in talker from trial to trial decreased speechreading performance. To help determine whether this decrement was due to talker change, and not to a change in superficial characteristics of the stimuli, Experiment 2 tested speechreading from visual stimuli whose images were tinted by a single color, or mixed colors. Results revealed that the mixed-color lists did not inhibit speechreading performance relative to the single-color lists. These results are analogous to findings in the auditory speech literature and suggest that, like auditory speech, visual speech operations include a resource-demanding component that is influenced by talker variability.

9.
Studies of the McGurk effect have shown that when discrepant phonetic information is delivered to the auditory and visual modalities, the information is combined into a new percept not originally presented to either modality. In typical experiments, the auditory and visual speech signals are generated by the same talker. The present experiment examined whether a discrepancy in the gender of the talker between the auditory and visual signals would influence the magnitude of the McGurk effect. A male talker's voice was dubbed onto a videotape containing a female talker's face, and vice versa. The gender-incongruent videotapes were compared with gender-congruent videotapes, in which a male talker's voice was dubbed onto a male face and a female talker's voice was dubbed onto a female face. Even though there was a clear incompatibility in talker characteristics between the auditory and visual signals on the incongruent videotapes, the resulting magnitude of the McGurk effect was not significantly different for the incongruent as opposed to the congruent videotapes. The results indicate that the mechanism for integrating speech information from the auditory and the visual modalities is not disrupted by a gender incompatibility even when it is perceptually apparent. The findings are compatible with the theoretical notion that information about voice characteristics of the talker is extracted and used to normalize the speech signal at an early stage of phonetic processing, prior to the integration of the auditory and the visual information.

11.
12.
This research developed a multimodal picture-word task for assessing the influence of visual speech on phonological processing by 100 children between 4 and 14 years of age. We assessed how manipulation of seemingly to-be-ignored auditory (A) and audiovisual (AV) phonological distractors affected picture naming without participants consciously trying to respond to the manipulation. Results varied in complex ways as a function of age and type and modality of distractors. Results for congruent AV distractors yielded an inverted U-shaped function with a significant influence of visual speech in 4-year-olds and 10- to 14-year-olds but not in 5- to 9-year-olds. In concert with dynamic systems theory, we proposed that the temporary loss of sensitivity to visual speech was reflecting reorganization of relevant knowledge and processing subsystems, particularly phonology. We speculated that reorganization may be associated with (a) formal literacy instruction and (b) developmental changes in multimodal processing and auditory perceptual, linguistic, and cognitive skills.

13.
Delayed auditory feedback disrupts the production of speech, causing an increase in speech duration as well as many articulatory errors. To determine whether prolonged exposure to delayed auditory feedback (DAF) leads to adaptive compensations in speech production, 10 subjects were exposed in separate experimental sessions to both incremental and constant-delay exposure conditions. Significant adaptation occurred for syntactically structured stimuli in the form of increased speaking rates. After DAF was removed, aftereffects were apparent for all stimulus types in terms of increased speech rates. A carry-over effect from the first to the second experimental session was evident as long as 29 days after the first session. The use of strategies to overcome DAF and the differences between adaptation to DAF and adaptation to visual rearrangement are discussed.
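Delayed auditory feedback itself is simple to simulate: the signal played back to the speaker is the microphone input shifted by a fixed number of samples. A minimal sketch (the sample rate and toy signal are illustrative; real DAF studies typically use delays around 200 ms):

```python
def delay_feedback(samples, delay_ms, sample_rate_hz=8000):
    """Return the feedback signal: the input shifted right by delay_ms worth of
    samples, zero-padded at the start and truncated to the original duration."""
    shift = int(round(delay_ms * sample_rate_hz / 1000))
    padded = [0.0] * shift + list(samples)
    return padded[:len(samples)]

speech = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
# 0.25 ms at 8 kHz is 2 samples (tiny values chosen so the shift is visible).
print(delay_feedback(speech, 0.25))
```

Incremental versus constant-delay exposure, as in the abstract, would correspond to ramping `delay_ms` up over sessions versus holding it fixed.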

14.
Recent police shootings of unarmed suspects have prompted harsh criticism of law enforcement's use of deadly force. Two studies sought to investigate perceptions of police misuse of deadly force. Study 1 showed that as the number of officers decreased and the number of shots increased, perceptions of misuse of force were heightened. Number of shots per officer significantly predicted perceptions of misuse of force. Study 2 investigated the effects of social dominance orientation, blind patriotism, and right-wing authoritarianism. Results showed a significant interaction between number of officers, number of shots fired, and social dominance orientation. This personality variable was an especially strong predictor of misuse of force in situations involving the largest number of shots fired per officer.

15.
16.
Sound symbolism refers to non-arbitrary mappings between the sounds of words and their meanings and is often studied by pairing auditory pseudowords such as “maluma” and “takete” with rounded and pointed visual shapes, respectively. However, it is unclear what auditory properties of pseudowords contribute to their perception as rounded or pointed. Here, we compared perceptual ratings of the roundedness/pointedness of large sets of pseudowords and shapes to their acoustic and visual properties using a novel application of representational similarity analysis (RSA). Representational dissimilarity matrices (RDMs) of the auditory and visual ratings of roundedness/pointedness were significantly correlated crossmodally. The auditory perceptual RDM correlated significantly with RDMs of spectral tilt, the temporal fast Fourier transform (FFT), and the speech envelope. Conventional correlational analyses showed that ratings of pseudowords transitioned from rounded to pointed as vocal roughness (as measured by the harmonics-to-noise ratio, pulse number, fraction of unvoiced frames, mean autocorrelation, shimmer, and jitter) increased. The visual perceptual RDM correlated significantly with RDMs of global indices of visual shape (the simple matching coefficient, image silhouette, image outlines, and Jaccard distance). Crossmodally, the RDMs of the auditory spectral parameters correlated weakly but significantly with those of the global indices of visual shape. Our work establishes the utility of RSA for analysis of large stimulus sets and offers novel insights into the stimulus parameters underlying sound symbolism, showing that sound-to-shape mapping is driven by acoustic properties of pseudowords and suggesting audiovisual cross-modal correspondence as a basis for language users' sensitivity to this type of sound symbolism.
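The RSA pipeline described above can be sketched with the standard library: build an RDM for each measure as pairwise dissimilarities between stimuli (here, absolute rating differences), then correlate the RDMs' upper triangles with a rank-based (Spearman) correlation. The ratings below are invented for illustration:

```python
import math

def rdm_upper(values):
    """Upper triangle of a pairwise absolute-difference RDM, as a flat list."""
    return [abs(a - b) for i, a in enumerate(values) for b in values[i + 1:]]

def ranks(xs):
    """Ranks (1-based), with ties assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman correlation: Pearson correlation computed on ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical roundedness/pointedness ratings for five pseudowords,
# once from the auditory task and once from a matched visual measure.
auditory = [1.2, 1.5, 3.9, 4.1, 2.0]
visual = [1.0, 1.8, 4.2, 3.8, 2.3]
print(spearman(rdm_upper(auditory), rdm_upper(visual)))
```

A high correlation between the two RDMs is what "significantly correlated crossmodally" means operationally; the published analysis additionally compares perceptual RDMs against RDMs built from acoustic and shape parameters.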

17.
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and visual stimuli. When the same subjects learned to perceive the same auditory stimuli as speech, they integrated the auditory and visual stimuli in a similar manner as natural speech. These results demonstrate the existence of a multisensory speech-specific mode of perception.

18.
Hollnagel, E. The rate of internal speech in paced rehearsal. Scand. J. Psychol., 1973, 14, 241-243. The purpose of the experiment was to measure the rate of internal speech in rehearsal. The method used was paced rehearsal, i.e., the subjects had to synchronize their internal speech with an external signal. The known rate of the external signal, the pace rate, was used as an indication of the rate of internal speech. For each subject the maximum pace rate was found, and the corresponding maximum rate of internal speech was calculated. The mean of the maximum rates of internal speech for 17 subjects was 126 ms/syllable. This is considerably faster than the results obtained in previous experimental studies.
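To make the reported figure concrete, a per-syllable duration converts directly to a speech rate; 126 ms/syllable corresponds to roughly 7.9 syllables per second:

```python
def syllables_per_second(ms_per_syllable):
    """Convert a per-syllable duration (ms) to a rate (syllables/s)."""
    return 1000.0 / ms_per_syllable

# Mean maximum internal-speech rate reported in the abstract.
print(round(syllables_per_second(126), 1))  # 7.9
```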

19.
Perception of visual speech and the influence of visual speech on auditory speech perception is affected by the orientation of a talker's face, but the nature of the visual information underlying this effect has yet to be established. Here, we examine the contributions of visually coarse (configural) and fine (featural) facial movement information to inversion effects in the perception of visual and audiovisual speech. We describe two experiments in which we disrupted perception of fine facial detail by decreasing spatial frequency (blurring) and disrupted perception of coarse configural information by facial inversion. For normal, unblurred talking faces, facial inversion had no influence on visual speech identification or on the effects of congruent or incongruent visual speech movements on perception of auditory speech. However, for blurred faces, facial inversion reduced identification of unimodal visual speech and effects of visual speech on perception of congruent and incongruent auditory speech. These effects were more pronounced for words whose appearance may be defined by fine featural detail. Implications for the nature of inversion effects in visual and audiovisual speech are discussed.

20.
Jordan, T. R., & Abedipour, L. Perception, 2010, 39(9), 1283-1285
Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.
