Similar Documents
20 similar documents found.
1.
This experiment tested the hypothesis that syntactic constituents in American Sign Language (ASL) serve as perceptual units. We adapted the strategy first employed by Fodor and Bever in 1965 in a study of the psychological reality of linguistic speech segments. Four deaf subjects were shown ASL sign sequences constructed to contain a single constituent break. The dependent measure was the subjective location of a light flash occurring during the sign sequence. The prediction that the flashes would be attracted to the constituent boundary was supported for two of the subjects, while the other two showed random placement of the flash location on either side of the constituent boundary. The two subjects not performing in the predicted direction were more proficient in English (written) than the two giving the effect. It was suggested that this relatively greater proficiency may have interfered in some way with the ASL syntax to produce the results obtained.

2.
Dynamic properties influence the perception of facial expressions (cited 8 times: 0 self-citations, 8 by others)
Two experiments were conducted to investigate the role played by dynamic information in identifying facial expressions of emotion. Dynamic expression sequences were created by generating and displaying morph sequences which changed the face from neutral to a peak expression in different numbers of intervening intermediate stages, to create fast (6 frames), medium (26 frames), and slow (101 frames) sequences. In experiment 1, participants were asked to describe what the person shown in each sequence was feeling. Sadness was more accurately identified when slow sequences were shown. Happiness, and to some extent surprise, was identified more accurately from faster sequences, while anger was most accurately detected from the sequences of medium pace. In experiment 2, we used an intensity-rating task and static images as well as dynamic ones to examine whether effects were due to total time of the displays or to the speed of sequence. Accuracies of expression judgments were derived from the rated intensities and the results were similar to those of experiment 1 for angry and sad expressions (surprised and happy were close to ceiling). Moreover, the effect of display time was found only for dynamic expressions and not for static ones, suggesting that it was speed, not time, which was responsible for these effects. These results suggest that representations of basic expressions of emotion encode information about dynamic as well as static properties.
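The abstract describes morph sequences that move a face from neutral to a peak expression through a fixed number of intermediate frames, with the frame count (6, 26, or 101) setting the pace. As a rough sketch of that idea only, assuming a simple linear pixel cross-fade rather than the authors' actual face-morphing software, and with placeholder arrays standing in for face photographs:

    # Illustration only: a linear "morph" built by cross-fading two images.
    # The real stimuli were produced with face-morphing software; here the
    # neutral and peak images are placeholder grayscale arrays.
    import numpy as np

    def morph_sequence(neutral: np.ndarray, peak: np.ndarray, n_frames: int) -> list:
        """Return n_frames images blending linearly from neutral to peak."""
        weights = np.linspace(0.0, 1.0, n_frames)   # 0 = neutral, 1 = peak expression
        return [(1.0 - w) * neutral + w * peak for w in weights]

    neutral_face = np.zeros((128, 128))   # hypothetical neutral-expression image
    peak_face = np.ones((128, 128))       # hypothetical peak-expression image

    fast = morph_sequence(neutral_face, peak_face, 6)     # "fast" sequence
    medium = morph_sequence(neutral_face, peak_face, 26)  # "medium" sequence
    slow = morph_sequence(neutral_face, peak_face, 101)   # "slow" sequence
    print(len(fast), len(medium), len(slow))              # 6 26 101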

3.
Previous studies of cerebral asymmetry for the perception of American Sign Language (ASL) have used only static representations of signs; in this study we present moving signs. Congenitally deaf, native ASL signers identified moving signs, static representations of signs, and English words. The stimuli were presented rapidly by motion picture to each visual hemifield. Normally hearing English speakers also identified the English words. Consistent with previous findings, both the deaf and the hearing subjects showed a left-hemisphere advantage to the English words; likewise, the deaf subjects showed a right hemisphere advantage to the statically presented signs. With the moving signs, the deaf showed no lateral asymmetry. The shift from right dominance to a more balanced hemispheric involvement with the change from static to moving signs is consistent with Kimura's position that the left hemisphere predominates in the analysis of skilled motor sequencing (Kimura 1976). The results also indicate that ASL may be more bilaterally represented than is English and that the spatial component of language stimuli can greatly influence lateral asymmetries.

4.
Observers' perceptions of the three-dimensional structure of smoothly curved surfaces defined by patterns of image shading were investigated under varying conditions of illumination. In five experiments, observers judged the global orientation and the motion of the simulated surfaces from both static and dynamic patterns of image shading. We found that perceptual performance was more accurate with static than with dynamic displays. Dynamic displays evoked systematic biases in perceptual performance when the surface and the illumination source were simulated as rotating in opposite directions. In these conditions, the surface was incorrectly perceived as rotating in the same direction as the illumination source. Conversely, the orientation of the simulated surfaces was perceived correctly when the frames making up the apparent-motion sequences of the dynamic displays were presented as static images. In Experiment 6, moreover, the results obtained with the computer-generated displays were replicated with solid objects.

5.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.

6.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
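For readers unfamiliar with how categorical perception is assessed on such continua, the sketch below shows one way discrimination scores from an 11-step morph continuum could be split into within-category and across-boundary pairs. Only the number of steps comes from the abstract; the pair spacing, boundary location, and accuracy values are invented for illustration.

    # Hypothetical scoring sketch for a categorical-perception (CP) analysis of
    # an 11-step morph continuum. Only the number of steps (11) comes from the
    # abstract; the pair spacing, boundary location, and accuracies are invented.
    from statistics import mean

    N_STEPS = 11     # morph images 1..11, equal steps between two expressions
    PAIR_GAP = 2     # assume ABX pairs are two continuum steps apart
    BOUNDARY = 6     # assume the category boundary falls between images 6 and 7

    def pair_type(a, b):
        """Label a pair as crossing the assumed category boundary or not."""
        return "across" if (a <= BOUNDARY) != (b <= BOUNDARY) else "within"

    # Invented proportion-correct ABX scores for each pair along the continuum.
    abx_correct = {(i, i + PAIR_GAP): 0.9 if pair_type(i, i + PAIR_GAP) == "across" else 0.6
                   for i in range(1, N_STEPS - PAIR_GAP + 1)}

    across = [p for pair, p in abx_correct.items() if pair_type(*pair) == "across"]
    within = [p for pair, p in abx_correct.items() if pair_type(*pair) == "within"]

    # A CP effect appears as better discrimination across the boundary than within.
    print(f"across-boundary accuracy: {mean(across):.2f}")
    print(f"within-category accuracy: {mean(within):.2f}")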

7.
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

8.
Recent experiments have suggested that seeing a familiar face move provides additional dynamic information to the viewer, useful in the recognition of identity. In four experiments, repetition priming was used to investigate whether dynamic information is intrinsic to the underlying face representations. The results suggest that a moving image primes more effectively than a static image, even when the same static image is shown in the prime and the test phases (Experiment 1). Furthermore, when moving images are presented in the test phase (Experiment 2), there is an advantage for moving prime images. The greatest priming advantage is found with naturally moving faces, rather than with those shown in slow motion (Experiment 3). Finally, showing the same moving sequence at prime and test produced more priming than that found when different moving sequences were shown (Experiment 4). The results suggest that dynamic information is intrinsic to the face representations and that there is an advantage to viewing the same moving sequence at prime and test.

9.
Deaf bilinguals for whom American Sign Language (ASL) is the first language and English is the second language judged the semantic relatedness of word pairs in English. Critically, a subset of both the semantically related and unrelated word pairs was selected such that the translations of the two English words also had related forms in ASL. Word pairs that were semantically related were judged more quickly when the form of the ASL translation was also similar, whereas word pairs that were semantically unrelated were judged more slowly when the form of the ASL translation was similar. A control group of hearing bilinguals without any knowledge of ASL produced an entirely different pattern of results. Taken together, these results constitute the first demonstration that deaf readers activate the ASL translations of written words under conditions in which the translation is neither present perceptually nor required to perform the task.

10.
We present three experiments to identify the specific information sources that skilled participants use to make recognition judgements when presented with dynamic, structured stimuli. A group of less skilled participants acted as controls. In all experiments, participants were presented with filmed stimuli containing structured action sequences. In a subsequent recognition phase, participants were presented with new and previously seen stimuli and were required to make judgements as to whether or not each sequence had been presented earlier (or were edited versions of earlier sequences). In Experiment 1, skilled participants demonstrated superior sensitivity in recognition when viewing dynamic clips compared with static images and clips where the frames were presented in a nonsequential, randomized manner, implicating the importance of motion information when identifying familiar or unfamiliar sequences. In Experiment 2, we presented normal and mirror-reversed sequences in order to distort access to absolute motion information. Skilled participants demonstrated superior recognition sensitivity, but no significant differences were observed across viewing conditions, leading to the suggestion that skilled participants are more likely to extract relative rather than absolute motion when making such judgements. In Experiment 3, we manipulated relative motion information by occluding several display features for the duration of each film sequence. A significant decrement in performance was reported when centrally located features were occluded compared to those located in more peripheral positions. Findings indicate that skilled participants are particularly sensitive to relative motion information when attempting to identify familiarity in dynamic, visual displays involving interaction between numerous features.

12.
Bimodal bilinguals, fluent in a signed and a spoken language, provide unique insight into the nature of syntactic integration and language control. We investigated whether bimodal bilinguals who are conversing with English monolinguals produce American Sign Language (ASL) grammatical facial expressions to accompany parallel syntactic structures in spoken English. In ASL, raised eyebrows mark conditionals, and furrowed eyebrows mark wh-questions; the grammatical brow movement is synchronized with the manual onset of the clause. Bimodal bilinguals produced more ASL-appropriate facial expressions than did nonsigners and synchronized their expressions with the onset of the corresponding English clauses. This result provides evidence for a dual-language architecture in which grammatical information can be integrated up to the level of phonological implementation. Overall, participants produced more raised brows than furrowed brows, which can convey negative affect. Bimodal bilinguals suppressed but did not completely inhibit ASL facial grammar when it conflicted with conventional facial gestures. We conclude that morphosyntactic elements from two languages can be articulated simultaneously and that complete inhibition of the nonselected language is difficult.

13.
Two experiments test whether isolated visible speech movements can be used for face matching. Visible speech information was isolated with a point-light methodology. Participants were asked to match articulating point-light faces to a fully illuminated articulating face in an XAB task. The first experiment tested single-frame static face stimuli as a control. The results revealed that the participants were significantly better at matching the dynamic face stimuli than the static ones. Experiment 2 tested whether the observed dynamic advantage was based on the movement itself or on the fact that the dynamic stimuli consisted of many more static and ordered frames. For this purpose, frame rate was reduced, and the frames were shown in a random order, a correct order with incorrect relative timing, or a correct order with correct relative timing. The results revealed better matching performance with the correctly ordered and timed frame stimuli, suggesting that matches were based on the actual movement itself. These findings suggest that speaker-specific visible articulatory style can provide information for face matching.

14.
15.
We tested hearing 6- and 10-month-olds' ability to discriminate among three American Sign Language (ASL) parameters (location, handshape, and movement) as well as a grammatical marker (facial expression). ASL-naïve infants were habituated to a signer articulating a two-handed symmetrical sign in neutral space. During test, infants viewed novel two-handed signs that varied in only one parameter or in facial expression. Infants detected changes in the signer's facial expression and in the location of the sign but provided no evidence of detecting the changes in handshape or movement. These findings are consistent with children's production errors in ASL and reveal that infants can distinguish among some parameters of ASL more easily than others.

16.
Lane, Boyes-Braem, and Bellugi (1976) suggested that American Sign Language (ASL) was perceived according to distinctive features, as is the case with speech. They advanced a binary model of distinctive features for one component of ASL signs, handshape. To test the validity of this model, three native users of ASL and three English speakers who knew no ASL participated in two experiments. In the first, subjects identified ASL handshapes obscured by visual noise, and confusion frequencies yielded similarity scores for all possible pairs of handshapes. Pairs that shared more features according to the model produced higher similarity scores for both groups of subjects. In the second experiment, subjects discriminated a subset of all possible handshape pairs in a speeded “same-different” task; discrimination accuracy and reaction times yielded a d’ and a ds value, respectively, for each pair. Pairs that shared more features according to a slightly revised version of the model produced lower discrimination indices for both groups. While the binary model was supported, a model in which handshape features varied continuously in two dimensions was more consistent with all sets of data. Both models describe hearing and deaf performance equally well, suggesting that linguistic experience does little to alter perception of the visual features germane to handshape identification and discrimination.
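The d’ values mentioned above are a standard signal-detection sensitivity index. As a rough illustration only (the abstract does not specify the authors' exact response model or correction), one common way to compute d’ from "same-different" data treats "different" trials as signal trials; all counts below are invented.

    # Sketch of a standard d' computation for a speeded "same-different" task:
    # hits are correct "different" responses, false alarms are "different"
    # responses to identical pairs. A log-linear correction (add 0.5 to each
    # count) keeps rates away from 0 and 1. All counts below are hypothetical.
    from statistics import NormalDist

    z = NormalDist().inv_cdf   # inverse of the standard normal CDF

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return z(hit_rate) - z(fa_rate)

    # Hypothetical counts for one handshape pair: 38/40 correct on "different"
    # trials, 6/40 false alarms on "same" trials.
    print(round(d_prime(hits=38, misses=2, false_alarms=6, correct_rejections=34), 2))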

17.
The relationship between knowledge of American Sign Language (ASL) and the ability to encode facial expressions of emotion was explored. Participants were 55 college students, half of whom were intermediate-level students of ASL and half of whom had no experience with a signed language. In front of a video camera, participants posed the affective facial expressions of happiness, sadness, fear, surprise, anger, and disgust. These facial expressions were randomized onto stimulus tapes that were then shown to 60 untrained judges who tried to identify the expressed emotions. Results indicated that hearing subjects knowledgeable in ASL were generally more adept than were hearing nonsigners at conveying emotions through facial expression. Results have implications for better understanding the nature of nonverbal communication in hearing and deaf individuals.

18.
In order to help illuminate general ways in which language users process inflected items, two groups of native signers of American Sign Language (ASL) were asked to recall lists of inflected and uninflected signs. Despite the simultaneous production of base and inflection in ASL, subjects transposed inflections across base forms, recalling the base forms in the correct serial positions, or transposed base forms, recalling the inflections in the correct serial positions. These rearrangements of morphological components within lists occurred significantly more often than did rearrangements of whole forms (base plus inflection). These and other patterns of errors converged to suggest that ASL signers remembered inflected signs componentially in terms of a base and an inflection, much as the available evidence suggests is true for users of spoken language. Componential processing of regularly inflected forms would thus seem to be independent of particular transmission systems and of particular mechanisms for combining lexical and inflectional material.

19.
Does knowledge of language transfer across language modalities? For example, can speakers who have had no sign language experience spontaneously project grammatical principles of English to American Sign Language (ASL) signs? To address this question, we explore a grammatical illusion. Using spoken language, we first show that a single word with doubling (e.g., trafraf) can elicit conflicting linguistic responses, depending on the level of linguistic analysis (phonology vs. morphology). We next show that speakers with no command of a sign language extend these same principles to novel ASL signs. Remarkably, the morphological analysis of ASL signs depends on the morphology of participants' spoken language. Speakers of Malayalam (a language with rich reduplicative morphology) prefer XX signs when doubling signals morphological plurality, whereas no such preference is seen in speakers of Mandarin (a language with no productive plural morphology). Our conclusions open up the possibility that some linguistic principles are amodal and abstract.

20.
Groups of deaf subjects, exposed to tachistoscopic bilateral presentation of English words and American Sign Language (ASL) signs, showed weaker right visual half-field (VHF) superiority for words than hearing comparison groups with both a free-recall and matching response. Deaf subjects showed better, though nonsignificant, recognition of left VHF signs with bilateral presentation of signs but shifted to superior right VHF response to signs when word-sign combinations were presented. Cognitive strategies and hemispheric specialization for ASL are discussed as possible factors affecting half-field asymmetry.
