Similar documents
20 similar documents retrieved (search time: 8 ms)
1.
2.
This research examined the role of the right hemisphere (RH) in the comprehension of speech acts (illocutionary force). Two split-screen experiments were conducted in which participants made lexical decisions about lateralized targets after reading a brief conversational remark. On half of the trials the target word named the speech act performed by the preceding remark; on the remaining trials it did not. In both experiments, lexical decisions were facilitated for targets naming the speech act performed by the prior utterance, but only when the target was presented to the left visual field (and hence initially processed by the RH), not when it was presented to the right visual field. The effect occurred at both short (Experiment 1: 250 ms) and long (Experiment 2: 1000 ms) delays. The results demonstrate the critical role played by the RH in conversation processing.
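A minimal sketch of how the visual-field facilitation effect in a study like this could be quantified: facilitation is the mismatch-minus-match reaction-time difference, computed separately per visual field. The trial data, labels, and variable names below are invented for illustration, not materials from the study.

# Facilitation (mismatch RT - match RT) per visual field; data are invented.
from statistics import mean

# Each trial: (visual_field, target_names_speech_act, reaction_time_ms)
trials = [
    ("LVF", True, 612), ("LVF", False, 671), ("LVF", True, 598), ("LVF", False, 655),
    ("RVF", True, 640), ("RVF", False, 644), ("RVF", True, 633), ("RVF", False, 639),
]

for field in ("LVF", "RVF"):
    match = mean(rt for f, named, rt in trials if f == field and named)
    mismatch = mean(rt for f, named, rt in trials if f == field and not named)
    # Positive facilitation = faster responses when the target names the speech act
    print(f"{field}: facilitation = {mismatch - match:.0f} ms")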

3.
Older and younger participants read sentences about objects and were then shown a picture of an object that either matched or mismatched the implied shape of the object in the sentence. Participants' response times were recorded when they judged whether the object had been mentioned in the sentence. Responses were faster in the shape-matching condition for all participants, but the mismatch effect was stronger for older than for younger adults, even when the larger variability of the older group's response times was controlled for. These results suggest that older adults may construct stronger situation models than younger adults.
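One common way to control for group differences in overall response-time variability is to z-transform each participant's RTs before computing the mismatch effect, so effects are expressed in standard-deviation units. The sketch below shows that generic standardization recipe; it is not the paper's actual analysis, and all names and data are hypothetical.

# z-score RTs within a participant, then compute the mismatch effect in SD units.
from statistics import mean, stdev

def z_scores(rts):
    m, s = mean(rts), stdev(rts)
    return [(rt - m) / s for rt in rts]

# Hypothetical participant: RTs for shape-match and shape-mismatch trials
match_rts = [702, 688, 731, 695]
mismatch_rts = [760, 749, 772, 758]
z = z_scores(match_rts + mismatch_rts)
# Mismatch effect in standard-deviation units, comparable across age groups
effect = mean(z[len(match_rts):]) - mean(z[:len(match_rts)])
print(f"standardized mismatch effect = {effect:.2f} SD")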

4.
The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. Processing demands were manipulated through speech rate and the temporal distance between mentioned objects. The experiment differs from previous research in using complex photographic scenes, three-sentence utterances, and four mentioned target objects. The main finding was that objects that are mentioned more slowly, spaced more evenly, and isolated in the speech stream are more likely to be fixated after being mentioned, and are fixated faster. Notably, even objects mentioned under the most demanding conditions still showed an effect of language-driven eye movements. This supports research combining concurrent speech with visual scenes, and shows that the matching of visual and linguistic information is likely to generalize to language situations with high information load.

5.
We examined the effect of localized brain lesions on processing of the basic speech acts (BSAs) of question, assertion, request, and command. Both left and right cerebral damage produced significant deficits relative to normal controls, and left-hemisphere-damaged patients performed worse than patients with right-sided lesions. This finding argues against the common conjecture that the right hemisphere of most right-handers plays a dominant role in natural-language pragmatics. In right-hemisphere-damaged patients, there was no correlation between the location and extent of perisylvian lesions and performance on BSAs. By contrast, processing of the different BSAs by left-hemisphere-damaged patients was strongly affected by perisylvian lesion location, with each BSA showing a distinct pattern of localization. This raises the possibility that the classical left perisylvian localization of language functions, as measured by clinical aphasia batteries, partly reflects the localization of the BSAs required to perform those functions.

6.
Everyday speech is littered with disfluency, often correlated with the production of less predictable words (e.g., Beattie & Butterworth, 1979). But what are the effects of disfluency on listeners? In an ERP experiment comparing fluent with disfluent utterances, we established an N400 effect for unpredictable compared to predictable words. This effect, reflecting the difference in ease of integrating words into their contexts, was reduced when the target words were preceded by a hesitation marked by the word "er". Moreover, a subsequent recognition memory test showed that words preceded by disfluency were more likely to be remembered. The study demonstrates that hesitation affects the way listeners process spoken language, and that these changes have longer-term consequences for the representation of the message.
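The N400 is conventionally quantified as the mean ERP amplitude in a window around 300-500 ms after word onset, compared across conditions. The sketch below illustrates that generic comparison; the window, sampling rate, channel count, and waveforms are assumptions, and the paper's exact pipeline may differ.

# Generic N400 quantification: mean amplitude difference in the 300-500 ms window.
import numpy as np

fs = 250  # sampling rate in Hz (assumed)
# Hypothetical single-channel ERPs, epoch from 0 to 800 ms after word onset
t = np.arange(0, 0.8, 1 / fs)
erp_predictable = np.random.randn(len(t)) * 0.5                       # placeholder data
erp_unpredictable = erp_predictable - 2.0 * np.exp(-((t - 0.4) ** 2) / 0.005)

window = (t >= 0.3) & (t <= 0.5)  # classic N400 window
n400_effect = erp_unpredictable[window].mean() - erp_predictable[window].mean()
# A more negative value indicates a larger N400 for unpredictable words
print(f"N400 effect = {n400_effect:.2f} microvolts")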

7.
Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
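The reported duration effect, longer ambiguous sequences yielding more monosyllabic-word interpretations, is a monotonic relationship that can be summarized as interpretation proportions by duration bin. The toy sketch below illustrates only that summary logic; all durations, cut-offs, and outcomes are invented.

# Proportion of monosyllabic-word interpretations by duration bin (invented data).
# Each pair: (duration of the ambiguous sequence in ms, monosyllabic chosen?)
data = [(180, False), (200, False), (220, True), (240, True), (260, True), (300, True)]

bins = {"short (<220 ms)": [], "long (>=220 ms)": []}
for dur, mono in data:
    key = "short (<220 ms)" if dur < 220 else "long (>=220 ms)"
    bins[key].append(mono)

for label, outcomes in bins.items():
    print(f"{label}: {sum(outcomes) / len(outcomes):.0%} monosyllabic interpretations")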

8.
9.
10.
The functional specificity of different brain areas recruited in auditory language processing was investigated by means of event-related functional magnetic resonance imaging (fMRI) while subjects listened to speech input varying in the presence or absence of semantic and syntactic information. There were two sentence conditions containing syntactic structure, i.e., normal speech (consisting of function and content words), syntactic speech (consisting of function words and pseudowords), and two word-list conditions, i.e., real words and pseudowords. The processing of auditory language, in general, correlates with significant activation in the primary auditory cortices and in adjacent compartments of the superior temporal gyrus bilaterally. Processing of normal speech appeared to have a special status, as no frontal activation was observed in this case but was seen in the three other conditions. This difference may point toward a certain automaticity of the linguistic processes used during normal speech comprehension. When considering the three other conditions, we found that these were correlated with activation in both left and right frontal cortices. An increase of activation in the planum polare bilaterally and in the deep portion of the left frontal operculum was found exclusively when syntactic processes were in focus. Thus, the present data may be taken to suggest an involvement of the left frontal and bilateral temporal cortex when processing syntactic information during comprehension.
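The four listening conditions form a 2x2 design crossing syntactic structure (present/absent) with lexical-semantic content (real words/pseudowords). A sketch of how contrast weights for a main effect of syntax could be laid out follows; this is the standard factorial-contrast recipe, not the study's published statistical model, and the condition names are paraphrases.

# 2x2 factorial layout of the four auditory conditions
conditions = {
    "normal_speech":    {"syntax": 1, "semantics": 1},
    "syntactic_speech": {"syntax": 1, "semantics": 0},
    "real_word_list":   {"syntax": 0, "semantics": 1},
    "pseudoword_list":  {"syntax": 0, "semantics": 0},
}

# Main-effect-of-syntax contrast: +1 for structured speech, -1 for word lists
syntax_contrast = {name: (1 if f["syntax"] else -1) for name, f in conditions.items()}
print(syntax_contrast)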

11.
12.
According to Pulvermüller (1999), words are represented in the brain by cell assemblies (Hebb, 1949) distributed over different areas, depending on semantic properties of the word. For example, a word with strong visual associations will be represented by a cell assembly involving neurons in the visual cortex, while a word suggesting action will selectively activate neurons in motor areas. The present work tests the latter hypothesis with behavioural measures. Specifically, it tests the prediction that performed or observed movements should selectively influence (through either interference or priming) reaction times and accuracy in lexical decision for words with strong action associations, and that visual images should likewise selectively influence lexical decision for words with strong visual associations. Two experiments were carried out. Results provided partial support for the hypothesis.

13.
Newborns, within a few hours of birth, already encounter many different faces, talking or silently moving. How do they process these faces, and which cues matter for early face recognition? In a series of six experiments, newborns were familiarized with an unfamiliar face in different contexts (photographs, talking, silently moving, and with only external movements of the head accompanied by speech sound). At test, they saw the familiar face and a new face either in photographs, silently moving, or talking. A novelty preference emerged at test when photographs were presented in both phases, in line with results from several earlier studies. A familiarity preference appeared only when the face was seen talking during familiarization and was presented in a photograph or talking again at test. This suggests that the simultaneous presence of speech sound and the rigid and nonrigid movements of a talking face enhances recognition of interactive faces at birth.

14.
15.
The ability of anterior aphasics and patients with right-hemisphere damage to comprehend both the literal and nonliteral readings of indirect speech acts was examined. Subjects viewed videotaped episodes in which one actor asked another “Can you X?” and the second actor responded with either an action or a simple “Yes.” Subjects judged whether the response was appropriate given its context. Anterior aphasics could comprehend the nonliteral but not the literal reading, supporting models that posit that people have direct access to nonliteral but conventional readings. Patients with right-hemisphere damage could appreciate the direct reading, but failed to distinguish between appropriate and inappropriate action-responses. This finding suggests that it may be possible to dissociate the pragmatic and syntactic aspects of comprehension of indirect speech acts.

16.
Studies of the processes underlying language interpretation often produce evidence that these processes yield complete interpretations and operate incrementally. Computational linguistics has shown, however, that interpretations are often effective even when underspecified. Drawing on a scattered and varied literature, we present evidence that similarly underspecified representations are used by humans during comprehension. We also show how the linguistic properties of focus, subordination, and focalization can control depth of processing, leading to underspecified representations. Modulating the degree of specification might provide a way forward in developing models of the processing underlying language understanding.

17.
The modulation of subjects' attention by two prosodic features was investigated using reaction times to a nonspeech stimulus that coincided with these features. Using a foreign language unfamiliar to the subjects (Czech) controlled for the influence of semantic and syntactic knowledge. The results indicate that, for native speakers of English, an intonation fall is a relatively more important cue to the perceptual segmentation of speech than a pause.

18.
We investigated how the reach-to-grasp movement is influenced by the presence of another person (friend or non-friend) who was either invisible (behind the agent) or located in different positions relative to an object and the agent, and by the perspective conveyed by the linguistic pronouns “I” and “You”. The interaction between social relationship and relative position influenced the latency of both maximal finger aperture and velocity peak, with shorter latencies in the presence of a non-friend than of a friend. However, whereas the relative position of a non-friend did not affect movement kinematics, the position of a friend mattered: latencies were significantly shorter with friends only in positions allowing them to easily reach for the object. Finally, overall reaching movement time showed an interaction between speaker and pronoun: participants reached the object more quickly when the other person spoke, particularly if she used the pronoun “I”. This suggests that speaking, and particularly using the pronoun “I”, evokes a potential action. Implications of the results for embodied cognition are discussed.

19.
The aim of the current study was to examine how emotional expressions displayed by the face and body influence the decision to approach or avoid another individual. In Experiment 1, we examined approachability judgments provided to faces and bodies presented in isolation that were displaying angry, happy, and neutral expressions. Results revealed that angry expressions were associated with the most negative approachability ratings, for both faces and bodies. The effect of happy expressions was shown to differ for faces and bodies, with happy faces judged more approachable than neutral faces, whereas neutral bodies were considered more approachable than happy bodies. In Experiment 2, we sought to examine how we integrate emotional expressions depicted in the face and body when judging the approachability of face-body composite images. Our results revealed that approachability judgments given to face-body composites were driven largely by the facial expression. In Experiment 3, we then aimed to determine how the categorization of body expression is affected by facial expressions. This experiment revealed that body expressions were less accurately recognized when the accompanying facial expression was incongruent than when neutral. These findings suggest that the meaning extracted from a body expression is critically dependent on the valence of the associated facial expression.

20.
Human newborns discriminate languages from different rhythmic classes, fail to discriminate languages from the same rhythmic class, and fail to discriminate languages when the utterances are played backwards. Recent evidence showing that cotton-top tamarins discriminate Dutch from Japanese, but not when utterances are played backwards, is compatible with the hypothesis that rhythm discrimination is based on a general perceptual mechanism inherited from a primate ancestor. The present study further explores the rhythm hypothesis for language discrimination by testing languages from the same and different rhythmic class. We find that tamarins discriminate Polish from Japanese (different rhythmic classes), fail to discriminate English and Dutch (same rhythmic class), and fail to discriminate backwards utterances from different and same rhythmic classes. These results provide further evidence that language discrimination in tamarins is facilitated by rhythmic differences between languages, and suggest that, in humans, this mechanism is unlikely to have evolved specifically for language.
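Rhythmic classes are often operationalized with the %V and deltaC metrics of Ramus, Nespor and Mehler (1999): the proportion of utterance duration that is vocalic, and the standard deviation of consonantal interval durations. The tamarin study relies on played-back natural utterances rather than computing such metrics, so the sketch below only illustrates the rhythm measure that underlies the class distinction; all interval durations are invented.

# %V and deltaC rhythm metrics from labeled interval durations (invented data).
from statistics import stdev

# Interval durations in seconds, labeled vocalic (V) or consonantal (C)
intervals = [("C", 0.08), ("V", 0.12), ("C", 0.15), ("V", 0.09),
             ("C", 0.06), ("V", 0.14), ("C", 0.11), ("V", 0.10)]

vocalic = [d for kind, d in intervals if kind == "V"]
consonantal = [d for kind, d in intervals if kind == "C"]

percent_v = sum(vocalic) / (sum(vocalic) + sum(consonantal)) * 100
delta_c = stdev(consonantal)
print(f"%V = {percent_v:.1f}, deltaC = {delta_c * 1000:.1f} ms")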
