Similar Documents
1.
Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention.

2.
The authors examined word skipping in reading in 2 experiments. In Experiment 1, skipping rates were higher for a preview of a predictable word than for a visually similar nonword, indicating that full word recognition occurs in parafoveal vision. In Experiment 2, foveal load was manipulated by varying the frequency of the word preceding either a 3-letter target word or a misspelled preview. There was again a higher skipping rate for a correct preview and a lower skipping rate when there was a high foveal load, but there was no interaction, and the pattern of effects in fixation times was the same as in the skipping data. Experiment 2 also showed significant skipping of nonwords similar to the target word, indicating that skipping can occur on the basis of partial information.

3.
Eye movements were monitored as subjects read sentences containing high- or low-predictable target words. The extent to which target words were predictable from prior context was varied: Half of the target words were predictable, and the other half were unpredictable. In addition, the length of the target word varied: The target words were short (4-6 letters), medium (7-9 letters), or long (10-12 letters). Length and predictability both yielded strong effects on the probability of skipping the target words and on the amount of time readers fixated the target words (when they were not skipped). However, there was no interaction in any of the measures examined for either skipping or fixation time. The results demonstrate that word predictability (due to contextual constraint) and word length have strong and independent influences on word skipping and fixation durations. Furthermore, because the long words extended beyond the word identification span, the data indicate that skipping can occur on the basis of partial information in relation to word identity.

4.
Skilled readers are able to derive meaning from a stream of visual input with remarkable efficiency. In this article, we present the first evidence that statistical information latent in the linguistic environment can contribute to an account of reading behavior. In two eye-tracking studies, we demonstrate that the transitional probabilities between words have a measurable influence on fixation durations, and using a simple Bayesian statistical model, we show that lexical probabilities derived by combining transitional probability with the prior probability of a word's occurrence provide the most parsimonious account of the eye movement data. We suggest that the brain is able to draw upon statistical information in order to rapidly estimate the lexical probabilities of upcoming words: a computationally inexpensive mechanism that may underlie proficient reading.
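To make the probability combination concrete, here is a minimal Python sketch; it is an illustration rather than the authors' model, and the toy corpus, the bigram estimates, and the product-and-renormalise combination rule are all assumptions. It estimates a word's lexical probability by combining its prior (unigram) probability with its transitional (bigram) probability given the preceding word.

from collections import Counter

# Hypothetical toy corpus, used only to illustrate the estimates.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total = sum(unigrams.values())

def prior(word):
    # P(word): prior probability of the word's occurrence (unigram frequency).
    return unigrams[word] / total

def transitional(prev, word):
    # P(word | prev): transitional probability estimated from bigram counts.
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

def lexical_probability(prev, word):
    # One simple way to combine the two sources of evidence: multiply the prior
    # by the transitional probability and renormalise over the vocabulary.
    # This combination rule is an assumption; the paper's exact formulation may differ.
    scores = {w: prior(w) * transitional(prev, w) for w in unigrams}
    z = sum(scores.values())
    return scores[word] / z if z else 0.0

print(round(lexical_probability("the", "cat"), 3))  # "cat" is strongly expected after "the"

On this reading, words that are both frequent overall and likely given the preceding word receive the highest lexical probability, which is the quantity the abstract relates to fixation durations.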

5.
Earlier research has established that speakers usually fixate the objects they name and that the viewing time for an object depends on the time necessary for object recognition and for the retrieval of its name. In three experiments, speakers produced pronouns and noun phrases to refer to new objects and to objects already known. Speakers looked less frequently and for shorter periods at the objects to be named when they had very recently seen or heard of these objects than when the objects were new. Looking rates were higher and viewing times longer in preparation of noun phrases than in preparation of pronouns. If it is assumed that there is a close relationship between eye gaze and visual attention, these results reveal (1) that speakers allocate less visual attention to given objects than to new ones and (2) that they allocate visual attention both less often and for shorter periods to objects they will refer to by a pronoun than to objects they will name in a full noun phrase. The experiments suggest that linguistic processing benefits, directly or indirectly, from allocation of visual attention to the referent object.

6.
Finding a probable explanation for observed symptoms is a highly complex task that draws on information retrieval from memory. Recent research suggests that observed symptoms are interpreted in a way that maximizes coherence for a single likely explanation. This becomes particularly clear if symptom sequences support more than one explanation. However, there are no existing process data available that allow coherence maximization to be traced in ambiguous diagnostic situations, where critical information has to be retrieved from memory. In this experiment, we applied memory indexing, an eye-tracking method that affords rich time-course information concerning memory-based cognitive processing during higher order thinking, to reveal symptom processing and the preferred interpretation of symptom sequences. Participants first learned information about causes and symptoms presented in spatial frames. Gaze allocation to emptied spatial frames during symptom processing and during the diagnostic response reflected the subjective status of hypotheses held in memory and the preferred interpretation of ambiguous symptoms. Memory indexing traced how the diagnostic decision developed and revealed instances of hypothesis change and biases in symptom processing. Memory indexing thus provided direct online evidence for coherence maximization in processing ambiguous information.

7.
We know that from mid-childhood onwards most new words are learned implicitly via reading; however, most word learning studies have taught novel items explicitly. We examined incidental word learning during reading by focusing on the well-documented finding that words which are acquired early in life are processed more quickly than those acquired later. Novel words were embedded in meaningful sentences and were presented to adult readers early (day 1) or later (day 2) during a five-day exposure phase. At test adults read the novel words in semantically neutral sentences. Participants' eye movements were monitored throughout exposure and test. Adults also completed a surprise memory test in which they had to match each novel word with its definition. Results showed a decrease in reading times for all novel words over exposure, and significantly longer total reading times at test for early than late novel words. Early-presented novel words were also remembered better in the offline test. Our results show that order of presentation influences processing time early in the course of acquiring a new word, consistent with partial and incremental growth in knowledge occurring as a function of an individual's experience with each word.

8.
Using a non-alphabetic language (Chinese), the present study tested the novel hypothesis that semantic information at the sublexical level is activated during handwriting production. Over 80% of Chinese characters are phonograms, in which semantic radicals carry category information (e.g., 椅 'chair,' 桃 'peach,' and 橙 'orange' are all related to plants) while phonetic radicals carry pronunciation information (e.g., 狼 'wolf,' 朗 'brightness,' and 郎 'male' are all pronounced /lang/). Under different semantic category conditions at the lexical level (semantically related in Experiment 1; semantically unrelated in Experiment 2), the orthographic and semantic relatedness of the semantic radicals in the picture name and its distractor were manipulated at different SOAs (stimulus onset asynchronies, the interval between the onset of the picture and the onset of the interference word). Two questions were addressed: (1) Can semantic information be activated at the sublexical level? (2) How are semantic and orthographic information dynamically accessed during word production? Results showed that both orthographic and semantic information were activated in the picture-word interference paradigm, and that this activation varied dynamically across SOAs, supporting our view that accounts of semantic processing in the writing modality should be extended to the sublexical level. The current findings open up the possibility of building new orthography-phonology-semantics models of writing.
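As a minimal illustration of the sublexical structure described above, the following Python sketch encodes the cited characters as pairs of a semantic radical (a category cue) and a phonetic radical (a pronunciation cue). The decompositions are simplified and the data structure is purely illustrative; it is not part of the study's materials.

# Simplified, illustrative decompositions of the characters cited in the abstract.
phonograms = {
    "椅": {"semantic_radical": "木", "phonetic_radical": "奇", "gloss": "chair", "pinyin": "yi"},
    "桃": {"semantic_radical": "木", "phonetic_radical": "兆", "gloss": "peach", "pinyin": "tao"},
    "橙": {"semantic_radical": "木", "phonetic_radical": "登", "gloss": "orange", "pinyin": "cheng"},
    "狼": {"semantic_radical": "犭", "phonetic_radical": "良", "gloss": "wolf", "pinyin": "lang"},
    "朗": {"semantic_radical": "月", "phonetic_radical": "良", "gloss": "brightness", "pinyin": "lang"},
    "郎": {"semantic_radical": "阝", "phonetic_radical": "良", "gloss": "male", "pinyin": "lang"},
}

# Characters sharing a semantic radical tend to share a semantic category,
# while characters sharing a phonetic radical tend to share a pronunciation.
plant_related = [c for c, info in phonograms.items() if info["semantic_radical"] == "木"]
lang_family = [c for c, info in phonograms.items() if info["phonetic_radical"] == "良"]

print(plant_related)  # ['椅', '桃', '橙']  (wood/plant category)
print(lang_family)    # ['狼', '朗', '郎']  (all pronounced /lang/)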

9.
In reading, it is well established that word processing can begin in the parafovea while the eyes are fixating the previous word. However, much less is known about the processing of information to the left of fixation. In two experiments, this issue was explored by combining a gaze-contingent display procedure preventing parafoveal preview and a letter detection task. All words were displayed as a series of xs until the reader fixated them, thereby preventing forward parafoveal processing, yet enabling backward parafoveal or postview processing. Results from both experiments revealed that readers were able to detect a target letter embedded in a word that was skipped. In those cases, the letter could only have been identified in postview (to the left of fixation), and detection rate decreased as the distance between the target letter and the eyes' landing position increased. Most importantly, for those skipped words, the typical missing-letter effect was observed with more omissions for target letters embedded in function than in content words. This can be taken as evidence that readers can extract basic prelexical information, such as the presence of a letter, in the parafoveal area to the left of fixation. Implications of these results are discussed in relation to models of eye movement control in reading and also in relation to models of the missing-letter effect.

10.
Inattentional blindness studies have shown that an unexpected object may go unnoticed if it does not share the property specified in the task instructions. Our aim was to demonstrate that observers develop an attentional set for a property not specified in the task instructions if it allows easier performance of the primary task. Three experiments were conducted using a dynamic selective-looking paradigm. Stimuli comprised four black squares and four white diamonds, so that shape and colour varied together. Task instructions specified shape but observers developed an attentional set for colour, because we made the black–white discrimination easier than the square–diamond discrimination. None of the observers instructed to count bounces by squares reported an unexpected white square, whereas two-thirds of observers instructed to count bounces by diamonds did report the white square. When attentional set departs from task instructions, you may fail to see what you were told to look for.

11.
In two experiments, readers' use of spatial memory was examined by asking them to determine whether an individually shown probe word had appeared in a previously read sentence (Experiment 1) or had occupied a right or left sentence location (Experiment 2). Under these conditions, eye movements during the classification task were generally directed toward the right, irrespective of the location of the relevant target in the previously read sentence. In two additional experiments, readers' knowledge of prior sentence content was examined either without (Experiment 3) or with (Experiment 4) an explicit instruction to move the eyes to a target word in that sentence. Although regressions into the prior sentence were generally directed toward the target, they rarely reached it. In the absence of accurate spatial memories, readers reached previously read target words in two distinct steps: one that moved the eyes into the general vicinity of the target, and one that homed in on it.

12.
Imagining a counterfactual world using conditionals (e.g., If Joanne had remembered her umbrella . . .) is common in everyday language. However, such utterances are likely to involve fairly complex reasoning processes to represent both the explicit hypothetical conjecture and its implied factual meaning. Online research into these mechanisms has so far been limited. The present paper describes two eye movement studies that investigated the time-course with which comprehenders can set up and access factual inferences based on a realistic counterfactual context. Adult participants were eye-tracked while they read short narratives, in which a context sentence set up a counterfactual world (If . . . then . . .), and a subsequent critical sentence described an event that was either consistent or inconsistent with the implied factual world. A factual consistent condition (Because . . . then . . .) was included as a baseline of normal contextual integration. Results showed that within a counterfactual scenario, readers quickly inferred the implied factual meaning of the discourse. However, initial processing of the critical word led to clear, but distinct, anomaly detection responses for both contextually inconsistent and consistent conditions. These results provide evidence that readers can rapidly make a factual inference from a preceding counterfactual context, despite maintaining access to both counterfactual and factual interpretations of events.

13.
Handwriting, a complex motor process, involves the coordination of the upper limb and the visual system. The gaze behavior that occurs during handwriting has received little study. This study investigated the eye movements of adults during writing and reading tasks. Eye and handwriting movements were recorded for six different words over three different tasks. The analyses compared reading and handwriting of the same words, made a between-condition comparison, and compared the two handwriting tasks. Compared to reading, participants produced more fixations during the handwriting tasks, and the average fixation durations were longer. When reading, fixations fell mostly around the center of the word, whereas fixations when writing appeared to be made for each letter in a written word, were located around the base of the letters, and flowed in a left-to-right direction. Between the two writing tasks, more fixations were made when words were written individually than when they were written within sentences, yet fixation durations did not differ. Correlating the number of fixations with kinematic variables revealed that horizontal size and road length were strongly correlated with the number of fixations participants made.

14.
Six experiments explored the role of phonology in the activation of word meanings when words were embedded in meaningful texts. Specifically, the studies examined whether participants detected the substitution of a homophone mate for a contextually appropriate homophone. The frequency of the incorrect homophone, the frequency of the correct homophone, and the predictability of the correct homophone were manipulated. Also, the impact of reading skill was examined. When correct homophones were not predictable and participants had a range of reading abilities, the evidence indicated that phonology plays a role in activating the meanings of low-frequency words only. When the performance of good and poor readers was examined separately, the evidence indicated that good readers primarily activate the meanings of words using the direct route, whereas poor readers primarily activate the meanings of words using the phonological route.

15.
In two studies, we examined how expressions of guilt and shame affected person perception. In the first study, participants read an autobiographical vignette in which the writer did something wrong and reported feeling either guilt, shame, or no emotion. The participants then rated the writer's motivations, beliefs, and traits, as well as their own feelings toward the writer. The person expressing feelings of guilt or shame was perceived more positively on a number of attributes, including moral motivation and social attunement, than the person who reported feeling no emotion. In the second study, the writer of the vignette reported experiencing (or not experiencing) cognitive and motivational aspects of guilt or shame. Expressing a desire to apologise (guilt) or feelings of worthlessness (private shame) resulted in more positive impressions than did reputational concerns (public shame) or a lack of any of these feelings. Our results indicate that verbal expressions of moral emotions such as guilt and shame influence perception of moral character as well as likeability.

16.
The fairy tale of Rumpelstiltskin is studied in conjunction with related tales to provide a fuller understanding of its meaning than previous studies have done. The content of these tales depicts ambivalence about childhood magic and ambivalence about adult reality, which are uneasily resolved by eliminating magic and dwelling in a disenchanted world. Enchantment is retained formally in the telling of the story. To the extent that transference is enchantment, similar conflicts occur during psychoanalysis: is the patient better off adhering to transference, relinquishing it, or integrating it with day-to-day experience?

17.
Perspective goals, such as studying a map to learn a route through an environment or the overall layout of an environment, produce memory congruent with the goal-directed rather than the studied perspective. One explanation for this finding is that perspective goals guide attention towards actively gathering relevant information during learning. A second explanation is that information is automatically organized into a goal-congruent spatial model that guides retrieval. Both explanations predict goal-congruent memory, but only the former one predicts eye movement differences during study. The present experiment investigated the effect of perspective goals on eye movements during map study and the flexibility of resulting spatial memories. Results demonstrate eye movements towards goal-congruent map elements during learning, and lasting memory effects at test. These findings carry implications for the design of adaptive hand-held and in-vehicle navigation interfaces that accommodate varied user goals.

18.
The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements and naming latencies were recorded. The distractor pictures affected the latencies of gaze shifting and vocal naming. The magnitude of the phonological effects increased linearly with latency, excluding lapses of attention as the cause of the effects. In Experiment 2, no distractor effects were obtained when both pictures were named. When pictures with superimposed distractor words were named or the words were read in Experiment 3, the words influenced the latencies of gaze shifting and picture naming, but the pictures yielded no such latency effects in word reading. The picture-word asymmetry was obtained even with equivalent reading and naming latencies. The picture-picture effects suggest that activation spreads continuously from concepts to phonological forms, whereas the picture-word asymmetry indicates that the amount of activation is limited and task dependent.
