Similar Articles
20 similar articles found (search time: 31 ms)
1.
Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., "strawberry"), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing "strawberry"? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual-conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.

2.
Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., “spinach”; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition, our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.

3.
Two experiments examined how well the long-term visual memories of objects that are encountered multiple times during visual search are updated. Participants searched for a target two or four times (e.g., white cat) among distractors that shared the target's colour, category, or were unrelated while their eye movements were recorded. Following the search, a surprise visual memory test was given. With additional object presentations, only target memory reliably improved; distractor memory was unaffected by the number of object presentations. Regression analyses using the eye movement variables as predictors indicated that number of object presentations predicted target memory with no additional impact of other viewing measures. In contrast, distractor memory was best predicted by the viewing pattern on the distractor objects. Finally, Experiment 2 showed that target memory was influenced by number of target object presentations, not number of searches for the target. Each of these experiments demonstrates visual memory differences between target and distractor objects and may provide insight into representational differences in visual memory.

4.
The nature of children’s early lexical processing was investigated by asking what information 36-month-olds access and use when instructed to find a known but absent referent. Children readily retrieved stored knowledge about characteristic color, i.e., when asked to find an object with a typical color (e.g., strawberry), children tended to fixate more upon an object that had the same (e.g., red plane) as opposed to a different (e.g., yellow plane) color. They did so even though they had ample time to recognize the pictures for what they were, i.e., planes rather than strawberries. These data represent the first demonstration that language-mediated shifts of overt attention in young children can be driven by individual stored visual attributes of known words that mismatch on most other dimensions. The finding suggests that lexical processing and overt attention are strongly linked from an early age.

5.
People often talk to themselves, yet very little is known about the functions of this self-directed speech. We explore effects of self-directed speech on visual processing by using a visual search task. According to the label feedback hypothesis (Lupyan, 2007a), verbal labels can change ongoing perceptual processing—for example, actually hearing “chair” compared to simply thinking about a chair can temporarily make the visual system a better “chair detector”. Participants searched for common objects, while being sometimes asked to speak the target's name aloud. Speaking facilitated search, particularly when there was a strong association between the name and the visual target. As the discrepancy between the name and the target increased, speaking began to impair performance. Together, these results speak to the power of words to modulate ongoing visual processing.

6.
Two eyetracking experiments tested for activation of category coordinate and perceptually related concepts when speakers prepare the name of an object. Speakers saw four visual objects in a 2 × 2 array and identified and named a target picture on the basis of either category (e.g., “What is the name of the musical instrument?”) or visual-form (e.g., “What is the name of the circular object?”) instructions. There were more fixations on visual-form competitors and category coordinate competitors than on unrelated objects during name preparation, but the increased overt attention did not affect naming latencies. The data demonstrate that eye movements are a sensitive measure of the overlap between the conceptual (including visual-form) information that is accessed in preparation for word production and the conceptual knowledge associated with visual objects. Furthermore, these results suggest that semantic activation of competitor concepts does not necessarily affect lexical selection, contrary to the predictions of lexical-selection-by-competition accounts (e.g., Levelt, Roelofs, & Meyer, 1999).

7.
Search targets are typically remembered much better than other objects even when they are viewed for less time. However, targets have two advantages that other objects in search displays do not have: They are identified categorically before the search, and finding them represents the goal of the search task. The current research investigated the contributions of both of these types of information to the long-term visual memory representations of search targets. Participants completed either a predefined search or a unique-object search in which targets were not defined with specific categorical labels before searching. Subsequent memory results indicated that search target memory was better than distractor memory even following ambiguously defined searches and when the distractors were viewed significantly longer. Superior target memory appears to result from a qualitatively different representation from those of distractor objects, indicating that decision processes influence visual memory.

8.
While visual saliency may sometimes capture attention, the guidance of eye movements in search is often dominated by knowledge of the target. How is the search for an object influenced by the saliency of an adjacent distractor? Participants searched for a target amongst an array of objects, with distractor saliency having an effect on response time and on the speed at which targets were found. Saliency did not predict the order in which objects in target-absent trials were fixated. The within-target landing position was distributed around a modal position close to the centre of the object. Saliency did not affect this position, the latency of the initial saccade, or the likelihood of the distractor being fixated, suggesting that saliency affects the allocation of covert attention and not just eye movements.

9.
The anticipation of the forthcoming behaviour of social interaction partners is a useful ability supporting interaction and communication between social partners. Associations and prediction based on the production system (in line with views that listeners covertly use the production system to anticipate what the other person is likely to say) are two factors proposed to be involved in anticipatory language processing. We examined the influence of both factors on the degree to which listeners predict upcoming linguistic input. Are listeners more likely to predict book as an appropriate continuation of the sentence “The boy reads a”, based on the strength of the association between the words read and book (strong association) and read and letter (weak association)? Do more proficient producers predict more? What is the interplay of these two influences on prediction? The results suggest that associations influence language-mediated anticipatory eye gaze in two-year-olds and adults only when two thematically appropriate target objects compete for overt attention but not when these objects are presented separately. Furthermore, children's prediction abilities are strongly related to their language production skills when appropriate target objects are presented separately but not when presented together. Both influences on prediction in language processing thus appear to be context dependent. We conclude that multiple factors simultaneously influence listeners’ anticipation of upcoming linguistic input and that only such a dynamic approach to prediction can capture listeners’ prowess at predictive language processing.

10.
Whether or not lexical access from print requires spatial attention has been debated intensively for the last 30 years. Studies involving colour naming generally find evidence that “unattended” words are processed. In contrast, reading-based experiments do not find evidence of distractor processing. One theory ascribes the discrepancy to weaker attentional demands for colour identification. If colour naming does not capture all of a subject's attention, the remaining attentional resources can be deployed to process the distractor word. The present study combined exogenous spatial cueing with colour naming and reading aloud separately and found that colour naming is less sensitive to the validity of a spatial cue than is reading words aloud. Based on these results, we argue that colour naming studies do not effectively control attention so that no conclusions about unattended distractor processing can be drawn from them. Thus we reiterate the consistent conclusion drawn from reading aloud and lexical decision studies: There is no word identification without (spatial) attention.

11.
In visual search tasks, subjects look for a target among a variable number of distractor items. If the target is defined by a conjunction of two different features (e.g., color × orientation), efficient search is possible when parallel processing of information about color and about orientation is used to “guide” the deployment of attention to the target. Another type of conjunction search has targets defined by two instances of one type of feature (e.g., a conjunction of two colors). In this case, search is inefficient when the target is an item defined by parts of two different colors but much more efficient if the target can be described as a whole item of one color with a part of another color (Wolfe, Friedman-Hill, & Bilsky, 1994). In this paper, we show that the same distinction holds for size. “Part-whole” size × size conjunction searches are efficient; “part-part” searches are not (Experiments 1–3). In contrast, all orientation × orientation searches are inefficient (Experiments 4–6). This difference between preattentive processing of color and size, on the one hand, and orientation, on the other, may reflect structural relationships between features in real-world objects.

12.
Temperature concepts and colour are commonly associated (i.e., red is “hot” and blue is “cold”), although their direction of influence (unidirectional, bidirectional) is unknown. Semantic Stroop effects, whereby words like fire influence colour categorization, suggest automatic semantic processing influences colour processing. The experiential framework of language comprehension indicates abstract concepts like temperature words simulate concrete experiences in their representation, where expressions like “red-hot” suggest colour processing influences conceptual processing. Participants categorized both colour (Experiment 1: red, blue; Experiment 2: red, green, blue) and word-meaning with matched lists of hot and cold meaning words in each colour. In Experiments 1 and 2, semantic categorization showed congruency effects across hot and cold words, while colour categorization showed facilitation only with hot words in Experiment 2. This asymmetry reflects a more consistent influence of colour categorization on semantic categorization than the reverse, suggesting experiential grounding effects may be more robust than the effects of semantic processing on colour processing.

13.
Prior research into the impact of encoding tasks on visual memory (Castelhano & Henderson, 2005) indicated that incidental and intentional encoding tasks led to similar memory performance. The current study investigated whether different encoding tasks impacted visual memories equally for all types of objects in a conjunction search (e.g., targets, colour distractors, object category distractors, or distractors unrelated to the target). In sequences of pictures, participants searched for prespecified targets (e.g., green apple; Experiment 1), memorized all objects (Experiment 2), searched for specified targets while memorizing all objects (Experiment 3), searched for postidentified targets (Experiment 4), or memorized all objects with one object prespecified (Experiment 5). Encoding task significantly improved visual memory for targets and led to worse memory for unrelated distractors, but did not influence visual memory of distractors that were related to the target's colour or object category. The differential influence of encoding task indicates that the relative importance of the object both positively and negatively influences the memory retained.

14.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory–visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory–visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory–visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory–visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory–visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.

15.
The current study assessed the extent to which the use of referential prosody varies with communicative demand. Speaker–listener dyads completed a referential communication task during which speakers attempted to indicate one of two color swatches (one bright, one dark) to listeners. Speakers' bright sentences were reliably higher pitched than dark sentences for ambiguous (e.g., bright red versus dark red) but not unambiguous (e.g., bright red versus dark purple) trials, suggesting that speakers produced meaningful acoustic cues to brightness when the accompanying linguistic content was underspecified (e.g., “Can you get the red one?”). Listening partners reliably chose the correct corresponding swatch for ambiguous trials when lexical information was insufficient to identify the target, suggesting that listeners recruited prosody to resolve lexical ambiguity. Prosody can thus be conceptualized as a type of vocal gesture that can be recruited to resolve referential ambiguity when there is communicative demand to do so.

16.
In a colour variation of the Deese–Roediger–McDermott (DRM) false memory paradigm, participants studied lists of words critically related to a nonstudied colour name (e.g., “blood, cherry, scarlet, rouge … ”); they later showed false memory for the critical colour name (e.g., “red”). Two additional experiments suggest that participants generate colour imagery in response to such colour-related DRM lists. First, participants claim to experience colour imagery more often following colour-related than standard non-colour-related DRM lists; they also rate their colour imagery as more vivid following colour-related lists. Second, participants exhibit facilitative priming for critical colours in a dot selection task that follows words in the colour-related DRM list, suggesting that colour-related DRM lists prime participants for the actual critical colours themselves. Despite these findings, false memory for critical colour names does not extend to the actual colours themselves (font colours). Rather than leading to source confusion about which colours were self-generated and which were studied, presenting the study lists in varied font colours actually worked to reduce false memory overall. Results are interpreted within the framework of the visual distinctiveness hypothesis.

17.
Individual differences in children's online language processing were explored by monitoring their eye movements to objects in a visual scene as they listened to spoken sentences. Eleven skilled and 11 less-skilled comprehenders were presented with sentences containing verbs that were either neutral with respect to the visual context (e.g., Jane watched her mother choose the cake, where all of the objects in the scene were choosable) or supportive (e.g., Jane watched her mother eat the cake, where the cake was the only edible object). On hearing the supportive verb, the children made fast anticipatory eye movements to the target object (e.g., the cake), suggesting that children extract information from the language they hear and use this to direct ongoing processing. Less-skilled comprehenders did not differ from controls in the speed of their anticipatory eye movements, suggesting normal sensitivity to linguistic constraints. However, less-skilled comprehenders made a greater number of fixations to target objects, and these fixations were shorter in duration than those observed in the skilled comprehenders, especially in the supportive condition. This pattern of results is discussed in terms of possible processing limitations, including difficulties with memory, attention, or suppressing irrelevant information.

18.
Because of the strong associations between verbal labels and the visual objects that they denote, hearing a word may quickly guide the deployment of visual attention to the named objects. We report six experiments in which we investigated the effect of hearing redundant (noninformative) object labels on the visual processing of multiple objects from the named category. Even though the word cues did not provide additional information to the participants, hearing a label resulted in faster detection of attention probes appearing near the objects denoted by the label. For example, hearing the word chair resulted in more effective visual processing of all of the chairs in a scene relative to trials in which the participants attended to the chairs without actually hearing the label. This facilitation was mediated by stimulus typicality. Transformations of the stimuli that disrupted their association with the label while preserving the low-level visual features eliminated the facilitative effect of the labels. In the final experiment, we show that hearing a label improves the accuracy of locating multiple items matching the label, even when eye movements are restricted. We posit that verbal labels dynamically modulate visual processing via top-down feedback, an instance of linguistic labels greasing the wheels of perception.

19.
Predicates like “coloring-the-star” denote events that have a temporal duration and a culmination point (telos). When combined with perfective aspect (e.g., “Valeria has colored the star”), a culmination inference arises implying that the action has stopped, and the star is fully colored. While the perfective aspect is known to constrain the conceptualization of the event as telic, many reading studies have demonstrated that readers do not make early commitments as to whether the event is bounded or unbounded. A few visual-world studies tested the processing of telic predicates during online sentence processing, demonstrating an early integration of aspectual and temporal cues. By employing the visual-world paradigm, we tested the incremental processing of the perfective aspect in Italian in two eye-tracking studies in which listeners heard durative predicates in the perfective form in a scenario showing a completed and a non-completed event. Unlike previous studies, we compared telic durative predicates such as “coloring-the-star” to punctual predicates such as “lighting-the-candle.” While for punctual predicates, the inferences of telicity (the event has a telos) and of culmination (the telos is reached) are lexically encoded in the perfective verb, for durative predicates, the degree of event completion (visually encoded) needs to be integrated with perfective aspect (linguistically encoded) to derive the culmination inference. By modulating the interaction of visual and linguistic stimuli across the two experiments, we show that the verb's perfective aspect triggers the culmination inference incrementally during sentence processing, offering novel evidence for the continuous integration of linguistic processing with real-world visual information.

20.
The claim that face perception is mediated by a specialized “face module” that proceeds automatically, independently of attention (e.g., Kanwisher, 2000) can be reconciled with load theory claims that visual perception has limited capacity (e.g., Lavie, 1995) by hypothesizing that face perception has face-specific capacity limits. We tested this hypothesis by comparing the effects of face and nonface perceptual load on distractor face processing. Participants searched a central array of either faces or letter strings for the face or name of a pop star versus a politician and made speeded classification responses. Perceptual load was varied through the relevant search set size. Response competition effects from a category-congruent or -incongruent peripheral distractor face were eliminated with more than two faces in the face search task, but were unaffected by perceptual load in the name search task. These results support the hypothesis that face perception has face-specific capacity limits and resolve apparent discrepancies in previous research.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号