Similar Articles
20 similar articles found (search time: 31 ms)
1.
The (mnemonic) interactions between the auditory, visual, and semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for sound type suggest that the amount of spectral-temporal structure in a sound can be indicative of memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and recognition of sounds that were self-labeled; that the density and complexity of the visual information (i.e., pictograms) hinder memory performance (a 'visual' overshadowing effect); and that image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that memory performance for product sounds is task-dependent.

2.
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many objects appear simultaneously, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).

3.
The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an object-discrimination task and pattern-masked with various scene-to-mask stimulus-onset asynchronies (SOAs). Full psychometric functions and reaction times (RTs) were measured. The authors found that (a) rotating the full scenes increased threshold SOA at intermediate rotation angles but not for inversion; (b) rotating the object or context degraded classification performance in a similar manner; (c) semantically congruent contexts had negligible facilitatory effects on object classification compared with meaningless baseline contexts with a matching contrast structure, but incongruent contexts severely degraded performance; (d) any object-context incongruence (orientation or semantic) increased RTs at longer SOAs, indicating dependent processing of object and context; and (e) facilitatory effects of context emerged only when the context shortly preceded the object. The authors conclude that the effects of natural scene context on object classification are primarily inhibitory and discuss possible reasons.

4.
Altmann, G. T., & Kamide, Y. (1999). Cognition, 73(3), 247-264.
Participants' eye movements were recorded as they inspected a semi-realistic visual scene showing a boy, a cake, and various distractor objects. Whilst viewing this scene, they heard sentences such as 'the boy will move the cake' or 'the boy will eat the cake'. The cake was the only edible object portrayed in the scene. In each of two experiments, the onset of saccadic eye movements to the target object (the cake) was significantly later in the move condition than in the eat condition; saccades to the target were launched after the onset of the spoken word cake in the move condition, but before its onset in the eat condition. The results suggest that information at the verb can be used to restrict the domain within the context to which subsequent reference will be made by the (as yet unencountered) post-verbal grammatical object. The data support a hypothesis in which sentence processing is driven by the predictive relationships between verbs, their syntactic arguments, and the real-world contexts in which they occur.

5.
Event-related potentials (ERPs) were recorded from Spanish-English bilinguals (N = 10) to test pre-attentive speech discrimination in two language contexts. ERPs were recorded while participants silently read magazines in English or Spanish. Two speech contrast conditions were recorded in each language context. In the phonemic-in-English condition, the speech sounds represented two different phonemic categories in English, but represented the same phonemic category in Spanish. In the phonemic-in-Spanish condition, the speech sounds represented two different phonemic categories in Spanish, but represented the same phonemic category in English. Results showed pre-attentive discrimination when the acoustics/phonetics of the speech sounds matched the language context (e.g., the phonemic-in-English condition during the English language context). The results suggest that language context can affect pre-attentive auditory change detection. Specifically, bilinguals' mental processing of stop consonants relies on contextual linguistic information.

6.
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes that contained either an inconsistent object (e.g., soap on a breakfast table) or a consistent object (e.g., soap in a bathroom). Perceptual features such as saliency, centre distance, and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than at consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, whereas toddlers with higher vocabulary skills looked equally long at consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive, but not perceptual, guidance of eye movements during scene perception in toddlers.

7.
The present study replicated the well-known demonstration by Altmann and Kamide (1999) that listeners make linguistically guided anticipatory eye movements, but used photographs of scenes rather than clip-art arrays as the visual stimuli. When listeners heard a verb for which a particular object in a visual scene was the likely theme, they made earlier looks to this object (e.g., looks to a cake upon hearing The boy will eat …) than when they heard a control verb (The boy will move …). New data analyses assessed whether these anticipatory effects are due to a linguistic effect on the targeting of saccades (i.e., the where parameter of eye movement control), the duration of fixations (i.e., the when parameter), or both. Participants made fewer fixations before reaching the target object when the verb was selectionally restricting (e.g., will eat). However, verb type had no effect on the duration of individual eye fixations. These results suggest an important constraint on the linkage between spoken language processing and eye movement control: Linguistic input may influence only the decision of where to move the eyes, not the decision of when to move them.

8.
Boundary extension (BE) refers to the tendency to remember a previously perceived scene with a greater spatial expanse. This phenomenon is described as resulting from different sources of information: external (i.e., visual) and internally driven (i.e., amodal, conceptual, and contextual) information. Although the literature has emphasized the role of top-down expectations to account for layout extrapolation, their effect has rarely been tested experimentally. In this research, we attempted to determine how visual context affects BE as a function of scene exposure duration (long, short). To induce knowledge about visual context, the memorization phase of the camera distance paradigm was preceded by a preexposure phase, during which each of the to-be-memorized scenes was presented in a larger spatial framework. In an initial experiment, we examined the effect of contextual knowledge at a presentation duration that allowed for in-depth processing of visual information during encoding (i.e., 15 s). The results indicated that participants exposed to the preexposure showed decreased BE and displayed no directional memory error in some conditions. Because the effect of context is known to occur at an early stage of scene perception, in a second experiment we sought to determine whether the effect of a preview occurs during the first fixation on a visual scene. The results indicated that BE seems not to be modulated by this factor at very brief presentation durations. These results are discussed in light of current visual scene representation theories.

9.
Visual scenes contain information on both a local scale (e.g., a tree) and a global scale (e.g., a forest). The question of whether the visual system prioritizes local or global elements has been debated for over a century. Given that visual scenes often contain distinct individual objects, here we examine how regularities between individual objects prioritize local or global processing. Participants viewed Navon-like figures consisting of small local objects making up a global object, and were asked to identify either the shape of the local objects or the shape of the global object, as fast and accurately as possible. Unbeknown to the participants, local regularities (i.e., triplets) or global regularities (i.e., quadruples) were embedded among the objects. We found that the identification of the local shape was faster when individual objects reliably co-occurred immediately next to each other as triplets (local regularities, Experiment 1). This result suggested that local regularities draw attention to the local scale. Moreover, the identification of the global shape was faster when objects co-occurred at the global scale as quadruples (global regularities, Experiment 2). This result suggested that global regularities draw attention to the global scale. No participant was explicitly aware of the regularities in the experiments. The results suggest that statistical regularities can determine whether attention is directed to the individual objects or to the entire scene. The findings provide evidence that regularities guide the spatial scale of attention in the absence of explicit awareness.

10.
Although the use of semantic information about the world seems ubiquitous in every task we perform, it is not clear whether we rely on a scene’s semantic information to guide attention when searching for something in a specific scene context (e.g., keys in one’s living room). To address this question, we compared the contribution of a scene’s semantic information (i.e., scene gist) with that of learned spatial associations between objects and context. Using the flash-preview moving-window paradigm (Castelhano & Henderson, Journal of Experimental Psychology: Human Perception and Performance, 33, 753-763, 2007), participants searched for target objects that were placed in either consistent or inconsistent locations and were semantically consistent or inconsistent with the scene gist. The results showed that learned spatial associations were used to guide search even in inconsistent contexts, providing evidence that scene context can affect search performance without consistent scene gist information. We discuss the results in terms of a hierarchical organization of top-down influences of scene context.

11.
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but pose a challenge for theories assuming object-based visual indices.

12.
Using cross-level data from 364 supervisor-subordinate dyads, we examined how relational exchange quality, perceived organizational support (POS), and organizational identification interrelate. We found that subordinate POS mediates the relationship between leader-member exchange (i.e., LMX) and organizational identification. We also found that the relational context matters: namely, the immediate supervisor's relationship with his or her manager (i.e., leader-leader exchange, LLX). Our findings suggest that higher-quality LLX creates a spillover of resources and reduces the negative association between lower-quality LMX and POS. Our study extends both social exchange and social identity theories. First, we delineate how relational exchange quality associates with one's identity in the organization, placing POS as an integrative mechanism between exchange and identity. Second, we expand the purview of social exchange theory by including other proximal (and interpersonal) relationships as context for social exchange between the individual and organization. Limitations, future research directions, and practical implications are also discussed.

13.
Individual differences in children's online language processing were explored by monitoring their eye movements to objects in a visual scene as they listened to spoken sentences. Eleven skilled and 11 less-skilled comprehenders were presented with sentences containing verbs that were either neutral with respect to the visual context (e.g., Jane watched her mother choose the cake, where all of the objects in the scene were choosable) or supportive (e.g., Jane watched her mother eat the cake, where the cake was the only edible object). On hearing the supportive verb, the children made fast anticipatory eye movements to the target object (e.g., the cake), suggesting that children extract information from the language they hear and use this to direct ongoing processing. Less-skilled comprehenders did not differ from controls in the speed of their anticipatory eye movements, suggesting normal sensitivity to linguistic constraints. However, less-skilled comprehenders made a greater number of fixations to target objects, and these fixations were shorter in duration than those observed in the skilled comprehenders, especially in the supportive condition. This pattern of results is discussed in terms of possible processing limitations, including difficulties with memory, attention, or suppressing irrelevant information.

14.
Action perception is selective in that observers attend to and encode certain dimensions of action over others. But how flexible is action perception in its selection of perceptual information? One possibility is that observers consistently attend to particular dimensions of action over others across different contexts. Another possibility, tested here, is that observers flexibly vary their attention to different dimensions of action based on the context in which action occurs. We investigated 9.5-month-old infants’ and adults’ ability to attend to drop height under varying contexts—aiming to drop an object into a narrow container versus a wide container. We predicted differential attention to increases in aiming height for the narrow container versus the wide container because an increase in aiming height has a differential effect on success (i.e., getting the object into the container) depending on the width of the container. Both adults and infants showed an asymmetry in their attention to aiming height as a function of context; in the wide container condition increases and decreases in aiming height were equally detectable, whereas in the narrow container condition observers more readily discriminated increases over decreases in aiming height. These results indicate that action perception is both selective and flexible according to context, aiding in action prediction and infants’ social–cognitive development more broadly.

15.
As a distinctive perceptual stimulus, red is not only a visual symbol but also a symbol of social interaction. Research on the psychological functions of red has made great progress in recent years. This paper describes the psychological functions of red in achievement, competitive sports, and romantic-relationship contexts, and summarizes the relevant theories and explanations of red's psychological functions. While reviewing existing Western research, the authors also summarize their own explorations of the psychological functions of red in the Chinese cultural context. Finally, the paper proposes that future research should focus on the mechanisms underlying the psychological functions of red and on analyses of its cultural specificity.

16.
Monitoring the environment for the occurrence of prospective memory (PM) targets is a resource-demanding process that produces a cost (e.g., slower responding) to ongoing activities. However, research suggests that individuals are able to monitor strategically by using contextual cues to reduce monitoring in contexts in which PM targets are not expected to occur. In the current study, we investigated the processes supporting context identification (i.e., determining whether or not the context is appropriate for monitoring) by testing the context cue focality hypothesis. This hypothesis predicts that the ability to monitor strategically depends on whether the ongoing task orients attention to the contextual cues that are available to guide monitoring. In Experiment 1, participants performed an ongoing lexical decision task and were told that PM targets (the syllable TOR) would only occur in word trials (focal context cue condition) or in items starting with consonants (nonfocal context cue condition). In Experiment 2, participants performed an ongoing first letter judgment (consonant/vowel) task and were told that PM targets would only occur in items starting with consonants (focal context cue condition) or in word trials (nonfocal context cue condition). Consistent with the context cue focality hypothesis, strategic monitoring was only observed during focal context cue conditions in which the type of ongoing task processing automatically oriented attention to the relevant features of the contextual cue. These findings suggest that strategic monitoring is dependent on limited-capacity processing resources and may be relatively limited when the attentional demands of context identification are sufficiently high.

17.
It is well established that scenes and objects elicit a highly selective response in specific brain regions in the ventral visual cortex. An inherent difference between these categories that has not yet been explored is their perceived distance from the observer (i.e., scenes are distal whereas objects are proximal). The current study aimed to test the extent to which scene- and object-selective areas are sensitive to perceived distance information independently of their category selectivity and retinotopic location. We conducted two studies that used a distance illusion (i.e., the Ponzo lines) and showed that scene regions (the parahippocampal place area, PPA, and transverse occipital sulcus, TOS) are biased toward perceived distal stimuli, whereas the lateral occipital (LO) object region is biased toward perceived proximal stimuli. These results suggest that the ventral visual cortex plays a role in representing distance information, extending recent findings on the sensitivity of these regions to location information. More broadly, our findings imply that distance information is inherent to object recognition.

18.
19.
The aim of this self-paced reading study was to investigate the role of grammatical and context-based gender in assigning an antecedent to a pronoun where the antecedent is an epicene or a bigender noun. In Italian, epicene nouns (e.g., vittima, victim) have grammatical gender, whereas bigender nouns (e.g., assistente, assistant) do not have grammatical gender but instead acquire it from the context in which they occur. We devised three different types of context: incongruent contexts (i.e., contexts containing a gender bias that differed from the grammatical gender of the epicene), congruent contexts (i.e., contexts where the gender bias and grammatical gender coincided), and neutral contexts. In the case of epicenes, pronoun resolution was driven by grammatical gender; in the case of bigenders it was driven by the gender assigned by context. The results are discussed in the light of current models of anaphor resolution.

20.
Recent research [e.g., Carrozzo, M., Stratta, F., McIntyre, J., & Lacquaniti, F. (2002). Cognitive allocentric representations of visual space shape pointing errors. Experimental Brain Research, 147, 426-436; Lemay, M., Bertrand, C. P., & Stelmach, G. E. (2004). Pointing to an allocentric and egocentric remembered target. Motor Control, 8, 16-32] reported that egocentric and allocentric visual frames of reference can be integrated to facilitate the accuracy of goal-directed reaching movements. In the present investigation, we sought to specifically examine whether or not a visual background can facilitate the online, feedback-based control of visually guided (VG), open-loop (OL), and memory-guided (i.e., 0 and 1000 ms of delay: D0 and D1000) reaches. Two background conditions were examined in this investigation. In the first background condition, four illuminated LEDs positioned in a square surrounding the target location provided a context for allocentric comparisons (visual background: VB). In the second condition, the target object was singularly presented against an empty visual field (no visual background: NVB). Participants (N=14) completed reaching movements to three midline targets in each background (VB, NVB) and visual condition (VG, OL, D0, D1000) for a total of 240 trials. VB reaches were more accurate and less variable than NVB reaches in each visual condition. Moreover, VB reaches elicited longer movement times and spent a greater proportion of the reaching trajectory in the deceleration phase of the movement. Supporting the benefit of a VB for online control, the proportion of endpoint variability explained by the spatial location of the limb at peak deceleration was less for VB as opposed to NVB reaches. These findings suggest that participants are able to make allocentric comparisons between a VB and target (visible or remembered) in addition to egocentric limb and VB comparisons to facilitate online reaching control.
