Similar Literature
20 similar articles found.
1.
In four experiments, we examined the role of auditory transients and auditory short-term memory in perceiving changes in a complex auditory scene comprising multiple auditory objects. Participants were presented pairs of complex auditory scenes that were composed of a maximum of four animal calls delivered in free field; participants were instructed to decide whether the two scenes were the same or different (Experiments 1, 2, and 4). Changes to the second scene consisted of either the addition or the deletion of one animal call. Contrary to intuitive predictions based on results from the visual change blindness literature, substantial deafness to the change emerged without regard to whether the scenes were separated by 500 msec of masking white noise or by 500 msec of silence (Experiment 1). In fact, change deafness was not even modulated by having the two scenes presented contiguously (i.e., 0-msec interval) or separated by 500 msec of silence (Experiments 2 and 4). This result suggests that change-related auditory transients played little or no role in change detection in complex auditory scenes. Instead, the main determinant of auditory change perception (and auditory change deafness) appears to have been the capacity of auditory short-term memory (Experiments 3 and 4). Taken together, these findings indicate that the intuitive parallels between visual and auditory change perception should be reconsidered.

2.
Acta Psychologica, 2013, 142(2), 168-176
In a one-shot change detection task, we investigated the relationship between semantic properties (high consistency, i.e., diagnosticity, versus inconsistency with regard to gist) and perceptual properties (high versus low salience) of objects in guiding attention in visual scenes and in constructing scene representations. To produce the change, an object was added or deleted in either the right or left half of coloured drawings of daily-life events. Diagnostic object deletions were more accurately detected than inconsistent ones, indicating rapid inclusion into early scene representation for the most predictable objects. Detection was faster and more accurate for high salience than for low salience changes. An advantage was found for diagnostic object changes in the high salience condition, although it was limited to additions when considering response speed. For inconsistent objects of high salience, deletions were detected faster than additions. These findings may indicate that objects are primarily selected on a perceptual basis, with a subsequent and supplementary effect of semantic consistency, in the sense of facilitation due to object diagnosticity or lengthening of processing time due to inconsistency.

3.
Prime pictures portraying pleasant or unpleasant scenes were briefly presented (150-ms display; SOAs of 300 or 800 ms), followed by probe pictures either congruent or incongruent in emotional valence. In an evaluative decision task, participants responded whether the probe was emotionally positive or negative. Affective priming was reflected in shorter response latencies for congruent than for incongruent prime-probe pairs. Although this effect was enhanced by perceptual similarity between the prime and the probe, it also occurred for probes that were physically different, and the effect generalized across semantic categories (animals vs. people). It is concluded that affective priming is a genuine phenomenon, in that it occurs as a function of stimulus emotional content, in the absence of both perceptual similarity and semantic category relatedness between the prime and the probe.

4.
Sentence imagery effects in recall are predicted by both perceptual and semantic elaboration models. The former attributes superior recall of high-imagery sentences to the addition of perceptual network components to an existing semantic network; the latter claims that additions of semantic components are involved. In order to identify the responsible components, free associates were generated to otherwise similar high- and low-imagery sentences in a short-term memory task. In accordance with the perceptual elaboration model, associates differed in rated imagery, but not in number. In a second study, the causal role of perceptual elaboration in recall was investigated by using high- and low-imagery sentence associates as recall cues. Differential effects of cue imagery were found for high-imagery sentences, indicating that perceptual codes are in part responsible for superior high-imagery sentence recall. Evidence is presented that perceptual and semantic network components are involved in a processing trade-off, and the adequacy of present network models to explain it is discussed.

5.
In the current study, the authors investigated whether the ground dominance effect (the use of ground surface information for the perceptual organization of scenes) varied with age. In Experiment 1, a scene containing a ground, a ceiling, and 2 vertical posts was presented. The scene was either in its normal orientation or rotated to the side. In Experiment 2, a blue dot was attached to each post, with location varied from bottom to top of the posts. In Experiment 3, a scene similar to that in Experiment 1 was presented in different locations in the visual field. Observers judged which of the 2 objects (posts in Experiments 1 and 3, blue dots in Experiment 2) appeared to be closer. The results indicated that both younger (mean age = 22 years) and older observers (mean age = 73 years) responded consistently with the ground dominance effect. However, the magnitude of the effect decreased for older observers. These results suggest a decreased use of ground surface information by older observers for the perceptual organization of scene layout.

6.
The processing advantage for words in the right visual field (RVF) has often been assigned to parallel orthographic analysis by the left hemisphere and sequential analysis by the right. The authors investigated this notion using the Reicher-Wheeler task to suppress influences of guesswork and an eye-tracker to ensure central fixation. RVF advantages obtained for all serial positions and identical U-shaped serial-position curves obtained for both visual fields (Experiments 1-4). These findings were not influenced by lexical constraint (Experiment 2) and were obtained with masked and nonmasked displays (Experiment 3). Moreover, words and nonwords produced similar serial-position effects in each field, but only RVF stimuli produced a word-nonword effect (Experiment 4). These findings support the notion that left-hemisphere function underlies the RVF advantage but not the notion that each hemisphere uses a different mode of orthographic analysis.

7.
Previous research has shown that changes to scenes are often surprisingly hard to detect. The research reported here investigated the relationship between individual differences in attention and change detection. We did this by assessing participants' breadth of attention in a functional field of view task (FFOV) and relating this measure to the speed with which individuals detected changes in scenes. We also examined how the salience, meaningfulness, and eccentricity of the scene changes affected perceptual change performance. In order to broaden the range of individual differences in attentional breadth, both young and old adults participated in the study. A strong negative relationship was obtained between attentional breadth and the latency with which perceptual changes were detected; observers with broader attentional windows detected changes faster. Salience and eccentricity had large effects on change detection, but meaning aided the performance of young adults only and only when changes also had low salience.

8.
The experiment described here addresses itself to the serial position effect of recall, and in particular to the cause of primacy. Two of the three theories discussed here (selective rehearsal and proactive interference) postulate competition between items during the storage interval as a cause of primacy, acquisition being assumed to be equal for all items. The third theory (differential perceptual processing) places the locus of the effect prior to the storage stage, and does not hold interactions between items to be essential to the effect after encoding. The experiment used this distinction between the two classes of theories, storage of the whole presentation (and hence interactions during storage) not being required. Only one word from each list was recalled, and this item was indicated to S on presentation, thereby eliminating the necessity to attempt to encode all items. The storage-interaction theories predict no primacy for the recall of individual items in this experiment, but the initial member of the list was recalled more often than the third member. This example of primacy lends support to the argument that the trace strengths of items are not always equal immediately after presentation.

9.
The unification of mind: Integration of hemispheric semantic processing
Seventy-six participants performed a visual half-field lexical decision task at two different stimulus onset asynchronies (50 or 750 ms). Word targets were primed either by a highly associated word (e.g., CLEAN-DIRTY), a weakly associated word (e.g., CLEAN-TIDY), or an unrelated word (e.g., CLEAN-FAMILY) projected to either the same or opposite visual field (VF) as the target. In the short SOA, RVF-left hemisphere primes resulted in high associate priming regardless of target location (ipsilateral or contralateral to the prime) whereas LVF-right hemisphere primes produced both high and low associate priming across both target location conditions. In the long SOA condition, contralateral priming patterns converged, demonstrating only high associate priming in both VF locations. The results of this study demonstrate the critical role of interhemispheric transfer in semantic processing and indicate a need to elaborate current models of semantic processing.

10.
The semantic priming task is a valuable tool in the investigation of semantic memory impairments in patients with acquired disorders of language. This is because priming performance reflects automatic or implicit access to semantic information, unlike most other tests of semantic knowledge, which rely on explicit, voluntary access. Priming results are important for two main reasons: First, normal priming results may be observed in patients who perform poorly on other semantic memory tests, enabling us to distinguish between loss of, or damage to, information in semantic memory, and voluntary access to that information. Second, we can investigate the detailed pattern of loss and preservation of different types of semantic information, by charting the priming effects for different kinds of words, and different kinds of semantic relations between primes and targets.

We discuss the use of the priming task in this context, and address some of the theoretical and methodological criticisms that have been raised in connection with the use of the priming task to address these issues. We then describe two recent studies in which we have employed semantic priming tasks, along with other more traditional methods, to investigate specific questions about the semantic memory deficits of three patients.

11.
Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules – a scene grammar – enable effective attentional guidance and object perception, no common image database containing highly controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which has been discussed as a possible source of controversial study results. To generate the first database of this kind – SCEGRAM – we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, scenes were always paired, so that an object was semantically consistent in one scene (e.g., ketchup in a kitchen) and inconsistent in the other (e.g., ketchup in a bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye-movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.), including paradigms addressing developmental aspects of scene grammar. SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/.

12.
Two studies are reported which examine the contribution of linguistic factors to attribute inferences and semantic similarity judgements. For this purpose, a new method is developed which allows us to examine the contribution of language as a symbolically shared system. The two studies show that a substantial amount of the variance in both attribute inferences and semantic similarity judgements is mediated by socially shared linguistic conventions. The implications of these findings and the methodology for social cognition, and some models of personality and affect are discussed.

13.
Auditory word comprehension was assessed in a series of 289 acute left hemisphere stroke patients. Participants decided whether an auditorily presented word matched a picture. On different trials, words were presented with a matching picture, a semantic foil, or a phonemic foil. Participants had significantly more trouble with semantic foils across all levels of impairment.

14.
The study investigates the different contributions to semantic priming of two components of the semantic representation underlying a word. The two components are perceptually based information and conceptually based information. Perceptual information is based on physical attributes such as shape or color, while conceptual information consists of more abstract elements such as functional attributes. The question asked in this study was whether both components would produce an effect in semantic priming. Pairs of words related either because of a conceptual property (banana-apple), a perceptual property (ball-apple), or both a perceptual and a conceptual property (cherry-apple) were presented as prime and target in a lexical decision and a word-naming task. The results showed independent contributions of perceptual and conceptual attributes to semantic priming.

15.
Two experiments examined the effect of activation of higher-level semantic representations on lower-level perceptual representations. A forced-choice discrimination paradigm was used, a method known to produce repetition blindness (RB) for words unconfounded by memory demands or response bias. In Experiment 1, equivalent reductions in RB (as measured by omission error rate and by d') occurred when successive word pairs were identical in: (1) form, pronunciation, and meaning (both uppercase versions of the same word); (2) pronunciation and meaning but not form (lowercase versus uppercase; lexical identity); and (3) pronunciation, but not form or meaning (homonyms; phonological identity), relative to when the words were unrelated on all dimensions. The RB effect was markedly attenuated, but not eliminated, when the words were semantically related. Similar results were obtained in Experiment 2 using a larger group of subjects. These findings show that higher-order semantic representations can have a top-down influence on judgements based on lower-order perceptual representations. The results are discussed within the framework of a cascade model of object processing in the human brain.

16.
17.
Four reading-related, information-processing tasks were administered to right-handed blind readers of braille who differed in level of reading skill and in preference for using the right hand or the left hand when required to read text with just one hand. The tasks were letter identification, same-different matching of letters that differed in tactual similarity, short-term memory for lists of words that varied in tactual and phonological similarity, and paragraph reading with and without a concurrent memory load of digits. The results showed interactions between hand preference and the hand that was actually used to read the stimulus materials, such that left preferrers were significantly faster and more accurate with their left hands than with their right hands whereas right preferrers were slightly but usually not significantly faster with their right hands than with their left hands. In all cases, the absolute magnitude of the left-hand advantage among left preferrers was substantially larger than the right-hand advantage among right preferrers. The results suggest that encoding strategies for dealing with braille are reflected in hand preference and that such strategies operate to modify an underlying but somewhat plastic superiority of the right hemisphere for dealing with the perceptual requirements of tactual reading. These requirements are not the same as those of visual reading, leading to some differences in patterns of hemispheric specialization between readers of braille and readers of print.

18.
An investigation of perceptual priming and semantic learning in the severely amnesic subject K.C. is reported. He was taught 64 three-word sentences and tested for his ability to produce the final word of each sentence. Despite a total lack of episodic memory, he exhibited (a) strong perceptual priming effects in word-fragment completion, which were retained essentially in full strength for 12 months, and (b) independent of perceptual priming, learning of new semantic facts, many of which were also retained for 12 months. K.C.'s semantic learning may be at least partly attributable to repeated study trials and minimal interference during learning. The findings suggest that perceptual priming and semantic learning are subserved by two memory systems different from episodic memory and that both systems (perceptual representation and semantic memory) are at least partially preserved in some amnesic subjects.

19.
In a task of the same form as the standard Stroop test, the relevant attribute was ellipse size and the required responses were the numbers 1 through 6 assigned to each of the ellipses in order of increasing size. The irrelevant attribute consisted of either alphabet letters or the numerical symbols 1 through 6 displayed in the center of each ellipse. The numerals produced more interference with the classification of the relevant attribute than the alphabet letters, supporting Klein’s (1964) results. In addition, the interference due to the irrelevant numerical symbols increased as the distance between the values of the relevant and irrelevant attributes was decreased. Since “distance” is a structural property of the number system, this indicated that the competing response tendencies aroused by the irrelevant numerals involved the semantic structure for numbers. The same results were obtained when numerical quantity, rather than ellipse size, was the relevant attribute.

20.
Four experiments tested for perceptual priming for written words in a semantic categorization task. Repetition priming was obtained for low-frequency words when unrelated categorizations were performed at study and test (Experiment 1), but it was not orthographically mediated given that written-to-written and spoken-to-written word priming was equivalent (Experiments 2 and 3). Furthermore, no priming was obtained between pictures and words (Experiment 4), suggesting that the nonorthographic priming was largely phonological rather than semantic. These results pose a challenge to standard perceptual theories of priming that should expect orthographic priming when words are presented in a visual format at study and test.
