Similar Articles
1.
Media Psychology, 2013, 16(2), 145-163
Two experiments tested the hypothesis that visual encoding of television messages is a relatively automatic process, whereas verbal encoding is a relatively controlled process. Subjects viewed 30 messages in a design that crossed Production Pacing (slow, medium, fast) with Arousing Content (calming, arousing). It was argued that as pacing and arousal increased, the resources required to process the messages would increase, which would interfere with the controlled process of verbal encoding but not with the automatic process of visual encoding. As expected, visual recognition was not affected by the increased resource requirements, but verbal recognition declined.

2.
The contribution of encoding deficits to verbal-learning difficulties known to be associated with temporal lobe dysfunction was investigated. Auditory and visual false-recognition tasks, which assess verbal encoding strategies, were given to patients with left-temporal-lobe (LTL) and right-temporal-lobe (RTL) surgical excisions and to a group of normal controls (NC). On both auditory and visual tasks, LTL patients made significantly more false-recognition errors than the other subjects on related, but not unrelated, words in the test list. The findings indicate that LTL patients are able to initially encode verbal material and that a breakdown in information processing occurs at a later stage. On the auditory tasks, the performance of RTL patients did not differ from that of NC subjects. However, on the visual tasks, RTL patients, as compared to both LTL and NC subjects, made fewer false-recognition errors. The performance of RTL patients, in contrast to LTL patients, could be interpreted as a reduced encoding of the visual attribute of verbal material. Another possible explanation considered was difficulty in familiarity discrimination.

3.
Memory for visually presented verbal and pictorial material was compared using stimuli chosen to minimize non-essential differences between the two types of material. Experiment I required retention of a short list; verbal and pictorial stimuli were remembered equally well. Experiment II required recall of single items after 30 s of backwards counting; recall was much superior for pictorial stimuli. The type of task appeared to affect encoding, with verbal encoding reported to be predominant in Experiment I and visual encoding, or imagery, common in Experiment II.

4.
5.
Are visual and verbal processing systems functionally independent? Two experiments (one using line drawings of common objects, the other using faces) explored the relationship between the number of syllables in an object's name (one or three) and the visual inspection of that object. The tasks were short-term recognition and visual search. Results indicated more fixations and longer gaze durations on objects having three-syllable names when the task encouraged a verbal encoding of the objects (i.e., recognition). No effects of syllable length on eye movements were found when implicit naming demands were minimal (i.e., visual search). These findings suggest that implicitly naming a pictorial object constrains the oculomotor inspection of that object, and that the visual and verbal encoding of an object are synchronized so that the faster process must wait for the slower to be completed before gaze shifts to another object. Both findings imply a tight coupling between visual and linguistic processing, and highlight the utility of an oculomotor methodology to understand this coupling.

6.
Two experiments used visual-, verbal-, and haptic-interference tasks during encoding (Experiment 1) and retrieval (Experiment 2) to examine mental representation of familiar and unfamiliar objects in visual/haptic crossmodal memory. Three competing theories are discussed, which variously suggest that these representations are: (a) visual; (b) dual-code: visual for unfamiliar objects but visual and verbal for familiar objects; or (c) amodal. The results suggest that representations of unfamiliar objects are primarily visual but that crossmodal memory for familiar objects may rely on a network of different representations. The pattern of verbal-interference effects suggests that verbal strategies facilitate encoding of unfamiliar objects regardless of modality, but only haptic recognition regardless of familiarity. The results raise further research questions about all three theoretical approaches.

8.
To the extent that all types of visual stimuli can be verbalized to some degree, verbal mediation is intrinsic in so-called "visual" memory processing. This impurity complicates the interpretation of visual memory performance, particularly in certain neurologically impaired populations (e.g., aphasia). The purpose of this study was to investigate the relative contributions of verbal mediation to recognition memory for visual stimuli that vary with respect to their amenability to being verbalized. In Experiment 1, subjects attempted to verbally describe novel figural designs during presentation and then identify them in a subsequent recognition memory test. Verbalizing these designs facilitated memory. Stimuli that were found to be easiest or most difficult to verbalize at the group level were retained for the second study. In Experiment 2, subjects evidenced superior recognition memory for the relatively easy to verbalize items. This advantage was attenuated in subjects who performed a concurrent verbal interference task during encoding, but not in those who performed an analogous visual interference task. These findings provide evidence that impoverished verbal mediation disproportionately impedes memory for visual material that is relatively easy to verbalize. Implications for the clinical assessment of visual memory are discussed.

9.
A series of five experiments examined the categorical perception previously found for color and facial expressions. Using a two-alternative forced-choice recognition memory paradigm, it was found that verbal interference selectively removed the defining feature of categorical perception. Under verbal interference, there was no longer the greater accuracy normally observed for cross-category judgments relative to within-category judgments. The advantage for cross-category comparisons in memory appeared to derive from verbal coding both at encoding and at storage. It thus appears that while both visual and verbal codes may be employed in the recognition memory for colors and facial expressions, subjects only made use of verbal coding when demonstrating categorical perception.

10.
It is shown that an irrelevant visual perception interferes more with verbal learning by means of imagery than does an irrelevant auditory perception. The relative interfering effects of these perceptions were reversed in a verbal learning task involving highly abstract materials. Such results implicate the existence of a true visual component in imaginal mediation. A theoretical model is presented in which a visual system and a verbal-auditory system are distinguished. The visual system controls visual perception and visual imagination. The verbal-auditory system controls auditory perception, auditory imagination, internal verbal representation, and speech. Attention can be more easily divided between the two systems than within either one taken by itself. Furthermore, the visual and verbal-auditory systems are functionally linked by information recoding operations. The application of mnemonic imagery appears to involve a recoding of initially verbal information into visual form, and then the encoding of a primarily visual schema into memory. During recall, the schema is decoded as a visual image, and then recoded once again into the verbal-auditory system. Evidence for such transformations is provided not only by the interference data, but also by an analysis of recall-errors made by Ss using mnemonic imagery.

11.
Herz, R. S. Memory & Cognition, 2000, 28(6), 957-964
Two paired-associate memory experiments were conducted to investigate verbal coding in olfactory versus nonolfactory cognition. Experiment 1 examined the effects of switching/not switching odors and visual items to words between encoding and test sessions. Experiment 2 examined switching/not switching perceptual odors and verbally imagined versions of odors with each other. Experiment 1 showed that memory was impaired for odors but not visual cues when they were switched to their verbal form at test. Experiment 2 revealed that memory was impaired for both odors and verbally imagined cues when they were switched in format at test and that odor sensory imagery was not accessed by the instruction to imagine a smell. Together, these findings suggest that olfaction is distinguished from other sensory systems by the degree of verbal coding involved in associated cognitive processing.

12.
Dual-process accounts of working memory have suggested distinct encoding processes for verbal and visual information in working memory, but encoding for nonspeech sounds (e.g., tones) is not well understood. This experiment modified the sentence–picture verification task to include nonspeech sounds with a complete factorial examination of all possible stimulus pairings. Participants studied simple stimuli (pictures, sentences, or sounds) and encoded the stimuli verbally, as visual images, or as auditory images. Participants then compared their encoded representations to verification stimuli (again pictures, sentences, or sounds) in a two-choice reaction time task. With some caveats, the encoding strategy appeared to be as important or more important than the external format of the initial stimulus in determining the speed of verification decisions. Findings suggested that: (1) auditory imagery may be distinct from verbal and visuospatial processing in working memory; (2) visual perception but not visual imagery may automatically activate concurrent verbal codes; and (3) the effects of hearing a sound may linger for some time despite recoding in working memory. We discuss the role of auditory imagery in dual-process theories of working memory.

13.
The interval for interference in conscious visual imagery
Three experiments are described that use dynamic visual noise (DVN) to interfere with words processed under visual and verbal processing instructions. In Experiment 1 DVN is presented to coincide with the encoding of the words or to coincide with the interval between encoding and recall. The results show that while DVN is a robust disruptor when it is applied during encoding to words processed under visual instruction, it has no effect during encoding when the words are processed under rote instruction. Moreover, DVN has no effect when it is applied during the retention interval, no matter what means are employed to encode the words. Experiment 2 extends these findings by again showing no effect of DVN during the retention interval, yet showing robust interference effects for visually processed words during recall. Finally, Experiment 3 demonstrates that the results of Experiments 1 and 2 cannot be explained by a difference in the time duration associated with application of DVN during the retention interval compared to during encoding and recall. Moreover, the differing decay functions for visually and verbally processed words during the intervals used in Experiment 3 suggest that any failure to cause interference is not because the two processing instructions resulted in words being retained in the same medium. The functions are consistent with word storage mechanisms reflecting appropriately verbal and visual properties. The results are discussed in terms of current models of visual working memory. It is argued that a full interpretation of the results requires a buffer mechanism as an important component of any model of visual working memory.

15.
Early findings from Broca and Wernicke led to the classical view of hemispheric specialization, which contrasts left-hemisphere language capabilities with right-hemisphere visual capabilities. Federmeier and Benjamin (2005) have suggested that semantic encoding for verbal information in the right hemisphere can be more effective when memory demands are higher. In light of this, our main goal was to study the effect of retention level of verbal information on hemispheric processes. However, regarding the cross-linguistic differences in orthography and their subsequent effects on word recognition (Frost, Katz, & Bentin, 1987), our intent was also to test prior predictions of Federmeier and Benjamin (2005) for a "shallow" orthography language, where words have a clear correspondence between graphemes and phonemes, as opposed to English, which is a "deep" orthography language. Portuguese concrete nouns were selected. Participants completed a visual half-field word presentation task using a continuous recognition memory paradigm. The retention level included 1, 2, 4, 8, 20, or 40 words. Results showed that recognition accuracy was higher for words studied in the right visual field, compared to those studied in the left visual field, when the retention interval included 2, 4, or 20 words. No significant differences were found for the remaining intervals. Further analysis on accuracy data for intermediate retention levels showed that recognition accuracy was higher for the 2-word retention level than for the levels including 4, 8, or 20 words; it was higher for left-hemisphere encoding as well. Our results also indicated that reaction times were slower for left-hemisphere encoding and for the 40-word retention level when compared to that of 20 words. In summary, the current results are in partial agreement with those of Federmeier and Benjamin (2005) and suggest different hemispheric memory strategies for the semantic encoding of verbal information.

16.
The present study investigated the role of verbal labeling and exposure duration in implicit memory for novel visual patterns. Encoding condition was varied in Experiment 1. Two encoding conditions discouraged verbal labeling and a third required it. In Experiment 2, exposure duration was manipulated to determine whether a new memory representation could be formed after a single 1-s exposure. The results suggest that verbal labeling is not necessary to support priming. Type of encoding did not affect implicit memory, but had a pronounced effect on explicit memory. Furthermore, a single 1-s exposure was sufficient to support priming, and priming was not further enhanced by longer stimulus exposures. In contrast, recognition performance was enhanced by a longer stimulus duration. Thus, priming effects with these novel figures are likely to be supported by newly acquired representations rather than by preexisting memory representations.

17.
Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize, Experiment 2; or merely to attend, Experiment 3) and subsequently were required to search for a target among different distractors, each embedded within a colored shape. In half of the trials, an object in the search array matched the prime, but this object never contained the target. Despite this, search was impaired relative to a neutral baseline in which the prime and search displays did not match. An interesting finding is that verbal primes were effective in generating the effects, and verbalization of visual primes elicited similar effects to those elicited when primes were held in WM. However, the effects were absent when primes were only attended. The data suggest that there is automatic encoding into WM when items are verbalized and that verbal as well as visual WM can guide visual attention.

18.
Four lists of Chinese words in a 2 × 2 factorial design of visual and acoustic similarity were used in a short-term memory experiment. In addition to a strong acoustic similarity effect, a highly significant visual similarity effect was also obtained. This was particularly pronounced in the absence of acoustic similarity in the words used. The results not only confirm acoustic encoding to be a basic process in short-term recall of verbal stimuli in a language other than English but also lend support to the growing evidence of visual encoding in short-term memory as the situation demands.

19.
The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.

20.
These studies examined the role of spatial encoding in inducing perception-action dissociations in visual illusions. Participants were shown a large-scale Müller-Lyer configuration with hoops as its tails. In Experiment 1, participants either made verbal estimates of the extent of the Müller-Lyer shaft (verbal task) or walked the extent without vision, in an offset path (blind-walking task). For both tasks, participants stood a small distance away from the configuration, to elicit object-relative encoding of the shaft with respect to its hoops. A similar illusion bias was found in the verbal and motoric tasks. In Experiment 2, participants stood at one endpoint of the shaft in order to elicit egocentric encoding of extent. Verbal judgments continued to exhibit the illusion bias, whereas blind-walking judgments did not. These findings underscore the importance of egocentric encoding in motor tasks for producing perception-action dissociations.
