Similar Articles
 20 similar articles found (search time: 15 ms)
1.
When two identical visual items are presented in rapid succession, people often fail to report the second instance when trying to recall both (e.g., Kanwisher, 1987). We investigated whether this temporal processing deficit is modulated by the spatial separation between the repeated stimuli within both audition and vision. In Experiment 1, lists of one to three digits were rapidly presented from loudspeaker cones arranged in a semicircle around the participant. Recall accuracy was lower when repeated digits were presented from different positions rather than from the same position, as compared to unrepeated control pairs, demonstrating that auditory repetition deafness (RD) is modulated by the spatial displacement between repeated items. A similar spatial modulation of visual repetition blindness (RB) was reported when pairs of masked letters were presented visually from either the same or different positions arranged on a semicircle around fixation (Experiment 2). These results cannot easily be accounted for by the token individuation hypothesis of RB (Kanwisher, 1987; Park & Kanwisher, 1994) and instead support a recognition failure account (Hochhaus & Johnston, 1996; Luo & Caramazza, 1995, 1996).

2.
Participants report briefly presented words more accurately when two copies are presented, one in the left visual field (LVF) and another in the right visual field (RVF), than when only a single copy is presented. This effect is known as the 'redundant bilateral advantage' and has been interpreted as evidence for interhemispheric cooperation. We investigated the redundant bilateral advantage in dyslexic adults and matched controls as a means of assessing communication between the hemispheres in dyslexia. Consistent with previous research, normal adult readers in Experiment 1 showed significantly higher accuracy on a word report task when identical word stimuli were presented bilaterally, compared to unilateral RVF or LVF presentation. Dyslexics, however, did not show the bilateral advantage. In Experiment 2, words were presented above fixation, below fixation or in both positions. In this experiment both dyslexics and controls benefited from the redundant presentation. Experiment 3 combined whole words in one visual field with word fragments in the other visual field (the initial and final letters separated by spaces). Controls showed a bilateral advantage but dyslexics did not. In Experiments 1 and 3, the dyslexics showed significantly lower accuracy for LVF trials than controls, but the groups did not differ for RVF trials. The findings suggest that dyslexics have a problem of interhemispheric integration and not a general problem of processing two lexical inputs simultaneously.

3.
It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.

4.
Letter identification is reduced when the target letter is surrounded by other, flanking letters. This visual crowding is known to be impacted by physical changes to the target and flanks, such as spatial frequency content, polarity, and interletter spacing. There is also evidence that visual crowding is reduced when the flanking letters and the target letter form a word. The research reported here investigated whether these two phenomena are independent of each other or whether the degree of visual crowding impacts the benefit of word context. Stimulus duration thresholds for letters presented alone and for the middle letters of 3-letter words and nonwords were determined for stimuli presented at the fovea and at the periphery. In Experiment 1, the benefit of word context was found to be the same at the fovea, where visual crowding is minimal, and at the periphery, where visual crowding is substantial. In Experiment 2, visual crowding was manipulated by changing the interletter spacing. Here, too, the benefit of word context was fairly constant for the two retinal locations (fovea or periphery), as well as with changes in interletter spacing. These data call into question both the idea that the benefit of word context is greater when stimulus quality is reduced (as is the case with visual crowding) and the idea that words are processed more effectively when they are presented at the fovea.

5.
We report a series of experiments designed to demonstrate that the presentation of a sound can facilitate the identification of a concomitantly presented visual target letter in the backward masking paradigm. Two visual letters, serving as the target and its mask, were presented successively at various interstimulus intervals (ISIs). The results demonstrate that the crossmodal facilitation of participants' visual identification performance elicited by the presentation of a simultaneous sound occurs over a very narrow range of ISIs. This critical time-window lies just beyond the interval needed for participants to differentiate the target and mask as constituting two distinct perceptual events (Experiment 1) and can be dissociated from any facilitation elicited by making the visual target physically brighter (Experiment 2). When the sound is presented at the same time as the mask, a facilitatory, rather than an inhibitory effect on visual target identification performance is still observed (Experiment 3). We further demonstrate that the crossmodal facilitation of the visual target by the sound depends on the establishment of a reliable temporally coincident relationship between the two stimuli (Experiment 4); however, by contrast, spatial coincidence is not necessary (Experiment 5). We suggest that when visual and auditory stimuli are always presented synchronously, a better-consolidated object representation is likely to be constructed (than that resulting from unimodal visual stimulation).

6.
In the present investigation, the effects of spatial separation on the interstimulus onset intervals (ISOIs) that produce auditory and visual apparent motion were compared. In Experiment 1, subjects were tested on auditory apparent motion. They listened to 50-msec broadband noise pulses that were presented through two speakers separated by one of six different values between 0 degrees and 160 degrees. On each trial, the sounds were temporally separated by 1 of 12 ISOIs from 0 to 500 msec. The subjects were instructed to categorize their perception of the sounds as "single," "simultaneous," "continuous motion," "broken motion," or "succession." They also indicated the proper temporal sequence of each sound pair. In Experiments 2 and 3, subjects were tested on visual apparent motion. Experiment 2 included a range of spatial separations from 6 degrees to 80 degrees; Experiment 3 included separations from 0.5 degrees to 10 degrees. The same ISOIs were used as in Experiment 1. When the separations were equal, the ISOIs at which auditory apparent motion was perceived were smaller than the values that produced the same experience in vision. Spatial separation affected only visual apparent motion. For separations less than 2 degrees, the ISOIs that produced visual continuous motion were nearly equal to those which produced auditory continuous motion. For larger separations, the ISOIs that produced visual apparent motion increased.

7.
The hypothesis that the two cerebral hemispheres are specialized for processing different visual spatial frequencies was investigated in three experiments. No differences between the left and right visual fields were found for: (1) contrast-sensitivity functions measured binocularly with vertical gratings ranging from 0.5 to 12 cycles per degree (cpd); (2) visible persistence durations for 1- and 10-cpd gratings measured with a stimulus alternation method; and (3) accuracy (d') and reaction times to correctly identify digitally filtered letters as targets (L or H) or nontargets (T or F). One significant difference, however, was found: In Experiment 3, a higher decision criterion (beta) was used when filtered letters were identified in the right visual field than when they were identified in the left. The letters were filtered with annular, 1-octave band-pass filters with center spatial frequencies of 1, 2, 4, 8, and 16 cpd. Combining four center frequencies with three letter sizes (0.5 degrees, 1 degree, and 2 degrees high) made some stimuli equivalent in distal spatial frequency (cycles per object) and some equivalent in proximal spatial frequency (cycles per degree). The effective stimulus in the third experiment seemed to be proximal spatial frequency (cycles per degree) not distal (cycles per object). We conclude that each cerebral hemisphere processes visual spatial frequency information with equal accuracy but that different decision rules are used.
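The sensitivity (d') and decision criterion (beta) measures reported above come from signal detection theory. As a rough illustration (not the authors' analysis code), both can be computed from a hit rate and a false-alarm rate:

```python
import math
from statistics import NormalDist

def dprime_and_beta(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and likelihood-ratio criterion (beta)
    from a hit rate and a false-alarm rate."""
    z = NormalDist().inv_cdf              # inverse of the standard normal CDF
    zh, zf = z(hit_rate), z(fa_rate)
    d_prime = zh - zf                     # separation of signal and noise distributions
    beta = math.exp((zf ** 2 - zh ** 2) / 2)  # ratio of normal densities at the criterion
    return d_prime, beta
```

The pattern in Experiment 3 corresponds to equal d' in the two visual fields but a larger beta (a stricter decision rule) in the right visual field.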

8.
Allocation of attention according to informativeness in visual recognition
In visual identification, is visual attention attracted to more informative elements, i.e. to elements which are more critical for identification? This question was investigated by having subjects detect some visual probes while performing a primary task that involved identification. The probes were located in the neighbourhood of highly or poorly informative parts of the identified stimuli. Three experiments that followed this rationale were conducted. In Experiment I, it was found that when subjects searched for a target letter in lines of identical background letters, they detected more dots near the feature that distinguished between the target and the background letters. In Experiment II, it was found that native Hebrew-speaking subjects detected more lines above a letter that distinguished between two English words. Experiment III showed that the effect was reduced but did not vanish when spatial uncertainty was introduced. On the whole, the data are interpreted as suggesting that more attention may indeed be directed to informative regions, and that this effect cannot be solely attributed to retinal factors.

9.
Target letters in briefly presented word displays are known to be better detected than when they are presented in anagram arrangements of the words’ letters. Target detection may have been higher for word displays either because Ss identified the words and then determined if a word possessed the target or because, in word displays, Ss could anticipate letters from the transitional probabilities (TRP) of letters in the language (TRP hypothesis). Detection in Experiment I was identical for words and for pseudowords, stimuli which were meaningless rearrangements of the words’ letters but which preserved the words’ level of interletter TRP. Randomly rearranged displays, with lower TRP values, yielded lower detection rates. Experiment II showed that detection increased with TRP levels in nonword displays. The results support the TRP hypothesis and thus are consistent with a serial-scanning process in very short-term memory, but are also consistent with a special variant of a parallel process.
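The interletter transitional probabilities at the heart of the TRP hypothesis are just conditional bigram frequencies. A minimal sketch of how they can be estimated and used to score a display follows; the tiny corpus and function names are illustrative, not taken from the original study:

```python
from collections import Counter

def transitional_probabilities(words):
    """Estimate P(next letter | current letter) from a word corpus."""
    pair_counts = Counter()   # counts of each adjacent letter pair
    first_counts = Counter()  # counts of each letter as the first of a pair
    for w in words:
        for a, b in zip(w, w[1:]):
            pair_counts[(a, b)] += 1
            first_counts[a] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def mean_trp(display, trp):
    """Mean transitional probability across the adjacent letter pairs of a display."""
    pairs = list(zip(display, display[1:]))
    return sum(trp.get(p, 0.0) for p in pairs) / len(pairs)
```

On this scheme a pseudoword built from high-frequency letter transitions scores close to a real word, while a random rearrangement of the same letters scores lower, mirroring the detection-rate ordering in the abstract.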

10.
The split fovea theory proposes that visual word recognition of centrally presented words is mediated by the splitting of the foveal image, with letters to the left of fixation being projected to the right hemisphere (RH) and letters to the right of fixation being projected to the left hemisphere (LH). Two lexical decision experiments aimed at elucidating word recognition processes under the split fovea theory are described. The first experiment showed that when words were presented centrally, such that the initial letters were in the left visual field (LVF/RH), there were effects of orthographic neighborhood, i.e., there were faster responses to words with high rather than low orthographic neighborhoods for the initial letters ('lead neighbors'). The effect was limited to lead neighbors and did not extend to end neighbors (orthographic neighbors sharing the same final letters). When the same words were fully presented in the LVF/RH or right visual field (RVF/LH, Experiment 2), there was no effect of orthographic neighborhood size. We argue that the lack of an effect in Experiment 2 was due to exposure to all of the letters of the words, the words being matched for overall orthographic neighborhood count and the sub-parts no longer having a unique effect. We concluded that the orthographic activation found in Experiment 1 occurred because the initial letters of centrally presented words were projected to the RH. The results support the split fovea theory, where the RH has primacy in representing lead neighbors of a written word.
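Orthographic neighborhood size (often called Coltheart's N) counts the lexicon words that differ from a target by exactly one letter substitution; "lead neighbors" are the subset sharing the target's initial letters. A sketch of both counts, with a hypothetical toy lexicon, illustrates the manipulation:

```python
def orthographic_neighbors(word, lexicon):
    """Coltheart's N neighbors: same-length lexicon words differing from
    `word` by exactly one letter substitution."""
    return [w for w in lexicon
            if len(w) == len(word) and w != word
            and sum(a != b for a, b in zip(w, word)) == 1]

def lead_neighbors(word, lexicon, k=2):
    """Neighbors that also share the word's first k letters, in the spirit of
    the 'lead neighbor' manipulation described above (k is an assumption here)."""
    return [w for w in orthographic_neighbors(word, lexicon)
            if w[:k] == word[:k]]
```

Matching words on overall neighborhood count while varying the lead-neighbor subset is what lets the experiments isolate the contribution of the initial letters.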

11.
The present study investigated the impact of inter-character spacing on saccade programming in beginning readers and dyslexic children. In two experiments, eye movements were recorded while dyslexic children, reading-age, and chronological-age controls, performed an oculomotor lateralized bisection task on words and strings of hashes presented either with default inter-character spacing or with extra spacing between the characters. The results of Experiment 1 showed that (1) only proficient readers had already developed highly automatized procedures for programming both left- and rightward saccades, depending on the discreteness of the stimuli and (2) children of all groups were disrupted (i.e., had trouble landing close to the beginning of the stimuli) by extra spacing between the characters of the stimuli, and particularly for stimuli presented in the left visual field. Experiment 2 was designed to disentangle the role of inter-character spacing and spatial width. Stimuli were made the same physical length in the default and extra-spacing conditions by having more characters in the default spacing condition. Our results showed that inter-letter spacing still influenced saccade programming when controlling for spatial width, thus confirming the detrimental effect of extra spacing for saccade programming. We conclude that the beneficial effect of increased inter-letter spacing on reading can be better explained in terms of decreased visual crowding than improved saccade targeting.

12.
Single items such as objects, letters or words are often presented in the right or left visual field to examine hemispheric differences in cognitive processing. However, in everyday life, such items appear within a visual context or scene that affects how they are represented and selected for attention. Here we examine processing asymmetries for a visual target within a frame of other elements (scene). We are especially interested in whether the allocation of visual attention affects the asymmetries, and in whether attention-related asymmetries occur in scenes oriented out of alignment with the viewer. In Experiment 1, visual field asymmetries were affected by the validity of a spatial precue in an upright frame. In Experiment 2, the same pattern of asymmetries occurred within frames rotated 90 degrees on the screen. In Experiment 3, additional sources of the spatial asymmetries were explored. We conclude that several left/right processing asymmetries, including some associated with the deployment of spatial attention, can be organized within scenes, in the absence of differential direct access to the two hemispheres.

13.
14.
Right-handed adults were asked to identify bilaterally presented linguistic stimuli under three experimental conditions. In Condition A, stimuli were three-letter pronounceable nonwords (such as TUP), and subjects were asked to report them by naming them. In Condition B, stimuli were three-letter pronounceable nonwords, and subjects were asked to report them as strings of letters. In Condition C, stimuli were more or less unpronounceable letter strings (such as UTP) created by rearranging the letters of pronounceable nonwords, and subjects reported them as strings of letters. Pronounceable nonwords were found to be better identified from the right visual hemifield irrespective of the way in which they were reported. Unpronounceable letter strings did not produce any visual hemifield difference. Nonwords are of interest because they can be seen as potential words that lack both specific semantic properties and entries in the subject's internal lexicon. The results of the experiment are consistent with the view that both the left and right cerebral hemispheres are able to identify letters but the left hemisphere is more sensitive to the pronounceability of the nonwords. This may happen either because the left hemisphere can make better use of resemblances to real words or because it has access to spelling to sound correspondence rules.

15.
Under conditions of sequential presentation, two words are matched more quickly than are a single letter and the first letter of a word. An exception to this whole-word advantage was reported in 1980 by Umansky and Chambers, who used word pairs as stimuli, and asked subjects to compare the entire words or the words’ first letters. Experiment 1 showed that the stimulus lists used by Umansky and Chambers may not have constrained subjects to process the displays differently for wholistic and component comparisons. In those studies, the two words were identical on same trials for both wholistic and first-letter comparisons, so that first-letter decisions could have been based on wholistic information. In the present study, lists were constructed so that first-letter decisions could not be determined correctly by wholistic information (e.g., BLAME/BEACH), and the whole-word advantage was replicated. Experiment 2 tested whether wholistic comparisons are generally superior to component comparisons. For consonant strings, first-letter comparisons were made more quickly than were whole-string comparisons. These results are interpreted as support for hierarchical models of visual word processing.

16.
Mathey, Zagar, Doignon, and Seigneuric (2006) reported an inhibitory effect of syllabic neighbourhood in monosyllabic French words suggesting that syllable units mediate the access to lexical representations of monosyllabic stimuli. Two experiments were conducted to investigate the perception of syllable units in monosyllabic stimuli. The illusory conjunction paradigm was used to examine perceptual groupings of letters. Experiment 1 showed that potential syllables in monosyllabic French words (e.g., BI in BICHE) affected the pattern of illusory conjunctions. Experiment 2 indicated that the perceptual parsing in monosyllabic items was due to syllable information and orthographic redundancy. The implications of the data are discussed for visual word recognition processes in an interactive activation model incorporating syllable units and connected adjacent letters (IAS; Mathey et al., 2006).

17.
In three experiments, reaction times for same-different judgments were obtained for pairs of words, pronounceable nonwords (pseudowords), and unpronounceable nonwords. The stimulus strings were printed either in a single letter case or in one of several mixtures of upper- and lowercase letters. In Experiment 1, the stimuli were common one- and two-syllable words; in Experiment 2, the stimuli included both words and pseudowords; and in Experiment 3, words, pseudowords, and nonwords were used. The functional visual units for each string type were inferred from the effects that the number and placement of letter case transitions had on "same" reaction time judgments. The evidence indicated a preference to encode strings in terms of multiletter perceptual units if they are present in the string. The data also suggested that whole words can be used as functional visual units, although the extent of their use depends on contextual parameters such as knowledge that a word will be presented.

18.
Audio-visual simultaneity judgments
The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous versus successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when they originated from the same spatial position than when they came from different positions, demonstrating that the apparent perception of multisensory simultaneity is dependent on the relative spatial position from which stimuli are presented.
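A psychophysical staircase of the general kind used in Experiment 3 adjusts the stimulus onset asynchrony (SOA) trial by trial according to the observer's response. The 1-up/1-down sketch below is a simplification, not the authors' procedure; the deterministic `judge` observer and the parameter values are illustrative assumptions:

```python
def staircase_soa(judge, start=400, step=40, reversals_needed=8):
    """1-up/1-down staircase on audio-visual SOA (msec). `judge(soa)` returns
    True if the observer reports 'simultaneous' at that SOA. The staircase
    shortens the SOA after 'simultaneous' responses and lengthens it after
    'successive' responses, converging near the point where both responses
    are equally likely; the threshold is the mean SOA at the reversals."""
    soa, direction = start, -1
    reversal_soas = []
    while len(reversal_soas) < reversals_needed:
        simultaneous = judge(soa)
        # step down (shorter SOA) after 'simultaneous', up after 'successive'
        new_direction = -1 if simultaneous else +1
        if new_direction != direction:      # response category flipped: a reversal
            reversal_soas.append(soa)
            direction = new_direction
        soa = max(0, soa + direction * step)
    return sum(reversal_soas) / len(reversal_soas)
```

With a real observer, `judge` would present the stimulus pair and collect a keypress; averaging over the final reversals smooths out response noise.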

19.
Repetition blindness (Kanwisher, 1986, 1987) has been defined as the failure to detect or recall repetitions of words presented in rapid serial visual presentation (RSVP). The experiments presented here suggest that repetition blindness (RB) is a more general visual phenomenon, and examine its relationship to feature integration theory (Treisman & Gelade, 1980). Experiment 1 shows RB for letters distributed through space, time, or both. Experiment 2 demonstrates RB for repeated colors in RSVP lists. In Experiments 3 and 4, RB was found for repeated letters and colors in spatial arrays. Experiment 5 provides evidence that the mental representations of discrete objects (called "visual tokens" here) that are necessary to detect visual repetitions (Kanwisher, 1987) are the same as the "object files" (Kahneman & Treisman, 1984) in which visual features are conjoined. In Experiment 6, repetition blindness for the second occurrence of a repeated letter resulted only when the first occurrence was attended to. The overall results suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.

20.
The cerebral balance of power: confrontation or cooperation?
Two visual search experiments were carried out using as stimuli large letters made of small identical letters presented in right, or left, or central visual fields. Considering the spatial frequency contents of the stimuli as the critical variable, Experiment 1 showed that a left-field superiority could be obtained whenever a decision had to be made on a large (low frequency) letter alone, and a right-field advantage emerged when a small (high frequency) letter had to be processed. Experiment 2 showed that the two levels of structure of the stimulus were not encoded at the same rate and that at very brief exposure, only the large letter could be accurately identified. This was accompanied by a left-field superiority, whether or not the stimulus contained the target. These results are interpreted as revealing a differential sensitivity of the hemispheres to the spatial frequency contents of a visual image, the right hemisphere being more adept at processing early-available low frequencies and the left hemisphere operating more efficiently on later-available high frequencies. From these and other experiments reviewed, it is suggested that (a) cerebral lateralization of cognitive functions results from differences in sensorimotor resolution capacities of the hemispheres; (b) both hemispheres can process verbal and visuospatial information, analytically and holistically; (c) respective hemispheric competence is a function of the level of sensorimotor resolution required for processing the information available.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号