20 similar documents found; search took 15 ms
1.
We examined the mechanisms that mediate the transfer of information from visual input to storage in memory. Observers performed two concurrent tasks, one of which required input into memory. We discovered that the processes involved in the transfer of information from sensory input into memory cause slowing in concurrent cognitive tasks (dual-task slowing). We used the dual-task slowing effect to demonstrate that memory encoding requires more time when more information is to be encoded and to show that dual-task slowing occurs long after the initial perceptual encoding of visual information (Exp. 1). These results suggest a late and central locus of interaction between the two tasks. Experiment 2 also used two concurrent tasks. However, we reversed the direction of interaction between them and produced a memory deficit from the execution of a concurrent task. Together the results suggest that the mechanisms that encode information into memory belong to a family of mechanisms that are involved in dual-task slowing phenomena and that have been studied under the rubric of the PRP (psychological refractory period) effect. We traced the most probable locus of the dual-task interaction to a process that appears necessary for memory encoding. We call this process short-term consolidation. Received: 20 July 1997 / Accepted: 17 March 1998
2.
Given a constant stream of perceptual stimuli, how can the underlying invariances associated with a given input be learned? One approach consists of using generic truths about the spatiotemporal structure of the physical world as constraints on the types of quantities learned. The learning methodology employed here embodies one such truth: that perceptually salient properties (such as stereo disparity) tend to vary smoothly over time. Unfortunately, the units of an artificial neural network tend to encode superficial image properties, such as individual grey-level pixel values, which vary rapidly over time. However, if the states of units are constrained to vary slowly, then the network is forced to learn a smoothly varying function of the training data. We implemented this temporal-smoothness constraint in a backpropagation network which learned stereo disparity from random-dot stereograms. Temporal smoothness was formalized with the use of regularization theory by modifying the standard cost function minimised during training of a network. Temporal smoothness was found to be similar to other techniques for improving generalisation, such as early stopping and weight decay. However, in contrast to these, the theoretical underpinnings of temporal smoothing are intimately related to fundamental characteristics of the physical world. Results are discussed in terms of regularization theory and the physically realistic assumptions upon which temporal smoothing is based.
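As a rough illustration of how such a constraint can enter the training objective, the cost below adds a penalty on frame-to-frame changes in unit states to the usual output-error term; the notation (d_t, y_t, h_t, λ) is an assumption made for this sketch, not the notation of the original paper.

```latex
% Illustrative regularized cost: output error plus a temporal-smoothness penalty
% on unit states h_t across consecutive frames t (notation assumed for this sketch).
\[
E = \sum_{t} \lVert \mathbf{d}_t - \mathbf{y}_t \rVert^{2}
  + \lambda \sum_{t} \lVert \mathbf{h}_t - \mathbf{h}_{t-1} \rVert^{2}
\]
```

Minimising the first term fits the disparity targets; minimising the second discourages unit states that change rapidly between consecutive stereograms, which is the temporal-smoothness assumption described above.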
3.
4.
Irwin Pollack. Attention, Perception & Psychophysics, 1973, 13(2): 276-280
Discrimination thresholds were measured for third-order Markov constraints within visual displays. The method permits the cancellation of adjacent second-order differences and of first-order differences. Excellent discrimination of third-order constraints is obtained despite the fact that the average conditional repetition probability and the average run length were invariant. The need to distinguish between local short-term and average long-term analyses for the visual system is briefly discussed.
5.
Irwin Pollack. Attention, Perception & Psychophysics, 1971, 9(6): 461-464
A method is described for constructing visual displays in which statistical constraints are encoded within two spatial dimensions without introducing one-dimensional linear constraints. Within each local group of four elements, the state of one element was determined with a given probability by the previously generated states of the other three. Ss rated such displays on a scale from “lumpy” (or crude texture) to “lacy” (or even texture). The consistency of classification obtained for displays with strong aggregated (“lumpy”) properties was substantially higher than that obtained for displays with strong distributed (“lacy”) properties. An incidental feature of the Ss’ behavior was their deliberate degrading of the visual quality of the displays. Comparison is made with one-dimensional displays concatenated in two dimensions.
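A minimal sketch of how such a display might be generated, assuming, purely for illustration, that "determined by" means taking the parity of the three previously generated neighbours; the rule, function name, and parameters below are assumptions, not the construction used in the original study.

```python
import numpy as np

def constrained_display(rows, cols, p, rng=None):
    """Generate a binary display in which, within each local group of four
    elements, the new element is set from its three previously generated
    neighbours (left, above, above-left) with probability p, and is random
    otherwise.  The specific rule used here (parity of the neighbours) is an
    illustrative assumption, not the rule from the original study."""
    rng = np.random.default_rng() if rng is None else rng
    d = rng.integers(0, 2, size=(rows, cols))        # first row/column stay random
    for r in range(1, rows):
        for c in range(1, cols):
            if rng.random() < p:                     # constrained with probability p
                d[r, c] = (d[r, c - 1] + d[r - 1, c] + d[r - 1, c - 1]) % 2
    return d

# Example: a strongly constrained 32 x 32 display
display = constrained_display(32, 32, p=0.9)
```

The sketch tries to capture the abstract's point that the constraint is defined over local two-dimensional groups of four elements rather than along any single row or column.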
6.
7.
D. Elliott. Canadian Journal of Psychology, 1988, 42(1): 57-68
8.
Brady TF, Chun MM. Journal of Experimental Psychology: Human Perception and Performance, 2007, 33(4): 798-815
Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using a connectionist architecture and then designed new behavioral experiments to test the model's assumptions. The modeling and behavioral results indicate that learning may be restricted to the local context even when the entire configuration is predictive of target location. Local learning constrains how much guidance is produced by contextual cuing. The modeling and new data also demonstrate that local learning requires that the local context maintain its location in the overall global context.
9.
Irwin Pollack. Acta Psychologica, 1973, 37(2): 107-115
A contingent uncertainty of one bit is perceptible when imposed upon a combination of two binary-coded visual display variables, but not when imposed upon a combination of three variables. Why? The limitation may be sought in the average amount of the constraint, in the form of the constraint, or in the particular selection of display variables. Tests were carried out in which apparently equivalent informational constraints were imposed upon a single display variable. Such constraints were highly discriminable. Further tests reveal that the limiting feature for the detection of multi-variate constraints is probably the mean constraint level, averaged over all display elements, rather than the constraint level imposed upon the constrained display elements.
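For orientation, contingent uncertainty in this literature is the information shared between display variables; the formula below is the standard two-variable case (the notation is assumed for this note, not taken from the paper).

```latex
% Contingent uncertainty (shared information) between two display variables X and Y;
% with binary variables, T(X;Y) = 1 bit means one variable fully determines the other.
\[
T(X;Y) = H(X) + H(Y) - H(X,Y)
\]
```

With two binary variables, a one-bit contingent uncertainty makes one variable completely predictable from the other; spreading the same nominal one bit across three binary variables leaves each pairwise relation weaker, which is one way to frame the question the abstract raises.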
10.
Smith MC, Bentin S, Spalek TM. Journal of Experimental Psychology: Learning, Memory, and Cognition, 2001, 27(5): 1289-1298
The reduction of semantic priming following letter search of the prime suggests that semantic activation can be blocked if attention is allocated to the letter level during word processing. Is this true even for the very fast-acting component of semantic activation? To test this, the authors explored semantic priming of lexical decision at stimulus onset asynchronies (SOAs) of either 200 or 1,000 ms. Following semantic prime processing, priming occurred at both SOAs. In contrast, no priming occurred at the long SOA following letter-level processing. Of greatest interest, at the short SOA there was priming following the less demanding consonant/vowel task but not following the more attention-demanding letter search task. Hence, semantic activation can occur even when attention is directed to the letter level, provided there are sufficient resources to support this activation. The authors conclude that the default setting during word recognition is for fast-acting activation of the semantic system.
11.
In the present study, we investigated the contributions of motor and perceptual processes to directional constraints as observed during hand-foot coordination. Participants performed cyclical flexion-extension movements of the right hand and foot under two coordination modes: in-phase (isodirectional) and antiphase (non-isodirectional). Those tasks were performed either with full vision or no vision of the limbs. Depending on the position of the forearm (prone or supine), the coordination patterns were performed with similar and dissimilar neuro-muscular coupling with respect to their phylogenetic origin as antigravity muscles. Results showed that the antiphase pattern was more difficult to maintain than the in-phase pattern and that neuro-muscular coupling significantly influenced the coordination dynamics. Moreover, the effect of vision differed as a function of both neuro-muscular coupling and coordination mode. Under dissimilar neuro-muscular coupling, the presence of visual feedback stabilized the in-phase pattern and destabilized the antiphase pattern. In contrast, visual feedback did not influence pattern stability during conditions of similar neuro-muscular coupling. These results shed light on the complex interactions between motor and perceptual (visual) constraints during the production of hand-foot coordination patterns.
12.
“Upfixes” are “visual morphemes” originating in comics, in which an element floats above a character’s head (e.g., lightbulbs or gears). We posited that, similar to constructional lexical schemas in language, upfixes use an abstract schema stored in memory, which constrains upfixes to locations above the head and requires them to “agree” with their accompanying facial expressions. We asked participants to rate and interpret both conventional and unconventional upfixes that either matched or mismatched their facial expression (Experiment 1) and/or were placed either above or beside the head (Experiment 2). Interpretations and ratings of conventionality and face–upfix matching (Experiment 1), along with overall comprehensibility (Experiment 2), suggested that both constraints operated on upfix understanding. Because these constraints modulated both conventional and unconventional upfixes, the findings support the view that an abstract schema stored in long-term memory allows for generalisations beyond memorised individual items.
13.
The authors investigated how and to what extent visual information and associated task constraints are negotiated in the coordinative structure of playground swinging. Participants (N = 20) were invited to pump a swing from rest to a prescribed maximal amplitude under 4 conditions: normal vision, no vision, and 2 visual conditions involving explicit phasing constraints. In the latter conditions, participants were presented with a flow pattern consisting of a periodically expanding and contracting optical structure. They were instructed to phase the swing motion so that the forward turning point coincided with either the maximal size (enhanced optical flow) or the minimal size (reduced optical flow) of the presented flow pattern. Removal of visual information clearly influenced the swinging behavior, in that intersegmental coordination became more stereotyped, reflecting a general stiffening of the swinger. The conditions involving explicit phasing requirements also affected the coordination, but in an opposite way: The coordination became less stereotyped. The two phasing instructions had differential effects: The intersegmental coordination deviated more from normal swinging (i.e., without phasing constraints) when optical flow was enhanced than when it was reduced. Collectively, those findings show that visual information plays a formative role in the coordinative structure of swinging, in that variations of visual information and task constraints were accompanied by subtle yet noticeable changes in intersegmental coordination.
14.
The furthest distance that is judged to be reachable can change after participants have used a tool or if they are led to misjudge the position of their hand. Here we investigated how judged reachability changed when visual feedback about the hand was shifted. We hoped to distinguish between various ways in which visuomotor adaptation could influence judged reachability. Participants had to judge whether they could reach a virtual cube without actually doing so. They indicated whether they could reach this virtual cube by moving their hand. During these hand movements, visual feedback about the position of the hand was shifted in depth, either away from or toward the participant. Participants always adapted to the shifted feedback. In a session in which the hand movements in the presence of visual feedback were mainly in depth, perceived reachability shifted in accordance with the feedback (more distant cubes were judged to be reachable when feedback was shifted further away). In a second session in which the hand movements in the presence of visual feedback were mainly sideways, for some participants perceived reachability shifted in the direction opposite to what we expected. The shift in perceived reachability was not correlated with the adaptation to the shift in visual feedback. We conclude that reachability judgments are not directly related to visuomotor adaptation.
15.
16.
17.
Bradley S. Gibson, Matthias Scheutz, Gregory J. Davis. Attention, Perception & Psychophysics, 2009, 71(2): 363-374
Humans routinely use spatial language to control the spatial distribution of attention. In so doing, spatial information may be communicated from one individual to another across opposing frames of reference, which in turn can lead to inconsistent mappings between symbols and directions (or locations). These inconsistencies may have important implications for the symbolic control of attention because they can be translated into differences in cue validity, a manipulation that is known to influence the focus of attention. This differential validity hypothesis was tested in Experiment 1 by comparing spatial word cues that were predicted to have high learned spatial validity (“above/below”) and low learned spatial validity (“left/right”). Consistent with this prediction, when two measures of selective attention were used, the results indicated that attention was less focused in response to “left/right” cues than in response to “above/below” cues, even when the actual validity of each of the cues was equal. In addition, Experiment 2 predicted that spatial words such as “left/right” would have lower spatial validity than would other directional symbols that specify direction along the horizontal axis, such as “←/→” cues. The results were also consistent with this hypothesis. Altogether, the present findings demonstrate important semantic-based constraints on the spatial distribution of attention.
18.
Georgije Lukatela, Katerina Lukatela, M. T. Turvey. Attention, Perception & Psychophysics, 1993, 53(5): 461-466
If the phonological codes of visually presented words are assembled rapidly and automatically for use in lexical access, then words that sound alike should induce similar activity within the internal lexicon. Towed is homophonous with toad, which is semantically related to frog, and beach is homophonous with beech, which is semantically related to tree. Stimuli such as these were used in a priming-of-naming task, in which words homophonous with associates of the target words preceded the targets at an onset asynchrony of 100 msec. Relative to spelling controls (trod, bench), the low-frequency towed and the high-frequency beach speeded up the naming of frog and tree, respectively, to the same degree. This result was discussed in relation to the accumulating evidence for the primacy of phonological constraints in visual lexical access. This research was supported in part by National Institute of Child Health and Human Development Grants HD-08945 and HD-01994 to the first author and Haskins Laboratories, respectively.
19.
The influence of visual perception of self-motion on locomotor adaptation to unilateral limb loading
Self-perception of motion through visual stimulation may be important for adapting to locomotor conditions. Unilateral limb loading is a locomotor condition that can improve stability and reduce abnormal limb movement. In the present study, the authors investigated the effect of self-perception of motion through virtual reality (VR) on adaptation to unilateral limb loading. Healthy young adults, assigned to either a VR or a non-VR group, walked on a treadmill in three locomotor task periods: no load, loaded, and load removed. Subjects in the VR group viewed a virtual corridor during treadmill walking. Exposure to VR reduced cadence and muscle activity. During the loaded period, the swing time of the unloaded limb showed a larger increase in the VR group. When the load was removed, the swing time of the previously loaded limb and the stance time of the previously unloaded limb showed larger decreases, and the swing time of the previously unloaded limb showed a smaller increase, in the VR group. A lack of visual cues may cause the adoption of cautious strategies (higher muscle activity, shorter and more frequent steps, changes in the swing and stance times) when faced with situations that require adaptation. VR technology, by providing such perceptual cues, has an important role in enhancing locomotor adaptation.