Similar Literature
20 similar articles found (search time: 15 ms)
1.
Three experiments examined the processing capacity required to use sequential information in a serial reaction time task with partially predictable sequences. The first two experiments varied the response stimulus interval (RSI) between 0 and 500 msec and found the relative advantage of the high-probability stimulus to be independent of the length of the RSI. The third experiment compared utilization of sequential information either with or without a secondary task. The secondary task did not affect responses to the high-probability stimulus but did increase the amount of time required to respond to the low-probability events. The results are discussed in terms of the attentional demands of memory access.

2.
The authors performed an experiment in which participants (N = 24) made judgments about maximum jump and reachability on ground surfaces with different elastic properties: sand and a trampoline. Participants performed judgments in two conditions: (a) while standing and after having recently jumped on the surface in question and (b) while standing on a third control surface, eliminating haptic exploration of the surface in question. There was a high correlation between perceived maximum reachable height and actual maximum reachable height in all conditions. Judging performance on the basis of visual and haptic exploration of ground surface information was slightly overestimated, whereas performance on the basis of visual information alone was underestimated and variable for the different surfaces. The authors discuss possible causes for the observed errors. They emphasize that there is a considerable nonvisual aspect to the nature of the information specifying affordances for overhead reach and jumping and that perceptual performance is degraded when spontaneous exploratory movement is restricted.

3.
Processing of temporal information: Evidence from eye movements
In two experiments, we recorded eye movements to study how readers monitor temporal order information contained in narrative texts. Participants read short texts containing critical temporal information in the sixth sentence, which could be either consistent or inconsistent with temporal order information given in the second sentence. In Experiment 1, inconsistent sentences yielded more regressions to the second sentence and longer refixations of it. In Experiment 2, this pattern of eye movements was shown only by readers who noticed the inconsistency and were able to report it. Theoretical and methodological implications of the results for research on text comprehension are discussed.

4.
With their eyes initially on either the home, midline, or final end position, 30 participants practiced a 2-target aiming movement. After 120 acquisition trials, participants performed a retention test and were then transferred to each of the other 2 eye conditions. During acquisition, all groups improved over practice, but the home group showed the greatest improvement. The temporal improvement was most pronounced in the times spent after peak velocity. Retention and transfer tests indicated that participants performed best under eye-movement conditions that were the same as the one they had practiced in. There was also positive transfer of training between conditions in which the oculomotor information was similar. Thus, to optimize learning, one should practice under the same afferent and oculomotor conditions that will be required for the final performance.

5.
In a haptic search task, one has to determine the presence of a target among distractors. It has been shown that if the target differs from the distractors in two properties, shape and texture, performance is better than in both single-property conditions (Van Polanen, Bergmann Tiest, & Kappers, 2013). The search for a smooth sphere among rough cubical distractors was faster than both the searches for a rough sphere (shape information only) and for a smooth cube (texture information only). This effect was replicated in this study as a baseline. The main focus here was to further investigate the nature of this integration. It was shown that performance is better when the two properties are combined in a single target (smooth sphere), than when located in two separate targets (rough sphere and smooth cube) that are simultaneously present. A race model that assumes independent parallel processing of the two properties could explain the enhanced performance with two properties, but this could only take place effectively when the two properties were located in a single target.
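The race-model comparison in this abstract can be illustrated with a small simulation. The sketch below is only an illustrative reconstruction, not the authors' analysis: it assumes each property (shape, texture) is detected by an independent parallel channel with a hypothetical log-normal finishing time, and that a single target carrying both properties triggers a response as soon as either channel finishes, which is what shortens search times. All numerical parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000

def channel_times(mu, sigma):
    """Hypothetical detection times (s) for one property channel."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n_trials)

shape_only   = channel_times(mu=0.2, sigma=0.4)  # e.g., rough sphere: only shape differs
texture_only = channel_times(mu=0.3, sigma=0.4)  # e.g., smooth cube: only texture differs

# Race model: with both properties in one target, the response is driven by
# whichever channel finishes first, so the predicted time is the minimum.
single_target_race = np.minimum(channel_times(0.2, 0.4), channel_times(0.3, 0.4))

print(f"mean detection time, shape only   : {shape_only.mean():.3f}")
print(f"mean detection time, texture only : {texture_only.mean():.3f}")
print(f"mean detection time, race model   : {single_target_race.mean():.3f}")
```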

6.
Subjects haptically explored two legs of a triangular path and responded by returning to the origin. Seven conditions were tested, varying in (1) whether the path was imaginally displaced between the initial exploration and the response; (2) the nature of the displacement, if present (rotation or translation); (3) variability in the origin location across trials; and (4) instructions to complete a triangle versus remembering the origin location. Mean distance and angle responses were modeled by the encoding-error model (Fujita, Klatzky, Loomis, & Golledge, 1993), which attributes errors to misencoding of the path legs and angle. The model failed to predict the finding of systematic errors in response distance but not response angle, a dissociation that held when the path was undisplaced or imaginally translated. Rotation before responding produced errors more consistent with the model. The data suggest use of a body-centered representation to complete undisplaced or imaginally translated paths, but adoption of an object-centered representation after imagined rotation, as is more consistent with pathway completion using whole-body locomotion.
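The encoding-error model referenced here reduces pathway completion to triangle geometry: whatever (possibly distorted) leg lengths and turn angle the subject has encoded determine a unique homeward distance and turn via the law of cosines and law of sines. The sketch below is an illustrative reconstruction under that assumption, not the fitted model of Fujita et al. (1993); the encoding functions are hypothetical placeholders.

```python
import math

def completion_response(leg1, leg2, turn_deg,
                        encode_len=lambda d: d, encode_ang=lambda a: a):
    """Homeward (distance, turn in degrees) implied by encoded legs and turn angle.

    encode_len / encode_ang are hypothetical encoding functions; the identity
    corresponds to error-free encoding of the explored path.
    """
    a, b = encode_len(leg1), encode_len(leg2)
    interior = math.radians(180.0 - encode_ang(turn_deg))  # triangle angle at the turn point
    # Law of cosines: length of the leg leading back to the origin.
    home_dist = math.sqrt(a**2 + b**2 - 2.0*a*b*math.cos(interior))
    # Law of sines: angle between the second leg and the homeward leg,
    # converted to the turn the responder must make at the end of leg 2.
    beta = math.asin(max(-1.0, min(1.0, a*math.sin(interior)/home_dist)))
    home_turn = 180.0 - math.degrees(beta)
    return home_dist, home_turn

# Error-free encoding of 2 m and 3 m legs joined by a 90-degree turn:
print(completion_response(2.0, 3.0, turn_deg=90.0))   # approx. (3.61, 146.3)
```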

7.
In a yes/no identification task using touch alone, subjects indicated whether an object belonged to a named category. Previously, we found that subjects explored in two stages—first grasping and lifting the object, then executing further exploratory procedures (Lederman & Klatzky, 1990b). We proposed that Stage 1 (grasp/lift) was sufficient to extract coarse information about multiple object properties, whereas Stage 2 was directed toward precise information about particularly diagnostic properties. In the current study, subjects were initially constrained to grasping and lifting, after which they could explore further. Accuracy was above chance after Stage 1, confirming our assumption that the grasp/lift combination was broadly useful. Stage 2 increased accuracy and confidence. It primarily elicited exploratory procedures associated with object geometry, but exploration was also influenced by diagnostic object properties.

8.
Two experiments were conducted to examine the role of vision in the execution of a movement sequence. Experiment 1 investigated whether individual components of a sequential movement are controlled together or separately. Participants executed a rapid aiming movement to two targets in sequence. A full vision condition was compared to a condition in which vision was eliminated while in contact with the first target. The size of the first target was constant, while the second target size was varied. Target size had an influence on movement time and peak velocity to the first target. Vision condition and target size did not affect the time spent on the first target. These results suggest that preparation of the second movement is completed before the first movement is terminated. Experiment 2 examined when this preparation occurred. A full vision condition was compared to a condition in which vision was occluded during the flight phase of the first movement. Movement initiation times were shorter when vision was continually available. Total movement time was reduced with vision in the two-target condition, but not in a control one-target condition. The time spent on the first target was greater when vision was not available during the first movement component. The results indicate that vision prior to movement onset can be used to formulate a movement plan to both targets in the sequence (Fischman & Reeve, 1992).

9.
Imagined haptic exploration in judgments of object properties
In Experiment 1, each subject rated a single, named object for its roughness, hardness, temperature, weight, size, or shape. In Experiment 2, each subject compared one pair of objects along the same dimensions. In both studies, a substantial proportion of subjects who judged the first four dimensions imagined a hand making exploratory movements appropriate for the designated information. The proportion of hand-exploration images decreased substantially when judging size or shape, or when judgments could be made readily through general semantic knowledge. The results suggest that the incorporation of haptic exploration into visual imagery provides access to information about haptically accessible object properties.

10.
Hand movements: a window into haptic object recognition

11.
Preschoolers who explore objects haptically often fail to recognize those objects in subsequent visual tests. This suggests that children may represent qualitatively different information in vision and haptics and/or that children’s haptic perception may be poor. In this study, 72 children (2½-5 years of age) and 20 adults explored unfamiliar objects either haptically or visually and then chose a visual match from among three test objects, each matching the exemplar on one perceptual dimension. All age groups chose shape-based matches after visual exploration. Both 5-year-olds and adults also chose shape-based matches after haptic exploration, but younger children did not match consistently in this condition. Certain hand movements performed by children during haptic exploration reliably predicted shape-based matches but occurred at very low frequencies. Thus, younger children’s difficulties with haptic-to-visual information transfer appeared to stem from their failure to use their hands to obtain reliable haptic information about objects.

12.
In a yes/no identification task using touch alone, subjects indicated whether an object belonged to a named category. Previously, we found that subjects explored in two stages--first grasping and lifting the object, then executing further exploratory procedures (Lederman & Klatzky, 1990b). We proposed that Stage 1 (grasp/lift) was sufficient to extract coarse information about multiple object properties, whereas Stage 2 was directed toward precise information about particularly diagnostic properties. In the current study, subjects were initially constrained to grasping and lifting, after which they could explore further. Accuracy was above chance after Stage 1, confirming our assumption that the grasp/lift combination was broadly useful. Stage 2 increased accuracy and confidence. It primarily elicited exploratory procedures associated with object geometry, but exploration was also influenced by diagnostic object properties.

13.
It is known that the logic BI of bunched implications is a logic of resources. Many studies have reported on the applications of BI to computer science. In this paper, an extension BIS of BI, obtained by adding a sequence modal operator, is introduced and studied in order to formalize more fine-grained resource-sensitive reasoning. By the sequence modal operator of BIS, we can appropriately express “sequential information” in resource-sensitive reasoning. A Gentzen-type sequent calculus SBIS for BIS is introduced, and the cut-elimination and decidability theorems for SBIS are proved. An extension of the Grothendieck topological semantics for BI is introduced for BIS, and the completeness theorem with respect to this semantics is proved. The cut-elimination, decidability and completeness theorems for SBIS and BIS are proved using some theorems for embedding BIS into BI.
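For readers unfamiliar with BI, its resource reading comes from having two ways of combining hypotheses: a multiplicative comma (resources placed side by side) and an additive semicolon (resources shared). As a rough orientation only, the standard right rules for BI's multiplicative connectives are sketched below; the sequent rules governing the sequence modal operator of BIS are specific to the paper and are not reproduced here.

```latex
% Standard BI right rules for the multiplicative connectives (orientation only;
% the BIS rules for the sequence modal operator are not shown).
\[
\frac{\Delta_1 \vdash A \qquad \Delta_2 \vdash B}
     {\Delta_1 \,,\, \Delta_2 \vdash A \ast B}\;(\ast\mathrm{R})
\qquad\qquad
\frac{\Delta \,,\, A \vdash B}
     {\Delta \vdash A \mathrel{-\!\!\ast} B}\;(\mathord{-\!\!\ast}\mathrm{R})
\]
```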

14.
15.
16.
Effectively executing goal-directed behaviours requires both temporal and spatial accuracy. Previous work has shown that providing auditory cues enhances the timing of upper-limb movements. Interestingly, alternate work has shown beneficial effects of multisensory cueing (i.e., combined audiovisual) on temporospatial motor control. As a result, it is not clear whether adding visual to auditory cues can enhance the temporospatial control of sequential upper-limb movements specifically. The present study utilized a sequential pointing task to investigate the effects of auditory, visual, and audiovisual cueing on temporospatial errors. Eighteen participants performed pointing movements to five targets representing short, intermediate, and large movement amplitudes. Five isochronous auditory, visual, or audiovisual priming cues were provided to specify an equal movement duration for all amplitudes prior to movement onset. Movement time errors were then computed as the difference between actual and predicted movement times specified by the sensory cues, yielding delta movement time errors (ΔMTE). It was hypothesized that auditory-based (i.e., auditory and audiovisual) cueing would yield lower movement time errors compared to visual cueing. The results showed that providing auditory relative to visual priming cues alone reduced ΔMTE particularly for intermediate amplitude movements. The results further highlighted the beneficial impact of unimodal auditory cueing for improving visuomotor control in the absence of significant effects for the multisensory audiovisual condition.
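The error measure described above has a simple form: each isochronous cue train specifies one intended movement time, and ΔMTE is the signed difference between the produced and the cued duration. The notation below is ours, not the authors':

```latex
% Signed movement-time error for movement segment i (notation assumed):
% MT_i is the produced movement time and MT_cue the duration specified by the
% isochronous priming cues (identical across movement amplitudes).
\[
\Delta\mathrm{MTE}_i \;=\; \mathrm{MT}_i - \mathrm{MT}_{\mathrm{cue}}
\]
```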

17.
The time adult Ss were allowed to explore stimuli was varied during intra- and cross-modal equivalence matching involving vision and touch. Increasing time to explore either each standard, each comparison, or both standard and comparison from 4 to 16 sec significantly improved haptic intramodal matching. However, cross-modal matching, from either vision to touch or touch to vision, improved significantly only when time to explore each standard was increased. Videotape recordings of Ss’ hand movements revealed use of a greater variety of haptic scanning strategies by Ss in groups where increased exploration time enhanced accuracy. The difference in effects of exploration time on intra- compared to cross-modal shape matching was discussed in terms of possible differences in requirements between the two tasks.

18.
To investigate how tactile and proprioceptive information are used in haptic object discrimination, we conducted a haptic search task in which participants had to search for either a cylinder, a bar or a rotated cube within a grid of aligned cubes. Tactile information from one finger is enough to detect a cylinder amongst the cubes. For detecting a bar or a rotated cube amongst cubes, touch alone is not enough. For the rotated cube this is evident because its shape is identical to that of the non-targets, so proprioception must provide information about the orientation of the fingers and hand when touching it. For the bar one either needs proprioceptive information about the distance and direction of a single finger’s movements along the surfaces, or proprioceptive information from several fingers when they touch it simultaneously. When using only one finger, search times for the bar were much longer than those for the other two targets. When the whole hand or both hands were used the search times were similar for all shapes. Most errors were made when searching for the rotated cube, probably due to systematic posture-related biases in judging orientation on the basis of proprioception. The results suggest that tactile and proprioceptive information are readily combined for shape discrimination.

19.
A series of experiments was conducted in which a word initially appeared in parafoveal vision, followed by the subject's eye movement to the stimulus. During the eye movement, the initially displayed word was replaced by a word which the subject read. Under certain conditions, the prior parafoveal word facilitated naming the foveal word. Three alternative hypotheses were explored concerning the nature of the facilitation. The verbalization hypothesis suggests that information acquired from the parafoveal word permits the subject to begin to form the speech musculature properly for saying the word. The visual features integration hypothesis suggests that visual information obtained from the parafoveal word is integrated with foveal information after the saccade. The preliminary letter identification hypothesis suggests that some abstract code about the letters of the parafoveal word is stored and integrated with information available in the fovea after the saccade. The results of the experiments supported the latter hypothesis in that information about the beginning letters of words was facilitatory in the task. The other two hypotheses were disconfirmed by the results of the experiments.

20.
The present study examined haptic and visual memory capacity for familiar objects through the application of an intentional free-recall task with three time intervals in a sample of 78 healthy older adults without cognitive impairment. A wooden box and a turntable were used for the presentation of haptic and visual stimuli, respectively. The procedure consisted of two phases, a study phase that consisted of the presentation of stimuli, and a test phase (free-recall task) performed after one hour, one day or one week. The analysis of covariance (ANCOVA) indicated that there was a main effect only for the time intervals (F(2, 71) = 12.511, p = .001, η² = 0.261), with a lower recall index for the interval of one week compared to the other intervals. We concluded that the memory capacity between the systems (haptic and visual) is similar for long retrieval intervals (hours to days).
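For reference, the reported effect size is an eta-squared statistic. The definitions below are the standard ones, not quoted from the paper; the reported value of 0.261 is numerically consistent with the partial variant recovered from the F ratio and its degrees of freedom, since 12.511 × 2 / (12.511 × 2 + 71) ≈ 0.261.

```latex
% Standard effect-size definitions (not quoted from the paper):
\[
\eta^2 = \frac{SS_{\text{effect}}}{SS_{\text{total}}},
\qquad
\eta^2_p = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
         = \frac{F \cdot df_1}{F \cdot df_1 + df_2}.
\]
```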
