Similar Literature
Found 20 similar articles (search time: 0 ms)
1.
2.

We investigated whether the moment at which an event is perceived depends on its temporal context. Participants learned a mapping between time and space by watching the hand of a clock rotate a full revolution in a fixed duration. The hand was then removed, and a target disc was flashed within a fixed-duration interval. Participants were to indicate where the hand would have been at the time of the target. In three separate experiments, we estimated the disruption from a distractor disc that was presented before or after the target disc, with a variable time between them. The target was either revealed at the end of the trial or cued beforehand, and in the latter case it was cued by either color or temporal order. We found an attraction to the presentation time of the distractor when both events were attended equally (target revealed at the end). When the target was cued beforehand, the reported time was under- or overestimated, depending on whether the nature of the distractor had to be decoded (precued by color) or not (precued by order). In summary, the perceived time of an event is always affected by other events in temporal proximity, but the nature of this effect depends on how each event is attended.
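Since the paradigm maps elapsed time onto the angular position of a clock hand, a reported hand position can be converted back into a perceived time. A minimal sketch of that conversion is given below; the revolution duration and the function names are illustrative assumptions, not values taken from the study.

    # Convert between elapsed time and clock-hand angle for a hand that completes
    # one full revolution in a fixed duration (duration assumed for illustration).
    REVOLUTION_S = 3.0  # assumed duration of one revolution, in seconds

    def time_to_angle(t_s: float) -> float:
        """Angle (degrees, 0 = twelve o'clock) reached by the hand after t_s seconds."""
        return (t_s / REVOLUTION_S) * 360.0 % 360.0

    def angle_to_time(angle_deg: float) -> float:
        """Perceived time implied by a reported hand angle."""
        return (angle_deg % 360.0) / 360.0 * REVOLUTION_S

    # A target flashed 1.2 s into the interval corresponds to a hand angle of 144 degrees;
    # a report of 160 degrees would imply a perceived time of about 1.33 s,
    # i.e. a reported time attracted toward a later-occurring distractor.
    print(time_to_angle(1.2))    # 144.0
    print(angle_to_time(160.0))  # ~1.33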


3.
The present study examined predictors of siblings' relations in 202 young adults (aged 21-32 years), who completed the Adult Sibling Relationship Questionnaire and the Narcissistic Personality Inventory. Results indicate that warmth between siblings is explained by gender (with women feeling closer), perceived paternal favoritism, low levels of narcissism, and an interaction suggesting that paternal favoritism moderates the link between narcissism and sibling warmth. Conflict between siblings is explained by gender (sisters), age, parental favoritism, high levels of narcissism, extreme levels of similarity or dissimilarity between siblings, and interactions indicating that older age predicts conflict between siblings among women but not among men. The impact of parental favoritism and narcissism on sibling relationships in young adulthood is discussed.

4.
5.
Comparison time for pairs of vertical-line stimuli, sufficiently different that they can be errorlessly discriminated with respect to visual extent, was examined as a function of arithmetic relations (physical ratio and difference) on members of the pair. Arithmetic relations are coded very precisely by judgment time: responses slow as stimulus ratios approach one with difference fixed, and as stimulus differences approach zero with ratio fixed. Most models that assume a simple (Difference or Ratio) resolution rule operating on independent sensations require judgment time to depend on either ratios or differences, but not on both. Further tests showed that both an index based on median judgment times and a confusion index based on pairs of observed judgment times satisfied the requirements for a Positive Difference Structure. One representation of these data, which remains acceptable through all analyses, is a Difference resolution rule operating on sensations determined by a power psychophysical function with β < 1. Specifically, L(x, y) = F{ψ(x) − ψ(y)} + R, where L(x, y) is the judgment time for the stimulus pair x and y, ψ(x) = Ax^β + C, R is a positive constant, and F is a continuous monotone decreasing function.
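The closing sentence states the model compactly; a brief numerical sketch may make it more concrete. Because ψ is a compressive power function (β < 1), a fixed physical difference yields a smaller sensation difference, and hence a slower predicted response, when the stimulus ratio is closer to one, so both ratio and difference effects follow from a single difference rule. The parameter values and the particular decreasing function F used below are illustrative assumptions, not values fitted in the paper.

    # Sketch of L(x, y) = F{psi(x) - psi(y)} + R with psi(x) = A * x**beta + C.
    # All numbers are illustrative assumptions, not fitted estimates.
    A, BETA, C = 1.0, 0.5, 0.0   # power psychophysical function with beta < 1
    R = 250.0                    # positive constant (ms)
    K = 400.0                    # scale of the decreasing function F

    def psi(x: float) -> float:
        """Sensation produced by a line of length x (power law)."""
        return A * x**BETA + C

    def judgment_time(x: float, y: float) -> float:
        """Predicted comparison time; F is chosen here as K / (1 + d)."""
        d = abs(psi(x) - psi(y))
        return K / (1.0 + d) + R

    # Same physical difference (20 units), different ratios:
    print(judgment_time(40, 20))    # ratio 2:1   -> ~390 ms (faster)
    print(judgment_time(120, 100))  # ratio 1.2:1 -> ~455 ms (slower), because psi is compressive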

6.
Previous work has generated inconsistent results regarding the extent to which working memory (WM) content guides visual attention. Some studies found effects of easy-to-verbalize stimuli, whereas others found an influence only of visual memory content. To resolve this, we compared the time courses of memory-based attentional guidance for different memory types. Participants first memorized a colour, which was either easy or difficult to verbalize. They then looked for an unrelated target in a visual search display and finally completed a memory test. One of the distractors in the search display could have the memorized colour. We varied the time between the to-be-remembered colour and the search display, as well as the ease with which the colours could be verbalized. We found that the influence of easy-to-verbalize WM content on visual search decreased with increasing time, whereas the influence of visual WM content was sustained. However, visual working memory effects on attention also decreased when the duration of visual encoding was limited by an additional task or when the memory item was presented only briefly. We propose that for working memory effects on visual attention to be sustained, a sufficiently strong visual representation is necessary.

7.
Recognition memory for shapes has been shown to depend on differences between the size of shapes at the time of encoding and at the time of the memory test (Jolicoeur, 1987). Experiment 1 of the present paper replicates this effect and establishes a set of parameters used in the subsequent experiments. Experiment 2 considers the results of Experiment 1 in light of the distinction between "perceived" size, which, under normal viewing conditions, varies minimally with changes in distance between the observer and object, and "retinal" size, which varies inversely with viewing distance as an object is moved closer to or farther from an observer. Subjects studied novel shapes and performed a recognition memory test in which the distance from the subject to the viewing screen at the time of testing was different from that at the time of encoding. The viewing distance and the size of the shapes were manipulated such that perceived and retinal sizes were dissociated. The results suggest that the size-congruency effect in memory for visual shape occurs as a result of changes in the perceived size of shapes between the encoding and the testing phases, with little or no contribution of retinal size per se.
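The perceived/retinal dissociation rests on simple viewing geometry: the retinal (angular) size of a shape shrinks as viewing distance grows, while its perceived size stays roughly constant under size constancy. A minimal sketch of that geometry follows, using made-up sizes and distances rather than the experiment's actual values.

    import math

    def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
        """Retinal (angular) size of an object: 2 * arctan(size / (2 * distance))."""
        return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

    # Doubling both the physical size and the viewing distance (assumed values)
    # leaves retinal size unchanged while perceived size roughly doubles:
    print(visual_angle_deg(5.0, 57.0))    # ~5.0 degrees
    print(visual_angle_deg(10.0, 114.0))  # ~5.0 degrees again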

8.
A longstanding issue is whether perception and mental imagery share similar cognitive and neural mechanisms. To cast further light on this problem, we compared the effects of real and mentally generated visual stimuli on simple reaction time (RT). In five experiments, we tested the effects of differences in luminance, contrast, spatial frequency, motion, and orientation. With the intriguing exception of spatial frequency, perception and imagery showed qualitatively similar effects in all other tasks. An increase in luminance, contrast, and visual motion yielded a decrease in RT for both visually presented and imagined stimuli. In contrast, gratings of low spatial frequency were responded to more quickly than those of higher spatial frequency only for visually presented stimuli. Thus, the present study shows that basic stimulus attributes exert similar effects on visual RT whether the stimuli are retinally presented or imagined. Of course, this evidence does not necessarily imply analogous mechanisms for perception and imagery, and a note of caution in this respect is suggested by the large difference in RT between the two operations. However, the present results undoubtedly provide support for some overlap between the structural representations underlying perception and imagery.

9.

10.
The ability of the visual system to localize objects is one of its most important functions and yet remains one of the least understood, especially when either the object or the surrounding scene is in motion. The specific process that assigns positions under these circumstances is unknown, but two major classes of mechanism have emerged: spatial mechanisms that directly influence the coded locations of objects, and temporal mechanisms that influence the speed of perception. Disentangling these mechanisms is one of the first steps towards understanding how the visual system assigns locations to objects when there are motion signals present in the scene.

11.
12.
It is well established that visual search becomes harder when the similarity between target and distractors is increased and the similarity between distractors is decreased. However, in models of visual search, similarity is typically treated as a static, time-invariant property of the relation between objects. Data from other perceptual tasks (e.g., categorization) demonstrate that similarity is dynamic and changes as perceptual information is accumulated (Lamberts, 1998). In three visual search experiments, the time course of target-distractor similarity effects and distractor-distractor similarity effects was examined. A version of the extended generalized context model (EGCM; Lamberts, 1998) provided a good account of the time course of the observed similarity effects, supporting the notion that similarity in search is dynamic. Modeling also indicated that increasing distractor homogeneity influences both perceptual and decision processes by (respectively) increasing the rate at which stimulus features are processed and enabling strategic weighting of stimulus information.
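The EGCM idea invoked here is that similarity is computed only over the stimulus dimensions that have been perceptually processed so far, so target-distractor similarity changes as processing time accumulates. A rough sketch of that idea follows; the exponential inclusion-rate formulation, parameter values, and function names are simplifying assumptions for illustration, not the model as fitted in the paper.

    import math

    def expected_similarity(x, y, rates, c=1.0, t=0.5):
        """Expected similarity of items x and y after t seconds of processing.

        Each dimension is assumed to have been sampled with probability
        1 - exp(-rate * t); unsampled dimensions contribute no distance, so items
        look similar early in processing and become differentiated over time.
        """
        dist = 0.0
        for xi, yi, q in zip(x, y, rates):
            p_included = 1.0 - math.exp(-q * t)
            dist += p_included * abs(xi - yi)
        return math.exp(-c * dist)

    target, distractor = (0.9, 0.2), (0.7, 0.8)
    rates = (3.0, 3.0)  # assumed processing rates per dimension
    for t in (0.05, 0.2, 1.0):
        print(t, round(expected_similarity(target, distractor, rates, t=t), 3))
    # Similarity starts high (~0.90) and falls (~0.47) as feature information accumulates.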

13.
14.
The perceived spatial frequency of a visual pattern can increase when a pattern drifts or is presented at a peripheral visual field location, as compared with a foveally viewed, stationary pattern. We confirmed previously reported effects of motion on foveally viewed patterns and of location on stationary patterns and extended this analysis to the effect of motion on peripherally viewed patterns and the effect of location on drifting patterns. Most central to our investigation was the combined effect of temporal modulation and spatial location on perceived spatial frequency. The group data, as well as the individual sets of data for most observers, are consistent with the mathematical concept of separability for the effects of temporal modulation and spatial location on perceived spatial frequency. Two qualitative psychophysical models suggest explanations for the effects. Both models assume that the receptive-field sizes of a set of underlying psychophysical mechanisms monotonically change as a function of temporal modulation or visual field location, whereas the perceptual labels attached to a set of channels remain invariant. These models predict that drifting or peripheral viewing of a pattern will cause a shift in the perceived spatial frequency of the pattern to a higher apparent spatial frequency.
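Separability, as used here, means that the effects of temporal modulation and of spatial location on perceived spatial frequency combine independently; one common formalization is multiplicative. A toy illustration of that property is sketched below, with invented gain values that are not estimates from the study.

    # Toy separable model: perceived spatial frequency equals the physical frequency
    # scaled independently by a motion factor and a location factor (invented values).
    motion_gain = {"stationary": 1.00, "drifting": 1.10}
    location_gain = {"fovea": 1.00, "periphery": 1.15}

    def perceived_sf(true_sf_cpd, motion, location):
        return true_sf_cpd * motion_gain[motion] * location_gain[location]

    # Under separability, the drifting/stationary ratio is the same at every location:
    for loc in ("fovea", "periphery"):
        ratio = perceived_sf(4.0, "drifting", loc) / perceived_sf(4.0, "stationary", loc)
        print(loc, round(ratio, 3))  # 1.10 in both cases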

15.
Patel, M., & Chait, M. (2011). Cognition, 119(1), 125-130.
Accurately timing acoustic events in dynamic scenes is fundamental to scene analysis. To detect events in busy scenes, listeners must often identify a change in the pattern of ongoing fluctuation, resulting in many ubiquitous events being detected later than when they occurred. This raises the question of how delayed detection time affects the manner in which such events are perceived relative to other events in the environment. To model these situations, we use sequences of tone-pips with a time–frequency pattern that changes from regular to random (‘REG–RAND’) or vice versa (‘RAND–REG’). REG–RAND transitions are detected rapidly, but the emergence of regularity cannot be established immediately, and thus RAND–REG transitions take significantly longer to detect. Using a temporal order judgment task, and a light-flash as a temporal marker, we demonstrate that listeners do not perceive the onset of RAND–REG transitions at the point of detection (∼530 ms post transition), but automatically re-adjust their estimate ∼300 ms closer to the nominal transition. These results demonstrate that the auditory system possesses mechanisms that survey the proximal history of an ongoing stimulus and automatically adjust perception to compensate for prolonged detection time, allowing listeners to build meaningful representations of the environment.
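The REG–RAND and RAND–REG stimuli described above are sequences of short tone pips whose frequencies either cycle through a fixed set (regular) or are drawn at random from a pool. The sketch below shows one way such a frequency sequence might be generated; the pip count, frequency pool, and cycle length are assumptions for illustration, not the exact parameters used by Patel and Chait.

    import random

    def make_transition_sequence(kind="REG-RAND", n_pips=60, transition_at=30,
                                 pool=(500, 650, 800, 1000, 1250, 1600), cycle_len=6):
        """Return a list of pip frequencies (Hz) whose pattern switches at transition_at.

        'REG' segments cycle deterministically through a fixed frequency cycle;
        'RAND' segments draw each pip's frequency at random from the same pool.
        """
        cycle = random.sample(pool, cycle_len)
        reg = [cycle[i % cycle_len] for i in range(n_pips)]
        rand = [random.choice(pool) for _ in range(n_pips)]
        first, second = (reg, rand) if kind == "REG-RAND" else (rand, reg)
        return first[:transition_at] + second[transition_at:]

    freqs = make_transition_sequence("RAND-REG")
    # A REG-RAND transition is detectable from the first deviant pip, whereas a
    # RAND-REG transition can only be inferred after several regular cycles have elapsed.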

16.
Participants who were unable to detect familiarity from masked 17 ms faces (Stone & Valentine, 2004, in press-b) did report a vague, partial visual percept. Two experiments investigated the relative strength of the visual percept generated by famous and unfamiliar faces under masked 17 ms exposure. Each trial presented a famous and an unfamiliar face simultaneously, one in the left visual field (LVF) and the other in the right visual field (RVF). In one task, participants responded according to which of the faces generated the stronger visual percept, and in the other task they attempted an explicit familiarity decision. The relative strength of the visual percept of the famous face compared to the unfamiliar face was moderated by response latency and by participants' attitude towards the famous person. There was also an interaction of visual field with response latency, suggesting that the right hemisphere can generate a visual percept differentiating famous from unfamiliar faces more rapidly than the left hemisphere. Participants were at chance in the explicit familiarity decision, confirming the absence of awareness of facial familiarity.

17.
Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the shape of the response time distributions, supporting an early normalization process that is separate from processes influenced by word frequency. In contrast, speeded pronunciation and semantic classification produced interactive influences of word frequency and stimulus quality, which is a fundamental prediction from interactive activation models of lexical processing. These findings suggest that stimulus normalization is specific to lexical decision and is driven by the task's emphasis on familiarity-based information.
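The additive/interactive distinction can be illustrated with a toy calculation: in an additive pattern, degrading stimulus quality costs roughly the same amount of time for high- and low-frequency words, whereas in an interactive pattern the cost is larger for low-frequency words. All values below are invented purely to show the two patterns, not estimates from the study.

    # Toy illustration of additive vs. interactive effects of stimulus quality and
    # word frequency on mean response time (ms); all values are invented.
    BASE = 500.0
    freq_cost = {"high": 0.0, "low": 60.0}
    quality_cost = {"clear": 0.0, "degraded": 40.0}

    def rt_additive(freq, quality):
        """Lexical-decision-like pattern: the two factors contribute independently."""
        return BASE + freq_cost[freq] + quality_cost[quality]

    def rt_interactive(freq, quality, boost=1.8):
        """Pronunciation-like pattern: degradation hurts low-frequency words more."""
        extra = quality_cost[quality] * (boost if freq == "low" else 1.0)
        return BASE + freq_cost[freq] + extra

    for model in (rt_additive, rt_interactive):
        effect_high = model("high", "degraded") - model("high", "clear")
        effect_low = model("low", "degraded") - model("low", "clear")
        print(model.__name__, effect_high, effect_low)
    # additive: 40 vs 40 (equal quality cost); interactive: 40 vs 72 (larger cost for low frequency)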

18.
In four experiments, we examined whether facial expressions used while singing carry musical information that can be “read” by viewers. In Experiment 1, participants saw silent video recordings of sung melodic intervals and judged the size of the interval they imagined the performers to be singing. Participants discriminated interval sizes on the basis of facial expression and discriminated large from small intervals when only head movements were visible. Experiments 2 and 3 confirmed that facial expressions influenced judgments even when the auditory signal was available. When matched with the facial expressions used to perform a large interval, audio recordings of sung intervals were judged as being larger than when matched with the facial expressions used to perform a small interval. The effect was not diminished when a secondary task was introduced, suggesting that audio-visual integration is not dependent on attention. Experiment 4 confirmed that the secondary task reduced participants’ ability to make judgments that require conscious attention. The results provide the first evidence that facial expressions influence perceived pitch relations.

19.
Two experiments are reported examining judgments from 16 subjects who indicated the apparent direction of a photographed pointer that was rotated to different physical positions while being photographed. The photographs themselves were rotated about a vertical axis to several positions with respect to the subjects’ central viewing axis. The results replicate the well-known distortion in apparent direction associated with photographed pointers positioned to project directly out of the plane of the photograph. This effect has been described by Goldstein (1979) as the “differential rotation effect” because its magnitude is reduced as the depicted angle of the pointer becomes less orthogonal to the photograph. Analysis of the two-dimensional properties of the projected images shows that this differential rotation is related to projected angles on the surface of the photograph. This analysis may explain why circular objects often do not appear to be correctly drawn in the periphery of geometrically correct projections.

20.
The present study examined the effect of perceived motion-in-depth on temporal interval perception. We required subjects to estimate the length of a short empty interval starting at the offset of a first marker and ending with the onset of a second marker. The size of the markers was manipulated so that subjects perceived a visual object as approaching or receding. We demonstrated that the empty interval between the markers was perceived as shorter when the object was perceived as approaching than when it was perceived as receding. In addition, we found that the motion-in-depth effect disappeared when the shape continuity between the first and second markers was broken or when the object approached but missed the face. We conclude that the anticipated collision of an approaching object alters the perception of an empty interval.
