191.
《Quarterly journal of experimental psychology (2006)》2013,66(3):505-526
Recently, there has been considerable debate about whether readers can identify multiple words in parallel or whether they are limited to a serial mode of word identification, processing one word at a time (see, e.g., Reichle, Liversedge, Pollatsek, & Rayner, 2009). Similar questions can be applied to bimorphemic compound words: Do readers identify all the constituents of a compound word in parallel, and does it matter which of the morphemes is identified first? We asked subjects to read compound words embedded in sentences while monitoring their eye movements. Using the boundary paradigm (Rayner, 1975), we manipulated the preview that subjects received of the compound word before they fixated it. In particular, the morpheme order of the preview was either normal (cowboy) or reversed (boycow). Additionally, we manipulated the preview availability for each of the morphemes separately. Preview was thus available for the first morpheme only (cowtxg), for the second morpheme only (enzboy), or for neither of the morphemes (enztxg). We report three major findings: First, there was an effect of morpheme order on gaze durations measured on the compound word, indicating that, as expected, readers obtained a greater preview benefit when the preview presented the morphemes in the correct order than when their order was reversed. Second, gaze durations on the compound word were influenced not only by preview availability for the first, but also by that for the second morpheme. Finally, and most importantly, the results show that readers are able to extract some morpheme information even from a reverse order preview. In summary, readers obtain preview benefit from both constituents of a short compound word, even when the preview does not reflect the correct morpheme order.
192.
《Quarterly journal of experimental psychology (2006)》2013,66(9):1840-1857
Visual information processing is guided by an active mechanism generating saccadic eye movements to salient stimuli. Here we investigate the specific contribution of saccades to memory encoding of verbal and spatial properties in a serial recall task. In the first experiment, participants moved their eyes freely without specific instruction. We demonstrate the existence of qualitative differences in eye-movement strategies during verbal and spatial memory encoding. While verbal memory encoding was characterized by shifting the gaze to the to-be-encoded stimuli, saccadic activity was suppressed during spatial encoding. In the second experiment, participants were required to suppress saccades by fixating centrally during encoding or to make precise saccades onto the memory items. Active suppression of saccades had no effect on memory performance, but tracking the upcoming stimuli decreased memory performance dramatically in both tasks, indicating a resource bottleneck between display-controlled saccadic control and memory encoding. We conclude that optimized encoding strategies for verbal and spatial features underlie memory performance in serial recall, but such strategies operate only at an involuntary level and do not support memory encoding when they are explicitly required by the task.
193.
《Quarterly journal of experimental psychology (2006)》2013,66(6):1096-1120
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.
194.
《Quarterly journal of experimental psychology (2006)》2013,66(5):993-1006
Recent studies attribute top-down control as the primary determinant of the speed with which attention can be disengaged from an object, and disengagement has received increased interest as a means to distinguish between top-down and bottom-up accounts of attention capture. We present the results of three experiments exploring the breadth of the representations that delay attentional disengagement based on top-down goals. Experiments 1 and 2 examined whether objects similar to an observer's search target, but differing in luminance and hue, delayed the reallocation of attention in a search paradigm designed to isolate disengagement time. Experiment 3 explored whether the representations that delay disengagement are based on absolute similarity with the search target, or are tuned based on target/nontarget relationships. These three studies confirmed the role of top-down goals in automatically contributing to dwell times and revealed that the representations that underlie disengagement effects are broad (automatically delaying disengagement for items similar, but not identical to, the search target). In some cases, attention sets appeared to be graded in nature, but in others target–distractor relationships influenced the degree to which an irrelevant item held attention. Implications for theories of attention capture and potential functional significance of these automatic effects are discussed.
195.
It is still not known what underlies successful performance in goaltending. Some studies have reported that advanced cues from the shooter's body (hip, kicking leg or support leg) are most important (Savelsbergh, G. J. P., Williams, A. M., Van der Kamp, J., & Ward, P. (2002). Visual search, anticipation and expertise in soccer goalkeepers. Journal of Sports Sciences, 20, 279-287; Savelsbergh, G. J. P., Williams, A. M., Van der Kamp, J., & Ward, P. (2005). Anticipation and visual search behaviour in expert soccer goalkeepers. Ergonomics, 48, 1686-1697; Williams, A. M., & Burwitz, L. (1993). Advanced cue utilization in soccer. In T. Reilly, J. Clarys, & A. Stibbe (Eds.), Science and football II (pp. 239-243). London, England: E&FN Spon), while others have found that the early tracking of the object prior to and during flight is most critical (Bard, C., & Fleury, M. (1981). Considering eye movement as a predictor of attainment. In: I. M. Cockerill, & W. M. MacGillvary (Eds.), Vision and Sport (pp. 28-41). Cheltenham, England: Stanley Thornes (Publishers) Ltd.). These results are similar to those found in a number of interceptive timing studies (Land, M. F., & McLeod, P. (2000). From eye movements to actions: How batsmen hit the ball. Nature Neuroscience, 3, 1340-1345; Ripoll and Fleurance, 1988; Vickers, J. N., & Adolphe, R. M. (1997). Gaze behaviour during a ball tracking and aiming skill. International Journal of Sports Vision, 4, 18-27). The coupled gaze and motor behavior of elite goaltenders were determined while responding to wrist shots taken from 5 m and 10 m on ice. The results showed that the goalies faced shots that were significantly different in phase durations due to distance (5 versus 10 m), but this was not a factor in making saves. Instead, the ability to stop the puck was dependent on the location, onset and duration of the final fixation/tracking gaze (or quiet eye) prior to initiating the saving action. 
The relative onset of quiet eye was significantly (p<.001) earlier (8.6%) and the duration was longer on saves (M=80.5%; 952.3 ms) compared to goals (onset 18.86%; M=70.1%, 826.1 ms). The quiet eye was located on the puck/stick during the preparation and execution of the shot in 70.53% of all trials, or on the ice in front of the release point of the puck (25.68%), and rarely on the body of the shooter (2.1%). The results are discussed within the context of current research on goaltending with specific emphasis on the timing of critical cues and the effect of task constraints.
196.
It is not known why people move their eyes when engaged in non-visual cognition. The current study tested the hypothesis that differences in saccadic eye movement rate (EMR) during non-visual cognitive tasks reflect different requirements for searching long-term memory. Participants performed non-visual tasks requiring relatively low or high long-term memory retrieval while eye movements were recorded. In three experiments, EMR was substantially lower for low-retrieval than for high-retrieval tasks, including in an eyes closed condition in Experiment 3. Neither visual imagery nor between-task difficulty was related to EMR, although there was some evidence for a minor effect of within-task difficulty. Comparison of task-related EMRs to EMR during a no-task waiting period suggests that eye movements may be suppressed or activated depending on task requirements. We discuss a number of possible interpretations of saccadic eye movements during non-visual cognition and propose an evolutionary model that links these eye movements to memory search through an elaboration of circuitry involved in visual perception.
197.
We investigated the scanning strategies used by 2- to 3.5-month-old infants when viewing partly occluded object displays. Eye movements were recorded with a corneal reflection system as the infants observed stimuli depicting two rod parts above and below an occluding box. Stimulus parameters were chosen on the basis of past research demonstrating the importance of motion, occluder width, and edge alignment to perception of object unity. Results indicated that the infants tailored scanning to display characteristics, engaging in more extensive scanning when unity perception was challenged by a wide occluder or misaligned edges. In addition, older infants tended to scan the lower parts of the displays more frequently than did younger infants. Exploration of individual differences, however, revealed marked contrasts in specific scanning styles across infants. The findings are consistent with views of perceptual development stressing the importance of information processing skills and self-directed action to the acquisition of object knowledge.
198.
When we observe someone shift their gaze to a peripheral event or object, a corresponding shift in our own attention often follows. This social orienting response, joint attention, has been studied in the laboratory using the gaze cueing paradigm. Here, we investigate the combined influence of the emotional content displayed in two critical components of a joint attention episode: the facial expression of the cue face, and the affective nature of the to-be-localized target object. Hence, we presented participants with happy and disgusted faces as cueing stimuli, and neutral (Experiment 1), pleasant and unpleasant (Experiment 2) pictures as target stimuli. The findings demonstrate an effect of ‘emotional context’ confined to participants viewing pleasant pictures. Specifically, gaze cueing was boosted when the emotion of the gazing face (i.e., happy) matched that of the targets (pleasant). Demonstrating modulation by emotional context highlights the vital flexibility that a successful joint attention system requires in order to assist our navigation of the social world.
199.
Reiko Graham, Chris Kelland Friesen, Harlan M. Fichtenholtz, Kevin S. LaBar 《Visual cognition》2013,21(3):331-368
Facial expression and gaze perception are thought to share brain mechanisms, but behavioural interactions, especially from gaze-cueing paradigms, are inconsistent. We conducted a series of gaze-cueing studies using dynamic facial cues to examine orienting across different emotional expression and task conditions, including face inversion. Across experiments, at a short stimulus-onset asynchrony (SOA) we observed both an expression effect (i.e., faster responses when the face was emotional versus neutral) and a cue validity effect (i.e., faster responses when the target was gazed-at), but no interaction between validity and emotion. Results from face inversion suggest that the emotion effect may have been due to both facial expression and stimulus motion. At longer SOAs, validity and emotion interacted such that cueing by emotional faces, fearful faces in particular, was enhanced relative to neutral faces. These results converge with a growing body of evidence that suggests that gaze and expression are initially processed independently and interact at later stages to direct attentional orienting.
200.
Humans often look at other people in natural scenes, and previous research has shown that these looks follow the conversation and that they are sensitive to sound in audiovisual speech perception. In the present experiment, participants viewed video clips of four people involved in a discussion. By removing the sound, we asked whether auditory information would affect when speakers were fixated, how fixations between different observers were synchronized, and whether the eyes or mouth were looked at most often. The results showed that sound changed the timing of looks—by alerting observers to changes in conversation and attracting attention to the speaker. Clips with sound also led to greater attentional synchrony, with more observers fixating the same regions at the same time. However, looks towards the eyes of the people continued to dominate and were unaffected by removing the sound. These findings provide a rich example of multimodal social attention.