Similar Articles
20 similar articles found (search time: 31 ms)
1.
In order to investigate whether addressees can make immediate use of speaker-based constraints during reference resolution, participant addressees’ eye movements were monitored as they helped a confederate cook follow a recipe. Objects were located in the helper’s area, which the cook could not reach, and the cook’s area, which both could reach. Critical referring expressions matched one object (helper’s area) or two objects (helper’s and cook’s areas), and were produced when the cook’s hands were empty or full, which defined the cook’s reaching ability constraints. Helper’s first and total fixations showed that they restricted their domain of interpretation to their own objects when the cook’s hands were empty, and widened it to include the cook’s objects only when the cook’s hands were full. These results demonstrate that addressees can quickly take into account task-relevant constraints to restrict their referential domain to referents that are plausible given the speaker’s goals and constraints.

2.
3.
Communication is aided greatly when speakers and listeners take advantage of mutually shared knowledge (i.e., common ground). How such information is represented in memory is not well known. Using a neuropsychological-psycholinguistic approach to real-time language understanding, we investigated the ability to form and use common ground during conversation in memory-impaired participants with hippocampal amnesia. Analyses of amnesics' eye fixations as they interpreted their partner's utterances about a set of objects demonstrated successful use of common ground when the amnesics had immediate access to common-ground information, but dramatic failures when they did not. These findings indicate a clear role for declarative memory in maintenance of common-ground representations. Even when amnesics were successful, however, the eye movement record revealed subtle deficits in resolving potential ambiguity among competing intended referents; this finding suggests that declarative memory may be critical to more basic aspects of the on-line resolution of linguistic ambiguity.

4.
Children generally behave more egocentrically than adults when assessing another's perspective. We argue that this difference does not, however, indicate that adults process information less egocentrically than children, but rather that adults are better able to subsequently correct an initial egocentric interpretation. An experiment tracking participants' eye movements during a referential communication task indicated that children and adults were equally quick to interpret a spoken instruction egocentrically but differed in the speed with which they corrected that interpretation and looked at the intended (i.e., non-egocentric) object. The existing differences in egocentrism between children and adults therefore seem less a product of where people start in their perspective-taking process than where they stop, with lingering egocentric biases among adults produced by insufficient correction of an automatic moment of egocentrism. We suggest that this pattern of similarity in automatic, but not controlled, processes may explain between-group differences in a variety of dual-process judgments.

5.
Evidence has been mixed on whether speakers spontaneously and reliably produce prosodic cues that resolve syntactic ambiguities. And when speakers do produce such cues, it is unclear whether they do so "for" their addressees (the audience design hypothesis) or "for" themselves, as a by-product of planning and articulating utterances. Three experiments addressed these issues. In Experiments 1 and 3, speakers followed pictorial guides to spontaneously instruct addressees to move objects. Critical instructions (e.g., "Put the dog in the basket on the star") were syntactically ambiguous, and the referential situation supported either one or both interpretations. Speakers reliably produced disambiguating cues to syntactic ambiguity whether the situation was ambiguous or not. However, Experiment 2 suggested that most speakers were not yet aware of whether the situation was ambiguous by the time they began to speak, and so adapting to addressees' particular needs may not have been feasible in Experiment 1. Experiment 3 examined individual speakers' awareness of situational ambiguity and the extent to which they signaled structure, with or without addressees present. Speakers tended to produce prosodic cues to syntactic boundaries regardless of their addressees' needs in particular situations. Such cues did prove helpful to addressees, who correctly interpreted speakers' instructions virtually all the time. In fact, even when speakers produced syntactically ambiguous utterances in situations that supported both interpretations, eye-tracking data showed that 40% of the time addressees did not even consider the non-intended objects. We discuss the standards needed for a convincing test of the audience design hypothesis.

6.
When describing visual scenes, speakers typically gaze at objects while preparing their names. In a study of the relation between eye movements and speech, a corpus of self-corrected speech errors was analyzed. If errors result from rushed word preparation, insufficient visual information, or failure to check prepared names against objects, speakers should spend less time gazing at referents before uttering errors than before uttering correct names. Counter to predictions, gazes to referents before errors (e.g., gazes to an axe before saying "ham-" [hammer]) highly resembled gazes to referents before correct names (e.g., gazes to an axe before saying "axe"). However, speakers gazed at referents for more time after initiating erroneous compared with correct names, apparently while they prepared corrections. Assuming that gaze nonetheless reflects word preparation, errors were not associated with insufficient preparation. Nor were errors systematically associated with decreased inspection of objects. Like gesture, gaze may accurately reflect a speaker's intentions even when the accompanying speech does not.

7.
Duran ND, Dale R, Kreuz RJ. Cognition, 2011, 121(1): 22-40
We explored perspective-taking behavior in a visuospatial mental rotation task that requires listeners to adopt an egocentric or “other-centric” frame of reference. In the current task, objects could be interpreted relative to the point-of-view of the listener (egocentric) or of a simulated partner (other-centric). Across three studies, we evaluated participants’ willingness to consider and act on partner-specific information, showing that a partner’s perceived ability to contribute to collaborative mutual understanding modulated participants’ perspective-taking behavior, either by increasing other-centric (Study 2) or egocentric (Study 3) responding. Moreover, we show that a large proportion of participants resolved referential ambiguity in terms of their partner’s perspective, even when it was more cognitively difficult to do so (as tracked by online movement measures), and when the presence of a social partner had to be assumed (Studies 1 and 2). In addition, participants continued to consider their partner’s perspective during trials where visual perspectives were shared. Our results show that participants will thoroughly invest in either an other-centric or egocentric mode of responding, and that perspective-taking strategies are not always dictated by minimizing processing demands, but by more potent (albeit subtle) factors in the social context.

8.
Spatial updating of environments described in texts

9.
There is an ongoing debate as to whether people track multiple moving objects in a serial fashion or with a parallel mechanism. One recent study compared eye movements when observers tracked identical objects (Multiple Object Tracking—MOT task) versus when they tracked the identities of different objects (Multiple Identity Tracking—MIT task). Distinct eye-movement patterns were found and attributed to two separate tracking systems. However, the same results could be caused by differences in the stimuli viewed during tracking. In the present study, object identities in the MIT task were invisible during tracking, so observers performed MOT and MIT tasks with identical stimuli. Observers were able to track either position or identity depending on the task. There was no difference in eye movements between position tracking and identity tracking. This result suggests that, while observers can use different eye-movement strategies in MOT and MIT, it is not necessary.

10.
We asked participants to make simple risky choices while we recorded their eye movements. We built a complete statistical model of the eye movements and found very little systematic variation in eye movements over the time course of a choice or across the different choices. The only exceptions were finding more (of the same) eye movements when choice options were similar, and an emerging gaze bias in which people looked more at the gamble they ultimately chose. These findings are inconsistent with prospect theory, the priority heuristic, or decision field theory. However, the eye movements made during a choice have a large relationship with the final choice, and this is mostly independent from the contribution of the actual attribute values in the choice options. That is, eye movements tell us not just about the processing of attribute values but also are independently associated with choice. The pattern is simple—people choose the gamble they look at more often, independently of the actual numbers they see—and this pattern is simpler than predicted by decision field theory, decision by sampling, and the parallel constraint satisfaction model. © 2015 The Authors. Journal of Behavioral Decision Making published by John Wiley & Sons Ltd.
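The pattern this abstract reports (people choose the gamble they look at more often, regardless of the actual numbers) reduces to a simple fixation-counting rule. A minimal sketch, with the function name and the fixation-sequence format as illustrative assumptions rather than the authors' actual analysis code:

```python
from collections import Counter

def predict_choice(fixation_sequence):
    """Predict the chosen option from a gaze record alone.

    fixation_sequence: a list of option labels, one per recorded
    fixation, e.g. ["A", "B", "A", "A"]. The gaze-bias pattern
    predicts the option fixated most often, ignoring payoffs.
    """
    counts = Counter(fixation_sequence)
    return counts.most_common(1)[0][0]
```

For example, a trial with fixations `["A", "B", "A", "A"]` would be predicted to end in a choice of gamble A, whatever the two gambles' amounts and probabilities are.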

11.
What do people enjoy about making recommendations? Although recommendation recipients can gain useful information, the value of these exchanges for the information provider is less clear in comparison. In this article we test whether a common recommendation heuristic—egocentric projection—also has hedonic consequences, by conducting experiments that compare recommendations (suggestions for another person) to reviews, in which people merely express their own preferences. Over five studies, people preferred reviewing over recommending. Recommenders enjoyed themselves less when they had to take their recipients' perspective, to the extent that the recipients' tastes were different from their own. These results suggest that self‐expression can be intrinsically rewarding for recommendation makers, and that recommendation seekers can elicit more information by asking for reviews instead.

12.
Egocentric bias is an important cause of social failure, but its underlying mechanism remains controversial. Prior research offers two theoretical accounts, the inhibitory-selection model and fluency misattribution: the former holds that egocentric bias arises from a failure to inhibit one's own perspective, whereas the latter holds that it arises from mistakenly selecting information that is more fluent for oneself. To integrate these positions, we propose an inhibition-attribution synergy model, in which the two processes of inhibition and attribution may jointly produce egocentric bias. Future research should further test this model using refined experimental paradigms and special participant populations.

13.
The representation of uniform motion in vision
Swanston MT, Wade NJ, Day RH. Perception, 1987, 16(2): 143-159
For veridical detection of object motion any moving detecting system must allocate motion appropriately between itself and objects in space. A model for such allocation is developed for simplified situations (points of light in uniform motion in a frontoparallel plane). It is proposed that motion of objects is registered and represented successively at four levels within frames of reference that are defined by the detectors themselves or by their movements. The four levels are referred to as retinocentric, orbitocentric, egocentric, and geocentric. Thus the retinocentric signal is combined with that for eye rotation to give an orbitocentric signal, and the left and right orbitocentric signals are combined to give an egocentric representation. Up to the egocentric level, motion representation is angular rather than three-dimensional. The egocentric signal is combined with signals for head and body movement and for egocentric distance to give a geocentric representation. It is argued that although motion perception is always geocentric, relevant registrations also occur at the three earlier levels. The model is applied to various veridical and nonveridical motion phenomena.
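The level-by-level combination this abstract describes can be sketched as a chain of signal transformations. The additive and averaging combination rules below, and all names, are illustrative assumptions for exposition, not the authors' quantitative model:

```python
def orbitocentric(retinocentric, eye_rotation):
    # The retinal motion signal is combined with the eye-rotation
    # signal to give motion relative to the orbit (assumed additive).
    return retinocentric + eye_rotation

def egocentric(left_orbito, right_orbito):
    # Left- and right-eye orbitocentric signals are combined (here,
    # averaged) into one head-centred, still angular, representation.
    return (left_orbito + right_orbito) / 2.0

def geocentric(ego_angular, head_body_motion, ego_distance):
    # The angular egocentric signal is combined with head/body
    # movement signals and egocentric distance to yield motion
    # relative to the stable world (the geocentric frame).
    return ego_angular * ego_distance + head_body_motion
```

With the eyes and head stationary (both movement signals zero), geocentric motion reduces to the retinal signal scaled by distance, which is the intuition the model formalizes: the same retinal motion implies faster object motion at greater egocentric distance.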

14.
Similar to certain bats and dolphins, some blind humans can use sound echoes to perceive their silent surroundings. By producing an auditory signal (e.g., a tongue click) and listening to the returning echoes, these individuals can obtain information about their environment, such as the size, distance, and density of objects. Past research has also hinted at the possibility that blind individuals may be able to use echolocation to gather information about 2-D surface shape, with definite results pending. Thus, here we investigated people’s ability to use echolocation to identify the 2-D shape (contour) of objects. We also investigated the role played by head movements—that is, exploratory movements of the head while echolocating—because anecdotal evidence suggests that head movements might be beneficial for shape identification. To this end, we compared the performance of six expert echolocators to that of ten blind nonecholocators and ten blindfolded sighted controls in a shape identification task, with and without head movements. We found that the expert echolocators could use echoes to determine the shapes of the objects with exceptional accuracy when they were allowed to make head movements, but that their performance dropped to chance level when they had to remain still. Neither blind nor blindfolded sighted controls performed above chance, regardless of head movements. Our results show not only that experts can use echolocation to successfully identify 2-D shape, but also that head movements made while echolocating are necessary for the correct identification of 2-D shape.

15.
Wu S, Keysar B. Cognitive Science, 2007, 31(1): 169-181
It makes sense that the more information people share, the better they communicate. To evaluate the effect of knowledge overlap on the effectiveness of communication, participants played a communication game where the "director" identified objects to the "addressee". Pairs either shared information about most objects' names (high overlap), or about the minority of objects' names (low overlap). We found that high-overlap directors tended to use more names than low-overlap directors. High-overlap directors also used more names with objects whose names only they knew, thereby confusing their addressees more often than low-overlap directors. We conclude that while sharing more knowledge can be beneficial to communication overall, it can cause communication to be locally ineffective. Sharing more information reduces communication effectiveness precisely when there is an opportunity to inform: when people communicate information only they themselves know.

16.
Chambers CG, Juan VS. Cognition, 2008, 108(1): 26-50
Recent studies have shown that listeners use verbs and other predicate terms to anticipate reference to semantic entities during real-time language comprehension. This process involves evaluating the denoted action against relevant properties of potential referents. The current study explored whether action-relevant properties are readily available to comprehension systems as a result of the embodied nature of linguistic and conceptual representations. In three experiments, eye movements were monitored as listeners followed instructions to move depicted objects on a computer screen. Critical instructions contained the verb return (e.g., Now return the block to area 3), which presupposes the previous displacement of its complement object, a property that is not reflected in perceptible or stable characteristics of objects. Experiment 1 demonstrated that predictions for previously displaced objects are generated upon hearing return, ruling out the possibility that anticipatory effects draw directly on static affordances in perceptual symbols. Experiment 2 used a referential communication task to evaluate how communicative relevance constrains the use of perceptually derived information. Results showed that listeners anticipate previously displaced objects as candidates upon hearing return only when their displacement was known to the speaker. Experiment 3 showed that the outcome of the original act of displacement further modulates referential predictions. The results show that the use of perceptually grounded information in language interpretation is subject to communicative constraints, even when language denotes physical actions performed on concrete objects.

17.
Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

18.
Viewing position effects are commonly observed in reading, but they have only rarely been investigated in object perception or in the realistic context of a natural scene. In two experiments, we explored where people fixate within photorealistic objects and the effects of this landing position on recognition and subsequent eye movements. The results demonstrate an optimal viewing position—objects are processed more quickly when fixation is in the centre of the object. Viewers also prefer to saccade to the centre of objects within a natural scene, even when making a large saccade. A central landing position is associated with an increased likelihood of making a refixation, a result that differs from previous reports and suggests that multiple fixations within objects, within scenes, occur for a range of reasons. These results suggest that eye movements within scenes are systematic and are made with reference to an early parsing of the scene into constituent objects.

19.
李晶, 张侃. 心理学报, 2011, 43(3): 221-228
Using a partial-scene recognition paradigm, we investigated how the consistency of object orientations within a symmetrical scene affects the establishment of an intrinsic reference system. In Experiment 1, participants learned a scene composed of objects without orientations from a viewpoint along the axis of symmetry; in Experiment 2, they learned a scene composed of oriented objects from a viewpoint along the axis of symmetry; in Experiment 3, they learned the same scene as in Experiment 2 from viewpoints along both the axis of symmetry and the objects' orientation. The results showed that (1) the consistency of object orientation influenced the selection of the intrinsic reference direction in symmetrical scenes, and (2) with or without interference from the observation viewpoint, observers were equally likely to select the axis of symmetry or the objects' shared orientation as the intrinsic reference direction.

20.
When listeners follow spoken instructions to manipulate real objects, their eye movements to the objects are closely time locked to the referring words. We review five experiments showing that this time-locked characteristic of eye movements provides a detailed profile of the processes that underlie real-time spoken language comprehension. Together, the first four experiments showed that listeners immediately integrated lexical, sublexical, and prosodic information in the spoken input with information from the visual context to reduce the set of referents to the intended one. The fifth experiment demonstrated that a visual referential context affected the initial structuring of the linguistic input, eliminating even strong syntactic preferences that result in clear garden paths when the referential context is introduced linguistically. We argue that context affected the earliest moments of language processing because it was highly accessible and relevant to the behavioral goals of the listener.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号