Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
To better understand the problem of referencing a location in space under time pressure, we had two remotely located partners (A, B) attempt to locate and reach consensus on a sniper target, which appeared randomly in the windows of buildings in a pseudorealistic city scene. The partners were able to communicate using speech alone (shared voice), gaze cursors alone (shared gaze), or both. In the shared-gaze conditions, a gaze cursor representing Partner A’s eye position was superimposed over Partner B’s search display and vice versa. Spatial referencing times (for both partners to find and agree on targets) were faster with shared gaze than with speech, with this benefit due primarily to faster consensus (less time needed for one partner to locate the target after it was located by the other partner). These results suggest that sharing gaze can be more efficient than speaking when people collaborate on tasks requiring the rapid communication of spatial information. Supplemental materials for this article may be downloaded from http://pbr.psychonomic-journals.org/content/supplemental.

2.
3.
Atypical processing of eye contact is one of the significant characteristics of individuals with autism, but the mechanism underlying atypical direct gaze processing is still unclear. This study used a visual search paradigm to examine whether facial context would affect direct gaze detection in children with autism. Participants were asked to detect target gazes presented among distracters with different gaze directions. The target gazes were either direct or averted, and were presented either alone (Experiment 1) or within a facial context (Experiment 2). As with typically developing children, children with autism were faster and more efficient at detecting direct gaze than averted gaze, whether the eyes were presented alone or within faces. In addition, face inversion disrupted efficient direct gaze detection in typically developing children, but not in children with autism. These results suggest that children with autism use featural information to detect direct gaze, whereas typically developing children use configural information.

4.
From early ages, gaze acts as a cue to infer the interests, behaviours, thoughts and emotions of social partners. Despite sharing attentional properties with other non-social directional stimuli, such as arrows, gaze produces unique effects. A spatial interference task revealed this dissociation. The direction of arrows was identified faster on congruent than on incongruent direction-location trials. Conversely, gaze produced a reversed congruency effect (RCE), with faster identifications on incongruent than congruent trials. To determine the emergence of these gaze-specific attentional mechanisms, 214 Spanish children (4–17 years), divided into 6 age groups, performed the aforementioned task across three experiments. Results showed stimulus-specific developmental trajectories. Whereas the standard effect of arrows was unaffected by age, gaze shifted from an arrow-like effect at age 4 to a gaze-specific RCE at age 12. The orienting mechanisms shared by gaze and arrows are already present in 4-year-olds and, throughout childhood, gaze becomes a special social cue with additional attentional properties. Besides orienting attention to a direction, as arrows would do, gaze might orient attention towards a specific object that would be attentionally selected. Such additional components may not fully develop until adolescence. Understanding gaze-specific attentional mechanisms may be crucial for children with atypical socio-cognitive development.
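The congruency effects contrasted above reduce to simple arithmetic: the effect is the incongruent-minus-congruent reaction-time difference, positive for the standard (arrow-like) pattern and negative for the reversed congruency effect reported for gaze. A toy illustration; the millisecond values below are made up, not taken from the study:

```python
def congruency_effect(rt_congruent_ms, rt_incongruent_ms):
    """Congruency effect in ms: positive when congruent trials are faster
    (standard effect), negative when incongruent trials are faster (RCE)."""
    return rt_incongruent_ms - rt_congruent_ms

# Hypothetical mean RTs, for illustration only:
arrow_effect = congruency_effect(520, 560)  # standard effect: +40 ms
gaze_effect = congruency_effect(575, 540)   # reversed effect: -35 ms
```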

5.
Dias JW, Rosenblum LD. Perception, 2011, 40(12): 1457-1466
Speech alignment describes the unconscious tendency to produce speech that shares characteristics with perceived speech (e.g., Goldinger, 1998, Psychological Review, 105, 251-279). In the present study we evaluated whether seeing a talker enhances alignment over just hearing a talker. Pairs of participants performed an interactive search task which required them to repeatedly utter a series of keywords. Half of the pairs performed the task while hearing each other, while the other half could see and hear each other. Alignment was assessed by naive judges rating the similarity of interlocutors' keywords recorded before, during, and after the interactive task. Results showed that interlocutors aligned more when able to see one another, suggesting that visual information enhances speech alignment.

6.
赵亚军, 张智君. 心理学报 (Acta Psychologica Sinica), 2009, 41(12): 1133-1142
Using a modified Posner spatial cueing paradigm, this study examined the processing mechanism of the eye-gaze cueing effect. Experiment 1 examined the influence of gaze cues on the spatial Stroop effect; Experiment 2 examined their influence on feature extraction and feature integration. The results showed that the spatial Stroop effect was significantly larger on validly cued trials than on invalidly cued trials, and that the gaze cueing effect did not differ between single-feature search and feature-conjunction search tasks. These findings suggest that gaze cues induce attention shifts by forming a mental representation of spatial location, and that they facilitate object processing by affecting the feature-extraction stage rather than the feature-integration stage. The results support the view that the gaze cueing effect belongs to endogenous attention.

7.
Display and feedback modes are important design elements in human-computer interaction. In gaze interaction, a natural interaction technique, how the gaze cursor is presented has long been a research focus. Using a text-entry task based on the dwell-click paradigm, this study ran a 2 (gaze-point locking on/off) × 2 (real-time gaze point shown/hidden) within-subjects experiment to examine how the cursor display mode affects gaze-interaction performance and user experience. Results: gaze-point locking improved both interaction performance and user experience, whereas showing the real-time gaze point had no significant effect on performance; the gain in input speed arose mainly because locking allowed participants to shift their gaze to the next target sooner. These findings can inform the design and application of gaze cursors.
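The dwell-click ("look and hold") selection that this text-entry task builds on can be sketched as a small state machine: a target is selected once gaze has rested on it for a fixed dwell time, and looking away resets the timer. A minimal sketch only; the class and parameter names (`DwellSelector`, `dwell_ms`) are illustrative assumptions, not taken from the study, and the study's gaze-point-locking manipulation is not modelled here:

```python
class DwellSelector:
    """Minimal dwell-based selection: fire when gaze stays on one
    target for at least `dwell_ms` milliseconds; looking away resets."""

    def __init__(self, dwell_ms=600):
        self.dwell_ms = dwell_ms
        self.current = None   # target currently under the gaze point
        self.enter_t = None   # time the gaze entered that target

    def update(self, target, t_ms):
        """Feed one gaze sample (target under gaze, timestamp in ms).
        Returns the target when a selection fires, else None."""
        if target != self.current:
            # Gaze moved to a new target (or to empty space): reset timer.
            self.current, self.enter_t = target, t_ms
            return None
        if target is not None and t_ms - self.enter_t >= self.dwell_ms:
            self.enter_t = float('inf')  # fire at most once per fixation
            return target
        return None

sel = DwellSelector(dwell_ms=600)
# Four gaze samples on key 'A'; the dwell threshold is crossed at t=650 ms.
events = [sel.update('A', t) for t in (0, 200, 400, 650)]
```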

8.
Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation.
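The spatio-temporal coupling between the transmitted cursor and the window — how tightly, and with what delay, the assistant's window trails the searcher's cursor — can be approximated with a lagged correlation profile. A rough sketch under the assumption of evenly sampled one-dimensional position series; the paper's actual analysis method is not specified here, and `coupling_profile` is an illustrative name:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def coupling_profile(cursor, window, max_lag):
    """Correlate cursor[t] with window[t + lag] for each lag >= 0.
    A peak at a positive lag means the window follows the cursor with
    roughly that delay, i.e. the assistant tracks the searcher's cursor."""
    return {lag: pearson(cursor[:len(cursor) - lag], window[lag:])
            for lag in range(max_lag + 1)}

# Toy series: the window trails the cursor by two samples.
cursor = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1]
window = [0, 0, 0, 1, 2, 3, 4, 5, 4, 3]
profile = coupling_profile(cursor, window, max_lag=3)
best_lag = max(profile, key=profile.get)  # the 2-sample delay is recovered
```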

9.
From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month-old infants were shown faces with both direct and averted gaze, and subsequently given a preference test involving the same face and a novel one. A novelty preference during test was only found following initial exposure to a face with direct gaze. Further, face recognition was generally enhanced for faces with both direct and averted gaze when the infants started the task with the direct gaze condition. Together, these results indicate that the direction of gaze modulates face recognition in early infancy.

10.
Staudte M, Crocker MW. Cognition, 2011, (2): 268-291
Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer et al., 1998; Tanenhaus et al., 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker’s focus of (visual) attention to anticipate, ground, and disambiguate spoken references. To investigate the dynamics of such gaze-following and its influence on utterance comprehension in a controlled manner, we use a human–robot interaction setting. Specifically, we hypothesize that referential gaze is interpreted as a cue to the speaker’s referential intentions which facilitates or disrupts reference resolution. Moreover, the use of a dynamic and yet extremely controlled gaze cue enables us to shed light on the simultaneous and incremental integration of the unfolding speech and gaze movement. We report evidence from two eye-tracking experiments in which participants saw videos of a robot looking at and describing objects in a scene. The results reveal a quantified benefit-disruption spectrum of gaze on utterance comprehension and, further, show that gaze is used, even during the initial movement phase, to restrict the spatial domain of potential referents. These findings more broadly suggest that people treat artificial agents similar to human agents and, thus, validate such a setting for further explorations of joint attention mechanisms.

11.
胡中华, 赵光, 刘强, 李红. 心理学报 (Acta Psychologica Sinica), 2012, 44(4): 435-445
Previous studies using visual search tasks have found that direct gaze is detected faster and more accurately than averted gaze, a phenomenon termed the "stare-in-the-crowd effect". Most researchers attribute this effect to direct gaze capturing more attention. However, easier matching of search items under direct gaze could also make direct gaze faster to detect than averted gaze. In addition, previous studies have found that head orientation affects the detection of gaze direction, but the cause of this influence has lacked experimental verification. To address these two questions, the present study used a visual search paradigm with eye tracking and divided the visual search process of gaze detection into a preparation stage, a search stage, and a response stage. The results showed that the detection advantage for direct gaze appeared mainly in the search and response stages; in the search stage, the advantage derived from shorter scan paths, fewer fixated distractors, and shorter mean fixation durations on distractors; and head orientation affected gaze detection only in the search stage. These results indicate that easier matching of search items in direct-gaze detection than in averted-gaze detection is also a cause of the "stare-in-the-crowd effect", and that head orientation affects only the search for gaze direction, not its verification.

12.
Conversation is supported by the beliefs that people have in common and the perceptual experience that they share. The visual context of a conversation has two aspects: the information that is available to each conversant, and their beliefs about what is present for each other. In our experiment, we separated these factors for the first time and examined their impact on a spontaneous conversation. We manipulated the fact that a visual scene was shared or not and the belief that a visual scene was shared or not. Participants watched videos of actors talking about a controversial topic, then discussed their own views while looking at either a blank screen or the actors. Each believed (correctly or not) that their partner was either looking at a blank screen or the same images. We recorded conversants' eye movements, quantified how they were coordinated, and analyzed their speech patterns. Gaze coordination has been shown to be causally related to the knowledge people share before a conversation, and the information they later recall. Here, we found that both the presence of the visual scene, and beliefs about its presence for another, influenced language use and gaze coordination.

13.
赵亚军, 张智君, 刘炜. 心理科学 (Journal of Psychological Science), 2012, 35(2): 304-308
Using a gaze-Simon paradigm, this study explored the spatial coding mechanism underlying the perception of gaze direction. In Experiment 1, participants responded with crossed hands; the gaze-Simon effect did not reverse with hand crossing, indicating that it involves abstract spatial-direction coding rather than a hemispheric-advantage effect referenced to the hands. Experiment 2 used a pure-tone pitch discrimination task and found a typical gaze-Simon effect; together with the visual-modality results of Experiment 1, this indicates that the gaze-Simon effect is not specific to the visual modality and probably arises at the late response-selection stage rather than the early perceptual stage. The results support the view that gaze cues automatically induce abstract directional representations in the observer.

14.
Previous studies have found that attention is automatically oriented in the direction of other people's gaze. This study directly investigated whether perceived gaze direction modulates the orienting of observers' attention. Gaze perception was manipulated by changing the face context (head orientation) of the gaze cue: the perceived gaze angle was increased (or decreased) when the head and gaze were congruent (or incongruent), while the local-feature information of the eye region was preserved for all stimuli. The results showed that gaze-cueing effects were enhanced when the perceived gaze direction was averted more toward the left or right, and reduced when the perceived gaze direction was closer to direct gaze. The results suggest that gaze-cueing effects are based on mechanisms specialized for gaze perception, and that the magnitude of gaze-cueing effects is probably a function of the perceived gaze direction.

15.
Semantic fluency was examined in Hebrew‐speaking 5‐year‐old monozygotic and dizygotic twins (N = 396, 198 pairs), 22% of them with mother‐reported speech‐related problems. There were positive correlations of similar magnitudes among monozygotic, same‐sex dizygotic, and opposite‐sex dizygotic twins. Analyses showed no genetic effects, alongside significant shared (39%) and non‐shared environmental (61%) effects on fluency scores. The presence of speech‐related problems in one twin affected the fluency score of the co‐twin. A multivariate regression analysis revealed that parental education and length of stay at daycare significantly predicted fluency scores. We suggest that semantic fluency performance is highly affected by environmental factors at age 5 although genetic effects might emerge later on.

16.
Articulatory constraints on interpersonal postural coordination
Cooperative conversation has been shown to foster interpersonal postural coordination. The authors investigated whether such coordination is mediated by the influence of articulation on postural sway. In Experiment 1, talkers produced words in synchrony or in alternation, as the authors varied speaking rate and word similarity. Greater shared postural activity was found for the faster speaking rate. In Experiment 2, the authors demonstrated that shared postural activity also increases when individuals speak the same words or speak words that have similar stress patterns. However, this increase in shared postural activity is present only when participants' data are compared with those of their partner, who was present during the task, but not when compared with the data of a member of a different pair speaking the same word sequences as those of the original partner. The authors' findings suggest that interpersonal postural coordination observed during conversation is mediated by convergent speaking patterns.

17.
Individuals speak incrementally when they interleave planning and articulation. Eyetracking, along with the measurement of speech onset latencies, can be used to gain more insight into the degree of incrementality adopted by speakers. In the current article, two eyetracking experiments are reported in which pairs of complex numerals were named (arabic format, Experiment 1) or read aloud (alphabetic format, Experiment 2) as house numbers and as clock times. We examined whether the degree of incrementality is differentially influenced by the production task (naming vs. reading) and mode (house numbers vs. clock time expressions), by comparing gaze durations and speech onset latencies. In both tasks and modes, dissociations were obtained between speech onset latencies (reflecting articulation) and gaze durations (reflecting planning), indicating incrementality. Furthermore, whereas none of the factors that determined gaze durations were reflected in the reading and naming latencies for the house numbers, the dissociation between gaze durations and response latencies for the clock times concerned mainly numeral length in both tasks. These results suggest that the degree of incrementality is influenced by the type of utterance (house number vs. clock time) rather than by task (reading vs. naming). The results highlight the importance of the utterance structure in determining the degree of incrementality.

18.

19.
In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.

20.
We performed two experiments comparing the effects of speech production and speech comprehension on simulated driving performance. In both experiments, participants completed a speech task and a simulated driving task under single‐ and dual‐task conditions, with language materials matched for linguistic complexity. In Experiment 1, concurrent production and comprehension resulted in more variable velocity compared to driving alone. Experiment 2 replicated these effects in a more difficult simulated driving environment, with participants showing larger and more variable headway times when speaking or listening while driving than when just driving. In both experiments, concurrent production yielded better control of lane position relative to single‐task performance; concurrent comprehension had little impact on control of lane position. On all other measures, production and comprehension had very similar effects on driving. The results show, in line with previous work, that there are detrimental consequences for driving of concurrent language use. Our findings imply that these detrimental consequences may be roughly the same whether drivers are producing speech or comprehending it. Copyright © 2005 John Wiley & Sons, Ltd.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号