61.
Recent studies of naturalistic face-to-face communication have demonstrated coordination patterns such as the temporal matching of verbal and non-verbal behavior, which provides evidence for the proposal that verbal and non-verbal communicative control derives from one system. In this study, we argue that the observed relationship between verbal and non-verbal behaviors depends on the level of analysis. In a reanalysis of a corpus of naturalistic multimodal communication (Louwerse, Dale, Bard, & Jeuniaux, 2012), we focus on measuring the temporal patterns of specific communicative behaviors in terms of their burstiness. We examined burstiness estimates across different roles of the speaker and different communicative modalities. We observed more burstiness for verbal versus non-verbal channels, and for more versus less informative language subchannels. Using this new method for analyzing temporal patterns in communicative behaviors, we show that there is a complex relationship between verbal and non-verbal channels. We propose a "temporal heterogeneity" hypothesis to explain how the language system adapts to the demands of dialog.
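The abstract does not specify which burstiness estimator was used; a common choice in the temporal-patterns literature is the Goh–Barabási burstiness coefficient computed over inter-event intervals. The sketch below illustrates that coefficient only as an assumed example, not the paper's actual method:

```python
import statistics

def burstiness(inter_event_times):
    """Goh-Barabasi burstiness coefficient B = (sigma - mu) / (sigma + mu),
    computed over the intervals between successive events.
    B approaches -1 for perfectly regular signals, 0 for a Poisson process,
    and +1 for maximally bursty event trains."""
    mu = statistics.mean(inter_event_times)
    sigma = statistics.pstdev(inter_event_times)
    return (sigma - mu) / (sigma + mu)

# Hypothetical interval data: equal gaps vs. clusters separated by long silences
regular = [1.0] * 20
bursty = [0.1, 0.1, 0.1, 5.0] * 5

print(burstiness(regular))  # -1.0: zero variance, perfectly regular
print(burstiness(bursty))   # positive: clustered ("bursty") events
```

On this view, a verbal channel whose events cluster into bursts separated by silences yields a higher B than a non-verbal channel with more evenly spaced events.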
62.
The common ground that conversational partners share is thought to form the basic context for language use. According to the classic view, inferences about common ground, or mutual knowledge, are guided by beliefs about the physical, cognitive, and attentional states of one's communicative partners. Here, we provide a first test of the attention assumption for common ground, the proposal that common ground for a co-present entity—such as an object or an utterance—can only be formed if a person has evidence that his or her partner has also attended to it. In three experiments, a participant speaker and two partners learned names for novel monster pictures as a group. The speaker was then asked to describe the monsters to each partner separately in a referential communication task. The critical manipulation was the (in)attentiveness of one partner at different points in the study. Analysis of the speaker's referring expressions revealed that speakers assumed their partner shared common ground for the monster names only when that partner exhibited engaged attention as the names were learned. These findings provide key and novel support for the classic proposal that formation of common ground critically depends on assumptions about the attentional state of one's conversational partner.
63.
This experiment investigated social referencing as a form of discriminative learning in which maternal facial expressions signaled the consequences of the infant's behavior in an ambiguous context. Eleven 4- and 5-month-old infants and their mothers participated in a discrimination-training procedure using an ABAB design. Different consequences followed infants' reaching toward an unfamiliar object depending on the particular maternal facial expression. During the training phases, a joyful facial expression signaled positive reinforcement for the infant reaching for an ambiguous object, whereas a fearful expression signaled aversive stimulation for the same response. Baseline and extinction conditions were implemented as controls. Mothers' expressions acquired control over infants' approach behavior for all participants. All participants ceased to show discriminated responding during the extinction phase. The results suggest that 4- and 5-month-old infants can learn social referencing via discrimination training.
64.
The social meaning model asserts that some nonverbal behaviors have consensually recognized relational meanings within a given social community. According to this perspective, the interpretations made by encoders, decoders, and third-party observers of the same nonverbal behavior should be congruent. The authors applied the model to the identification of relational message interpretations of nonverbal matching behavior. Confederates either matched or did not match the nonverbal behaviors of conversational participants while being watched by nonparticipant observers. All three nonconfederate participants provided interpretations of the confederates' relational messages. As the authors had expected, there were moderate correlations between the three perspectives, with observers usually providing less favorable assessments than the conversational participants. The authors also examined the influence of positive and negative stimulus behavior on relational message interpretations.
65.
66.
Meaning, grounding, and the construction of social reality
Culture has become a critical concept for social psychology over the past quarter of a century. Yet cultural dynamics—the process and mechanism of formation, maintenance, and transformation of culture—has begun to be investigated only recently. This article reports the current state of play of a research program that takes cultural dynamics as its central question. In this approach, humans are construed as meaning-making animals that create, recreate, and exchange information, and turn it into a meaningful basis for action. The locus of meaning making and remaking is everyday joint activity. The grounding model of cultural transmission describes how cultural information is deliberately or inadvertently transmitted in a joint activity. As we go about the business of living our daily lives, we ground information in our common ground and construct a social reality that is mutually meaningful and yet only local. If locally grounded information is further generalized to a large collective and disseminated through social networks, repeated and iterative activations of the grounding process maintain the social reality of the collective that we take for granted. Implications of the grounding model of cultural transmission and future research directions are discussed.
67.
An important challenge for automated vehicles (AVs) will be cooperative interaction with surrounding road users such as pedestrians. One solution to compensate for the missing driver-pedestrian interaction might be explicit communication via external human-machine interfaces (eHMIs). Additionally, implicit communication such as a vehicle pitch motion might support AVs when interacting with pedestrians. While previous work explored various explicit and implicit communication cues, these concepts communicated the intention of the AV at a single point in time; empirical findings on two-step communication are so far lacking. In this study, we empirically test the effect of a two-step AV communication concept that uses an implicit cue at a long distance and subsequently provides an implicit or explicit cue at a short distance. We compared its efficiency to single-step communication concepts providing implicit or explicit cues at the shorter distance only. To explore whether the right communication cue is used at the right distance, we analyzed pedestrians' fixations while they approached an AV, using an eye-tracking device.

We conducted a virtual reality study (N = 30) with AV communication concepts that provided an active pitch motion or an eHMI, and compared them with a two-step AV communication concept that provided an additional active pitch motion at a long distance when approaching the pedestrian. Furthermore, we recorded pedestrians' fixation behavior while the AV approached.

Consistent with previous work, single-step AV communication showed a beneficial effect on crossing behavior. Pedestrians initiated their crossing earlier when approaching an AV with an active pitch motion or an eHMI compared to the baseline condition. While the active pitch motion reduced subjective safety feeling, the eHMI increased it. However, the two-step communication concept did not further improve pedestrians' crossing initiation times or their safety feeling.

The pattern of fixation behavior differed as a function of AV distance. When the approaching AV was far away, pedestrians looked exclusively at the environment. During the approach, pedestrians gradually fixated on the bumper and the hood, and only then on the windshield of the AV. Hence, it seems useful to present an AV's intent communication at a certain distance from the pedestrian. These findings underscore the importance of considering pedestrians' fixation behavior when developing communication concepts between AVs and pedestrians.
68.
Automated vehicles are expected to require some form of communication (e.g., via LED strip or display) with vulnerable road users such as pedestrians. However, the passenger inside the automated vehicle could perform gestures or motions which could potentially be interpreted by the pedestrian as contradictory to the outside communication of the car. To explore this conflict, we conducted an online experiment (N = 59) with different message types (no message, intention, command), gestures (no gesture, wave, stop), and user positions (driver, co-driver) and measured the pedestrian's confidence in crossing. Our results show that certain combinations (e.g., the car indicates cross while the user in the driver seat gestures stop) confused the pedestrian, resulting in significantly lower confidence to cross. We further show that designing intention-based external communication led to less confusion and a significantly higher intention to cross.
69.
The number of automated vehicles (AVs) is expected to increase successively in the near future. This development has a considerable impact on the informal communication between AVs and pedestrians: informal communication with the driver will become obsolete during the interaction with AVs. The literature suggests that external human-machine interfaces (eHMIs) might substitute for the communication between drivers and pedestrians. In this study, we additionally test a recently discussed type of communication in terms of artificial vehicle motion, namely an active pitch motion, as an informal communication cue for AVs.

N = 54 participants approached AVs in a virtual inner-city traffic environment. We explored the effect of three communication concepts: an artificial vehicle motion (active pitch motion), an eHMI, and the combination of both. Moreover, vehicle types (sports car, limousine, SUV) were varied. A mixed-method approach was applied to investigate the participants' crossing behavior and subjective safety feeling. Furthermore, eye-movement parameters were recorded as indicators of mental workload.

The results revealed that every communication concept had beneficial effects on crossing behavior. Participants crossed the road earlier when an active pitch motion was present, as this was interpreted as stronger braking. Further, the eHMI and the combination of eHMI and active pitch motion had a positive effect on crossing behavior. The active pitch motion showed no effect on subjective safety feeling, while the eHMI and the combination enhanced pedestrians' safety feeling while crossing. The use of communication resulted in less mental workload, as evidenced by eye-tracking parameters. Variations of vehicle types did not result in significant main effects but revealed interactions between parameters. The active pitch motion showed no learning effect; in contrast, it took participants several trials before the eHMI and the combination condition affected their crossing behavior. To sum up, this study indicates that communication between AVs and pedestrians can benefit from the consideration of vehicle motion.
70.
In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to the other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.