Similar documents
20 similar documents found (search time: 0 ms)
1.
Shannon Spaulding, Synthese 2018, 195(9): 4009–4030
Disagreeing with others about how to interpret a social interaction is a common occurrence. We often find ourselves offering divergent interpretations of others’ motives, intentions, beliefs, and emotions. Remarkably, philosophical accounts of how we understand others do not explain, or even attempt to explain, such disagreements. I argue that these disparities in social interpretation stem, in large part, from the effect of social categorization and our goals in social interactions, phenomena long studied by social psychologists. I argue that we ought to expand our accounts of how we understand others in order to accommodate these data and explain how such profound disagreements arise amongst informed, rational, well-meaning individuals.

2.
Interacting with other people is a ubiquitous part of daily life. A complex set of processes enables our successful interactions with others. The present research was conducted to investigate how the processing of visual stimuli may be affected by the presence and the hand posture of a co-actor. Experiments conducted with participants acting alone have revealed that the distance from the stimulus to the hand of a participant can alter visual processing. In the main experiment of the present paper, we asked whether this posture-related source of visual bias persists when participants share the task with another person. The effect of personal and co-actor hand-proximity on visual processing was assessed through object-specific benefits to visual recognition in a task performed by two co-actors. Pairs of participants completed a joint visual recognition task and, across different blocks of trials, the position of their own hands and of their partner's hands varied relative to the stimuli. In contrast to control studies conducted with participants acting alone, an object-specific recognition benefit was found across all hand location conditions. These data suggest that visual processing is, in some cases, sensitive to the posture of a co-actor.

3.
The present study explored African American (n = 16) and European American (n = 19) college women's ideal body size perceptions for their own and the other ethnic group along with reasons behind their selections. Respondents completed an ethnically-neutral figure rating scale and then participated in ethnically-homogenous focus groups. European Americans mostly preferred a curvy-thin or athletic ideal body while most African American students resisted notions of a singular ideal body. European Americans suggested that African Americans’ larger ideal body sizes were based on greater body acceptance and the preferences of African American men. African Americans used extreme terms when discussing their perceptions of European Americans’ thin idealization, celebrity role models, and weight management behaviors. African Americans’ perceptions of European Americans’ body dissatisfaction were also attributed to the frequent fat talk they engaged in. Implications for promoting the psychosocial well-being of ethnically-diverse emerging adult females attending college are discussed.

4.
Animal Cognition - The use of 2-dimensional representations (e.g. photographs or digital images) of real-life physical objects has been an important tool in studies of animal cognition. Horses are...

5.
Twelve‐month‐olds realize that when an agent cannot see an object, her incomplete perceptions still guide her goal‐directed actions. What would happen if the agent had incomplete perceptions because she could see only one part of the object, for example one side of a screen? In the present research, 16‐month‐olds were first shown an agent who always pointed to a red object, as opposed to a black or a yellow object, suggesting that she preferred red over the other colours. Next, two screens were introduced while the agent was absent. The screens were (1) red or green on both sides; (2) red on the front (infants’ side) but green on the back (the agent’s side) or vice versa; or (3) only coloured red or green on the front. During test, the agent, who could see only the back of the screens, pointed to one of the two screens. The results revealed that while infants expected the agent to continue acting on her colour preference and point to the red rather than the green screen during test, they did so in accord with the agent’s perception of the screens, rather than their own perceptions: they expected the agent to point to the red screen in (1), but to the green‐front screen in (2), and they had no prediction of which screen the agent should point to in (3). The implications of the present findings for early psychological reasoning research are discussed.

6.
This research investigates the effect of members’ cognitive styles on team processes that affect errors in execution tasks. In two laboratory studies, we investigated how a team’s composition (members’ cognitive styles related to object and spatial visualization) affects the team’s strategic focus and strategic consensus, and how those affect the team’s commission of errors. Study 1, conducted with 70 dyads performing a navigation and identification task, established that teams high in spatial visualization are more process-focused than teams high in object visualization. Process focus, which pertains to a team’s attention to the details of conducting a task, is associated with fewer errors. Study 2, conducted with 64 teams performing a building task, established that heterogeneity in cognitive style is negatively associated with the formation of a strategic consensus, which has a direct and mediating relationship with errors.

7.
Our epistemology can shape the way we think about perception and experience. Speaking as an epistemologist, I should say that I don't necessarily think that this is a good thing. If we think that we need perceptual evidence to have perceptual knowledge or perceptual justification, we will naturally feel some pressure to think of experience as a source of reasons or evidence. In trying to explain how experience can provide us with evidence, we run the risk of either adopting a conception of evidence according to which our evidence isn't very much like the objects of our beliefs that figure in reasoning (e.g., by identifying our evidence with experiences or sensations) or the risk of accepting a picture of experience according to which our perceptions and perceptual experiences are quite similar to beliefs in terms of their objects and their representational powers. But I think we have good independent reasons to resist identifying our evidence with things that don't figure in our reasoning as premises, and I think we have good independent reason to doubt that experience is sufficiently belief‐like to provide us with something premise‐like that can figure in reasoning. We should press pause. We shouldn't let questionable epistemological assumptions tell us how to do philosophy of mind. I don't think that we have good reason to think that we need the evidence of the senses to explain how perceptual justification or knowledge is possible. Part of my scepticism derives from the fact that I think we can have kinds of knowledge where the relevant knowledge is not evidentially grounded. Part of my scepticism derives from the fact that there don't seem to be many direct arguments for thinking that justification and knowledge always require evidential support. In this paper, I shall consider the three arguments I've found for thinking that justification and knowledge do always require evidential support and explain why I don't find them convincing.
I think that we can explain perceptual justification, rationality, and defeat without assuming that our experiences provide us with evidence. In the end, I think we can partially vindicate Davidson's (notorious) suggestion that our beliefs, not experiences, provide us with reasons for forming further beliefs. This idea turns out to be compatible with foundationalism once we understand that foundational status can come from something other than evidential support.

8.
The present study investigated whether infants learn the effects of other persons' actions as they do for their own actions, and whether infants transfer observed action-effect relations to their own actions. Nine-, 12-, 15- and 18-month-olds explored an object that allowed two actions, and that produced a certain salient effect after each action. In a self-exploration group, infants explored the object directly, whereas in two observation groups, infants first watched an adult model acting on the object and obtaining a certain effect with each action before exploring the objects by themselves. In one observation group, the infants' actions were followed by the same effects as the model's actions, but in the other group, the action-effect mapping for the infant was reversed to that of the model. The results showed that the observation of the model had an impact on the infants' exploration behavior from 12 months, but not earlier, and that the specific relations between observed actions and effects were acquired by 15 months. Thus, around their first birthday infants learn the effects of other persons' actions by observation, and they transfer the observed action-effect relations to their own actions in the second year of life.

9.
This article discusses the concepts of literacy, theological literacy and literacy practices as a resource for understanding how tradition and faith/belief are intertwined. Against the background of recent elaborations of literacy within the field of literature and educational studies, the suggestion is made that “tradition” can be understood as a semiotic domain, i.e. a set of practices that recruits one or more modalities to communicate distinctive types of meanings. Theological literacy is accordingly defined as the ability to interpret, develop and communicate a theological semiotic domain. Literacy helps us to see that Christian faith/belief cannot be taught and acquired once and for all by learning a doctrinal content and a specific religious practice. At the same time, however, literacy nevertheless stresses the importance of knowing doctrinal content and religious practice, seeing that literacy is part of the process of shaping and construing faith and tradition.

10.
The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.

11.
Previous research has shown differences in monolingual and bilingual communication. We explored whether monolingual and bilingual pre‐schoolers (N = 80) differ in their ability to understand others' iconic gestures (gesture perception) and produce intelligible iconic gestures themselves (gesture production) and how these two abilities are related to differences in parental iconic gesture frequency. In a gesture perception task, the experimenter replaced the last word of every sentence with an iconic gesture. The child was then asked to choose one of four pictures that matched the gesture as well as the sentence. In a gesture production task, children were asked to indicate ‘with their hands’ to a deaf puppet which objects to select. Finally, parental gesture frequency was measured while parents answered three different questions. In the iconic gesture perception task, monolingual and bilingual children did not differ. In contrast, bilinguals produced more intelligible gestures than their monolingual peers. Finally, bilingual children's parents gestured more while they spoke than monolingual children's parents. We suggest that bilinguals' heightened sensitivity to their interaction partner supports their ability to produce intelligible gestures and results in a bilingual advantage in iconic gesture production.

12.
Conveying complex mental scenarios is at the heart of human language. Advances in cognitive linguistics suggest this is mediated by an ability to activate cognitive systems involved in non-linguistic processing of spatial information. In this fMRI study, we compare sentences with a concrete spatial meaning to sentences with an abstract meaning. Using this contrast, we demonstrate that sentence meaning involving motion in a concrete topographical context, whether linked to animate or inanimate subject nouns, yields more activation in a bilateral posterior network, including fusiform/parahippocampal and retrosplenial regions, and the temporal-occipital-parietal junction. These areas have previously been shown to be involved in mental navigation and spatial memory tasks. Sentences with an abstract setting activate an extended largely left-lateralised network in the anterior temporal, and inferior and superior prefrontal cortices, previously found activated by comprehension of complex semantics such as narratives. These findings support a model of language where the understanding of spatial semantic content emerges from the recruitment of brain regions involved in non-linguistic spatial processing.

13.
Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain–behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = –.51) and memory (r = –.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

14.
Persaud N, Consciousness and Cognition 2008, 17(4): 1375; discussion 1376–1377

15.
Acta Psychologica 2013, 143(3): 261–268
We investigated the influence of dimensional set on report of object feature information using an immediate memory probe task. Participants viewed displays containing up to 36 coloured geometric shapes which were presented for several hundred milliseconds before one item was abruptly occluded by a probe. A cue presented simultaneously with the probe instructed participants to report either the colour or the shape of the probe item. A dimensional set towards the colour or shape of the presented items was induced by manipulating task probability — the relative probability with which the two feature dimensions required report. This was done across two participant groups: one group was given trials with a higher report probability of colour, the other a higher report probability of shape. Two experiments showed that features were reported most accurately when they were of high task probability, though in both cases the effect was largely driven by the colour dimension. Importantly, the task probability effect did not interact with display set size. This is interpreted as tentative evidence that this manipulation influences feature processing in a global manner and at a stage prior to visual short-term memory.

16.
Observers are remarkably consistent in attributing particular emotions to particular facial expressions, at least in Western societies. Here, we suggest that this consistency is an instance of the fundamental attribution error. We therefore hypothesized that a small variation in the procedure of the recognition study, which emphasizes situational information, would change the participants' attributions. In two studies, participants were asked to judge whether a prototypical "emotional facial expression" was more plausibly associated with a social-communicative situation (one involving communication to another person) or with an equally emotional but nonsocial situation. Participants were found more likely to associate each facial display with the social than with the nonsocial situation. This result was found across all emotions presented (happiness, fear, disgust, anger, and sadness) and for both Spanish and Canadian participants.

17.
To grasp an object the digits need to be placed at suitable positions on its surface. The selection of such grasping points depends on several factors. Here the authors examined whether being able to see 1 of the selected grasping points is such a factor. Subjects grasped large cylinders or oriented blocks that would normally be grasped with the thumb continuously visible and the final part of the index finger's trajectory occluded by the object in question. An opaque screen that hid the thumb's usual grasping point was used to examine whether individuals would choose a grip that was oriented differently to maintain vision of the thumb's grasping point. A transparent screen was used as a control. Occluding the thumb's grasping point made subjects move more carefully (adopting a larger grip aperture) and choose a slightly different grip orientation. However, the change in grip orientation was much too small to keep the thumb visible. The authors conclude that humans do not particularly aim for visible grasping points.

18.
Most psychology experiments start with a stimulus, and, for an increasing number of studies, the stimulus is presented on a computer monitor. Usually, that monitor is a CRT, although other technologies are becoming available. The monitor is a sampling device; the sampling occurs in four dimensions: spatial, temporal, luminance, and chromatic. This paper reviews some of the important issues in each of these sampling dimensions and gives some recommendations for how to use the monitor effectively to present the stimulus. In general, the position is taken that to understand what the stimulus actually is requires a clear specification of the physical properties of the stimulus, since the actual experience of the stimulus is determined both by the physical variables and by the psychophysical variables of how the stimulus is handled by our sensory systems.

19.
Shame, embarrassment, compassion, and contempt have been considered candidates for the status of basic emotions on the grounds that each has a recognisable facial expression. In two studies (N=88, N=60) on recognition of these four facial expressions, observers showed moderate agreement on the predicted emotion when assessed with forced choice (58%; 42%), but low agreement when assessed with free labelling (18%; 16%). Thus, even though some observers endorsed the predicted emotion when it was presented in a list, over 80% spontaneously interpreted these faces in a way other than the predicted emotion.

20.
Although apes understand others’ goals and perceptions, little is known about their understanding of others’ emotional expressions. We conducted three studies following the general paradigm of Repacholi and colleagues (1997, 1998). In Study 1, a human reacted emotionally to the hidden contents of two boxes, after which the ape was allowed to choose one of the boxes. Apes distinguished between two of the expressed emotions (happiness and disgust) by choosing appropriately. In Studies 2 and 3, a human reacted either positively or negatively to the hidden contents of two containers; then the ape saw him eating something. When given a choice, apes correctly chose the container to which the human had reacted negatively, based on the inference that the human had just eaten the food to which he had reacted positively – and so the other container still had food left in it. These findings suggest that great apes understand both the directedness and the valence of some human emotional expressions, and can use this understanding to infer desires.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号