Similar Documents
20 similar documents found (search time: 0 ms)
1.
Shannon Spaulding, Synthese, 2018, 195(9): 4009-4030
Disagreeing with others about how to interpret a social interaction is a common occurrence. We often find ourselves offering divergent interpretations of others’ motives, intentions, beliefs, and emotions. Remarkably, philosophical accounts of how we understand others do not explain, or even attempt to explain, such disagreements. I argue these disparities in social interpretation stem, in large part, from the effect of social categorization and our goals in social interactions, phenomena long studied by social psychologists. I argue we ought to expand our accounts of how we understand others in order to accommodate these data and explain how such profound disagreements arise amongst informed, rational, well-meaning individuals.

2.
3.
Twelve‐month‐olds realize that when an agent cannot see an object, her incomplete perceptions still guide her goal‐directed actions. What would happen if the agent had incomplete perceptions because she could see only one part of the object, for example one side of a screen? In the present research, 16‐month‐olds were first shown an agent who always pointed to a red object, as opposed to a black or a yellow object, suggesting that she preferred red over the other colours. Next, two screens were introduced while the agent was absent. The screens were (1) red or green on both sides; (2) red on the front (infants’ side) but green on the back (the agent’s side) or vice versa; or (3) only coloured red or green on the front. During test, the agent, who could see only the back of the screens, pointed to one of the two screens. The results revealed that while infants expected the agent to continue acting on her colour preference and point to the red rather than the green screen during test, they did so in accord with the agent’s perception of the screens, rather than their own perceptions: they expected the agent to point to the red screen in (1), but to the green‐front screen in (2), and they had no prediction of which screen the agent should point to in (3). The implications of the present findings for early psychological reasoning research are discussed.

4.
The present study explored African American (n = 16) and European American (n = 19) college women's ideal body size perceptions for their own and the other ethnic group along with reasons behind their selections. Respondents completed an ethnically-neutral figure rating scale and then participated in ethnically-homogenous focus groups. European Americans mostly preferred a curvy-thin or athletic ideal body while most African American students resisted notions of a singular ideal body. European Americans suggested that African Americans’ larger ideal body sizes were based on greater body acceptance and the preferences of African American men. African Americans used extreme terms when discussing their perceptions of European Americans’ thin idealization, celebrity role models, and weight management behaviors. African Americans’ perceptions of European Americans’ body dissatisfaction were also attributed to the frequent fat talk they engaged in. Implications for promoting the psychosocial well-being of ethnically-diverse emerging adult females attending college are discussed.

5.
This research investigates the effect of members’ cognitive styles on team processes that affect errors in execution tasks. In two laboratory studies, we investigated how a team’s composition (members’ cognitive styles related to object and spatial visualization) affects the team’s strategic focus and strategic consensus, and how those affect the team’s commission of errors. Study 1, conducted with 70 dyads performing a navigation and identification task, established that teams high in spatial visualization are more process-focused than teams high in object visualization. Process focus, which pertains to a team’s attention to the details of conducting a task, is associated with fewer errors. Study 2, conducted with 64 teams performing a building task, established that heterogeneity in cognitive style is negatively associated with the formation of a strategic consensus, which has a direct and mediating relationship with errors.

6.
Our epistemology can shape the way we think about perception and experience. Speaking as an epistemologist, I should say that I don't necessarily think that this is a good thing. If we think that we need perceptual evidence to have perceptual knowledge or perceptual justification, we will naturally feel some pressure to think of experience as a source of reasons or evidence. In trying to explain how experience can provide us with evidence, we run the risk of either adopting a conception of evidence according to which our evidence isn't very much like the objects of our beliefs that figure in reasoning (e.g., by identifying our evidence with experiences or sensations) or the risk of accepting a picture of experience according to which our perceptions and perceptual experiences are quite similar to beliefs in terms of their objects and their representational powers. But I think we have good independent reasons to resist identifying our evidence with things that don't figure in our reasoning as premises and I think we have good independent reason to doubt that experience is sufficiently belief‐like to provide us with something premise‐like that can figure in reasoning. We should press pause. We shouldn't let questionable epistemological assumptions tell us how to do philosophy of mind. I don't think that we have good reason to think that we need the evidence of the senses to explain how perceptual justification or knowledge is possible. Part of my scepticism derives from the fact that I think we can have kinds of knowledge where the relevant knowledge is not evidentially grounded. Part of my scepticism derives from the fact that there don't seem to be many direct arguments for thinking that justification and knowledge always require evidential support. In this paper, I shall consider the three arguments I've found for thinking that justification and knowledge do always require evidential support and explain why I don't find them convincing. I think that we can explain perceptual justification, rationality, and defeat without assuming that our experiences provide us with evidence. In the end, I think we can partially vindicate Davidson's (notorious) suggestion that our beliefs, not experiences, provide us with reasons for forming further beliefs. This idea turns out to be compatible with foundationalism once we understand that foundational status can come from something other than evidential support.

7.
Animal Cognition - The use of 2-dimensional representations (e.g. photographs or digital images) of real-life physical objects has been an important tool in studies of animal cognition. Horses are...

8.
This article discusses the concepts of literacy, theological literacy and literacy practices as a resource for understanding how tradition and faith/belief are intertwined. Against the background of recent elaborations of literacy within the field of literature and educational studies, the suggestion is made that “tradition” can be understood as a semiotic domain, i.e. a set of practices that recruits one or more modalities to communicate distinctive types of meanings. Theological literacy is accordingly defined as the ability to interpret, develop and communicate a theological semiotic domain. Literacy helps us to see that Christian faith/belief cannot be taught and acquired once and for all by learning a doctrinal content and a specific religious practice. At the same time, however, literacy nevertheless stresses the importance of knowing doctrinal content and religious practice, seeing that literacy is part of the process of shaping and construing faith and tradition.

9.
The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.

10.
Conveying complex mental scenarios is at the heart of human language. Advances in cognitive linguistics suggest this is mediated by an ability to activate cognitive systems involved in non-linguistic processing of spatial information. In this fMRI-study, we compare sentences with a concrete spatial meaning to sentences with an abstract meaning. Using this contrast, we demonstrate that sentence meaning involving motion in a concrete topographical context, whether linked to animate or inanimate subject nouns, yields more activation in a bilateral posterior network, including fusiform/parahippocampal and retrosplenial regions, and the temporal-occipital-parietal junction. These areas have previously been shown to be involved in mental navigation and spatial memory tasks. Sentences with an abstract setting activate an extended, largely left-lateralised network in the anterior temporal, and inferior and superior prefrontal cortices, previously found activated by comprehension of complex semantics such as narratives. These findings support a model of language, where the understanding of spatial semantic content emerges from the recruitment of brain regions involved in non-linguistic spatial processing.

11.
Previous research has shown differences in monolingual and bilingual communication. We explored whether monolingual and bilingual pre‐schoolers (N = 80) differ in their ability to understand others' iconic gestures (gesture perception) and produce intelligible iconic gestures themselves (gesture production) and how these two abilities are related to differences in parental iconic gesture frequency. In a gesture perception task, the experimenter replaced the last word of every sentence with an iconic gesture. The child was then asked to choose one of four pictures that matched the gesture as well as the sentence. In a gesture production task, children were asked to indicate ‘with their hands’ to a deaf puppet which objects to select. Finally, parental gesture frequency was measured while parents answered three different questions. In the iconic gesture perception task, monolingual and bilingual children did not differ. In contrast, bilinguals produced more intelligible gestures than their monolingual peers. Finally, bilingual children's parents gestured more while they spoke than monolingual children's parents. We suggest that bilinguals' heightened sensitivity to their interaction partner supports their ability to produce intelligible gestures and results in a bilingual advantage in iconic gesture production.

12.
Persaud N, Consciousness and Cognition, 2008, 17(4): 1375; discussion 1376-1377

13.
14.
15.
The present study investigated whether infants learn the effects of other persons' actions as they do for their own actions, and whether infants transfer observed action-effect relations to their own actions. Nine-, 12-, 15- and 18-month-olds explored an object that allowed two actions, and that produced a certain salient effect after each action. In a self-exploration group, infants explored the object directly, whereas in two observation groups, infants first watched an adult model acting on the object and obtaining a certain effect with each action before exploring the object by themselves. In one observation group, the infants' actions were followed by the same effects as the model's actions, but in the other group, the action-effect mapping for the infant was reversed relative to that of the model. The results showed that the observation of the model had an impact on the infants' exploration behavior from 12 months, but not earlier, and that the specific relations between observed actions and effects were acquired by 15 months. Thus, around their first birthday infants learn the effects of other persons' actions by observation, and they transfer the observed action-effect relations to their own actions in the second year of life.

16.
Two experiments tested the predictions from Bjork and Murray’s (1977) extension of Estes’ (1972, 1974) interactive channels model that repetition of a target within a display should, under certain conditions, impair or slow processing. These predictions were contrasted with those of the continuous flow model (Eriksen & Schultz, 1979) that, under the same conditions, repetitions should not impair processing and might possibly facilitate it. Experiment 1 evaluated the relative effects of feature similarity, variable target-noise spacing, and perceptual segregation in a response competition paradigm. In general, results favored the continuous flow conception and competition among internal recognition responses, and no evidence was found for impaired performance due to target repetitions. However, questions concerning possible facilitation arising from redundancy led to Experiment 2. In that experiment, trials were blocked by spacing or noise type. Again, target redundancy did not impair performance relative to the single target control. However, no facilitation effects were found.

17.
Studies examining visual abilities in individuals with early auditory deprivation have reached mixed conclusions, with some finding that congenital auditory deprivation and/or lifelong use of a visuospatial language improves specific visual skills and others failing to find substantial differences. A more consistent finding is enhanced peripheral vision and an increased ability to efficiently distribute attention to the visual periphery following auditory deprivation. However, the extent to which this applies to visual skills in general or to certain conspicuous stimuli, such as faces, in particular is unknown. We examined the perceptual resolution of peripheral vision in the deaf, testing various facial attributes typically associated with high-resolution scrutiny of foveal information processing. We compared performance in face-identification tasks to performance using non-face control stimuli. Although we found no enhanced perceptual representations in face identification, gender categorization, or eye gaze direction recognition tasks, fearful expressions showed greater resilience than happy or neutral ones to increasing eccentricities. In the absence of an alerting sound, the visual system of auditory-deprived individuals may develop greater sensitivity to specific conspicuous stimuli as a compensatory mechanism. The results also suggest neural reorganization in the deaf, reflected in an atypical right-visual-field advantage in face-identification tasks.

18.
Visual object perception is usually studied by presenting one object at a time at the fovea. However, the world around us is composed of multiple objects. The way our visual system deals with this complexity has remained controversial in the literature. Some models claim that the ventral pathway, a set of visual cortical areas responsible for object recognition, can process only one or very few objects at a time without ambiguity. Other models argue in favor of a massively parallel processing of objects in a scene. Recent experiments in monkeys have provided important data about this issue. The ventral pathway seems to be able to perform complex analyses on several objects simultaneously, but only during a short time period. Subsequently, only one or very few objects are explicitly selected and consciously perceived. Here, we survey the implications of these new findings for our understanding of object processing.

19.
Most psychology experiments start with a stimulus, and, for an increasing number of studies, the stimulus is presented on a computer monitor. Usually, that monitor is a CRT, although other technologies are becoming available. The monitor is a sampling device; the sampling occurs in four dimensions: spatial, temporal, luminance, and chromatic. This paper reviews some of the important issues in each of these sampling dimensions and gives some recommendations for how to use the monitor effectively to present the stimulus. In general, the position is taken that to understand what the stimulus actually is requires a clear specification of the physical properties of the stimulus, since the actual experience of the stimulus is determined both by the physical variables and by the psychophysical variables of how the stimulus is handled by our sensory systems.
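
As an illustration of the temporal and luminance sampling this abstract describes, the short Python sketch below shows how a requested stimulus duration is quantized into whole frames by the refresh rate, and how a target luminance maps onto an 8-bit DAC value under an assumed power-law (gamma) monitor response. The refresh rates, the 100 cd/m2 maximum luminance, and the gamma of 2.2 are illustrative assumptions, not values taken from the paper.

# Minimal sketch of two sampling dimensions: temporal quantization by the
# refresh rate, and luminance quantization by an 8-bit DAC under an assumed
# power-law (gamma) response. All numeric values are illustrative assumptions.

def frames_for_duration(duration_ms: float, refresh_hz: float = 60.0) -> int:
    """Round a requested stimulus duration to the nearest whole number of frames."""
    frame_ms = 1000.0 / refresh_hz
    return round(duration_ms / frame_ms)

def dac_for_luminance(target_cd_m2: float,
                      max_cd_m2: float = 100.0,
                      gamma: float = 2.2,
                      levels: int = 256) -> int:
    """Pick the 8-bit DAC value whose assumed gamma-law output is closest to the target."""
    # Assumed monitor model: L(v) = L_max * (v / (levels - 1)) ** gamma
    v = (target_cd_m2 / max_cd_m2) ** (1.0 / gamma) * (levels - 1)
    return max(0, min(levels - 1, round(v)))

if __name__ == "__main__":
    # A "50 ms" stimulus is 3 frames (exactly 50.0 ms) at 60 Hz,
    # but 4 frames (about 53.3 ms) at 75 Hz.
    print(frames_for_duration(50, 60), frames_for_duration(50, 75))
    # Half of maximum luminance does not correspond to DAC value 128 on a gamma-2.2 display.
    print(dac_for_luminance(50.0))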

20.
This paper takes up the question of whether we can visually represent something as having semantic value. Something has semantic value if it represents some property, thing or concept. An argument is offered that we can represent semantic value based on a variety of number-color synesthesia. This argument is shown to withstand several objections that can be lodged against the popular arguments from phenomenal contrast and from the mundane example of reading.
