Similar Literature

20 similar documents found.
1.
How do our brains represent distinct objects in consciousness? In order to consciously distinguish between objects, our brains somehow selectively bind together activity patterns of spatially intermingled neurons that simultaneously represent similar and dissimilar features of distinct objects. Gamma-band synchronous oscillations (GSO) of neuroelectrical activity have been hypothesized to be a mechanism used by our brains to generate and bind conscious sensations to represent distinct objects. Most experiments relating GSO to specific features of consciousness have been published only in the last several years. This brief review focuses on a wide variety of experiments in which animals, including humans, discriminate between sensory stimuli and make these discriminations evident in their behavior. Performance of these tasks, in humans, is invariably accompanied by conscious awareness of both stimuli and behavior. Results of these experiments indicate that specific patterns of GSO correlate closely with specific aspects of conscious sensorimotor processing. That is, GSO appear to be closely correlated with neural generation of our most paradigmatic cognitive state: consciousness.

2.
Various causal attribution theories, starting with the covariation model, argue that people use consensus, distinctiveness, and consistency information to causally explain events and behaviors. Yet, the visual presentation of the covariation model in the form of a cube is based on the assumptions that these dimensions generally affect attributions independently, symmetrically, and equally. A Gricean analysis suggests that these assumptions may not generally hold in the case of causal judgments for verbally communicated interpersonal events. We had participants judge the causal role of an actor and a patient in interpersonal events that were described through actor‐verb‐patient sentences under high versus low consensus and distinctiveness (Studies 1, 2, and 3) or without such information (Studies 2 and 3). As predicted by Gricean logic, consensus and distinctiveness effects on causality ratings depended on the target whose causal role participants assessed, on the information about the alternative dimension, and, most consistently, on consensus and distinctiveness being high versus low. Copyright © 2013 John Wiley & Sons, Ltd.

3.
ABSTRACT

This paper aims to demonstrate how an analytical paradigm shift from the General Linear Model (GLM) used in most communication processes and effects research to dynamic systems theory (DST, a nonlinear mathematical theory), fundamentally changes one’s research assumptions and research questions and leads to novel approaches to research design, data collection, and analysis. Concrete examples demonstrating these changes are drawn from the co-viewing literature. In addition, we discuss how data collected and interpreted using the GLM can be re-analyzed and re-interpreted to further inform our understanding of communication behavior when we use the assumptions of dynamic systems theory to derive new predictions.

4.
An ever-increasing proportion of social psychology researchers use various versions of complex correlational models such as path analyses or structural equation models and others to draw causal conclusions from correlational data. Critics of complex correlational models have pointed out that (a) misspecification errors are the rule rather than the exception, (b) one cannot draw causal conclusions from a set of correlations, (c) most researchers fail to adjust their correlations for attenuation due to unreliability, and (d) the measures researchers use may actually be measures of outside variables that are correlated with other variables in one's model. Rather than rehash the debates that go along with these criticisms, the author makes some assumptions that are extremely favorable to the complex correlational modeler in that all of these criticisms are disallowed. Nevertheless, even with these assumptions, the author shows how spurious direct and indirect effects are likely to be created by moderately valid measures when researchers compute complex correlations. The author concludes that until social psychologists are better able to deal with the issue of the validity of their measures, they should not use complex correlational models.

5.
Ingroup bias is one of the most basic intergroup phenomena and has been consistently demonstrated to be increased under conditions of existential threat. In the present research the authors question the omnipresence of ingroup bias under threat and test the assumptions that these effects depend on the content of social identity and group norm salient in a situation. In the first two studies cross-categorization and recategorization manipulations eliminated and even reversed mortality salience effects on bias in relations between English and Scottish students (Study 1) as well as English and French people (Study 2). In the third study the specific normative content of a given social identity (collectivism vs. individualism) was shown to moderate mortality salience effects on ingroup bias. The results of these studies suggest a social identity perspective on terror management processes.

6.
According to dual-process models of memory, recognition is subserved by two processes: recollection and familiarity. Many variants of these models assume that recollection and familiarity make stochastically independent contributions to performance in recognition tasks and that the variance of the familiarity signal is equal for targets and for lures. Here, we challenge these ‘common-currency’ assumptions. Using a model-comparison approach, featuring the Continuous Dual Process (CDP; Wixted & Mickes, 2010) model as the protagonist, we show that when these assumptions are relaxed, the model’s fits to individual participants’ data improve. Furthermore, our analyses reveal that across items, recollection and familiarity show a positive correlation. Interestingly, this across-items correlation was dissociated from an across-participants correlation between the sensitivities of these processes. We also find that the familiarity signal is significantly more variable for targets than for lures. One striking theoretical implication of these findings is that familiarity—rather than recollection, as most models assume—may be the main contributor responsible for one of the most influential findings of recognition memory, that of subunit zROC slopes. Additionally, we show that erroneously adopting the common-currency assumptions introduces severe biases to estimates of recollection and familiarity.

7.
Although longitudinal designs are the only way in which age changes can be directly observed, a recurrent criticism concerns the extent to which retest effects may downwardly bias estimates of true age-related cognitive change. Considerable attention has been given to the problem of retest effects within mixed effects models that include separate parameters for longitudinal change over time (usually specified as a function of age) and for the impact of retest (specified as a function of number of exposures). Because time (i.e., intervals between assessments) and number of exposures are highly correlated (and are perfectly correlated in equal interval designs) in most longitudinal studies, the separation of effects of within-person change from effects of retest gains is only possible given certain assumptions (e.g., age convergence). To the extent that cross-sectional and longitudinal effects of age differ, obtained estimates of aging and retest may not be informative. The current simulation study investigated the recovery of within-person change (i.e., aging) and retest effects from repeated cognitive testing as a function of number of waves, age range at baseline, and size and direction of age-cohort differences on the intercept and age slope in age-based models of change. Significant bias and Type I error rates in the estimated effects of retest were observed when these convergence assumptions were not met. These simulation results suggest that retest effects may not be distinguishable from effects of aging-related change and age-cohort differences in typical long-term traditional longitudinal designs.
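The identifiability problem this abstract describes can be shown with a toy numeric sketch (all parameter values below are hypothetical, not taken from the study): in an equal-interval design, elapsed time and number of prior exposures are perfectly collinear, so different (aging, retest) parameter pairs reproduce exactly the same observed trajectory.

```python
# Toy demonstration of the aging/retest confound in an equal-interval design.
# With assessments every 2 years, elapsed time and prior-exposure count are
# perfectly collinear, so distinct (aging_slope, retest_gain) pairs lying on
# the same line fit the data identically. All values are hypothetical.
waves = 5
interval = 2.0                                   # years between assessments
time = [w * interval for w in range(waves)]      # 0, 2, 4, 6, 8
exposures = [float(w) for w in range(waves)]     # 0, 1, 2, 3, 4  (= time / interval)

aging_slope = -0.5   # hypothetical true decline per year
retest_gain = 0.3    # hypothetical boost per prior exposure
observed = [aging_slope * t + retest_gain * e for t, e in zip(time, exposures)]

# An alternative decomposition absorbs the retest gain into the "aging" slope
# and reproduces the trajectory exactly:
alt_aging = aging_slope + retest_gain / interval
alt = [alt_aging * t for t in time]

assert max(abs(a - b) for a, b in zip(observed, alt)) < 1e-9
print("Identical trajectories from different (aging, retest) parameters")
```

Breaking the collinearity (unequal intervals, age-heterogeneous baselines) is what makes separation possible, which is why the simulation varies number of waves and baseline age range.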

8.
Lee Ellis, Sex Roles, 2011, 64(9-10): 707-722
A theory is proposed that predicts the existence of numerous gender differences in cognition and behavior. The basis for these expectations is the single assumption that females have evolved tendencies to form long term sociosexual alliances with a competent resource provisioner. This assumption is teamed with evidence that males are actually a variant on the female sex with brains masculinized in ways that help them respond to female mating preferences. To orchestrate male responses to female biases in mates, the theory asserts that androgens (male sex hormones) have two main effects on human brain functioning. One is a diminished sensitivity to most environmental stimuli. The other involves shifting cognitive functioning away from the left hemisphere toward a more even and task-specialized hemispheric distribution. Many of the cognitive and behavioral differences between males and females predicted by the theory are described.

9.
Traditional assumptions (e.g., there are traitlike differences in disclosure) predict that people who are generally liked should generally disclose (e.g., individual-level effects). In contrast, dynamic interactional models predict that significant disclosure-liking effects are apt to be a function of mutual influences in particular dyads (e.g., dyadic-level effects). To directly explore these issues and separately examine individual and dyadic effects, 45 sorority women were asked to indicate how much they disclosed to, received disclosure from, and liked each other. Social relations analysis (Kenny & LaVoie, 1984) revealed significant disclosure-liking effects only at the dyadic level, casting doubts on traditional assumptions and supporting a dynamic interaction model of disclosure-liking effects. Implications for personality and interpersonal relationships are discussed.

10.
This study examined the belief-similarity model of prejudice from a sociolinguistic perspective. It was hypothesized that normatively regulated speech styles strongly affect observers' assumptions about the cultural background of the speaker. These linguistically based cultural assumptions were expected to override racial characteristics in controlling intergroup attitudes. Stimulus speech styles were Black English Vernacular (BEV) and Standard English (SE). Speech style was expected to strongly affect prejudicial attitudes, with such effects mediated by assumed cultural similarity. Racial label and speech style were expected to be most salient to ratings of “intimate” behavior and among more ethnocentric subjects. Subjects heard taped statements in either BEV or SE, ostensibly delivered by a White or a Black speaker. Subjects rated the speaker on perceived cultural similarity, general evaluation, perceived aggressiveness, and social distance. Speech style had a substantial main effect on each of these variables. Racial label had a marginally significant effect on evaluation, and interacted with ethnocentrism for perceived similarity and social distance. All effects of speech, race, and ethnocentrism were substantially attenuated or eliminated when similarity was used as a covariate. Thus, speech style had substantial effects on prejudice, as did race within more ethnocentric subjects. Both effects were largely mediated by assumed cultural similarity.

11.
This work compares the sensitivity of five modern analytical techniques for detecting the effects of a partially repeated-measures design when the assumptions of the traditional ANOVA approach are not met, namely: the mixed-model approach fitted with the SAS Proc Mixed module, the Bootstrap-F approach, the Brown-Forsythe multivariate approach, the Welch-James multivariate approach, and the Welch-James multivariate approach with robust estimators. Previously, Livacic-Rojas, Vallejo, and Fernández found that these methods are comparable in terms of their Type I error rates. The results obtained suggest that the mixed-model approach, as well as the Brown-Forsythe and Welch-James approaches, satisfactorily controlled the Type II error rates corresponding to the main effects of the measurement occasions under most of the conditions assessed.

12.
Several authors have argued that the loss of a loved one triggers changes in people's beliefs and assumptions, and that these changes play a role in emotional problems after bereavement. The present study was an attempt to investigate these hypotheses. Thirty students who had been confronted with the death of a parent or sibling, on average nearly 3 years earlier, were compared with 30 nonbereaved matched control subjects on different measures assessing basic assumptions and irrational beliefs as defined in REBT. In line with the notion that bereavement has an impact on people's basic assumptions, results showed that bereaved students had a less positive view of the meaningfulness of the world and the worthiness of the self than their nonbereaved counterparts. Also, in accord with the notion that the tendency to think irrationally is likely to increase after a stressful life event, the bereaved were found to have higher levels of irrational thinking. Furthermore, it was found that the degree to which bereaved individuals endorsed general as well as bereavement-specific irrational beliefs was significantly associated with the intensity of symptoms of traumatic grief. Conversely, none of the basic assumptions was associated with traumatic grief. Beliefs reflecting low frustration tolerance explained most variance in traumatic grief. Clinical and theoretical implications of these findings are discussed.

13.
Abductivists claim that explanatory considerations (e.g., simplicity, parsimony, explanatory breadth, etc.) favor belief in the external world over skeptical hypotheses involving evil demons and brains in vats. After showing how most versions of abductivism succumb fairly easily to obvious and fatal objections, I explain how rationalist versions of abductivism can avoid these difficulties. I then discuss the most pressing challenges facing abductivist appeals to the a priori and offer suggestions on how to overcome them.

14.
Many philosophers assume that philosophical theories about the psychological nature of moral judgment can be confirmed or disconfirmed by the kind of evidence gathered by natural and social scientists (especially experimental psychologists and neuroscientists). I argue that this assumption is mistaken. For the most part, empirical evidence can do no work in these philosophical debates, as the metaphorical heavy-lifting is done by the pre-experimental assumptions that make it possible to apply empirical data to these philosophical debates. For the purpose of this paper, I emphasize two putatively empirically-supported theories about the psychological nature of moral judgment. The first is the Sentimental Rules Account, which is defended by Shaun Nichols. The second is defended by Jesse Prinz, and is a form of sentimentalist moral relativism. I show that both of the arguments in favour of these theories rely on assumptions which would be rejected by their philosophical opponents. Further, these assumptions carry substantive moral commitments and thus cannot be confirmed by further empirical investigation. Because of this shared methodological assumption, I argue that a certain form of empirical moral psychology rests on a mistake.

15.
Neuroimaging research has been at the forefront of concerns regarding the failure of experimental findings to replicate. In the study of brain-behavior relationships, past failures to find replicable and robust effects have been attributed to methodological shortcomings. Methodological rigor is important, but there are other overlooked possibilities: most published studies share three foundational assumptions, often implicitly, that may be faulty. In this paper, we consider the empirical evidence from human brain imaging and the study of non-human animals that calls each foundational assumption into question. We then consider the opportunities for a robust science of brain-behavior relationships that await if scientists ground their research efforts in revised assumptions supported by current empirical evidence.

16.
Likelihood surface methods for geographic offender profiling rely on several assumptions regarding the underlying location choice mechanism of an offender. We propose an ex ante test for checking whether a given set of crime locations is compatible with two necessary assumptions: circular symmetry and distance decay. The proposed (SDD) test compares the observed inter-point distances of a given series of crimes with a theoretical distribution function governed by these assumptions, using a Monte Carlo simulation procedure for approximating that distribution function. We apply the SDD test to data on serial burglary from both the UK and the Netherlands. In most cases, the assumption of an underlying symmetric distance decay function has to be rejected. Copyright © 2011 John Wiley & Sons, Ltd.
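The Monte Carlo logic the abstract describes can be sketched roughly as follows (the statistic, the exponential decay model, and all parameters here are assumptions for illustration, not the paper's actual SDD test):

```python
# Hedged sketch of a Monte Carlo compatibility test in the spirit of the SDD
# test: simulate crime sites from a circularly symmetric distance-decay model
# around an anchor point, then compare the observed mean inter-point distance
# against its simulated null distribution. Model and statistic are assumed.
import math
import random

def mean_interpoint_distance(pts):
    """Average Euclidean distance over all unordered point pairs."""
    n = len(pts)
    total = sum(math.dist(pts[i], pts[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

def simulate_series(n, decay, rng):
    """n sites: uniform direction (circular symmetry), exponential radius (decay)."""
    pts = []
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        r = rng.expovariate(decay)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def sdd_style_pvalue(observed_pts, decay=1.0, n_sim=2000, seed=1):
    """Two-sided Monte Carlo p-value for the observed mean inter-point distance."""
    rng = random.Random(seed)
    obs = mean_interpoint_distance(observed_pts)
    sims = [mean_interpoint_distance(simulate_series(len(observed_pts), decay, rng))
            for _ in range(n_sim)]
    lo = sum(s <= obs for s in sims)
    hi = sum(s >= obs for s in sims)
    return min(1.0, 2.0 * (min(lo, hi) + 1) / (n_sim + 1))
```

A series far more dispersed than the assumed decay model implies yields a small p-value, flagging incompatibility with the circular-symmetry and distance-decay assumptions, which mirrors the paper's finding that these assumptions are usually rejected for serial burglary.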

17.
Despite the fact that the Dao De Jing 道德經 is one of the most frequently translated texts in history, most of these translations share certain unexamined and problematic assumptions which often make it seem as though the text is irrational, incoherent, and full of non sequiturs. Frequently, these assumptions involve the imposition of historically anachronistic, linguistically unsound, and philosophically problematic categories and attitudes onto the text. One of the main causes of the problem is the persistent tendency on the part of most translators to read the first line of the text as referring to or implying the existence of some kind of "eternal Dao." These are what I term "ontological" readings, as opposed to the "process" reading I will be articulating here.

18.
Mediation analysis uses measures of hypothesized mediating variables to test theory for how a treatment achieves effects on outcomes and to improve subsequent treatments by identifying the most efficient treatment components. Most current mediation analysis methods rely on untested distributional and functional form assumptions for valid conclusions, especially regarding the relation between the mediator and outcome variables. Propensity score methods offer an alternative whereby the propensity score is used to compare individuals in the treatment and control groups who would have had the same value of the mediator had they been assigned to the same treatment condition. This article describes the use of propensity score weighting for mediation with a focus on explicating the underlying assumptions. Propensity scores have the potential to offer an alternative estimation procedure for mediation analysis with alternative assumptions from those of standard mediation analysis. The methods are illustrated investigating the mediational effects of an intervention to improve sense of mastery to reduce depression using data from the Job Search Intervention Study (JOBS II). We find significant treatment effects for those individuals who would have improved sense of mastery when in the treatment condition but no effects for those who would not have improved sense of mastery under treatment.
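The core weighting idea can be sketched on simulated data (this toy example uses a known propensity and an invented baseline covariate, not the JOBS II data or the article's exact mediation estimator):

```python
# Minimal sketch of inverse-propensity weighting: estimate the probability of
# treatment given a covariate, then weight outcomes by inverse propensity so
# the comparison behaves as if treatment were independent of the covariate.
# Data, effect sizes, and the "known" propensity are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
baseline = rng.normal(size=n)                 # pre-treatment covariate (confounder)
p_treat = 1.0 / (1.0 + np.exp(-baseline))     # true propensity (assumed known here)
treated = rng.random(n) < p_treat
# Outcome depends on treatment (true effect = 1.5) and on the confounder:
outcome = 1.5 * treated + 0.8 * baseline + rng.normal(size=n)

# Naive group difference is confounded by the baseline covariate:
naive = outcome[treated].mean() - outcome[~treated].mean()

# Inverse-propensity (Hajek) weighted means remove the confounding:
w1 = 1.0 / p_treat
w0 = 1.0 / (1.0 - p_treat)
ipw = (np.sum(w1[treated] * outcome[treated]) / np.sum(w1[treated])
       - np.sum(w0[~treated] * outcome[~treated]) / np.sum(w0[~treated]))

print(f"naive={naive:.2f}, ipw={ipw:.2f}")  # ipw should land near the true 1.5
```

In practice the propensity must itself be estimated (e.g., by logistic regression), and the mediation application described in the abstract weights on predicted mediator response rather than a simple baseline covariate; the sketch only conveys the re-weighting principle.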

19.
In my reply to Likierman and Ellman, I remark on the difficulty of responding accurately to written case material. I underline some problems resulting from the necessarily abbreviated information contained in even the most carefully constructed and detailed presentations. Using the responses to my Richard paper, I attempt to show that in the absence of complete information, respondents tend to substitute their own imaginative assumptions for missed or incomplete information. They also tend to borrow from preferred theory to organize these assumptions as well as the conclusions that follow. Like unmarked wind currents, these tendencies can carry discussions far off the author's intended course.

20.
Null hypothesis significance testing (NHST) is the researcher's workhorse for making inductive inferences. This method has often been challenged, has occasionally been defended, and has persistently been used through most of the history of scientific psychology. This article reviews both the criticisms of NHST and the arguments brought to its defense. The review shows that the criticisms address the logical validity of inferences arising from NHST, whereas the defenses stress the pragmatic value of these inferences. The author suggests that both critics and apologists implicitly rely on Bayesian assumptions. When these assumptions are made explicit, the primary challenge for NHST--and any system of induction--can be confronted. The challenge is to find a solution to the question of replicability.
