Similar Articles
20 similar articles found (search time: 31 ms)
1.
We sketch a framework for building a unified science of cognition. This unification is achieved by showing how functional analyses of cognitive capacities can be integrated with the multilevel mechanistic explanations of neural systems. The core idea is that functional analyses are sketches of mechanisms, in which some structural aspects of a mechanistic explanation are omitted. Once the missing aspects are filled in, a functional analysis turns into a full-blown mechanistic explanation. By this process, functional analyses are seamlessly integrated with multilevel mechanistic explanations.

2.
We investigate the relationship between two approaches to modeling physical systems. On the first approach, simplifying assumptions are made about the level of detail we choose to represent in a computational simulation, with an eye toward tractability. On the second approach, simpler analogue physical systems are considered that have more or less well-defined connections to systems of interest that are themselves too difficult to probe experimentally. Our interest here is in the connections between the artifacts of modeling that appear in these two approaches. We begin by outlining an important respect in which the two are essentially dissimilar and then propose a method whereby overcoming that dissimilarity by hand results in usefully analogous behavior. We claim that progress can be made if we think of artifacts as clues to the projectible predicates proper to the models themselves. Our degree of control over the connection between interesting analogue physical systems and their targets arises from determining the projectible predicates in the analogue system through a combination of theory and experiment. To obtain a similar degree of control over the connection between large-scale, distributed simulations of complex systems and their targets, we must similarly determine the projectible predicates of the simulations themselves. In general, theory will be too intractable to be of use, and so we advocate an experimental program for determining these predicates.
"the object of the natural history which I propose is ... to give light to the discovery of causes and supply a suckling philosophy with its first food." (Francis Bacon, The Great Instauration)

3.
If we keep on doing what we have been doing, we are going to keep on getting what we have been getting. Concerns about the gap between science and practice are longstanding. New approaches are needed to supplement existing research-to-practice models and the evolving community-centered models for bridging this gap. In this article, we present the Interactive Systems Framework for Dissemination and Implementation (ISF), which draws on aspects of both research-to-practice models and community-centered models. The framework comprises three systems: the Prevention Synthesis and Translation System (which distills information about innovations and translates it into user-friendly formats); the Prevention Support System (which provides training, technical assistance, or other support to users in the field); and the Prevention Delivery System (which implements innovations in the world of practice). The framework is intended to be used by different types of stakeholders (e.g., funders, practitioners, researchers), who can use it to see prevention not only through the lens of their own needs and perspectives, but also as a way to better understand the needs of other stakeholders and systems. It provides a heuristic for understanding the needs, barriers, and resources of the different systems, as well as a structure for summarizing existing research and for illuminating priority areas for new research and action. The findings and conclusions in this report are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention.

4.
A widespread assumption in the contemporary discussion of probabilistic models of cognition, often attributed to the Bayesian program, is that inference is optimal when the observer's priors match the true priors in the world—the actual "statistics of the environment." But in fact the idea of a "true" prior plays no role in traditional Bayesian philosophy, which regards probability as a quantification of belief, not an objective characteristic of the world. In this paper I discuss the significance of the traditional Bayesian epistemic view of probability and its mismatch with the more objectivist assumptions about probability that are widely held in contemporary cognitive science. I then introduce a novel mathematical framework, the observer lattice, that aims to clarify this issue while avoiding philosophically tendentious assumptions. The mathematical argument shows that even if we assume that "ground truth" probabilities actually do exist, there is no objective way to tell what they are. Different observers, conditioning on different information, will inevitably have different probability estimates, and there is no general procedure to determine which one is right. The argument sheds light on the use of probabilistic models in cognitive science, and in particular on what exactly it means for the mind to be "tuned" to its environment.
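The abstract's central observation, that observers starting from different priors arrive at different but equally coherent probability estimates from the same data, can be illustrated with a minimal sketch. This is not the paper's observer-lattice construction; the coin-flip setup and all the numbers below are purely hypothetical.

```python
import numpy as np

# Two observers assign different priors to the bias of the same coin, then
# condition on the same observed flips. Both updates are coherent applications
# of Bayes' rule; the data alone cannot certify either prior as the "true" one.
theta = np.linspace(0.01, 0.99, 99)                 # candidate bias values

prior_a = np.ones_like(theta)                       # observer A: flat prior
prior_a /= prior_a.sum()
prior_b = theta**4 * (1 - theta)                    # observer B: prior favoring high bias
prior_b /= prior_b.sum()

heads, flips = 7, 10
likelihood = theta**heads * (1 - theta)**(flips - heads)

post_a = prior_a * likelihood
post_a /= post_a.sum()
post_b = prior_b * likelihood
post_b /= post_b.sum()

print(theta @ post_a, theta @ post_b)               # two different posterior means
```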

5.
Chaos-related obstructions to predictability have been used to challenge accounts of theory validation based on the agreement between theoretical predictions and experimental data (Rueger & Sharp, 1996, The British Journal for the Philosophy of Science, 47, 93–112; Koperski, 1998, Philosophy of Science, 40, 194–212). These challenges are incomplete in two respects: (a) they do not show that chaotic regimes are unpredictable in principle (i.e., with unbounded resources) and, as a result, that there is something conceptually wrong with idealized expectations of correct predictions from acceptable theories, and (b) they do not explore whether chaos-induced predictive failures of deterministic models can be remedied by stochastic modeling. In this paper we appeal to an asymptotic analysis of state space trajectories and their numerical approximations to show that chaotic regimes are deterministically unpredictable even with unbounded resources. Additionally, we explain why stochastic models of chaotic systems, while predictively successful in some cases, are in general predictively as limited as deterministic ones. We conclude by suggesting that the way in which scientists deal with such principled obstructions to predictability calls for a more comprehensive approach to theory validation, on which experimental testing is augmented by a multifaceted mathematical analysis of theoretical models, capable of identifying chaos-related predictive failures as due to principled limitations which the world itself imposes on any less-than-omniscient epistemic access to some natural systems. We give special thanks to two anonymous reviewers for their helpful comments, which have substantially contributed to the final version of this paper.

6.
There has come to exist a partial fusion of construct validation theory and latent variable modeling, at the center of which is a practice of equating concepts such as construct, factor, latent variable, concept, unobservable, unmeasurable, underlying, hypothetical variable, theoretical term, theoretical variable, intervening variable, cause, abstractive property, functional unity, and measured property. In the current paper we: a) provide a structural explanation of this concept equating; b) provide arguments to the effect that it is illegitimate; and c) suggest that the singular reason for the presence of the term "construct" in the literature of the social and behavioral sciences is to mark an allowance taken by the social and behavioral scientist to obliterate the concept/referent distinction that is foundational to sound science.

7.
A synthesis of the two primary theory structures in Robert Rosen's relational complexity, (1) relational entailment mapping based on category theory as described by Rosen and Louie, and (2) relational holism based on modeling relations as described by Kineman, provides an integral foundation for relational complexity theory as a natural science and analytical method. Previous incompatibilities between these theory structures are resolved by re-interpreting Aristotle's four causes, identifying final and formal causes as relations with context. Category theory is applied to introduce the contextual entailment algebra needed to complete the synthesis. The modeling relation is represented as a recursive four-cause hierarchy, which is a unit of both whole and part analysis (a 'holon') that relates realized and contextual domains of nature as complementary inverse entailments between structure and function. Context is a non-localized domain of distributed potentials (models) for existence, as contrasted with the realized domain of localized interactive and measurable events. Synthesis is achieved by giving modeling relations an algebraic form in category theory and by expanding relational analysis to include contextual entailments. The revised form of analysis is applied and demonstrated by examining Rosen's M-R diagram, showing that structure–function relations imply adaptive interaction with the environment, and that contextual relations imply three forms of the M-R entailment corresponding with the three generally recognized forms of life: Archaea, Bacteria, and Eukaryota, each of which can be represented by its holon diagram. The result of this synthesis is a consistent foundation for relational science that should have important implications in many disciplines.

8.
Measurement models, such as factor analysis and item response theory models, are commonly implemented within educational, psychological, and behavioral science research to mitigate the negative effects of measurement error. These models can be formulated as an extension of generalized linear mixed models within a unifying framework that encompasses various kinds of multilevel models and longitudinal models, such as partially nonlinear latent growth models. We introduce the R package PLmixed, which implements profile maximum likelihood estimation to estimate complex measurement and growth models that can be formulated within this general modeling framework, using the existing R package lme4 and the function optim. We demonstrate the use of PLmixed through two examples before concluding with a brief overview of other possible models.

9.
We present an introduction to Bayesian inference as it is used in probabilistic models of cognitive development. Our goal is to provide an intuitive and accessible guide to the what, the how, and the why of the Bayesian approach: what sorts of problems and data the framework is most relevant for, and how and why it may be useful for developmentalists. We emphasize a qualitative understanding of Bayesian inference, but also include information about additional resources for those interested in the cognitive science applications, mathematical foundations, or machine learning details in more depth. In addition, we discuss some important interpretation issues that often arise when evaluating Bayesian models in cognitive science.

10.
Alex Morgan (2014). Synthese, 191(2), 213–244.
Many philosophers and psychologists have attempted to elucidate the nature of mental representation by appealing to notions like isomorphism or abstract structural resemblance. The 'structural representations' that these theorists champion are said to count as representations by virtue of functioning as internal models of distal systems. In his 2007 book, Representation Reconsidered, William Ramsey endorses the structural conception of mental representation, but uses it to develop a novel argument against representationalism, the widespread view that cognition essentially involves the manipulation of mental representations. Ramsey argues that although theories within the 'classical' tradition of cognitive science once posited structural representations, these theories are being superseded by newer theories, within the tradition of connectionism and cognitive neuroscience, which rarely if ever appeal to structural representations. Instead, these theories seem to be explaining cognition by invoking so-called 'receptor representations', which, Ramsey claims, aren't genuine representations at all—despite being called representations, these mechanisms function more as triggers or causal relays than as genuine stand-ins for distal systems. I argue that when the notions of structural and receptor representation are properly explicated, there turns out to be no distinction between them. There only appears to be a distinction between receptor and structural representations because the latter are tacitly conflated with the 'mental models' ostensibly involved in offline cognitive processes such as episodic memory and mental imagery. While structural representations might count as genuine representations, they aren't distinctively mental representations, for they can be found in all sorts of non-intentional systems such as plants. Thus to explain the kinds of offline cognitive capacities that have motivated talk of mental models, we must develop richer conceptions of mental representation than those provided by the notions of structural and receptor representation.

11.
Community science has a rich tradition of using theories and research designs that are consistent with its core value of contextualism. However, a survey of empirical articles published in the American Journal of Community Psychology shows that community scientists utilize a narrow range of statistical tools that are not well suited to assess contextual data. Multilevel modeling, geographic information systems (GIS), social network analysis, and cluster analysis are recommended as useful tools to address contextual questions in community science. An argument for increased methodological consilience is presented, where community scientists are encouraged to adopt statistical methodology that is capable of modeling a greater proportion of the data than is typical with traditional methods.

12.
Maxwell R. Hong & Ross Jacobucci (2019). Psychometrika, 84(1), 327–332.

Research questions that address developmental processes are becoming more prevalent in psychology and other areas of social science. Growth models have become a popular tool for modeling multiple individuals measured over several time points. These types of models allow researchers to answer a wide variety of research questions, such as modeling inter- and intra-individual differences and variability in longitudinal processes (Molenaar, 2004). The recently published book, Growth Modeling: Structural Equation and Multilevel Modeling Approaches (Grimm, Ram, & Estabrook, 2017), provides a solid foundation for both beginners and more advanced researchers interested in longitudinal data analysis by juxtaposing the multilevel and structural equation modeling frameworks for several different models. By providing both sufficient technical background and practical coding examples in a variety of commercial and open-source software, this book should serve as an excellent reference tool for behavioral and methodological researchers interested in growth modeling.


13.
The change detection paradigm has become an important tool for researchers studying working memory. Change detection is especially useful for studying visual working memory, because recall paradigms are difficult to employ in the visual modality. Pashler (1988, Perception & Psychophysics, 44, 369–378) and Cowan (2001, Behavioral and Brain Sciences, 24, 87–114) suggested formulas for estimating working memory capacity from change detection data. Although these formulas have become widely used, Morey (2011, Journal of Mathematical Psychology, 55, 8–24) showed that the formulas suffer from a number of issues, including inefficient use of information, bias, volatility, uninterpretable parameter estimates, and violation of ANOVA assumptions. Morey presented a hierarchical Bayesian extension of Pashler's and Cowan's basic models that mitigates these issues. Here, we present WoMMBAT (Working Memory Modeling using Bayesian Analysis Techniques) software for fitting Morey's model to data. WoMMBAT has a graphical user interface, is freely available, and is cross-platform, running on Windows, Linux, and Mac operating systems.
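The abstract names, but does not reproduce, the two capacity estimators. Their standard published forms (Pashler's for whole-display change detection, Cowan's for single-probe displays) are sketched below; the function names and the example rates are illustrative and are not taken from the WoMMBAT software.

```python
def cowan_k(hit_rate: float, fa_rate: float, set_size: int) -> float:
    """Cowan's (2001) estimate for single-probe change detection: k = N * (H - FA)."""
    return set_size * (hit_rate - fa_rate)

def pashler_k(hit_rate: float, fa_rate: float, set_size: int) -> float:
    """Pashler's (1988) estimate for whole-display change detection:
    k = N * (H - FA) / (1 - FA)."""
    return set_size * (hit_rate - fa_rate) / (1.0 - fa_rate)

# Illustrative rates: set size 6, 80% hits, 20% false alarms
print(cowan_k(0.8, 0.2, 6))    # 3.6
print(pashler_k(0.8, 0.2, 6))  # 4.5
```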

14.
Jens Harbecke (2020). Synthese, 199(1), 19–41.

This paper discusses the relevance of models for cognitive science that integrate mechanistic and computational aspects. Its main hypothesis is that a model of a cognitive system is satisfactory and explanatory to the extent that it bridges phenomena at multiple mechanistic levels, such that at least several of these mechanistic levels are shown to implement computational processes. The relevant parts of the computation must be mapped onto distinguishable entities and activities of the mechanism. This ideal is contrasted with two other accounts of modeling in cognitive science. The first was presented by David Marr in combination with a distinction between "levels of computation". The second builds on a hierarchy of "mechanistic levels" in the sense of Carl Craver. It is argued that neither of the two accounts secures satisfactory explanations of cognitive systems. The mechanistic-computational ideal can be thought of as resulting from a fusion of Marr's and Craver's ideals. It is defended as adequate and plausible in light of scientific practice, and certain metaphysical background assumptions are discussed.


15.
This article reviews the causal implications of latent variable and psychometric network models for the validation of personality trait questionnaires. These models imply different data generating mechanisms that have important consequences for the validity and validation of questionnaires. From this review, we formalize a framework for assessing the evidence for the validity of questionnaires from the psychometric network perspective. We focus specifically on the structural phase of validation, where items are assessed for redundancy, dimensionality, and internal structure. In this discussion, we underline the importance of identifying unique personality components (i.e., an item or set of items that share a unique common cause) and representing the breadth of each trait's domain in personality networks. We then argue that psychometric network models have measures that are statistically equivalent to those of factor models, but we suggest that their substantive interpretations differ. Finally, we provide a novel measure of structural consistency, which provides information complementary to internal consistency measures. We close with future directions for how external validation can be executed using psychometric network models.
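The abstract does not specify how the network itself is estimated. A common choice in the psychometric network literature is a partial-correlation (Gaussian graphical model) network, usually with regularization; the unregularized sketch below illustrates that generic idea only, and is not the authors' validation procedure or their structural consistency measure. The data and the induced dependency are made up.

```python
import numpy as np

def partial_correlation_network(data: np.ndarray) -> np.ndarray:
    """Estimate an unregularized partial-correlation network from a
    persons-by-items data matrix: edge weights are pairwise associations
    after conditioning on all other items (rescaled inverse covariance)."""
    prec = np.linalg.pinv(np.cov(data, rowvar=False))   # precision matrix
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)                       # standardize to partial correlations
    np.fill_diagonal(pcor, 0.0)                         # no self-edges
    return pcor

# Toy usage: 300 respondents, 6 items, one induced dependency between items 0 and 1
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
X[:, 1] += 0.6 * X[:, 0]
print(np.round(partial_correlation_network(X), 2))
```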

16.
An increasingly popular view among philosophers of science is that of science as action: the collective activity of scientists working in socially coordinated communities. Scientists are seen not as dispassionate pursuers of Truth, but as active participants in a social enterprise, and science is viewed on a continuum with other human activities. Taken to an extreme, the science-as-social-process view can be read as implying that science is no different from any other human activity, and therefore can make no privileged claims about its knowledge of the world. Such extreme views are normally contrasted with equally extreme views of classical science as uncovering Universal Truth. In Science Without Laws and Scientific Perspectivism, Giere outlines an approach to understanding science that finds a middle ground between these extremes. He acknowledges that science occurs in a social and historical context, and that scientific models are constructions designed and created to serve human ends. At the same time, however, scientific models correspond to parts of the world in ways that can legitimately be termed objective. Giere's position, perspectival realism, shares important common ground with Skinner's writings on science, some of which are explored in this review. Perhaps most fundamentally, Giere shares with Skinner the view that science itself is amenable to scientific inquiry: scientific principles can and should be brought to bear on the process of science. The two approaches offer different but complementary perspectives on the nature of science, both of which are needed in a comprehensive understanding of science.

17.
In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types—continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.

18.
Whether or not importance should be placed on an all-encompassing general factor of psychopathology (or p factor) in classifying, researching, diagnosing, and treating psychiatric disorders depends (among other issues) on the extent to which comorbidity is symptom-general rather than largely confined to narrower transdiagnostic factors such as internalizing and externalizing. In this study, we compared three methods of estimating p factor strength. We compared omega hierarchical and explained common variance calculated from confirmatory factor analysis (CFA) bifactor models with maximum likelihood (ML) estimation, from exploratory structural equation modeling/exploratory factor analysis models with a bifactor rotation, and from Bayesian structural equation modeling (BSEM) bifactor models. Our simulation results suggested that BSEM with small variance priors on secondary loadings might be the preferred option. However, CFA with ML also performed well, provided secondary loadings were modeled. We provide two empirical examples of applying the three methodologies, using a normative sample of youth (z-proso, n = 1,286) and a university counseling sample (n = 359).
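The two indices compared in the study, omega hierarchical and explained common variance (ECV), have standard definitions in the bifactor literature (e.g., Reise, 2012). The sketch below implements those textbook formulas with made-up loadings; it does not reproduce the authors' models, estimates, or software.

```python
import numpy as np

def omega_hierarchical(gen_loadings, spec_loadings_by_factor, residual_vars):
    """Omega hierarchical: variance attributable to the general factor
    relative to total score variance in a bifactor model."""
    gen = np.sum(gen_loadings) ** 2
    spec = sum(np.sum(lam) ** 2 for lam in spec_loadings_by_factor)
    return gen / (gen + spec + np.sum(residual_vars))

def explained_common_variance(gen_loadings, spec_loadings_by_factor):
    """ECV: the general factor's share of the common (non-residual) variance."""
    gen = np.sum(np.square(gen_loadings))
    spec = sum(np.sum(np.square(lam)) for lam in spec_loadings_by_factor)
    return gen / (gen + spec)

# Made-up example: 6 standardized items, general loadings .6,
# two specific factors (3 items each) with loadings .4
g = np.full(6, 0.6)
s = [np.full(3, 0.4), np.full(3, 0.4)]
resid = 1 - (g**2 + np.concatenate(s)**2)       # residual variances under standardization
print(omega_hierarchical(g, s, resid))          # ~0.69
print(explained_common_variance(g, s))          # ~0.69
```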

19.
Simultaneous developments in big data, social media, and computational social science have set the stage for how we think about and understand interpersonal and mass communication. This article explores some of the ways that these developments generate 4 hypothetical "vectors"—directions—into the next generation of communication research. These vectors include developments in network analysis, modeling interpersonal and social influence, recommendation systems, and the blurring of distinctions between interpersonal and mass audiences through narrowcasting and broadcasting. The methods and research in these arenas are occurring in areas outside the typical boundaries of the communication discipline but engage classic, substantive questions in mass and interpersonal communication.

20.
Growth curve modeling is one of the main analytical approaches to study change over time. Growth curve models are commonly estimated in the linear and nonlinear mixed-effects modeling framework in which both the mean and person-specific curves are modeled parametrically with functions of time such as the linear, quadratic, and exponential. However, when more complex nonlinear trajectories need to be estimated and researchers do not have a priori knowledge of an appropriate functional form of growth, parametric models may be too restrictive. This paper reviews functional mixed-effects models, a nonparametric extension of mixed-effects models that permit both the mean and person-specific curves to be estimated without assuming a prespecified functional form of growth. Details of the model are presented along with results from a simulation study and an empirical example. The simulation study showed functional mixed-effects models performed reasonably well under various conditions commonly associated with longitudinal panel data, such as few time points per person, irregularly spaced time points across persons, missingness, and nonlinear trajectories. The usefulness of functional mixed-effects models is illustrated by analyzing empirical data from the Early Childhood Longitudinal Study – Kindergarten Class of 1998–1999.

