Similar Articles
20 similar articles found.
1.
Integrative complexity is a conceptually unique and very popular measurement of the complexity of human thought. We believe, however, that it is currently underutilized because it is time-consuming to score by hand. The more time-efficient, computer-based measures of complexity currently available correlate with integrative complexity only at fairly low levels. To help fill this gap, we developed a novel automated integrative complexity system designed specifically around the integrative complexity theoretical framework. This new automated IC system achieved an alpha of .72 on the standard integrative complexity coding test. In addition, across nine datasets covering over 1,300 paragraphs, it consistently showed modest relationships with human-scored integrative complexity (average alpha = .62; average r = .46). Further analyses revealed that this relationship consistently remained significant when controlling for superficial markers of complexity, and that the new system accounted for both the differentiation and integration components of integrative complexity. Although the overlap between the automated and human-scored systems is only modest (and thus suggests the continued usefulness of human scoring), the new system nonetheless provides the best automated integrative complexity measurement to date.
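A minimal sketch of the kind of convergence check reported above: correlating automated scores with human-coded scores on the same paragraphs. The score arrays are hypothetical; the paper's actual scoring pipeline is not reproduced here.

```python
import numpy as np
from scipy import stats

# hypothetical scores for the same ten paragraphs (IC is coded 1-7)
human     = np.array([1, 3, 2, 5, 4, 2, 6, 3, 1, 4])
automated = np.array([2, 3, 2, 4, 5, 1, 5, 3, 2, 4])

# convergence between the two systems, reported as Pearson's r
r, p = stats.pearsonr(human, automated)
print(f"r = {r:.2f}, p = {p:.3f}")

# two-rater Cronbach's alpha, one common way to report agreement
k = 2
item_var  = human.var(ddof=1) + automated.var(ddof=1)
total_var = (human + automated).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"alpha = {alpha:.2f}")
```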

2.
Counseling and Values, 2017, 62(2): 144-158
It is becoming common for decisions with serious consequences to be made by automation, so it is important for counselors to consider the challenges of working with clients who are affected. If a high-consequence decision that leads to tragedy is made by a computer, does this change the counseling process? This article opens that discussion by investigating forgiveness therapy as it applies to computers. First, the authors argue that forgiving a human is qualitatively different from forgiving a computer. Next, examples of automated decisions are presented. Finally, the authors discuss the issues faced by clients who wish to forgive a computer, suggest interventions, and propose a research agenda.

3.
A psychosocial approach to rural development and development interventions, which we designate 'psychology of rural development' (PsyRD), does not yet exist as an area of research or intervention within psychology or development studies, even though rural development is clearly shaped in part by psychosocial factors. In this discussion paper, we therefore argue the need for PsyRD, explore how it may provide new insights and analytical tools for rural development scenarios and issues of social equity, and outline the shape that, in our view, such a psychology should take. First, the multiple dimensions of rural development and the many practical problems faced by rural development agents contain strong psychosocial elements that require contributions from psychology. At the same time, the psychological literature on this topic contains many limitations and biases, which leads us, in the second part of the paper, to lay the groundwork for a PsyRD centered on a critical and interdisciplinary approach capable of dealing with complexity and multidetermination. Finally, we conclude by outlining the challenges facing PsyRD. Copyright © 2014 John Wiley & Sons, Ltd.

4.
In this paper, we present a new approach to the optimal experimental design problem of generating diagnostic choice tasks, in which the respondent's decision strategy can be unambiguously deduced from the observed choice. In this approach, we apply a genetic algorithm that creates a one-to-one correspondence between a set of predefined decision strategies and the alternatives of the choice task, while also manipulating the characteristics of the choice tasks. The approach additionally accounts for the measurement errors that can occur when decision makers' preferences are measured. The proposed genetic algorithm can generate diagnostic choice tasks even when the search space of possible choice tasks is very large. As a proof of concept, we used this approach to generate respondent-specific choice tasks with either low or high context-based complexity, which we operationalize by the similarity of alternatives and the conflict between alternatives. In an experiment, we find that an increase in the similarity of the alternatives and an increase in the number of conflicts within the choice task lead to greater use of non-compensatory strategies and less use of compensatory decision strategies. In contrast, the size of the choice task, measured by the number of attributes and alternatives, only weakly influences strategy selection. Copyright © 2014 John Wiley & Sons, Ltd.
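A toy sketch of the core idea: evolve a choice task until each of several predefined strategies picks a different alternative, so that the observed choice identifies the strategy. The strategies, attribute weights, and fitness function below are invented for illustration and are not the authors' implementation (which also handles measurement error and task characteristics).

```python
import numpy as np

rng = np.random.default_rng(0)
N_ALT, N_ATTR = 3, 4                        # alternatives x attributes
WEIGHTS = np.array([0.4, 0.3, 0.2, 0.1])    # importance, sorted descending

def wadd(task):   # weighted additive (compensatory)
    return int(np.argmax(task @ WEIGHTS))

def eqw(task):    # equal weights (compensatory, ignores importance)
    return int(np.argmax(task.sum(axis=1)))

def ttb(task):    # take-the-best (non-compensatory: top attribute only)
    return int(np.argmax(task[:, 0]))

STRATEGIES = [wadd, eqw, ttb]

def fitness(task):
    # a task is diagnostic when every strategy chooses a different alternative
    return len({s(task) for s in STRATEGIES})

def evolve(pop_size=50, generations=500):
    pop = [rng.random((N_ALT, N_ATTR)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(STRATEGIES):
            break                            # fully diagnostic task found
        elite = pop[: pop_size // 2]         # keep best half, mutate copies
        pop = elite + [np.clip(p + rng.normal(0, 0.1, p.shape), 0, 1)
                       for p in elite]
    return pop[0]

task = evolve()
print([s(task) for s in STRATEGIES])         # three distinct choices
```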

5.
Studies of infant looking times over the past 50 years have provided profound insights into cognitive development, but their dependent measures and analytic techniques are quite limited. In the context of infants' attention to discrete sequential events, we show how a Bayesian data-analysis approach can be combined with a rational cognitive model to create a rich data-analysis framework for infant looking times. We formalize (i) a statistical learning model, (ii) a parametric linking between the learning model's beliefs and infants' looking behavior, and (iii) a data-analysis approach and model that infers parameters of the cognitive model and linking function for groups and individuals. Using this approach, we show that the recent finding by Kidd, Piantadosi, and Aslin (2012) of a U-shaped relationship between look-away probability and stimulus complexity holds even within individual infants and is not due to averaging over subjects with different types of behavior. Our results indicate that individual infants prefer stimuli of intermediate complexity, reserving attention for events that are moderately predictable given their probabilistic expectations about the world.
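A minimal sketch in the spirit of the two model components named above (not the published implementation): an ideal learner that assigns each discrete event a predictive probability, and a U-shaped linking function from surprisal to look-away probability. The functional form and parameter values are hypothetical.

```python
import numpy as np

def posterior_predictive(counts, alpha=1.0):
    """Dirichlet-multinomial learner: predictive probability of each event
    type after observing `counts` occurrences of each."""
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * len(counts))

def p_look_away(p_event, a=0.08, b=-1.5, c=2.0):
    """Toy U-shaped linking function: look-away is most likely for events
    that are either highly predictable or highly surprising, and least
    likely at intermediate surprisal (the 'Goldilocks' region)."""
    s = -np.log2(p_event)                 # surprisal of the observed event
    return 1 / (1 + np.exp(-(a * (s - c) ** 2 + b)))

p = posterior_predictive([5, 2, 1])       # beliefs after 8 observations
print(p, [round(p_look_away(pi), 3) for pi in p])
```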

6.
Lone-actor terrorists are very often portrayed as emotionally and/or cognitively impaired, but is that really the case? The present article provides the first rigorous assessment of the hypotheses that a high level of negative emotion, especially anger, and a lack of cognitive flexibility and complexity play a role in lone-actor terrorists' violent actions. Using a sample of lone-actor terrorists' writings, we apply the LIWC (a fully automated language-analysis software package) to compare terrorists' cognition and emotion with those of control groups, most notably nonviolent radical activists. The results strongly support the first hypothesis but clearly refute the second, suggesting that lone-actor terrorists are in fact characterized by a specific combination of high anger and high cognitive complexity. This method and these results lay the groundwork for a more systematic and nuanced analysis of the psychology of terrorists, which is currently at an impasse.
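LIWC itself is a proprietary dictionary-based word counter, but the mechanics are easy to illustrate. A toy sketch of the group comparison, with an invented anger lexicon and invented example texts standing in for the corpora:

```python
import re
from scipy import stats

ANGER_WORDS = {"hate", "rage", "destroy", "fury", "enemy"}   # toy lexicon

def anger_rate(text):
    """Percentage of tokens falling in the anger category (LIWC-style)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return 100 * sum(t in ANGER_WORDS for t in tokens) / max(len(tokens), 1)

group_a = ["I hate them and their rage fuels my fury.",
           "The enemy must see our rage now."]              # invented corpus A
group_b = ["We marched peacefully and handed out leaflets.",
           "Change comes through patient organizing work."]  # invented corpus B

a_scores = [anger_rate(t) for t in group_a]
b_scores = [anger_rate(t) for t in group_b]
t, p = stats.ttest_ind(a_scores, b_scores, equal_var=False)  # Welch's t-test
print(a_scores, b_scores, round(t, 2))
```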

7.
Humans are sensitive to complexity and regularity in patterns (Falk & Konold, 1997; Yamada, Kawabe, & Miyazaki, 2013). The subjective perception of pattern complexity correlates with algorithmic (or Kolmogorov-Chaitin) complexity as defined in computer science (Li & Vitányi, 2008), but also with the frequency of naturally occurring patterns (Hsu, Griffiths, & Schreiber, 2010). However, the possible mediating role of natural frequencies in the perception of algorithmic complexity remains unclear. Here we reanalyze Hsu et al. (2010) through a mediation analysis and complement their results with a new experiment. We conclude that human perception of complexity appears to be partly shaped by natural-scene statistics, thereby establishing a link between the perception of complexity and the effect of natural-scene statistics.
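A minimal sketch of the kind of mediation analysis described: estimating the indirect effect of algorithmic complexity (x) on judged complexity (y) through natural-scene frequency (m), with a percentile-bootstrap confidence interval. All data and effect sizes are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_slope(x, y):
    """Slope of y ~ x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def slope_controlling(x, m, y):
    """Coefficient of m in y ~ x + m."""
    X = np.column_stack([np.ones_like(x), x, m])
    return np.linalg.lstsq(X, y, rcond=None)[0][2]

# simulated data: x = algorithmic complexity, m = natural-scene frequency,
# y = judged complexity
n = 200
x = rng.normal(size=n)
m = -0.6 * x + rng.normal(scale=0.5, size=n)
y = 0.8 * m + 0.1 * x + rng.normal(scale=0.5, size=n)

a = ols_slope(x, m)                  # path x -> m
b = slope_controlling(x, m, y)       # path m -> y, controlling for x
indirect = a * b                     # mediated (indirect) effect

# percentile bootstrap CI for the indirect effect
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(ols_slope(x[idx], m[idx]) *
                 slope_controlling(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect = {indirect:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```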

8.
The precision of an eye-tracker is critical to the correct identification of eye movements and their properties. To measure a system's precision, artificial eyes (AEs) are often used so that eye movements cannot influence the measurements. A possible issue, however, is that it is virtually impossible to construct AEs complex enough to fully represent the human eye. To examine the consequences of this limitation, we tested currently used AEs from three eye-tracker manufacturers and compared them with a more complex model, using 12 commercial eye-trackers. Because precision can be measured in various ways, we compared different metrics in the spatial domain and analyzed power spectral densities in the frequency domain. To assess how precision measurements compare in artificial and human eyes, we also measured precision using human recordings on the same eye-trackers. Our results show that the modified eye model presented here can cope with all eye-trackers tested and is a promising candidate for further development of a set of AEs with varying pupil size and pupil–iris contrast. Spectral analysis of both the AE and the human data revealed that the human data contain frequencies that likely reflect the physiological characteristics of human eye movements. We also report the effects of sample-selection methods on precision calculations. This study is part of the EMRA/COGAIN Eye Data Quality Standardization Project.
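A minimal sketch of the two kinds of measures mentioned: a standard spatial-domain precision metric (sample-to-sample RMS) and a power spectral density computed with Welch's method. The sampling rate and noise level are hypothetical.

```python
import numpy as np
from scipy import signal

def rms_s2s(x, y):
    """Sample-to-sample RMS precision (deg), a common spatial-domain metric."""
    d2 = np.diff(x) ** 2 + np.diff(y) ** 2
    return np.sqrt(d2.mean())

def psd(x, fs):
    """Power spectral density of one gaze coordinate (Welch's method)."""
    return signal.welch(x, fs=fs, nperseg=min(1024, len(x)))

# hypothetical 1-s recording at 1000 Hz with 0.01 deg Gaussian noise
fs = 1000
rng = np.random.default_rng(2)
gx = rng.normal(0, 0.01, fs)
gy = rng.normal(0, 0.01, fs)

print(rms_s2s(gx, gy))        # ~0.02 deg for white noise of this size
freqs, power = psd(gx, fs)    # flat spectrum for white noise; a human eye
                              # would show structure at physiological bands
```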

9.
The complexity of homeless service users' characteristics and the contextual challenges faced by services can make working with people experiencing homelessness stressful and can put providers' well-being at risk. In the current study, we investigated the association between service characteristics (i.e., the availability of training and supervision and a capability-fostering approach) and social service providers' work engagement and burnout. The study involved 497 social service providers working in homeless services in eight European countries (62% women; mean age = 40.73, SD = 10.45) and was part of the Horizon 2020 European study "Homelessness as Unfairness" (HOME_EU). Using hierarchical linear modeling (HLM), we found that the availability of training and supervision was positively associated with providers' work engagement and negatively associated with burnout. However, results varied with the perceived usefulness of the training and supervision provided within the service and with the specific outcome considered. The most consistent finding was the association between the degree to which a service promotes users' capabilities and all aspects of providers' well-being analyzed. Results are discussed in relation to their implications for how the configuration of homeless services can promote social service providers' well-being and high-quality care.
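Providers nested within countries suggests a two-level random-intercept model. A minimal sketch using statsmodels, with invented variable names and simulated data rather than the HOME_EU dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 497
df = pd.DataFrame({
    "country":     rng.integers(0, 8, n),    # 8 countries (level 2)
    "training":    rng.normal(size=n),       # invented predictors
    "supervision": rng.normal(size=n),
    "capability":  rng.normal(size=n),
})
df["burnout"] = (-0.2 * df.training - 0.2 * df.supervision
                 - 0.3 * df.capability + rng.normal(size=n))

# random intercept per country; fixed effects for service characteristics
model = smf.mixedlm("burnout ~ training + supervision + capability",
                    data=df, groups=df["country"])
print(model.fit().summary())
```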

10.
Response-process data collected from human–computer interactive items contain detailed information about respondents' behavioural patterns and cognitive processes. Such data are valuable sources for analysing respondents' problem-solving strategies. However, the irregular data format and the complex structure make standard statistical tools difficult to apply. This article develops a computationally efficient method for exploratory analysis of such process data. The new approach segments a lengthy individual process into a sequence of short subprocesses, achieving complexity reduction, easy clustering, and meaningful interpretation. Each subprocess is treated as a subtask. The segmentation is based on the predictability of sequential actions, using a parsimonious predictive model combined with the Shannon entropy. Simulation studies are conducted to assess the performance of the new method, and a case study of PIAAC 2012 demonstrates how exploratory analysis of process data can be carried out with the new approach.
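A toy sketch of the segmentation idea: fit a simple predictive model (a bigram model here, standing in for the paper's parsimonious model), then cut the action sequence wherever the next action is hard to predict, i.e., where the predictive distribution has high Shannon entropy. The action names and threshold are invented.

```python
import numpy as np
from collections import defaultdict

def bigram_model(sequences):
    """First-order (bigram) predictive model of the next action."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def next_entropy(counts, action):
    """Shannon entropy of the predicted next-action distribution."""
    c = np.array(list(counts[action].values()), dtype=float)
    if c.size == 0:
        return np.inf            # unseen action: maximally unpredictable
    p = c / c.sum()
    return -(p * np.log2(p)).sum()

def segment(seq, counts, threshold=0.9):
    """Cut where the next action is hard to predict (high entropy),
    yielding candidate subtask boundaries."""
    cuts = [0]
    for i, a in enumerate(seq[:-1]):
        if next_entropy(counts, a) > threshold:
            cuts.append(i + 1)
    return [seq[s:e] for s, e in zip(cuts, cuts[1:] + [len(seq)])]

logs = [["start", "menu", "open", "type", "save"],
        ["start", "menu", "open", "type", "type", "save"]]
counts = bigram_model(logs)
print(segment(logs[0], counts))   # e.g. [[... 'type'], ['save']]
```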

11.
James D. Proctor, Zygon, 2004, 39(3): 637-657
Abstract. I argue for the centrality of the concepts of biophysical and human nature in science-and-religion studies, consider five different metaphors, or "visions," of nature, and explore possibilities and challenges in reconciling them. These visions include (a) evolutionary nature, built on the powerful explanatory framework of evolutionary theory; (b) emergent nature, arising from recent research in complex systems and self-organization; (c) malleable nature, indicating both the recombinant potential of biotechnology and the postmodern challenge to a fixed ontology; (d) nature as sacred, a diffuse popular concept fundamental to cultural analysis; and (e) nature as culture, an admission of epistemological constructivism. These multiple visions suggest the famous story of the blind men and the elephant, in which each man made the classic mistake of part-whole substitution in believing that what he grasped (the tail, for example) represented the elephant as a whole. Indeed, given the inescapability of metaphor, we may have to admit that the ultimate truth about the "elephant" (nature, or the reality toward which science and religion point) is a mystery, and the best we can hope for is to confess the limitations of any particular vision.

12.
Donald Michie, Zygon, 1985, 20(4): 375-389
Abstract. The definition of an expert system as a knowledge-based source of advice and explanation pinpoints the critical problem that confronts would-be builders of such systems: how is the required body of knowledge to be elicited from its human possessors in a form complete enough for effective organization in computer memory? This article reviews recent advances in the art of automated knowledge extraction from expert-supplied example decisions. Computer induction, as the new approach is called, promises both important parallels to the human capacity for concept formation and commercial exploitability.
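Computer induction of the kind Michie describes builds decision rules from expert-supplied examples; ID3-style algorithms do this by repeatedly splitting on the attribute with the highest information gain. A toy sketch under that assumption, with invented example cases:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr, label_key="decision"):
    """Reduction in label entropy from splitting expert cases on one attribute."""
    labels = [e[label_key] for e in examples]
    base = entropy(labels)
    remainder = 0.0
    for value, count in Counter(e[attr] for e in examples).items():
        subset = [e[label_key] for e in examples if e[attr] == value]
        remainder += count / len(examples) * entropy(subset)
    return base - remainder

# invented expert decisions: attribute-value cases with the expert's ruling
cases = [
    {"pressure": "high", "temp": "hot",  "decision": "shut_down"},
    {"pressure": "high", "temp": "cold", "decision": "shut_down"},
    {"pressure": "low",  "temp": "hot",  "decision": "continue"},
    {"pressure": "low",  "temp": "cold", "decision": "continue"},
]
# pick the most informative attribute to split on first
best = max(["pressure", "temp"], key=lambda a: information_gain(cases, a))
print(best)   # "pressure" fully determines the decision here
```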

13.
14.
Multicriteria decision analysis (MCDA) methods have recently been applied to many environmental problems, chiefly to structure and analyse multifaceted, complex problems, to compare incommensurable impacts, and to clarify the preference order of alternatives. Most MCDA tools concentrate on aiding the choice among alternatives, which usually occurs at the end of the decision-making process, but MCDA tools can also assist earlier in the process. In this article, we present a new MCDA-based method for creating watercourse-regulation alternatives that meet stakeholders' objectives. The method comprises three elements: (1) a framework for the planning and learning process, partly based on Image Theory; (2) analysis and evaluation of the ecological, social, and economic impacts of regulation; and (3) a visual, interactive Excel implementation of value-tree analysis (the REGAIM model). We show how the method was applied in a complex watercourse-regulation development project in Finland. Altogether, 36 face-to-face computer-aided interviews were conducted with the REGAIM model with representatives of different stakeholder groups. We present the main results of the interviews and discuss how they supported the generation of new watercourse-regulation alternatives. We also describe the advantages of the new approach for participatory watercourse management and discuss the applicability of Image Theory in the watercourse-regulation context. Copyright © 2006 John Wiley & Sons, Ltd.
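Value-tree analysis ultimately scores each alternative as a weighted sum of normalized criterion values. A minimal sketch with invented weights, criteria, and alternatives; this is not the REGAIM model itself.

```python
# toy additive value model over criterion scores normalized to [0, 1]
weights = {"ecological": 0.4, "social": 0.35, "economic": 0.25}  # hypothetical

alternatives = {
    "current_regulation":  {"ecological": 0.3, "social": 0.6, "economic": 0.8},
    "higher_spring_level": {"ecological": 0.7, "social": 0.5, "economic": 0.6},
    "natural_rhythm":      {"ecological": 0.9, "social": 0.4, "economic": 0.3},
}

def overall_value(scores):
    """Weighted additive value: the standard aggregation in a value tree."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(alternatives, key=lambda a: overall_value(alternatives[a]),
                 reverse=True)
for alt in ranking:
    print(f"{alt}: {overall_value(alternatives[alt]):.2f}")
```

Eliciting the weights from each stakeholder (as in the 36 interviews) and re-running the ranking is what makes the approach interactive.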

15.
When learning language, young children face many seemingly formidable challenges, including discovering words embedded in a continuous stream of sounds and determining what role those words play in syntactic constructions. We suggest that knowledge of phoneme distributions may play a crucial part in helping children both segment words and determine their lexical category, and we propose an integrated model of how children might go from unsegmented speech to lexical categories. We corroborated this theoretical model using a two-stage computational analysis of a large corpus of English child-directed speech. First, we used transition probabilities between phonemes to find words in unsegmented speech. Second, we used distributional information about word edges – the beginning and ending phonemes of words – to predict whether the words segmented in the first stage were nouns, verbs, or something else. The results indicate that discovering lexical units and their associated syntactic categories in child-directed speech is possible by attending to the statistics of single-phoneme transitions and word-initial and word-final phonemes. We therefore suggest that a core computational principle in language acquisition is that the same source of information is used to learn about different aspects of linguistic structure.
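A toy sketch of the first stage: estimate forward transitional probabilities between phonemes from the corpus, then posit a word boundary wherever the transition probability dips below a threshold. The mini-corpus (one letter per phoneme) and the threshold are invented.

```python
from collections import defaultdict

def transition_probs(utterances):
    """P(next phoneme | phoneme), estimated from unsegmented utterances."""
    pair, single = defaultdict(int), defaultdict(int)
    for u in utterances:
        for a, b in zip(u, u[1:]):
            pair[(a, b)] += 1
            single[a] += 1
    return {k: v / single[k[0]] for k, v in pair.items()}

def segment(utterance, tp, threshold=0.6):
    """Posit a word boundary wherever the phoneme transition is improbable."""
    words, start = [], 0
    for i in range(len(utterance) - 1):
        if tp.get((utterance[i], utterance[i + 1]), 0) < threshold:
            words.append(utterance[start:i + 1])
            start = i + 1
    words.append(utterance[start:])
    return words

# invented mini-corpus; within-word transitions recur, cross-word ones vary
corpus = ["xdogx", "xcatx", "dogcat", "catdog"]
tp = transition_probs(corpus)
print(segment("dogcat", tp))   # ['dog', 'cat']
```

The second stage would then classify each segmented word from its initial and final phonemes, reusing the same distributional statistics.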

16.
In our judgment, the Apple II/FIRST system (Scandrett & Gormezano, 1980) is an efficient and versatile system for experimental control and data acquisition in classical-conditioning experiments. However, these attributes would be of limited value if the system did not extract measures from our analog signals that correspond closely to our ruler-measurement procedures. Accordingly, we determined the system's validity in extracting measures of CR occurrence and CR latency in three conditioning experiments. Pearson product-moment correlation coefficients indicated a very satisfactory degree of agreement between measurements made by the Apple II/FIRST system and by ruler. Intraclass correlations and analysis-of-variance procedures applied to percent CRs and CR latency revealed several small but divergent differences between ruler and computer measurement of CR latency across the three experiments. However, subsequent analyses of variance revealed that the number and pattern of significant sources of variation were virtually identical for ruler and computer measurements. Accordingly, we conclude that our system can successfully replace our traditional method of ruler measurement.

17.
Automatic facial recognition is becoming increasingly ubiquitous in security contexts such as passport control. Currently, Automated Border Crossing (ABC) systems in the United Kingdom (UK) and the European Union (EU) require supervision from a human operator who validates correct identity judgments and overrules incorrect decisions. As the accuracy of this human–computer interaction remains unknown, this research investigated how human validation is affected by a priori face-matching decisions such as those made by automated face-recognition software. Observers matched pairs of faces that were already labeled onscreen as depicting the same identity or two different identities. Most of these labels were consistent with the stimuli presented, but some were inconsistent or provided "unresolved" information. Across three experiments, accuracy consistently deteriorated on inconsistently labeled trials, indicating that observers' face-matching decisions are biased by external information such as that provided by ABC systems.

18.
Cognitive Continuum Theory (CCT) is an adaptive theory of human judgement that posits a continuum of cognitive modes anchored by intuition and analysis. The theory specifies surface and depth task characteristics that are likely to induce cognitive modes at different points along the continuum. The current study manipulated both the surface (information representation) and depth (task structure) characteristics of a multiple-cue integration threat-assessment task. The surface manipulation influenced cognitive mode in the predicted direction, with an iconic information display inducing a more intuitive mode than a numeric display. The depth manipulation influenced cognitive mode in a pattern not predicted by CCT; results indicate this difference was due to a combination of task complexity and participant satisficing. As predicted, analysis produced a more leptokurtic error distribution than intuition. Task achievement was a function of the extent to which participants exhibited an analytic cognitive mode, and not of correspondence, as predicted. This difference was likely due to the quantitative nature of the task manipulations. Copyright © 2000 John Wiley & Sons, Ltd.

19.
Plausibility has been implicated as playing a critical role in many cognitive phenomena, from comprehension to problem solving. Yet across cognitive science, plausibility is usually treated as an operationalized variable or metric rather than being explained or studied in itself. This article describes a new cognitive model of plausibility, the Plausibility Analysis Model (PAM), which is aimed at modeling human plausibility judgment. The model uses commonsense knowledge of concept-coherence to determine the degree of plausibility of a target scenario. In essence, a highly plausible scenario is one that fits prior knowledge well: with many different sources of corroboration, without complexity of explanation, and with minimal conjecture. A detailed simulation of empirical plausibility findings is reported, showing a close correspondence between the model and human judgments. In addition, a sensitivity analysis demonstrates that PAM is robust in its operations.
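The abstract names three ingredients: corroboration raises plausibility, while explanatory complexity and conjecture lower it. A deliberately simple scoring sketch along those lines; the linear form and weights are invented, and PAM itself is a richer knowledge-based model.

```python
def plausibility(corroborating_sources, explanation_steps, conjectures,
                 weights=(1.0, 0.5, 0.8)):
    """Toy score: plausibility rises with corroboration and falls with
    explanatory complexity and conjecture. The signs follow the abstract;
    the linear form and weights are invented for illustration."""
    w_corr, w_expl, w_conj = weights
    return (w_corr * corroborating_sources
            - w_expl * explanation_steps
            - w_conj * conjectures)

# a well-corroborated, simple, conjecture-free scenario scores highest
print(plausibility(corroborating_sources=3, explanation_steps=1, conjectures=0))
print(plausibility(corroborating_sources=1, explanation_steps=4, conjectures=2))
```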

20.
This study draws on cognitive elaboration theory to examine when and why people evaluate computer-based information more favorably than information from a less automated source. Half of the participants received information from a computer, while the other half received identical information from a less automated source. In addition, participants were induced to be more or less involved in the information-acquisition process. As predicted, participants in the low-involvement condition evaluated the information more favorably when it came from a computer than from a less automated source; this difference was eliminated in the high-involvement condition. Further supporting our reasoning, the interaction between information source and level of involvement was more pronounced for participants low, rather than high, in need for cognition.
