Similar Articles
A total of 20 similar articles were retrieved (search time: 15 ms)
1.
Using a paradigm that combined the learning-probe method with anaphor resolution, this study examined the mental representation of spatial distance in situation models. Experiment 1 separated geometric distance from categorical distance to examine the effect of categorical distance on anaphor resolution. The results showed that, with geometric distance controlled, readers used categorical distance information, operationalized as the number of rooms, to construct situation models. Experiment 2 further examined the role of geometric distance and found that, with categorical distance information held constant, readers represented geometric distance information in their situation models. The findings indicate that categorical distance and geometric distance each exert an independent influence on anaphor resolution in spatial situation models.

2.
This paper reports a study of a multi-agent model of working memory (WM) in the context of Boolean concept learning. The model aims to assess the compressibility of information processed in WM. Concept complexity is described as a function of communication resources (i.e., the number of agents and the structure of communication between agents) required in WM to learn a target concept. This model has been successfully applied in measuring learning times for three-dimensional (3D) concepts (Mathy and Bradmetz in Curr Psychol Cognit 22(1):41-82, 2004). In this previous study, learning time was found to be a function of compression time. To assess the effect of decompression time, this paper presents an extended intra-conceptual study of response times for two- and 3D concepts. Response times are measured in recognition phases. The model explains why the time required to compress a sample of examples into a rule is directly linked to the time to decompress this rule when categorizing examples. Three experiments were conducted with 65, 49, and 84 undergraduate students who were given Boolean concept learning tasks in two and three dimensions (also called rule-based classification tasks). The results corroborate the metric of decompression given by the multi-agent model, especially when the model is parameterized following static serial processing of information. Also, this static serial model better fits the patterns of response times than an exemplar-based model.  相似文献   

3.
Prior research has identified two modes of quantitative estimation: numerical retrieval and ordinal conversion. In this paper we introduce a third mode, which operates by a feature-based inference process. In contrast to prior research, the results of three experiments demonstrate that people estimate automobile prices by combining metric information associated with two critical features: product class and brand status. In addition, Experiments 2 and 3 demonstrated that when participants are seeded with the actual current base price of one of the to-be-estimated vehicles, they respond by revising the general metric and splitting the information carried by the seed between the two critical features. As a result, the degree of post-seeding revision is directly related to the number of these features that the seed and the transfer items have in common. The paper concludes with a general discussion of the practical and theoretical implications of our findings.  相似文献   

4.
The time and budgets that managers can devote to enhancing sales force productivity are limited, so sales managers must decide where it is worthwhile to invest in productivity improvements—to improve salespeople’s current effort allocation, realign territories, enhance sales force sizing, or provide more training. To prioritize these alternatives, management must assess the outcomes of investments on the basis of a common metric—profit. This paper proposes to estimate a core sales response function that allows for quantifying the profits derived from each possible action and demonstrates the benefits of this approach through an actual case study.  相似文献   
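A sales response function of this kind maps selling effort onto expected sales so that every candidate investment can be valued in profit terms. The sketch below uses the classic concave ADBUDG form as a stand-in; the functional form, parameter values, and names (adbudg_sales, cost_per_call) are illustrative assumptions, not the core response function estimated in the paper.

    import numpy as np

    def adbudg_sales(effort, s_min, s_max, gamma, delta):
        # A common concave sales-response shape (ADBUDG); the paper estimates its own
        # "core" response function, so this particular form is only an assumption.
        return s_min + (s_max - s_min) * effort**gamma / (delta + effort**gamma)

    def profit(effort, margin, cost_per_call, resp_params):
        # Profit metric: gross margin on the sales generated minus the cost of the effort.
        return margin * adbudg_sales(effort, **resp_params) - cost_per_call * effort

    resp_params = dict(s_min=200.0, s_max=1000.0, gamma=1.2, delta=40.0)  # hypothetical values
    calls = np.arange(0.0, 200.0, 5.0)
    profits = profit(calls, margin=0.25, cost_per_call=0.8, resp_params=resp_params)
    print("profit-maximizing call level:", calls[np.argmax(profits)])

Given such a function, reallocating effort, resizing the sales force, or realigning territories can each be evaluated by recomputing expected profit under the changed inputs.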

5.
Gernsbacher (1984) found that number of word meanings (polysemy) did not influence lexical decision time when it was operationalized as number of dictionary definitions. This finding supports her contention that subjects do not store all possible dictionary meanings for words in memory. The present experiments extended Gernsbacher's research by determining whether more psychologically valid measures of polysemy affect lexical decision time. Three metrics were used to represent the meanings that subjects actually access from memory (accessible polysemy): (1) the first meanings subjects think of when asked to define stimulus words, (2) all the meanings subjects generate for words, and (3) the average number of meanings subjects generate. The results showed that the second and third metrics of polysemy influenced lexical decision time, whereas the first metric (representing mostly the access to dominant meanings for words) only approached significance.  相似文献   

6.
The aim of this study was to focus on similarities in the discrimination of three different quantities—time, number, and line length—using a bisection task involving children aged 5 and 8 years and adults, when number and length were presented nonsequentially (Experiment 1) and sequentially (Experiment 2). In the nonsequential condition, for all age groups, although to a greater extent in the younger children, the psychophysical functions were flatter, and the Weber ratio higher for time than for number and length. Number and length yielded similar psychophysical functions. Thus, sensitivity to time was lower than that to the other quantities, whether continuous or not. However, when number and length were presented sequentially (Experiment 2), the differences in discrimination performance between time, number, and length disappeared. Furthermore, the Weber ratio values as well as the bisection points for all quantities presented sequentially appeared to be close to that found for duration in the nonsequential condition. The results are discussed within the framework of recent theories suggesting a common mechanism for all analogical quantities.  相似文献   
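As a minimal illustration of the quantities compared across age groups, the sketch below fits a cumulative-Gaussian psychometric function to hypothetical bisection data and derives a bisection point and Weber ratio from it. The data values and the particular Weber-ratio definition (half the 25%-75% interquartile range divided by the bisection point) are assumptions for illustration, not the study's own analysis.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Hypothetical proportions of "long"/"many" responses at nine comparison magnitudes.
    x = np.array([200.0, 280, 360, 440, 520, 600, 680, 760, 800])
    p_long = np.array([0.02, 0.08, 0.20, 0.38, 0.55, 0.72, 0.86, 0.95, 0.98])

    def psychometric(x, mu, sigma):
        # Cumulative-Gaussian psychometric function.
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(psychometric, x, p_long, p0=[500.0, 100.0])

    bp = mu                                                               # bisection point (50% point)
    dl = (norm.ppf(0.75, mu, sigma) - norm.ppf(0.25, mu, sigma)) / 2      # difference limen
    weber_ratio = dl / bp                                                 # flatter function -> larger ratio
    print(f"bisection point = {bp:.1f}, Weber ratio = {weber_ratio:.3f}")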

7.
Abstract

Accelerated longitudinal designs (ALDs) are designs in which participants from different cohorts provide repeated measures covering a fraction of the time range of the study. ALDs allow researchers to study developmental processes spanning long periods within a relatively shorter time framework. The common trajectory is studied by aggregating the information provided by the different cohorts. Latent change score (LCS) models provide a powerful analytical framework to analyze data from ALDs. With developmental data, LCS models can be specified using measurement occasion as the time metric. This provides a number of benefits, but has an important limitation: it makes it impossible to characterize the longitudinal changes as a function of a developmental process such as age or biological maturation. To overcome this limitation, we propose an extension of an occasion-based LCS model that includes age differences at the first measurement occasion. We conducted a Monte Carlo study and compared the results of including different transformations of the age variable. Our results indicate that some of the proposed transformations resulted in accurate expectations for the studied process across all the ages in the study, and excellent model fit. We discuss these results and provide the R code for our analysis.
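For readers unfamiliar with the occasion-based parameterization, the following is a generic univariate latent change score specification; the exact model and the way age at the first occasion enters it differ in the paper, so this is only an illustrative baseline.

    % Generic (dual) latent change score model over occasions t = 2, ..., T.
    % y_t: observed score; \eta_t: latent true score; s: constant slope factor.
    \begin{align}
      y_t          &= \eta_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma^2_{\varepsilon}) \\
      \eta_t       &= \eta_{t-1} + \Delta\eta_t \\
      \Delta\eta_t &= \alpha\, s + \beta\, \eta_{t-1}
    \end{align}
    % With measurement occasion as the time metric, \Delta\eta_t is "change per occasion".
    % Conditioning \eta_1 and s on age at the first occasion is one way to re-express
    % the trajectory as a function of a developmental process such as age.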

8.
Different temporal logics suit different reasoning tasks. To match the intended application, models of time range from the discrete natural numbers and integers to the dense linear real numbers, and extend even to interval algebras and tree algebras. If the expressive power of simple temporal connectives is sufficient, a temporal logic built from only those simple connectives will do. At the cost of possibly slower computation, more connectives can be used to achieve greater expressive power, and metric information or fixed points can be added. The author has recently established a surprising result: reasoning in a sufficiently expressive language over real-numbered time has the same computational complexity as reasoning with the traditional simple operators over the discrete time flow of the natural numbers. In this paper, the author gives a new account of the computational complexity of the decision problems for temporal logics built from the standard temporal connectives over the usual classes of linear time flows. In particular, it is shown that the decision problems for all the standard logics are in PSPACE.

9.
The study addressed the need to structure, in a few variables, the domain measured by the personality and interest measures commonly employed in educational counseling: the Strong, Kuder, EPPS, and Study of Values. Despite initial uncertainty regarding the number of factors to employ, the effects of ipsative scores, and the mixing of test formats, the oblique and orthogonal rotations yielded nearly identical results. Of the twenty factors identified by both the biquartimin and varimax solutions, seven linked vocational interest clusters with personality. Two of the remaining factors had only interest loadings, while of the eleven personality factors, only four were scale specific. Definition of the 16 common factors required that extraction proceed beyond the unit latent root criterion. The results offer evidence that over-extracting factors does not confuse the results of rotation. Further, psychometric differences between tests had essentially no effect on the factors found. Of three oblimin rotations attempted, only the biquartimin was successful, yielding results essentially like those of the varimax solution. Because of the vast difference in computation time between these two solutions (computer time 20 times greater for biquartimin), however, the orthogonal varimax remains the method of choice.

10.
The complexity of categorical syllogisms was assessed using the relational complexity metric, which is based on the number of entities that are related in a single cognitive representation. This was compared with number of mental models in an experiment in which adult participants solved all 64 syllogisms. Both metrics accounted for similarly large proportions of the variance, showing that complexity depends on the number of categories that are related in a representation of the combined premises, whether represented in multiple mental models, or by a single model. This obviates the difficulty with mental models theory due to equivocal evidence for construction of more than one mental model. The “no valid conclusion” response was used for complex syllogisms that had valid conclusions. The results are interpreted as showing that the relational complexity metric can be applied to syllogistic reasoning, and can be integrated with mental models theory, which together account for a wide range of cognitive performances.  相似文献   

11.
Individual differences in the ability to compare and evaluate nonsymbolic numerical magnitudes—approximate number system (ANS) acuity—are emerging as an important predictor in many research areas. Unfortunately, recent empirical studies have called into question whether a historically common ANS-acuity metric—the size of the numerical distance effect (NDE size)—is an effective measure of ANS acuity. NDE size has been shown to frequently yield divergent results from other ANS-acuity metrics. Given these concerns and the measure’s past popularity, it behooves us to question whether the use of NDE size as an ANS-acuity metric is theoretically supported. This study seeks to address this gap in the literature by using modeling to test the basic assumption underpinning use of NDE size as an ANS-acuity metric: that larger NDE size indicates poorer ANS acuity. This assumption did not hold up under test. Results demonstrate that the theoretically ideal relationship between NDE size and ANS acuity is not linear, but rather resembles an inverted J-shaped distribution, with the inflection points varying based on precise NDE task methodology. Thus, depending on specific methodology and the distribution of ANS acuity in the tested population, positive, negative, or null correlations between NDE size and ANS acuity could be predicted. Moreover, peak NDE sizes would be found for near-average ANS acuities on common NDE tasks. This indicates that NDE size has limited and inconsistent utility as an ANS-acuity metric. Past results should be interpreted on a case-by-case basis, considering both specifics of the NDE task and expected ANS acuity of the sampled population.  相似文献   
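The non-monotonic relationship described above can be reproduced with a very small simulation. The sketch below assumes the standard ANS comparison model in which discrimination accuracy depends on the Weber fraction w, and summarizes the NDE as the accuracy gain from the smallest to the largest distance; both the model form and this summary are illustrative assumptions, not the authors' exact methodology.

    import numpy as np
    from scipy.stats import norm

    def p_correct(n1, n2, w):
        # Common ANS comparison model: accuracy depends on the numerosity difference
        # scaled by internal noise that grows with the Weber fraction w.
        return norm.cdf(abs(n1 - n2) / (w * np.sqrt(n1**2 + n2**2)))

    def nde_size(w, reference=16, distances=(1, 2, 4, 8)):
        # Summarize the distance effect as the accuracy gain from the smallest
        # to the largest distance against a fixed reference numerosity.
        acc = [p_correct(reference, reference + d, w) for d in distances]
        return acc[-1] - acc[0]

    # Better acuity (smaller w) does not map monotonically onto a larger NDE:
    for w in (0.05, 0.1, 0.2, 0.4, 0.8):
        print(f"w = {w:.2f}  ->  NDE size = {nde_size(w):.3f}")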

12.
Piéron's Law describes the relationship between stimulus intensity and reaction time. Previously (Stafford & Gurney, 2004), we have shown that Piéron's Law is a necessary consequence of rise-to-threshold decision making and thus will arise from optimal simple decision-making algorithms (e.g., Bogacz, Brown, Moehlis, Holmes, & Cohen, 2006). Here, we manipulate the color saturation of a Stroop stimulus. Our results show that Piéron's Law holds for color intensity and color-naming reaction time, extending the domain of this law, in line with our suggestion of the generality of the processes that can give rise to Piéron's Law. In addition, we find that Stroop condition does not interact with the effect of color saturation; Stroop interference and facilitation remain constant at all levels of color saturation. An analysis demonstrates that this result cannot be accounted for by single-stage decision-making algorithms which combine all the evidence pertaining to a decision into a common metric. This shows that human decision making is not information-optimal and suggests that the generalization of current models of simple perceptual decision making to more complex decisions is not straightforward.  相似文献   
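Piéron's Law is usually written as RT = R0 + k*I^(-beta), with reaction time approaching the asymptote R0 as intensity I grows. A minimal fitting sketch with made-up saturation levels and mean RTs is shown below; the values and the use of scipy's curve_fit are illustrative, not the authors' analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    def pieron(intensity, r0, k, beta):
        # Piéron's Law: RT falls as a power function of intensity toward an asymptote r0.
        return r0 + k * intensity ** (-beta)

    # Hypothetical colour-saturation levels and mean naming RTs in milliseconds.
    saturation = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
    rt = np.array([720.0, 640.0, 580.0, 555.0, 542.0, 535.0])

    (r0, k, beta), _ = curve_fit(pieron, saturation, rt, p0=[500.0, 50.0, 1.0])
    print(f"RT ~ {r0:.0f} + {k:.1f} * I^(-{beta:.2f})")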

13.
Dissimilarity is a function that assigns to every pair of stimuli a nonnegative number vanishing if and only if two stimuli are identical, and that satisfies the following two conditions called the intrinsic uniform continuity and the chain property, respectively: it is uniformly continuous with respect to the uniformity it induces, and, given a set of stimulus chains (finite sequences of stimuli), the dissimilarity between their initial and terminal elements converges to zero if the chains’ length (the sum of the dissimilarities between their successive elements) converges to zero. The four properties axiomatizing this notion are shown to be mutually independent. Any conventional, symmetric metric is a dissimilarity function. A quasimetric (satisfying all metric axioms except for symmetry) is a dissimilarity function if and only if it is symmetric in the small. It is proposed to reserve the term metric (not necessarily symmetric) for such quasimetrics. A real-valued binary function satisfies the chain property if and only if whenever its value is sufficiently small it majorates some quasimetric and converges to zero whenever this quasimetric does. The function is a dissimilarity function if, in addition, this quasimetric is a metric with respect to which the function is uniformly continuous.  相似文献   
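The four defining properties listed in the abstract can be stated compactly; the notation below (a dissimilarity D over stimulus pairs) is ours, not the original paper's.

    \begin{align}
      &\text{(nonnegativity)}                 && D(a,b) \ge 0, \\
      &\text{(zero property)}                 && D(a,b) = 0 \iff a = b, \\
      &\text{(intrinsic uniform continuity)}  && D \text{ is uniformly continuous with respect to the uniformity it induces}, \\
      &\text{(chain property)}                && D(x_1, x_n) \to 0 \text{ whenever } \textstyle\sum_{i=1}^{n-1} D(x_i, x_{i+1}) \to 0 .
    \end{align}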

14.
ABSTRACT

An accurate understanding of visitor interest is critical to the education and conservation missions of zoos. However, studies that consider multiple influences are rare, and measures such as stay time that have been used to measure visitor interest vary widely, making broader inferences challenging. The authors sought to (a) compare the relative influences of social interactions, animal behavior, environmental factors, and animal species on visitor stay time and (b) evaluate how conclusions vary depending on the metric of stay time used. They conducted 701 direct observations of zoo visitors at a big cat exhibit. The data suggest that animal visibility was a critical factor driving stay time. Animal species played a minor role. The relative importance of the number of other visitors present and animal activity level differed depending on the stay time metric used. Nine other factors examined were relatively unimportant in predicting stay time. These results have important implications for exhibit design, crowd flow management, animal husbandry, collection management, and educational programs in zoos.  相似文献   

15.
Although a fully general extension of ROC analysis to classification tasks with more than two classes has yet to be developed, the potential benefits to be gained from a practical performance evaluation methodology for classification tasks with three classes have motivated a number of research groups to propose methods based on constrained or simplified observer or data models. Here we consider an ideal observer in a task with underlying data drawn from three univariate normal distributions. We investigate the behavior of the resulting ideal observer’s decision variables and ROC surface. In particular, we show that the pair of ideal observer decision variables is constrained to a parametric curve in two-dimensional likelihood ratio space, and that the decision boundary line segments used by the ideal observer can intersect this curve in at most six places. From this, we further show that the resulting ROC surface has at most four degrees of freedom at any point, and not the five that would be required, in general, for a surface in a six-dimensional space to be non-degenerate. In light of the difficulties we have previously pointed out in generalizing the well-known area under the ROC curve performance metric to tasks with three or more classes, the problem of developing a suitable and fully general performance metric for classification tasks with three or more classes remains unsolved.  相似文献   
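To make the "parametric curve in likelihood ratio space" concrete: for data x drawn from one of three univariate normal densities p_1, p_2, p_3, a natural choice of the ideal observer's two decision variables is the pair of likelihood ratios against the third class (this particular normalization is our assumption; the paper's notation may differ).

    \[
      \lambda_1(x) = \frac{p_1(x)}{p_3(x)}, \qquad
      \lambda_2(x) = \frac{p_2(x)}{p_3(x)}, \qquad
      p_i(x) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\!\left(-\frac{(x-\mu_i)^2}{2\sigma_i^2}\right).
    \]

Because both coordinates are functions of the single scalar x, the pair (lambda_1(x), lambda_2(x)) traces a one-dimensional curve in the two-dimensional likelihood-ratio plane; this is the constraint that limits how many times the ideal observer's linear decision boundaries can intersect it and, in turn, the degrees of freedom of the ROC surface.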

16.
Surface color is traditionally measured by matching methods. However, in some conditions, the color of certain surfaces cannot be measured: The surface simply looks brighter or darker than all the patches on a matching scale. We studied the reliability, validity, and range of application of three different types of simulated Munsell scales (white-, black-, and split-surrounded) as methods for measuring surface colors in simple disk-ring displays. All the scales were equally reliable for matching both increments and decrements, but about 20% of the increments were unmatchable on the white-surrounded scale, about 13% of the decrements were unmatchable on the black-surrounded scale, and about 9% of the increments were unmatchable on the split-surrounded scale. However, matches on all the scales were linearly related. Therefore, it is possible to convert them to common units, using regression parameters. These units provide an extended metric for measuring all increments and decrements in the stimulus space, effectively removing ceiling and floor effects, and providing measures even for surfaces that were perceived as out of range on some of the scales.  相似文献   

17.
The generalized graded unfolding model (GGUM) is an item response theory (IRT) model that implements symmetric, nonmonotonic, single-peaked item characteristic curves. The GGUM is appropriate for measuring individual differences for a variety of psychological constructs, especially attitudes. Like other IRT models, the location and scale (i.e., the metric) of parameter estimates from the GGUM are data dependent. Therefore, parameter estimates from alternative calibrations will generally not be comparable, even when responses to the same items are analyzed. GGUMLINK is a computer program developed to reexpress parameter estimates from two separate GGUM calibrations in a common metric. In this way, the results from separate calibrations of model parameters can be compared. GGUMLINK can secure a common metric by using one of five methods that have recently been generalized to the GGUM. The GGUMLINK executable program is available free and may be downloaded from http://www.education.umd.edu/EDMS/tutorials/index.html.  相似文献   
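As a rough illustration of what "securing a common metric" involves, the sketch below applies a generic linear (mean/sigma-style) linking transformation to parameters from a second calibration; GGUMLINK's five methods estimate the linking constants in their own specific ways, so the functions and example values here are illustrative assumptions only.

    import numpy as np

    def mean_sigma_constants(delta_base, delta_new):
        # Estimate slope A and intercept B from the location estimates of items
        # shared by the two calibrations (a generic IRT linking heuristic).
        A = np.std(delta_base) / np.std(delta_new)
        B = np.mean(delta_base) - A * np.mean(delta_new)
        return A, B

    def apply_link(theta, delta, alpha, A, B):
        # Re-express person locations (theta), item locations (delta), and
        # discriminations (alpha) on the base calibration's metric.
        return A * theta + B, A * delta + B, alpha / A

    # Example with hypothetical shared-item locations from two calibrations.
    A, B = mean_sigma_constants(np.array([-1.0, 0.0, 1.2]), np.array([-0.8, 0.1, 1.5]))
    theta2, delta2, alpha2 = apply_link(np.array([0.3]),
                                        np.array([-0.8, 0.1, 1.5]),
                                        np.array([1.1, 0.9, 1.3]), A, B)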

18.
PCA has become an increasingly used analysis technique in the movement domain to reveal patterns in data of various kinds (e.g., kinematics, kinetics, EEG, EMG) and to compress the dimension of the multivariate data set recorded. It appears that virtually all movement-related PCA analyses have, however, been conducted in the time domain (PCAt). This standard approach can be biased when there are lead-lag (phase-related) properties to the multivariate time series data. Here we show through theoretical derivation and analysis of simulated and experimental postural kinematics data sets that PCAt and PCA in the frequency domain (PCAf) can lead to contrasting determinations of the dimension of a data set, with the tendency of PCAt to overestimate the number of components. PCAf also provides the possibility of obtaining amplitude and phase-difference spectra for each principal component that are uniquely suitable to reveal control mechanisms of the system. The bias in the PCAt estimate of the number of components can have significant implications for the veracity of the interpretations drawn in regard to the dynamical degrees of freedom of the perceptual-motor system.
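The lead-lag bias is easy to demonstrate with a two-channel toy signal: two copies of the same oscillation offset by 90 degrees are uncorrelated in the time domain, so PCAt needs two components, whereas the complex-valued cross-spectral decomposition at the oscillation frequency is rank one. The signal, the single-bin spectral estimate, and the variable names below are illustrative assumptions, not the paper's postural data or analysis pipeline.

    import numpy as np

    fs, f = 100.0, 2.0
    t = np.arange(0, 30, 1 / fs)
    rng = np.random.default_rng(0)
    x1 = np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(t.size)
    x2 = np.sin(2 * np.pi * f * t - np.pi / 2) + 0.05 * rng.standard_normal(t.size)
    X = np.column_stack([x1, x2])

    # PCA in the time domain: eigendecomposition of the real covariance matrix.
    evals_t = np.linalg.eigvalsh(np.cov(X.T))
    print("time-domain eigenvalue shares:", evals_t / evals_t.sum())       # roughly 0.5 / 0.5

    # PCA in the frequency domain: eigendecomposition of the (Hermitian)
    # cross-spectral matrix at the oscillation frequency.
    F = np.fft.rfft(X, axis=0)
    freqs = np.fft.rfftfreq(X.shape[0], d=1 / fs)
    k = np.argmin(np.abs(freqs - f))
    S = np.outer(F[k], np.conj(F[k]))
    evals_f = np.linalg.eigvalsh(S)
    print("frequency-domain eigenvalue shares:", evals_f / evals_f.sum())  # roughly 0 / 1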

19.
Great interest in non-metric multidimensional scaling has resulted in a number of computer programs to derive solutions. This study examined the effect upon stress of data generated under five metrics and recovered under all five metrics. MDSCAL-5M, TORSCA-9, and POLYCON-II were used to analyse these data. POLYCON-II was the most accurate, although none of the programs was highly successful. In most cases, recovery with the Euclidean metric provided, if not the best, then very close to the best recovery, regardless of the true metric. This study also raised the question of the advisability of using different metric models in nonmetric multidimensional scaling and found that even very different Minkowski metrics are quite similar in the way they rank-order dissimilarities.
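The closing point, that very different Minkowski metrics rank-order dissimilarities almost identically, is easy to check numerically. In the sketch below the random two-dimensional configuration and the choice of exponents are illustrative assumptions; nonmetric MDS uses only the rank order of the dissimilarities, which is what the Spearman correlations summarize.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    points = rng.uniform(size=(30, 2))              # a random 2-D "stimulus" configuration

    d_p1 = pdist(points, metric="minkowski", p=1)   # city-block
    d_p2 = pdist(points, metric="minkowski", p=2)   # Euclidean
    d_p8 = pdist(points, metric="minkowski", p=8)   # approximates the dominance (sup) metric

    rho_12, _ = spearmanr(d_p1, d_p2)
    rho_18, _ = spearmanr(d_p1, d_p8)
    print(f"rank agreement p=1 vs p=2: {rho_12:.3f}; p=1 vs p=8: {rho_18:.3f}")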

20.
Previous studies have shown an interference of task-irrelevant numerical information with the spatial parameters of visuomotor behaviour. These findings lend support to the notion that number and space share a common metric with respect to action. Here I argue that the demonstration of the structural similarity between scales for number and space would be a more stringent test for the shared metrics than a mere fact of interference. The present study investigated the scale of number mapping onto space in a manual estimation task. The physical size of target stimuli and the magnitudes of task-irrelevant numbers were parametrically manipulated in the context of the Titchener illusion. The results revealed different scaling schemas for number and space. Whereas estimates in response to changes in stimulus physical size showed a gradual increase, the effect of number was categorical with the largest number (9) showing greater manual estimate than the other numbers (1, 3, and 7). Possible interpretations that are not necessarily incompatible with the hypothesis of shared metrics with respect to action are proposed. However, the present findings suggest that a meticulous scale analysis is required in order to determine the nature of number–space interaction.  相似文献   
