Abstract: | A new algorithm for multidimensional scaling (MDS) analysis of sorting data and hierarchical-sorting data is tested by applying it to facial expressions of emotion. We construct maps in “facial expression space” for two sets of still photographs: the I-FEEL series (expressions displayed spontaneously by infants and young children) and a subset of the Lightfoot series (posed expressions, all from one actress). The analysis avoids potential artefacts by fitting a map directly to the subjects' judgments, rather than first transforming the data into a matrix of estimated dissimilarities. For both stimulus sets, the resulting maps show improved agreement with existing maps. Some points emerge about the limitations of sorting data and the need for caution when interpreting MDS configurations derived from them. |