THE SHAPE OF THE UNDERLYING DISTRIBUTIONS IN ABSOLUTE IDENTIFICATION EXPERIMENTS
In signal-detection analyses of one-dimensional, n-alternative, absolute-identification (AI) experiments it is usually assumed that the n stimuli give rise to n equal-variance normal distributions (EVNDs) along a one-dimensional decision axis. However, Parker et al. (2002) have argued that equal-variance Laplace distributions (EVLDs) provide a better fit to AI data. This result is somewhat counter-intuitive, especially if the distributions of effects along the decision axis are thought to arise from noise (or an accumulation of small errors) in the decision process, which, according to the central limit theorem, should give rise to normal distributions. Here, we show that even when the data from AI experiments are generated from EVNDs, EVLDs will characterize the results whenever the data are averaged across sessions (either within- or between-subjects) in which the underlying acuity (separation between distributions) is changing, a situation that is likely to occur whenever there are changes in gain control.
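The mechanism behind this claim can be illustrated with a simple simulation; the following is a hypothetical sketch (not the authors' analysis), in which session-to-session changes in acuity are modeled as changes in the effective noise scale in normalized decision-axis units. A mixture of normals whose scale varies across sessions is leptokurtic, i.e. heavier-tailed than any single normal, which is the direction of the Laplace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: each "session" produces equal-variance normal
# noise, but acuity (separation between distributions) varies between
# sessions, which in normalized units acts as a varying scale.
n_sessions, n_trials = 50, 2000
scales = rng.uniform(0.5, 2.0, size=n_sessions)  # session-to-session acuity change
pooled = np.concatenate([rng.normal(0.0, s, n_trials) for s in scales])

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for a normal, 3 for a Laplace."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

# The pooled scale mixture of normals is heavy-tailed (positive excess
# kurtosis), even though every session's data are exactly normal.
print(excess_kurtosis(pooled))
```

With the scale range assumed above, the pooled excess kurtosis is clearly positive (analytically about 1.18), so fits to the averaged data will favor a heavier-tailed form such as the Laplace over a single normal.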