Participants shift response deadlines based on list difficulty during reading-aloud megastudies

Authors: Michael J. Cortese, Maya M. Khanna, Robert Kopp, Jonathan B. Santo, Kailey S. Preston, Tyler Van Zuiden

Affiliations: 1. University of Nebraska, Omaha, USA; 2. Creighton University, Omaha, USA

Abstract: We tested the list homogeneity effect in reading aloud (e.g., Lupker, Brown, & Colombo, 1997) using a megastudy paradigm. In each of two conditions, we used 25 blocks of 100 trials. In the random condition, words were selected randomly for each block, whereas in the experimental condition, words were blocked by difficulty (e.g., easy words grouped together), but the order of the blocks was randomized. We predicted that standard factors (e.g., frequency) would be more predictive of reaction times (RTs) in the blocked than in the random condition, because the range of RTs across the experiment would increase in the blocked condition. Indeed, we found that the standard deviations and ranges of RTs were larger in the blocked than in the random condition. In addition, an examination of items at the difficulty extremes (i.e., very easy vs. very difficult) demonstrated a response bias. In regression analyses, a predictor set of seven sublexical, lexical, and semantic variables accounted for 2.8% more RT variance (and 2.6% more zRT variance) in the blocked than in the random condition. These results indicate that response deadlines apply to megastudies of reading aloud, and that the influence of predictors may be underestimated in megastudies when item presentation is randomized. In addition, the CDP++ model accounted for 0.8% more variance in RTs (1.2% in zRTs) in the blocked than in the random condition. Thus, computational models may have more predictive power on item sets blocked by difficulty than on those presented in random order. The results also indicate that models of word processing need to accommodate response criterion shifts.
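
The abstract specifies the design (25 blocks of 100 trials, items either assigned at random or grouped by difficulty with block order randomized) but not the assignment procedure itself. A minimal sketch of the two presentation conditions, assuming a per-word difficulty score (e.g., a normed mean RT) is available, might look as follows; the function and variable names are illustrative, not taken from the study.

    import random

    def make_blocks(words, difficulty, blocked, n_blocks=25, block_size=100):
        """Assign words to blocks under the two presentation conditions.

        words      : list of item identifiers
        difficulty : dict mapping each word to a difficulty score
                     (hypothetical, e.g., a normed mean RT)
        blocked    : if True, group items of similar difficulty into the
                     same block and randomize the order of the blocks;
                     if False, assign items to blocks at random
        """
        items = list(words)[: n_blocks * block_size]
        if blocked:
            # Sort by difficulty so each block holds items of similar difficulty.
            items.sort(key=lambda w: difficulty[w])
            blocks = [items[i * block_size:(i + 1) * block_size]
                      for i in range(n_blocks)]
            random.shuffle(blocks)       # block order is randomized
            for block in blocks:
                random.shuffle(block)    # trial order within each block
        else:
            random.shuffle(items)        # fully random assignment
            blocks = [items[i * block_size:(i + 1) * block_size]
                      for i in range(n_blocks)]
        return blocks

    # Example with 2,500 hypothetical items and arbitrary difficulty scores.
    word_list = [f"word{i}" for i in range(2500)]
    scores = {w: random.random() for w in word_list}
    blocked_lists = make_blocks(word_list, scores, blocked=True)
    random_lists = make_blocks(word_list, scores, blocked=False)

Under this scheme, the blocked condition concentrates easy and difficult items into separate blocks, which is what would allow participants to shift their response deadlines block by block, widening the overall RT range relative to the random condition.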