Abstract
Using multiple choice tasks per respondent in discrete choice experiment studies increases the amount of available information. However, respondents’ learning and fatigue may lead to changes in the preference (taste) parameters of the observed utility function, as well as in the variance of its error term (scale); these changes need to be controlled for to avoid potential bias. A sizable body of empirical research offers mixed evidence on whether such ordering effects are observed. We point to a significant component in explaining these differences: we show how accounting for unobservable preference and scale heterogeneity can influence the magnitude of observed ordering effects. (JEL Q23, Q51)