Location, Location, Habitat: How the Value of Ecosystem Services Varies across Location and by Habitat

Matthew G. Interis and Daniel R. Petrolia

Abstract

We used a choice experiment to examine how ecosystem service values (ESVs) vary across locations and, for the first time, across habitats. The study context was three habitats (oyster reef, salt marsh, and black mangrove) in two U.S. Gulf Coast locations. The null hypothesis of ESV equality across locations was rejected 44% of the time and, when tested over suites of services, was rejected 50% of the time. Across habitats, the null hypothesis was rejected 22% and 10% of the time, respectively. Overall, benefit transfer across habitats appeared to work fairly well, whereas results were more mixed across locations. (JEL H41, Q51)

I. INTRODUCTION

A central advantage of choice experiments over more traditional stated preference valuation methods such as contingent valuation is that they can be used to estimate the value of particular ecosystem services (Adamowicz et al. 1998). Furthermore, several studies have investigated whether choice experiments can be used to improve benefit transfer, with the intuition being that the modeling of changes at the ecosystem service level allows transferred values to be adjusted accordingly, based on the ecosystem services of the site to which values are being transferred (see Morrison and Bergland [2006] for a review of this literature).

In the present study, we use a choice experiment to estimate the value of incremental changes in four ecosystem services—water quality, fisheries support, flood protection, and bird habitat—in each of two different locations: Barataria-Terrebonne Estuary, Louisiana, and Mobile Bay, Alabama. Furthermore, we estimate the value of these ecosystem services when provided by three distinct habitat types found along the northern coast of the Gulf of Mexico: oyster reefs, black mangroves, and salt marsh. Each habitat provides, to varying degrees, all four ecosystem services (see Coen et al. 2007; Grabowski and Peterson 2007; Koch et al. 2009; Strange et al. 2002; Zedler and Kercher 2005). We test whether ecosystem service values differ across location and, for the first time, whether they differ depending upon the providing habitat. Also for the first time, we conduct benefit transfer tests across habitats holding location constant.

Several studies have examined the suitability of choice experiments for benefit transfer. Morrison et al. (2002) find that transfers across different sites are subject to less error than transfers across populations for a given site. Jiang, Swallow, and McGonagle (2005) find that socioeconomic and attitudinal variables of the corresponding populations affect transfer results. Colombo, Calatrava-Requena, and Hanley (2007) find that allowing for preference heterogeneity reduces transfer error. Johnston (2007) finds that similar policy context across sites is more essential to quality transfers than simple geographic proximity. Bateman et al. (2011) find that, for similar sites, simple value transfer outperforms value function transfer, whereas for more dissimilar sites, value function transfer works better.

Our study adds to this literature by providing an additional examination of the quality of benefit transfers based on choice experiment data, which we judge based on the most common tests in the literature: equivalence of model parameters, equality of implicit prices (marginal willingness to pay for changes in ecosystem services), and equality of compensating surplus measures. The greatest marginal contribution of our study, however, is the examination of the suitability of choice experiment data for value transfers within the same location, but across ecosystem services provided by different habitats. So far as we know, ours is the first study to do so.

Our analysis also incorporates a recent methodological development. We restrict the bid parameter to the positive domain using an approach outlined by Carson and Czajkowski (2013), who argue that the typical practice for constructing confidence intervals around value estimates is incorrect when the domain of the parameter in the denominator spans zero: the mean and standard deviation of the derived distribution are then undefined. The approach they outline avoids this problem.

All our tests are performed either across locations for the same habitat or across habitats for the same location, for the two locations and three habitats in our study. We find that model parameters are not equivalent in any comparison. We reject the null hypothesis of equality of implicit prices across locations for a given habitat 44% of the time, and reject the null across habitats for a given location 22% of the time. We find that, overall, 23% of estimates of compensating surplus are unequal between transferred and direct estimates. However, all of the rejections of equality involve salt marsh in Alabama. With this exception, we conclude that transfers of ecosystem service values across habitats work well for our dataset, whereas the results for transfers across locations are somewhat more mixed.

II. SURVEY DESIGN

We used a choice experiment survey to elicit choice responses and other information that were then used in value estimation. In each choice set, the respondent was asked whether he was willing to pay a specified price for one of two proposed habitat construction projects, or if he would prefer neither be implemented and to pay nothing. The two construction projects differed in the amount of each ecosystem service they were predicted to deliver. The consideration of multiple competing project designs was justified by the fact that there is some flexibility in precise location (within each water body) of constructed habitat and in other design details and technology used to complete the project. An example choice question for an oyster reef construction project is shown in Figure 1, where the blanks would be filled in with prespecified values drawn from those in Table 1.

TABLE 1

Attributes and Levels

Table 2 presents a diagram of the treatments pertaining to the present study. There were two location treatments of the survey, each specifying a different location where a funded project would occur: the Barataria-Terrebonne estuary in Louisiana and Mobile Bay in Alabama. Each respondent was asked about a construction project involving one of the three targeted habitats (oyster reefs, mangroves, or salt marsh), depending upon which location treatment he was in. The habitat choices were limited by existing levels of the habitats in the location of interest. For example, Mobile Bay has few mangroves to speak of, so it was deemed unrealistic to propose constructing mangrove habitat where there currently was none. The Barataria-Terrebonne estuary has all three habitats.

TABLE 2

Diagram of Treatments by Region and Habitat

The choice experiment design was developed using Ngene software, in which 24 choice sets were created in order to maximize D-efficiency (see ChoiceMetrics 2012). Most respondents were asked a single choice question; however, some were asked four choice questions; in other words, a repeated-choice format was used.1

To increase the perception that their responses would be meaningful in the sense that they could actually influence future policy (Carson and Groves 2007), respondents were told at the beginning of the survey that a large number of taxpayers would be taking the survey and that their responses would be shared with policymakers and could affect how much they pay in taxes in the future. Respondents were then given some information about their assigned habitat, including an explanation of some of the ecosystem services it provides. Then it was explained that policymakers were considering implementing a habitat construction program, and details were given about how such a program would be implemented, including how many acres of habitat would be created and when, where, and by whom they would be created.

Respondents were shown maps of candidate locations within each water body where habitat could potentially be constructed, as well as the locations of existing habitat. An example map for the oyster construction project in Mobile Bay is shown in Figure 2.

FIGURE 2

Example Habitat Map Shown to Respondents

The payment mechanism specified was a one-time payment collected on the respondent’s state tax return filed the following year. It was stipulated that the tax revenue would partially cover the cost of an implemented program, with the remainder of funds coming from existing tax dollars. It was explained that construction would commence the following year and take five years to complete. The expected benefits—the provided ecosystem services—were expected to last 30 years after completion.

To aid in the design of our survey instrument, we hired New South Research, based in Birmingham, Alabama, to conduct two focus groups, one in Birmingham, Alabama (representing a noncoastal population), and one in Mobile, Alabama (representing a coastal population), in December of 2012. The primary output sought from the focus groups was information that would be helpful in designing a realistic hypothetical habitat construction scenario. To this end, much of the discussion centered on what kind of information people would like to know in order to choose between competing construction programs and on which benefits of habitat construction would be most important to them; we also asked participants to critique the wording of portions of a draft survey we presented to them. In addition, we asked open-ended willingness-to-pay questions for some candidate projects in order to hone the bid range used in the final survey. Based on the focus groups, we decided to concentrate on the attributes, attribute levels, and prices listed in Table 1.

III. DATA COLLECTION

The survey was administered by GfK Custom Research (formerly Knowledge Networks), which has a prerecruited panel of households who have agreed to be contacted periodically to complete surveys online (known as KnowledgePanel®). Because we sought more observations than GfK could provide with their Alabama and Louisiana panels, GfK subcontracted with some of its partner organizations to obtain extra observations in those states (we refer to these respondents as “off-panel”). It should be noted that while GfK is known for having a panel that is representative of the U.S. population and its state populations, this representativeness does not necessarily carry over to the off-panel respondents.2 In April of 2013, an initial pretest of the survey was administered to 25 respondents to make sure the online survey was working properly and to elicit open-ended feedback about respondent understanding and ease of completion. The final survey was administered in May and June 2013. Out of 8,573 respondents sampled, 5,366 (63%) completed the survey. The final sample of respondents includes 5,196 respondents with no missing values for the variables of interest.

In addition to answering the choice questions and standard demographic questions, respondents were asked to rate their confidence in federal, state, local, and private agencies to implement projects like the ones proposed (no confidence, some confidence, a lot of confidence) in order to capture any perceived incompetence of one or more of the agencies involved in construction. To control for predisposition toward environmentally focused projects, we asked respondents whether they had made no, minor, or major changes to their shopping choices and lifestyle over the last five years to help the environment.

We asked two questions in an attempt to better understand the perceived incentives faced by respondents. The first question asked what amount of influence the respondent believed the survey would have on actual habitat projects in the Gulf (no influence, a small influence, a large influence). This is our measure of perceived consequentiality. Carson and Groves (2007) argue that if respondents do not believe that their responses will have any effect on anything they care about, then applying standard economic theory to the data is inappropriate because any response to the choice question will yield the same expected utility. If we are to assume that respondents make choices to maximize their expected utility, it must be that respondents’ choices actually affect their expected utility. The second question asked (yes or no) whether respondents believed their household would actually have to pay the specified tax for a project if it were implemented. Compelled payment is essential for incentive compatibility in binary-choice questions (Carson and Groves 2007; Herriges et al. 2010) and, while it is well known that multinomial choices cannot generally be made incentive compatible, compelled payment nonetheless ensures that respondents consider the specified prices of the construction programs when casting their vote. We also asked whether respondents considered their budget when making their choice (Cameron and DeShazo 2013).

Table 3 displays the descriptive statistics of the variables used in the analysis. The variable income was measured in 19 categories, where 1 indicates an annual income of less than $5,000, and 19 indicates an annual income of greater than $175,000. The category midpoints were used for categories 1 through 18, and $175,000 was used for category 19. Forty-eight percent of respondents were either looking for work or not working due to retirement or disability. The average respondent was less confident in the ability of the federal government than in state or local governments or in private companies, to implement the projects. Eighty percent of respondents believed the survey would have at least a small influence on future habitat construction projects in the Gulf, and 77% believed their household would actually have to pay the specified tax if a project was implemented. Notably, the proportion of male respondents is low (34%). This is due to our desire to have more observations than GfK had on its panel. While the proportion of male respondents who were on the GfK panel was a reasonable 43%, only 28% of off-panel respondents were male. GfK provides weights based on demographics in order to compensate for underrepresented subpopulations (DiSogra 2007), which we incorporate into our analysis.

TABLE 3

Variable Descriptive Statistics

IV. MODEL SPECIFICATION AND HYPOTHESES

Model

We assume that within a choice set, respondents choose the alternative (one of the two habitat construction projects presented or to implement neither) that will maximize their utility. Traditionally, the utility for individual i from alternative j in choice set s has been specified as

\[ U_{ijs} = \alpha_j - \gamma t_{ijs} + \beta' x_{ijs} + \delta_j' z_i + \varepsilon_{ijs}, \quad [1] \]

where α_j is an alternative-specific constant, γ is the coefficient on the (negative of the) price of the project, t_ijs, β is a vector of coefficients on alternative-specific attribute levels x_ijs (excluding price), δ_j is a vector of alternative-specific coefficients on individual-specific characteristics z_i, and ε_ijs is a disturbance term. Assuming an independent and identically distributed extreme value distribution for the disturbance terms, the parameters can be estimated using McFadden’s (1974) conditional logit model, and the willingness to pay for an increase in an attribute is taken to be the attribute’s coefficient divided by the price coefficient, β_k/γ. We make two adjustments to this traditional conditional logit specification.
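Under the extreme value assumption above, the conditional logit choice probabilities take the familiar softmax form. A minimal sketch (the utility values are hypothetical, purely for illustration):

```python
import math

def clogit_probs(utilities):
    """Conditional logit choice probabilities: with iid extreme value
    disturbances, P(j) = exp(V_j) / sum_k exp(V_k) over the alternatives
    in the choice set, where V_j is deterministic utility."""
    exp_v = [math.exp(v) for v in utilities]
    total = sum(exp_v)
    return [e / total for e in exp_v]

# Hypothetical deterministic utilities for project A, project B,
# and the status quo (normalized to zero)
probs = clogit_probs([0.8, 0.5, 0.0])
```

The probabilities sum to one, and the alternative with the highest deterministic utility is chosen most often.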

First, it is well known (e.g., Fieller 1932) that the ratio of two normally distributed random variables has an undefined mean and standard deviation when the distribution of the variable in the denominator spans zero. In a recent working paper, Carson and Czajkowski (2013) propose exponentiating the price parameter, which restricts its support to the positive domain so that the resulting ratios of attribute parameters to the price parameter have well-defined moments. We adopt this approach, which involves making only a slight change to the utility specification:

\[ U_{ijs} = \alpha_j - \exp(\eta) t_{ijs} + \beta' x_{ijs} + \delta_j' z_i + \varepsilon_{ijs}. \quad [2] \]

The coefficient on the project price is now exponentiated, and the willingness to pay for a marginal increase in attribute k will now be β_k/exp(η), which is strictly positive (given β_k > 0). The mean and standard deviation of the willingness-to-pay distribution can now be determined using a standard approach such as Krinsky and Robb’s (1986) parametric bootstrapping. Carson and Czajkowski (2013) explain how to implement the alternative specification practically in various statistical software packages.
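The Krinsky-Robb bootstrap combined with the exponentiated price parameter can be sketched as follows. This is a minimal illustration, not the study's code; the parameter values and covariance matrix are hypothetical:

```python
import numpy as np

def krinsky_robb_wtp(beta_hat, eta_hat, cov, n_draws=10_000, seed=0):
    """Simulate the WTP distribution beta_k / exp(eta) by drawing
    (beta_k, eta) jointly from their estimated asymptotic normal
    distribution; exp(eta) > 0, so the ratio has well-defined moments."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal([beta_hat, eta_hat], cov, size=n_draws)
    wtp = draws[:, 0] / np.exp(draws[:, 1])
    lo, hi = np.percentile(wtp, [2.5, 97.5])
    return wtp.mean(), (lo, hi)

# Hypothetical estimates: attribute coefficient 0.50, price parameter eta = -2,
# with a small diagonal covariance matrix
mean_wtp, ci = krinsky_robb_wtp(0.50, -2.0, cov=[[0.01, 0.0], [0.0, 0.01]])
```

Because the denominator exp(η) is bounded away from zero, the simulated WTP mean and percentile-based confidence interval are both well behaved.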

The second adjustment we make to the traditional conditional logit model involves the disturbance term. The conditional logit imposes the restrictive independence from irrelevant alternatives assumption (Greene 2012). To relax this assumption, we include additional error terms, ωij, which vary by individual and by alternative, but not by choice set. By allowing these terms to vary across alternatives, the independence from irrelevant alternatives assumption is relaxed, and by not allowing them to vary across choice sets for a given individual, we account for possible correlation of responses made by a given individual. Because there is no inherent difference between the two proposed construction projects in each choice set other than in the attribute levels specified, we do not allow ω to differ between the proposed project alternatives. We also fix the error term on the status quo alternative to be zero. These restrictions (which simplify the model but do not affect the results) result in a single error term for the construction project alternatives that varies across respondents. Our complete specification is thus Embedded Image [3]

The sampling weights provided by GfK enter the empirical model in the typical manner as multipliers of each respondent’s contribution to the likelihood function (see Manski and Lerman 1977).
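The Manski-Lerman style weighting amounts to multiplying each respondent's log-likelihood contribution by his sampling weight; a minimal sketch (our own illustration, with made-up probabilities and weights):

```python
import math

def weighted_loglik(probs_chosen, weights):
    """Weighted log-likelihood: each respondent's log-probability of the
    alternative he actually chose is multiplied by his sampling weight,
    then summed across respondents."""
    return sum(w * math.log(p) for p, w in zip(probs_chosen, weights))

# Two hypothetical respondents: the second is underrepresented (weight 2.0)
ll = weighted_loglik([0.5, 0.25], [1.0, 2.0])
```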

Hypotheses Tests

To examine the quality of benefit transfer in our study, we conduct three sets of benefit transfer tests that are most common in the literature (Colombo, Calatrava-Requena, and Hanley 2007). Each test is implemented either across location for a given habitat or across habitat for a given location. The first test is of whether the model parameter estimates in their entirety are equal across any two locations or any two habitats. This test involves controlling for the possible differences in the scale parameter across any two treatments (Swait and Louviere 1993). The second test is of whether estimates of mean willingness to pay for an increase in an attribute (sometimes referred to as the “implicit price” of the attribute) are equal across any two locations or any two habitats. This test is conducted using the complete combinatorial approach of Poe, Giraud, and Loomis (2005) over the simulated distributions of willingness to pay for attribute changes. The third test is of whether direct estimates for compensating surplus (a suite of attribute changes) are equal to transferred estimates of the same, where the transferred estimates are calculated using the estimates from the study site and the variable means (e.g., sociodemographic variables) of the policy site (the site where estimates are desired). These tests are also conducted over simulated willingness-to-pay distributions using the complete combinatorial approach.
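The complete combinatorial approach compares every pairwise difference between two simulated willingness-to-pay distributions. A minimal sketch of the idea (our own implementation, with illustrative inputs rather than actual simulated draws):

```python
import numpy as np

def poe_test(draws_a, draws_b):
    """Complete combinatorial test in the spirit of Poe, Giraud, and
    Loomis (2005): the one-sided p-value is the share of all pairwise
    differences (a_i - b_j) that are <= 0; the two-sided p-value doubles
    the smaller tail."""
    a = np.asarray(draws_a)
    b = np.asarray(draws_b)
    diffs = a[:, None] - b[None, :]      # all n_a * n_b pairwise differences
    one_sided = (diffs <= 0).mean()
    return 2 * min(one_sided, 1 - one_sided)

# Illustration: clearly separated distributions yield a p-value of zero,
# while identical distributions yield a large p-value
p_separated = poe_test([10.0, 11.0, 12.0], [1.0, 2.0, 3.0])
p_identical = poe_test([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

In practice the draws would be the 10,000 Krinsky-Robb simulations for each treatment, so the double loop over pairs is large but still tractable with vectorized operations.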

V. RESULTS

The regression results for each treatment are presented in Table 4. Recall that in some treatments, respondents answered four choice questions (rather than a single choice question as in the other treatments), each between implementing one of two potential alternative projects or implementing neither. The number of observations specified, N, is therefore the number of unique choices in each location/ habitat subsample.

TABLE 4

Regression Results

Because of the Carson and Czajkowski transformation, the price coefficient requires a slightly different interpretation from what is typical. The estimates displayed in Table 4 are estimates of η in equation [2]. The estimate of the full coefficient of price is therefore exp(η), which of course is positive. A negative and significant value for η, as seen in Table 4, implies a price coefficient that is less than 1 but greater than 0.

Because this is a choice experiment study, our primary focus is on the signs and significance of the attribute parameters for each treatment.3 The lowest level of each project attribute is omitted as the base level. As one would expect, the signs on the attributes are all positive, and they are generally highly significant (with the glaring exception of the mangroves treatment), indicating that the ecosystem services chosen play a significant role in respondent choices. Based on postestimation tests of equality of parameter means, however, the parameter on the higher level of an attribute is not always significantly greater than the parameter on the lower level of the same attribute within a treatment. In the Alabama oysters treatment, the two fisheries support parameters are not statistically different from each other; in the Louisiana oysters treatment, the two flood protection parameters and the two bird population parameters are not statistically different from each other; in the Alabama salt marsh treatment, only the fisheries support parameters differ statistically from each other; in the Louisiana salt marsh treatment, the two fisheries support parameters and the two bird population parameters are not statistically different; and in the Louisiana mangroves treatment, parameters for different levels of the same attribute are generally not statistically different from each other. These results indicate a significant utility gain for initial increases in the attributes, but not necessarily significant gains for increases beyond that.

The action alternative dummy, which equals 1 if the alternative is one of the proposed project alternatives and 0 otherwise (i.e., if it is the “status quo” alternative), is significant and negative in all treatments except Louisiana mangroves.4 This indicates that respondents are more likely to choose the status quo alternative over an action alternative with attribute levels at the omitted base levels, all else equal.5 Inherent respondent preference to maintain the status quo is well-documented in the literature (e.g., Adamowicz et al. 1998; Boxall, Adamowicz, and Moon 2009; Meyerhoff and Liebe 2009; Samuelson and Zeckhauser 1988), although its opposite, an inherent preference for taking action, has also been found (e.g., Patt and Zeckhauser 2000; Petrolia, Interis, and Hwang 2014). In each of the Louisiana treatments, the parameters on some of the dummy variables indicating if the choice question was the third or fourth in the sequence are negative and significant. This indicates that respondents were less likely to choose a project alternative in later choice situations. Because this is a choice experiment study, our focus is primarily on project attributes (the ecosystem services), so we simply note that there is limited consistency of the effect of the nonattribute variables on respondent choices across treatments.6

Tests of Benefit Transfer

In the benefit transfer literature, three tests of transferability appear to be the most common (Colombo, Calatrava-Requena, and Hanley 2007). The first is of whether the parameter estimates themselves are equal across two subsamples. To conduct this test, one must account for the potential difference in the scale parameter, which is confounded with the parameter estimates; Swait and Louviere (1993) propose a two-step procedure for doing so. In all tests, we reject the null hypothesis (p < 0.01) that the parameters are equal across the two subsamples, given that the scale factor is allowed to differ across subsamples (this is Swait and Louviere’s hypothesis 1A).

The second common test is a direct comparison of willingness-to-pay values across any two subsamples, which we conduct on the willingness-to-pay values for the various attribute levels (sometimes referred to as “implicit prices”). Table 5 displays the willingness-to-pay estimates for each attribute level (again, with the lowest level of each attribute as the omitted base).7 The confidence intervals were estimated using the Krinsky and Robb bootstrapping approach (see Haab and McConnell 2002) with 10,000 draws. After exponentiating the price coefficient, one can straightforwardly employ the Krinsky and Robb technique, as the moments of the willingness-to-pay distribution are now well defined (Carson and Czajkowski 2013).8 The results of the hypotheses tests are displayed in Table 6. They were conducted using the complete combinatorial approach of Poe, Giraud, and Loomis (2005).9

TABLE 5

Mean Willingness-to-Pay Values for Attributes

TABLE 6

Equality of Means Tests for Willingness to Pay for Attributes

The left half of Table 6 displays the test results of whether value estimates for a particular incremental change in an attribute differ across the two locations, holding the providing habitat constant. For the oyster habitat, five of the eight attribute increment values statistically differ across the two locations at the 10% level or stricter. For the salt marsh habitat, two attribute increment values statistically differ. Thus, with 7 of 16 tests resulting in a rejection, we cannot generally conclude that value estimates are the same across these two locations.

The right half of Table 6 displays the test results of whether value estimates for a particular incremental change in an attribute differ across habitats, holding the location constant. Here we see limited evidence of statistical difference of distributions: in Alabama, three of eight willingness-to-pay measures statistically differ, and in Louisiana only one or two values differ across habitats for each comparison. Thus, although value estimates do not statistically differ in almost 80% of paired comparisons, there are still enough pairs in which value estimates do differ that we cannot conclude that estimates are generally equal across different habitats providing the same ecosystem services.10

The third common test of transferability—value function transfer—is of whether direct estimates of compensating surplus for a scenario (a suite of attribute changes) at the policy site (the site where value estimates are desired) are equal to transferred estimates of the same. The transferred estimates use the parameters of the study site (the site in which the original study was conducted) and the means (e.g., site attributes and demographic information) of the policy site. Several studies argue that this test is the most important benefit transfer test because benefit transfer is typically used for estimates of compensating surplus (e.g., Morrison et al. 2002; Hanley et al. 2006; Colombo, Calatrava-Requena, and Hanley 2007). Given our choice experiment design of four attributes, each with three levels, there are 3^4 = 81 possible combinations of attribute levels. Following Morrison et al. (2002), we chose a one-ninth subset (nine scenarios) of the full set of possible combinations on which to conduct the benefit transfer tests.11 Doing so reduces the chances that the conclusions of the tests are dependent upon the combination of attributes chosen. Compensating surplus is calculated by multiplying the mean of each variable except price by its corresponding parameter estimate, summing these products, and dividing the sum by the exponential of the price parameter (Haab and McConnell 2002).
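The compensating surplus calculation just described amounts to a dot product of non-price coefficients and variable means, scaled by the exponentiated price parameter. A minimal sketch with hypothetical values (not the study's estimates):

```python
import numpy as np

def compensating_surplus(betas, var_means, eta):
    """Compensating surplus: the sum of each non-price coefficient times
    the corresponding variable mean, divided by exp(eta), the (strictly
    positive) price coefficient."""
    return np.dot(betas, var_means) / np.exp(eta)

# Hypothetical example: two non-price terms and eta = 0, i.e. a price
# coefficient of exp(0) = 1
cs = compensating_surplus([1.0, 2.0], [0.5, 0.25], eta=0.0)
```

For a transferred estimate, one would pass the study-site coefficients together with the policy-site variable means.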

The results of the compensating surplus equality tests are presented in the fifth column of Table 7. For each of the comparisons in Table 6, two transfer tests can be conducted (transferring values in each of the two possible directions), so there are 12 sets of tests in Table 7. The table shows that, except for transfers involving salt marsh in Alabama, transfers across location or across habitat work fairly well. The fifth column reports the number of rejections among the nine scenarios for each transfer. There are zero rejections for every transfer that does not involve salt marsh in Alabama, indicating that compensating surplus measures estimated using the function of the study site but the means of the policy site do not differ statistically from direct estimates of the same at the policy site. Note, in particular, that transfers work quite well across habitats for a given location (only seven rejections out of 72 tests).

TABLE 7

Tests for Equality of Compensating Surplus Measures

Table 7 also shows in the last column the average absolute value of transfer errors across the nine scenarios examined for each transfer. Average transfer errors are particularly high for transfers involving salt marsh in Alabama, but the rest are fairly consistent with other measures of transfer error in the literature (e.g., Johnston 2007; Johnston and Duke 2010).
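Transfer error here is the conventional absolute percentage deviation of the transferred estimate from the direct estimate; a minimal sketch with made-up numbers:

```python
def transfer_error(transferred, direct):
    """Absolute transfer error expressed as a share of the direct
    (policy-site) estimate: |transferred - direct| / |direct|."""
    return abs(transferred - direct) / abs(direct)

# e.g., a transferred compensating surplus of $150 against a direct
# estimate of $100 is a 50% transfer error
err = transfer_error(150.0, 100.0)
```

The table's last column would then be the average of this quantity over the nine scenarios in each transfer.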

One wonders why transfers involving Alabama salt marsh do not seem to perform well. The means and confidence intervals of the direct estimates of the compensating surplus measures across the five treatments may give insight into the poor performance of Alabama salt marsh transfers (see the fourth column of Table 7). In particular, the mean compensating surplus measure for the nine scenarios in the Alabama salt marsh treatment is $169, which is much less than the corresponding value for any of the other four treatments. The mean maximum of the 95% confidence interval is $296, which is low enough to preclude much overlap with confidence intervals of other treatments. Comparing the third and fourth columns, we see that the greater the overlap between average confidence intervals of the transfer estimates and the direct estimates, the fewer the number of test rejections; the confidence intervals (either transferred or direct) for Alabama salt marsh are generally so low that they do not overlap with the confidence intervals of the comparison site. Incidentally, the variable means are virtually identical across the five treatments, so we do not believe any minor differences in these means across treatments to be the cause.

As one reviewer points out, a possible explanation for inequality of value estimates across habitats might be that respondents believe that different habitats provide different levels of other ecosystem services or attributes not included in the choice experiment. Unfortunately, we did not ask respondents about this possibility; however, the focus group feedback may give us some insight into other explanations. For example, several focus group members were concerned about the particular types of wildlife that might be affected by the project. In the survey, respondents were told only that the proposed project would support fisheries of “oysters, crab, and shrimp,” and support the habitat of “wading birds.” However, a respondent might have believed, for example, that restoring oyster reefs would affect oyster harvesting relative to other seafood species differently from restoring salt marsh. Another possible reason why estimates might differ across locations is that, although the percentage changes in attributes were the same in the two location treatments, the current values of the attributes were not specified within the survey. Respondents in the two locations may therefore have had differing perceptions about current attribute levels, which affected their choices.12 A similar study conducted in several locations, each with different starting levels of the attributes, would allow some variation to examine these effects.

VI. DISCUSSION AND CONCLUSIONS

Overall, we find strong evidence in each treatment of positive marginal utility for increases in water quality, flood protection, fisheries support, and wading bird population along the Gulf of Mexico. However, mean willingness-to-pay values for these attribute increases differ greatly across locations for a given habitat and across habitat treatments within a location. In some cases, the mean value of an increase in a given ecosystem service can be about four times as large when provided by one habitat instead of another. Across habitats, these differences may be partly explained by differences (real or perceived) in the performance of one habitat providing a particular service relative to another. Across locations, these differences may be partially explained by differences in the status of each habitat across locations. For example, oyster reefs are more abundant in Louisiana than in Alabama. At the same time, Louisiana is facing a coastal wetland loss crisis, an issue of relatively less importance in Alabama. Tests of differences in the distributions of welfare estimates, however, indicate that about 44% of the mean estimates are statistically different when comparing service values across locations for a given habitat, and only about 22% are statistically different when comparing service values across habitats for a given location.

We also test the equivalence of transfer estimates and direct estimates of compensating surplus measures. We find that 50% of the tests of equivalence across locations for a given habitat result in rejection, as do 10% of the tests across habitats for a given location. All of these rejections, however, occur in transfers involving salt marsh in Alabama. Direct estimates of compensating surplus in Alabama have much lower means and much lower upper confidence bounds than estimates of compensating surplus in the other treatments, which may explain the poor performance of Alabama salt marsh transfers.

To summarize, ours is the first study to directly test the transfer of ecosystem service values across habitats that provide overlapping ecosystem services, and we find that these transfers perform well: only 22% of tests of equivalence of implicit prices and 10% of tests of equivalence of compensating surplus measures result in rejection. As the literature estimating ecosystem service values with choice experiments grows, these results support the use of benefit transfer from habitats other than the policy habitat. Furthermore, transfers of compensating surplus (those involving Alabama salt marsh excepted) perform very well in our study, both across location and across habitat. Overall, transfers across habitats for a given location perform better than transfers across locations for a given habitat.

Brander, Florax, and Vermaat (2006) and Moeltner and Woodward (2009) both control for habitat type when estimating wetland willingness-to-pay values in meta-regression analyses, and the former controls for location as well. In those studies the dependent variable was the value of the wetland in its entirety, but our results imply that controlling for providing habitat and location is more important for benefit transfer studies that estimate values of particular ecosystem services than for studies of compensating surplus. Similarly, for valuation researchers generally who use stated preference surveys, our results imply that the details provided to the respondent about the providing habitat matter more for estimating willingness to pay for attribute changes than for estimating compensating surplus.

Acknowledgments

This publication was supported by the U.S. Department of Commerce’s National Oceanic and Atmospheric Administration under NOAA Award NA10OAR4170078, which included funding by the U.S. Environmental Protection Agency Gulf of Mexico Program under Interagency Agreement DW13923068-01-1, Florida Sea Grant College Program, Louisiana Sea Grant College Program, Mississippi-Alabama Sea Grant Consortium, and Texas Sea Grant College Program. This work was also supported by the National Institute for Food and Agriculture and Mississippi Agricultural and Forestry Experiment Station via Multistate Project W-3133 “Benefits and Costs of Natural Resources Policies Affecting Ecosystem Services on Public and Private Lands” (Hatch #MIS-033130). This publication does not necessarily represent the views and policies of any of these agencies. The authors thank John Cartwright for help in creating the maps used in the survey.

Footnotes

  • The authors are, respectively, associate professor and associate professor, Department of Agricultural Economics, Mississippi State University, Mississippi State.

  • 1 The effects of question format on service values are examined more closely by Petrolia, Interis, and Hwang (2015).

  • 2 We initially controlled for whether respondents were on-panel or off-panel. However, no significant difference between these respondents was detected.

  • 3 Here and in following regressions, we also experimented with random coefficients on the attributes (see Greene 2012), but the resulting empirical evidence strongly suggested nonrandom attribute coefficients. Also, despite a potential relationship between being unemployed and income, removing the former from the model does not affect the significance of any parameter estimates, so we chose to leave it in the model, as its coefficient is significant in two treatments.

  • 4 Testing indicated a failure to reject the hypothesis of equal alternative-specific constants for each of the two action alternatives in every treatment. Hence we specify only a single constant for the action alternatives.

  • 5 This is true despite the fact that due to our study design, an action alternative with no improvements beyond the omitted bases would still include a 5% increase in the number of homes protected from flooding and a 10% increase in annual seafood catch. The random component of the effect of the action-alternative constant captures the heterogeneity of its effect.

  • 6 The parameters on the ω’s are displayed in Table 4 as well. A significant ω parameter indicates that there are individual-specific random effects. The parameter on the random element of the action alternatives is significant in four of the five treatments, indicating that one should allow individual-specific error terms for these alternatives.

  • 7 The study most comparable to our own is by Petrolia, Interis, and Hwang (2014). They estimate mean willingness to pay for a 30% increase in the number of homes protected from storms in Louisiana through the construction of coastal wetlands to be between $149 and $165, whereas our estimated willingness to pay for half that increase (15%) ranges from $78 to $112. Their estimates of willingness to pay for a 30% increase in fisheries productivity ($204 to $210), however, are much higher than our estimates of the same ($44 to $60).

  • 8 We also estimated the confidence intervals resulting from a traditional specification of the price parameter (that is, without imposing the Carson and Czajkowski transformation). The intervals were largely the same as those presented here, except that in the Alabama salt marsh model and the Louisiana mangroves model, several of the intervals were $8 to $13 larger. This held true over several simulations, so we do not believe it to be an artifact of the random draws used in the simulation. Also, the results of the hypothesis tests of equality of willingness-to-pay values across models were unaltered.
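The simulation behind such intervals can be illustrated with a minimal Krinsky–Robb-style sketch. All numbers below are hypothetical, not estimates from the study, and the covariance matrix is simplified to be diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimates: one attribute coefficient and the price
# coefficient, with a simplified (diagonal) covariance matrix. These are
# illustrative values only, not estimates from the study.
beta_hat = np.array([0.80, -0.02])        # [attribute, price]
cov_hat = np.diag([0.10**2, 0.001**2])    # variances of the two estimates

# Krinsky-Robb: draw coefficient vectors from the estimator's asymptotic
# normal distribution, then compute WTP = -beta_attribute / beta_price
# for each draw.
draws = rng.multivariate_normal(beta_hat, cov_hat, size=5000)
wtp = -draws[:, 0] / draws[:, 1]

# Simulated 95% confidence interval from the percentiles of the WTP draws
lower, upper = np.percentile(wtp, [2.5, 97.5])
```

In practice the full estimated covariance matrix of the coefficients would be used, and the price parameter would first be transformed as in Carson and Czajkowski before forming the ratio.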

  • 9 This approach involves subtracting each element of one simulated willingness-to-pay distribution from each element of the second simulated willingness-to-pay distribution and observing the proportion of observations that lie above or below zero.
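The complete combinatorial approach described above (Poe, Giraud, and Loomis 2005) can be sketched as follows; the two simulated willingness-to-pay distributions here are illustrative draws, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative simulated WTP distributions for two treatments
# (hypothetical numbers, not the study's estimates).
wtp_a = rng.normal(100, 20, size=1000)
wtp_b = rng.normal(110, 25, size=1000)

# Complete combinatorial test: form every pairwise difference a_i - b_j
# and record the proportion at or below zero. A proportion near 0 or 1
# indicates the two distributions are statistically different.
diffs = np.subtract.outer(wtp_a, wtp_b)
gamma = (diffs <= 0).mean()

# Approximate two-sided significance level
p_value = 2 * min(gamma, 1 - gamma)
```

Because the test uses the simulated distributions directly, it requires no normality assumption, unlike the t-tests discussed in footnote 10.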

  • 10 We should point out that one would reach a much stronger (and erroneous) conclusion using only t-tests of the means and Kolmogorov-Smirnov tests of the distributions; when we conduct these tests, we find that the willingness-to-pay distributions are statistically different from each other for each attribute level, both across locations for a given habitat and across habitats for a given location. As Poe, Giraud, and Loomis (2005) point out, tests that assume normality can often lead to incorrect conclusions when conducted over distributions that are not normal (as willingness-to-pay distributions often are).

  • 11 Morrison et al. (2002) also had 3⁴ possible combinations of attribute levels. For consistency with the literature, we chose the same one-ninth subset as they did (see Morrison et al. 2002).

  • 12 Johnston (2007) finds that “context” similarity of two locations, which might include baseline levels of attributes, is more essential for benefit transfer than simple geographical proximity.

References