Abstract
Agricultural conservation programs aim to improve environmental quality by using payments to support voluntary adoption of environmentally sound practices. Supported practices, however, yield additional environmental gain only if they would not have been adopted without payment. We estimate additionality for selected practices using propensity score matching to analyze data from the Agricultural Resource Management Survey (ARMS). We find that greater than 95% of off-field structural practices (filter strips, riparian buffers) supported by payments are additional but that less than 50% of conservation tillage payments yield additional adoption. The effect of nutrient management payments varies across nutrient management practices and crops. (JEL Q28, Q52)
1. Introduction
Between 2000 and 2015, annual U.S. federal government spending on voluntary agricultural conservation programs increased from $3.5 billion to more than $5.5 billion, measured in constant (2012) dollars (USDA-ERS 2016).1 The largest of these programs are the Conservation Reserve Program (CRP), Environmental Quality Incentives Program (EQIP), and Conservation Stewardship Program (CSP). Since 2000, the majority of growth has been in working lands programs (e.g., EQIP and CSP) that provide financial and technical assistance to encourage conservation practice adoption on land in agricultural production. Other government and nongovernmental entities (including many agricultural states) also have voluntary payment programs designed to support conservation.
For voluntary conservation payment programs, additionality is an important measure of performance (Chabé-Ferret and Subervie 2013; Mezzatesta, Newburn, and Woodward 2013; Pufahl and Weiss 2009). Practices supported by conservation payments are additional if they would not have been adopted without a payment. From an environmental perspective, the practices supported by the voluntary programs can yield additional environmental gain2 only to the extent that conservation payments are necessary for practice adoption. Environmental gain flowing from nonadditional practices cannot be attributed to the payment program because the gain would have been realized without the payment.
We measure additionality in terms of practice adoption for voluntary conservation payment programs that support nutrient management, conservation tillage, terraces, grassed waterways, filter strips, field borders, and riparian buffers. Overall national rates of adoption for these practices are between 2% and 43%, while payments are received by only 1% to 50% of those adopting a practice, suggesting that for some farm fields the benefits of some conservation practices exceed the costs, even without a payment. Previous studies have linked variation in adoption behavior to a large number of observable proxies for on-farm benefits and adoption costs, including field characteristics such as productivity and erodibility; climate conditions; individual traits such as age, education, environmental awareness, and environmental attitudes; and farm-level characteristics such as farm size and primary products (Chouinard et al. 2008; Ervin and Ervin 1982; Featherstone and Goodwin 1993; Fuglie and Kascak 2001; Lambert et al. 2006; Soule, Tegene, and Wiebe 2000; Traoré, Landry, and Amara 1998; Wu and Babcock 1998). State and local regulations could also affect adoption decisions. Some states, for instance, require livestock producers to develop and apply nutrient management plans (Ribaudo et al. 2003).
The challenge in measuring additionality is that the counterfactual—what farmers who received payments would have done in the absence of the payment—cannot be directly observed. While adoption among producers who did not receive payments can be observed, the rate of adoption among “nonpayment” farmers is likely to be a biased estimate of the missing counterfactual. If factors that affect adoption also jointly affect farmers’ decision to seek and receive a payment, nonpayment farmers may be systematically more or less likely to adopt than payment farmers in the counterfactual. Using a propensity score matching (PSM) estimator, we are able to estimate the counterfactual behavior of paid farmers, that is, what they would have done without payments.
Our data are largely from the 2009–2012 Agricultural Resource Management Survey (ARMS). The ARMS program included a series of annual, cross-sectional, crop-specific surveys of U.S. wheat (2009), corn (2010), barley (2011), sorghum (2011), and soybean (2012) fields carried out by the U.S. Department of Agriculture (USDA-ERS 2009–2012).3 The ARMS data include field-specific information on conservation practices and payment sources for a sample of fields drawn from 31 states that account for more than 90% of corn, wheat, barley, sorghum, and soybean production. The data also offer a rich source of controls describing the characteristics of the field, farm, and farmer. We augment the ARMS data with additional data on USDA conservation programs, including details on where specific conservation practices were funded and state-level regulations, to help ensure that matches are made only between fields where farmers face similar opportunities for receiving payments and similar regulatory requirements.
For most of the conservation practices we consider, additionality is defined as a percentage of paid adopters who would not have adopted in absence of the payment. Our estimates indicate that payments appear to be effective at inducing conservation practice adoption, although effectiveness varies across practices. We find high levels of additionality for the adoption of three off-field structural practices: filter strips (98%), riparian buffers (96%), and field borders (95%); more moderate additionality for two structural soil conservation practices: grassed waterways (75%) and terraces (68%); and relatively low additionality for conservation tillage (47%).
We also find high additionality for nutrient management plans (92%). Additionality in nutrient management, however, may be best defined in relation to underlying nutrient management practices that specify the rate, timing, and method of fertilizer and manure application.4 We find that payments for nutrient management plans did not lead, on average, to reductions in nitrogen application rates. In corn fields, however, we find that payments for nutrient management plans largely eliminated the use of two practices widely regarded as poor nutrient management: fall application of nitrogen fertilizer and broadcast application without incorporation of nitrogen fertilizer.
2. Measuring Additionality
To measure additionality of voluntary payment programs, we estimate the average treatment effect on the treated (ATT) (see Caliendo and Kopeinig 2008). In the program evaluation literature, the ATT is commonly used to evaluate the mean effect of a "treatment" (e.g., a government payment) on an outcome of interest (e.g., conservation practice adoption) for those receiving the treatment (payment). More formally, we let treatment (D) equal 1 when a farmer receives a payment for a particular conservation practice or plan on a specific field, and 0 when no payment is received. The outcome for farmer i is denoted Yi1 when receiving a payment and Yi0 when not receiving a payment. We examine 15 different outcomes related to farmer adoption of conservation practices. When the outcome is adoption of a nutrient management plan, terraces, grassed waterways, filter strips, field borders, riparian buffers, conservation tillage, or soil testing, Yi1 is binary. In contrast, Yi1 is a continuous-valued measure when the outcome is the nutrient application rate, application timing (proportion), or application method (proportion). We omit subscripts indicating the type of conservation practice because the structure of the additionality measure is the same across conservation practices.
We define the ATT as the expected effect of a payment (treatment) on the adoption of a particular practice or plan (outcome), conditional on a payment:
[1] ATT = E[Yi1 − Yi0 | D = 1] = E[Yi1 | D = 1] − E[Yi0 | D = 1].
The challenge of calculating [1] is that Yi0 represents unobserved counterfactual behavior and must be estimated. If payments were randomly assigned to fields, the unobserved mean adoption outcome on fields where a payment was received, E[Yi0|D = 1], would be equal to the observed outcome for fields where a payment was not received, E[Yi0|D = 0]; and additionality could be calculated as E[Yi1|D = 1]–E[Yi0|D = 0]. If practice adoption and payment seeking are endogenously determined and depend on one or more common factors, treatment may not be orthogonal to the outcome, and E[Yi0|D = 0] ≠ E[Yi0|D = 1]. If the relationship is positive—farmers who are likely to adopt a given practice on a field are also more likely to seek (and receive) a payment—a model that fails to control for factors affecting both adoption and payment will overstate the effectiveness of payments in inducing conservation practice adoption, and additionality estimates will be biased upward.
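A toy simulation of our own (not the paper's data; every number in it is illustrative) shows why the naive contrast E[Yi1|D = 1] − E[Yi0|D = 0] overstates additionality when a common factor raises both the probability of receiving a payment and the probability of adopting anyway:

```python
import numpy as np

# Toy illustration: a confounder u (think of field erodibility) raises both
# the probability of receiving a payment (d) and of adopting without one (y0).
rng = np.random.default_rng(0)
n = 200_000
u = rng.uniform(size=n)                    # confounder
d = rng.uniform(size=n) < 0.2 + 0.6 * u    # payment more likely when u is high
y0 = rng.uniform(size=n) < 0.1 + 0.5 * u   # would adopt even without a payment
y1 = np.ones(n, dtype=bool)                # program contracts require adoption
y = np.where(d, y1, y0)                    # observed adoption

naive = y[d].mean() - y[~d].mean()         # substitutes E[Y0|D=0]
true_att = y1[d].mean() - y0[d].mean()     # uses the true E[Y0|D=1]
print(naive, true_att)                     # naive exceeds the true ATT
```

Because u is higher on payment fields, counterfactual adoption E[Y0|D = 1] exceeds E[Y0|D = 0], so the naive estimate of additionality is biased upward, exactly the direction discussed in the text.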
One solution to the inference problem is to rely on the conditional independence assumption (CIA), or unconfoundedness (Rosenbaum and Rubin 1983). If a set of covariates, Zi, satisfies the CIA, the outcome and treatment assignment are independent conditional on Zi. Given such a set of covariates, additionality is measurable as
[2] ATT = E[Yi1 | D = 1, Zi] − E[Yi0 | D = 0, Zi].
The CIA is also known as “selection on observables” because, for the assumption to hold, Zi must be observable to the researcher.5 In our case, Zi might include covariates such as characteristics about the field, farm, and farmer, and controls for the objectives of the agencies that implement voluntary payment programs and the budget constraints they face.
3. Kernel Matching with Survey Weights
Estimation of [2] is most commonly achieved with matching estimators. The most basic type of matching is covariate matching, which requires matches to have identical values for all the possible covariates in Z. This type of matching estimator works best when there are a small number of covariates each of which has a small number of discrete values. For more complex problems (including ours) valid estimates of the ATT under the CIA are also achievable with PSM (Heckman, Ichimura, and Todd 1997, 1998). Instead of conditioning on Z, PSM alternatively conditions on propensity scores, Pi = pr(Di = 1 | Zi)∈(0,1), estimated from a binary model for treatment (Rosenbaum and Rubin 1983). Because propensity scores are one dimensional, matching is always feasible. In contrast, matching on the multidimensional Zi can be difficult or impossible when the number of covariates is large or there are a large number of discrete values for individual covariates. While matching estimators are relatively new to agricultural economics, a number of applications exist (Chabé-Ferret and Subervie 2013; Liu and Lynch 2011; Lynch, Gray, and Geogehegan 2007; Mezzatesta, Newburn, and Woodward 2013; Pufahl and Weiss 2009).
The validity of PSM as a policy evaluation method also rests upon the overlap, or common support, condition. Fields available for matching must have some positive probability of receiving conservation payments. Satisfying this condition ensures that fields in the payment group will not be compared to nonpayment fields that are inherently different. Fields with relevant field, farm, and farmer characteristics that lie outside a specified range of common support for payment and nonpayment fields should not be used for matching. One way to ensure common support is to restrict the set of observations available for matching so that the largest propensity score of the treated observations is no larger than the largest propensity score of the control observations and that the smallest propensity score for the treated observations is no smaller than the smallest propensity score of the control observations.6
Based on equation [2], the conventional matching estimator for obtaining the ATT can be formulated as
[3] ATT = Σi∈{D=1} ωi [Yi1 − Σj∈{D=0} ωij Yj0],
where ωi = 1/n1, where n1 is the number of fields that reported receiving a payment (treatment), and i indexes treated observations. The matching weight ωij is the weight given to the outcome of the jth control observation when matched to the ith treated observation. Under simple nearest-neighbors matching, a treated observation is matched to the n0 nearest control observations, where distance is measured by predicted propensity scores from a treatment model. Weights in this case are ωij = 1/n0. The ATT is then simply the average difference between the outcomes of the treated observations and their respective matching-weighted counterfactual estimates.
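Equation [3] with nearest-neighbor weights can be sketched as follows (illustrative Python only, assuming propensity scores have already been estimated; `att_nearest_neighbor` is our own name):

```python
import numpy as np

def att_nearest_neighbor(y, ps, treated, n0=1):
    """Matching estimator of equation [3] with w_ij = 1/n0: each treated
    field is matched to the n0 controls whose estimated propensity scores
    are closest to its own, and the ATT is the mean treated-minus-matched
    outcome gap."""
    y_c, ps_c = y[~treated], ps[~treated]
    gaps = []
    for yi, pi in zip(y[treated], ps[treated]):
        nearest = np.argsort(np.abs(ps_c - pi))[:n0]  # n0 closest controls
        gaps.append(yi - y_c[nearest].mean())
    return float(np.mean(gaps))
```

For example, with two treated fields (scores 0.3 and 0.7, both adopting) matched to controls at 0.29 (nonadopting) and 0.69 (adopting), the single-neighbor ATT is 0.5.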
The additionality estimates we report are obtained with a kernel-based matching estimator. This type of matching estimator has the advantage of using all the control observations to construct the counterfactual, resulting in greater statistical efficiency than other types of matching estimators (Heckman, Ichimura, and Todd 1997, 1998). The weights ωij in equation [3] under kernel matching are given by
ωij = G((P̂j − P̂i)/κ) / Σj∈{D=0} G((P̂j − P̂i)/κ),

where G(·) is a kernel density function (e.g., Gaussian, uniform, triangular, or Epanechnikov), κ is the bandwidth parameter for smoothing, and P̂j and P̂i are the estimated propensity scores of the nonpayment fields (j) and the payment fields (i). These weights decline in value with the distance |P̂j − P̂i| between the estimated propensity scores, placing greater emphasis on the outcomes of nonpayment fields that are most similar, in terms of the probability of receiving a payment, when estimating the counterfactual behavior on a payment field.
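As an illustration (our own sketch, not the authors' code), the kernel-matching ATT with a Gaussian G can be written as:

```python
import numpy as np

def kernel_att(y, ps, treated, bandwidth):
    """Kernel-matching ATT: each payment field's counterfactual is a
    weighted average of ALL nonpayment outcomes, with Gaussian kernel
    weights that fade as the propensity-score gap grows. The Gaussian is
    one of several admissible kernels G."""
    y_c, ps_c = y[~treated], ps[~treated]
    gaps = []
    for yi, pi in zip(y[treated], ps[treated]):
        g = np.exp(-0.5 * ((ps_c - pi) / bandwidth) ** 2)  # G((Pj - Pi)/k)
        w = g / g.sum()                                     # normalized w_ij
        gaps.append(yi - np.dot(w, y_c))
    return float(np.mean(gaps))
```

With one treated field at score 0.5 and two equidistant controls (scores 0.4 and 0.6, outcomes 0 and 1), the weights are symmetric, the counterfactual is 0.5, and the ATT is 0.5.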
The bandwidth parameter κ in ωij is set by the researcher. Larger bandwidths smooth over more distant control observations, reducing the variance of the ATT estimate but potentially increasing its bias; smaller bandwidths do the reverse. Thus, the researcher faces a trade-off between bias and efficiency. We choose a bandwidth using a leave-one-out cross-validation algorithm that minimizes the mean squared error of the ATT estimate (Black and Smith 2004; Galdo, Smith, and Black 2008; Liu and Lynch 2011). The procedure searches over a range of possible bandwidth values; based on a preliminary investigation, we search over the discrete set {0.2, 0.1, 0.06, 0.05, 0.04, 0.03, 0.02, 0.01, 0.006} for each additionality estimate.
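One common implementation of leave-one-out cross-validation for kernel matching predicts each control outcome from the remaining controls and picks the bandwidth with the smallest mean squared prediction error. The sketch below follows that reading; the authors' exact criterion (the MSE of the ATT itself) may differ in detail:

```python
import numpy as np

# Candidate grid taken from the text.
BANDWIDTHS = [0.2, 0.1, 0.06, 0.05, 0.04, 0.03, 0.02, 0.01, 0.006]

def loo_cv_bandwidth(y_c, ps_c, grid=tuple(BANDWIDTHS)):
    """Leave-one-out cross-validation over the control sample: each control
    outcome is predicted from all other controls with Gaussian kernel
    weights; the bandwidth minimizing the mean squared prediction error
    is returned."""
    best_k, best_mse = None, np.inf
    for k in grid:
        sq_errs = []
        for i in range(len(y_c)):
            d = np.delete(ps_c, i) - ps_c[i]
            g = np.exp(-0.5 * (d / k) ** 2)
            if g.sum() <= 0.0:            # all weights underflowed; skip
                continue
            pred = np.dot(g / g.sum(), np.delete(y_c, i))
            sq_errs.append((y_c[i] - pred) ** 2)
        if sq_errs and np.mean(sq_errs) < best_mse:
            best_k, best_mse = k, float(np.mean(sq_errs))
    return best_k
```

The quadratic loss makes the procedure favor bandwidths small enough to track local structure in the outcome but large enough to average over noise.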
We adjust equation [3] by the survey weights, wi, in our sample. Estimates of additionality are given by

[4] ATT = Σi∈{D=1} ω̃i [Yi1 − Σj∈{D=0} ω̃ij Yj0],

where ω̃i = wi / Σi∈{D=1} wi and ω̃ij = wj ωij / Σj∈{D=0} wj ωij. This adjustment provides estimates of additionality for the practice outcomes that are nationally representative. Survey weights are unnecessary in the binary outcome model and the kernel-based matching because the estimated propensity scores and matching weights (ωij) are used only for measuring the similarity of observations in the sample and not to infer behavior about the underlying population.
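Under our reading of the survey-weight adjustment in equation [4] (the exact weighting scheme is a reconstruction, not the authors' code), the estimator can be sketched as:

```python
import numpy as np

def survey_weighted_att(y, ps, treated, svy_w, bandwidth):
    """Survey-weighted kernel-matching ATT: survey weights rescale the
    averages over treated fields and over matched controls, while the
    kernel weights themselves stay unweighted (they only measure
    within-sample similarity)."""
    y_c, ps_c, w_c = y[~treated], ps[~treated], svy_w[~treated]
    w_t = svy_w[treated]
    gaps = []
    for yi, pi in zip(y[treated], ps[treated]):
        g = np.exp(-0.5 * ((ps_c - pi) / bandwidth) ** 2)
        ww = w_c * g                      # survey weight times kernel weight
        gaps.append(yi - np.dot(ww / ww.sum(), y_c))
    return float(np.dot(w_t / w_t.sum(), np.array(gaps)))
```

With equal survey weights this collapses to the unweighted kernel estimator; unequal weights tilt the counterfactual toward controls that represent more of the underlying population of fields.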
The propensity scores are estimated from a binary choice (logit) model using maximum likelihood estimation (MLE). It is well known that parameter estimates from nonlinear MLE are biased when the sample size is small. What is less well known is that nonlinear MLE is also biased with rare events. For some conservation practices, adoption is somewhat rare; for all of the practices, payment from conservation programs is rarer still. The bias due to rare events can be reduced by increasing the sample size, using estimation methods that compute parameter estimates exactly (i.e., exact logistic regression), and/or computing a bias-corrected estimate of the parameters. Exact logistic regression is applicable only when the sample size and the number of covariates are small and the covariates are discrete. Since this is not the case in this study, we estimate bias-corrected propensity scores using the penalized maximum likelihood (PML) estimator proposed by Firth (1993). This method reduces bias by removing the first-order term of the asymptotic bias. The bias-corrected parameters θ are estimated by maximizing the penalized likelihood function

L*(θ) = L(θ) |I(θ)|^(1/2),

where the penalty factor |I(θ)|^(1/2), the square root of the determinant of the Fisher information matrix, is Jeffreys's (1946) invariant prior.
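Firth's correction amounts to Newton iterations on a modified score in which each observation's contribution gains a term hi(1/2 − μi), where hi is a hat-matrix diagonal. A minimal sketch (ours, not the authors' code; production work would use a tested package such as R's logistf):

```python
import numpy as np

def firth_logit(X, y, n_iter=100, tol=1e-8):
    """Firth (1993) penalized-likelihood logit via Newton steps on the
    modified score; estimates remain finite even under separation."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))    # fitted probabilities
        W = mu * (1.0 - mu)
        XtW = X.T * W                              # p x n
        info = XtW @ X                             # Fisher information I(beta)
        # hat-matrix diagonals: h_i = w_i * x_i' I^{-1} x_i
        h = np.einsum("ij,ji->i", X @ np.linalg.inv(info), XtW)
        score = X.T @ (y - mu + h * (0.5 - mu))    # Firth-modified score
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

On completely separated data, for which ordinary MLE diverges, this estimator returns finite coefficients, which previews the separation discussion below.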
Another common problem with nonlinear MLE on rare events (or very common events) data is separation. Separation occurs when covariates perfectly predict the binary outcome, producing infinite parameter estimates. When separation occurs, either the perfectly predicting covariates or the perfectly predicted observations must be dropped. It is important to recall that while a covariate (e.g., a state dummy) might perfectly predict the outcome in the sample (e.g., no payment), this may not be the case in the population (e.g., the population has nonadopters even though the sample does not). Logistic PML solves the problem of separation (Heinze and Schemper 2002). In logistic PML the contribution of each observation to the score function (i.e., the first derivative of the log-likelihood function) can be split into two parts: one for the observed outcome (Yi) and the other for the unobserved outcome (1 − Yi), with iteratively updated weights. Covariates therefore no longer perfectly predict the outcomes, and logistic PML produces finite parameter estimates.
4. Data
To control for the complexity of conservation programs and underlying differences in producer costs and benefits of conservation practice adoption in our treatment models, we use multiple data sources. The ARMS consists of separate field-level and farm-level surveys that provide extensive information on production practices and input use, crop and livestock production, contracting, farm finances, and producer demographics (USDA-ERS 2009–2012). In most years, roughly 60% of farmers who respond to the field-level survey also respond to the farm-level survey. Key features of the field-level survey that make our analysis possible include questions about conservation practices in use on the surveyed field, when practices were installed or first used, whether cost-sharing or an adoption incentive payment was received, and the program from which the payment originated. Questions about conservation payments for specific practices were first included in the 2009 survey. We use data for 2009 (wheat), 2010 (corn), 2011 (sorghum and barley), and 2012 (soybeans).
From the field survey we obtain adoption and payment information about three off-field structural practices (field-edge filter strips, field borders, and riparian buffers); two soil conservation structures (terraces and grassed waterways); and two conservation management practices (conservation tillage and nutrient management). For nutrient management, we are also able to examine specific nutrient management practices for both nitrogen and phosphorus: nutrient application rates, nutrient timing, application method, and nutrient soil testing.
Appendix Table A1 shows the various restrictions we apply to the observations in the ARMS data to obtain estimation samples. The number of observations in the final estimation sample is equal to or smaller than the numbers reported in Appendix Table A1, as some observations are subsequently removed to satisfy the common support condition of our matching estimator or because of missing variables. In the nutrient management analyses, we exclude fields on farms that are required to have nutrient management plans under state regulation prior to estimating the treatment models. Including these observations would bias estimates of counterfactual behavior because the nutrient management plan is required with or without a payment; to the extent that regulated farms would not have adopted such plans in the absence of regulation, including them as control observations could bias ATT estimates downward.
For soil conservation structures and buffer practices, we exclude observations where the practice was in place prior to the respondent’s tenure. These practices can be effective for decades and often continue when land is sold or rented to a new tenant. We assume that respondents did not participate in the decision to adopt practices that preceded their tenure.7 We exclude all barley and sorghum observations from the soil conservation structure and buffer practice models because questions about the length of the respondents’ tenure were not included in these surveys. Nutrient management and conservation tillage, like other conservation management practices, are assumed to be readopted annually. The 2009 wheat data are excluded from the conservation tillage analysis because that survey did not ask about conservation tillage payments.
For the determination of treatment status, we consider a field to have received a payment for nutrient management and conservation tillage only if the farmer indicated receiving an EQIP payment, a CSP payment, or a payment from some other local or state program, because these practices are not funded by the CRP. CRP payments compensate farmers for the costs of buffer practices and grassed waterways. We therefore include CRP payments, EQIP payments, and other local or state payments in the definition of treatment for the three buffer practices and grassed waterways. We consider EQIP payments and other local or state payments in defining treatment for terraces.
After merging the field and farm surveys, keeping observations that have usable location information,8 and incorporating the other restrictions, the sample available for estimation includes 3,503 observations for nutrient management (plan adoption, nutrient application rates, and nutrient testing), 1,170 observations for practices related to nutrient timing and methods, 2,490 for conservation tillage, and 3,012 for soil conservation structures and buffer practices.
Each field-level observation in the ARMS includes a survey weight for generating population estimates that are representative of U.S. fields. We use these survey weights in the reported adoption rates in Tables 1 and 2 and in the treatment model variable summaries in Appendix Tables A2 and A3. Table 1 reports the means and standard deviations of the fraction of surveyed fields that reported adopting a practice, by practice type and treatment status.9 For the surveyed fields that reported receiving a payment, the fraction adopting is always one, consistent with the requirements of a conservation program contract.10 On fields where payments were not received, the rate of adoption varies by practice. Conservation tillage has the highest rate, at 40%, suggesting that conservation tillage is a profitable practice on many fields even without payments. Structural practices, which likely have larger upfront costs, can be expected to have lower levels of adoption. The reported adoption rates for soil conservation and off-field structural practices are between 2% and 16% on fields where payments were not received. The 5% adoption rate for nutrient management plans represents the fraction of U.S. fields covered by a nutrient management plan.
Summary of Adoption Rates and Number of Fields by Treatment Group and Practice
Summary of Application/Adoption Rates and Number of Fields by Treatment Group and Nutrient Management Practice
Table 2 reports the mean and standard deviation of the application/adoption rates associated with specific nutrient management practices. Unlike the conservation practices in Table 1, the rates for fields where nutrient management payments were received can differ from one. Nutrient management plans do not require specific practices but allow the farmer to determine the set of practices that best fits his or her operation. Payment and nonpayment fields seem to have similar amounts of nutrients applied, with notable exceptions. In wheat fields, farmers who received nutrient management payments applied about 20% more nitrogen than farmers who did not. This result is somewhat counterintuitive but not implausible: nutrient management plans do not require farmers to reduce fertilizer use, only to balance use against crop needs.11 In corn fields, farmers who received a nutrient management payment applied less than half as much phosphorus as those who did not.
Appendix Tables A2 and A3 summarize the variables we use to address the CIA in our models of treatment status. We include information on soil productivity (the National Commodity Crops Productivity Index, NCCPI) (Dobos, Sinclair, and Robotham 2012) linked using geocoordinates from the ARMS. We also provide binary indicators of whether a field was classified by the Natural Resources Conservation Service (NRCS) as highly erodible (Highly erodible = 1) or adjacent to a wetland (Wetland = 1), as reported by the producer on the ARMS questionnaire. A binary, field-specific indicator of manure application is also included (Manure = 1). Farm size is represented by the log of total cropland acreage (Log(acres)). Since adoption and payment patterns may also depend on individual characteristics, we include binary indicators for whether the farmer is primarily occupied in farming (Primarily a farmer = 1), has a college degree (College degree = 1), owns the surveyed field (Owns field = 1), and a continuous covariate for his or her age (Age of operator). Our data do not include any direct indicator of environmental awareness or environmental attitudes.
To control for the complexity of conservation policy and the producer willingness to adopt practices in exchange for payments, we include variables on historical average EQIP and/or CRP payments (EQIP and CRP, $ per acre) from 1998 through 2011 for both the per acre payments made to all farms in the same county of the surveyed field and the per acre payments made to all farms in adjacent counties.12 Since within-state differences in geography, environmental conditions, and local regulations may also affect producers’ willingness to adopt conservation practices, treatment models include the county-level population density (Population density, from the 2010 census) and the average adoption level of the conservation practice for the Major Land Resource Area (MLRA adoption rate).13 Other unobserved state-level factors such as regulatory environments and overall conservation goals are controlled for with a set of binary indicators for states.14
Statistically significant differences between payment and nonpayment fields for the means of the variables reported in Appendix Tables A2 and A3 suggest payment fields were not randomly selected. For example, farmers who reported receiving a payment for a nutrient management practice are, on average, more likely to have a field classified as highly erodible, apply manure, be located in a county with larger EQIP payments for nutrient management, and be located in an MLRA with higher levels of adoption of nutrient management plans. The other practices highlighted in the tables also have a variety of controls with mean values that are significantly different across treatment status. These differences in the unconditional means motivate the need for an estimator such as PSM for obtaining valid inferences of the effects of payments on practice outcomes.
5. Propensity Score Estimates
Appendix Tables A4 and A5 report the estimated coefficients for the treatment models. We use the estimates from the treatment models to obtain predicted propensity scores, P̂i and P̂j, for the treated observations (i) and control observations (j). In addition to the controls in Appendix Tables A2 and A3, the models include indicators for crops and states, interactions between the MLRA adoption rate and field characteristics, and interactions between the EQIP and CRP average county payments and the crop indicators. Column (1) of Appendix Table A5 reports coefficient estimates for the nutrient management treatment model, which includes observations for all five crops. We use this model to measure the additionality of payments for nutrient management on nutrient management plan adoption rates (the percentage of adopters with payments that would not have adopted without payments) and soil testing rates. The model and parameter estimates reported in column (2), estimated using only corn observations, are used to generate propensity scores for the analysis of nutrient application timing, methods, and corn nutrient application rates. Columns (3), (4), and (5) in Appendix Table A5 report the treatment model estimates for soybeans, wheat, and barley used to generate propensity scores for the analysis of nutrient application rates by crop.
There are several variables in Appendix Tables A4 and A5 whose effects appear consistent across groups of practice types. Field ownership has a consistent positive effect across the structural practice models, suggesting that the incentive to seek payment is greatest for farmers who are owners. The effect is not significantly different from zero for the annual conservation practices (i.e., conservation tillage and nutrient management). Likewise, farm operation size has a consistent nonnegative effect on receiving payment for many practices, which is consistent with evidence that small farmers face relatively larger transaction costs when seeking payments (see McCann 2009). For many of the other controls in the model, we find that the effects have consistent signs but are not always statistically significant. High erodibility, for instance, leads to a significantly greater likelihood of payments for soil conservation structures and nutrient management plans but has insignificant positive effects on payments supporting other practices. The effect of operator age appears to be inconsistent across practices, with a positive effect for long-standing soil conservation practices (grassed waterways and terraces) but negative effects for conservation tillage and nutrient management (not significant for some crops). We reason that older farmers may be less inclined to master the details of complex, information-intensive practices (like nutrient management) that have come into widespread use only in recent years. County-level EQIP payments tend to have a positive effect on receiving a payment. County-level CRP payments are positive and significant for riparian buffers and field borders.
We report two goodness-of-fit measures at the bottom of Appendix Tables A4 and A5 to gauge the predictive power of the propensity score models. The coefficient of discrimination is the difference in mean predicted probabilities between the fields where payments were received and fields where payments were not received (Tjur 2009). It measures the ability of the model to discriminate between the payment and control groups. Values of the coefficient of discrimination range between 0.06 for conservation tillage and 0.27 for nutrient management plans in barley. We also calculate for each model the adjusted count pseudo-R2, the percentage correctly predicted of the least frequent outcome. Because our data include fewer fields that received payments than fields that did not, this measure gauges how effectively the models predict the fields that were actually treated.15 The adjusted count pseudo-R2 ranges between 24% and 44%.
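Both fit measures are simple to compute from predicted probabilities. The sketch below implements Tjur's definition and the paper's verbal description of the adjusted count measure; the 0.5 classification cutoff is our assumption, as the authors do not state theirs:

```python
import numpy as np

def tjur_discrimination(p_hat, y):
    """Tjur's (2009) coefficient of discrimination: the gap in mean
    predicted probability between payment (y=1) and nonpayment (y=0)
    fields."""
    return float(p_hat[y == 1].mean() - p_hat[y == 0].mean())

def adjusted_count_r2(p_hat, y, cutoff=0.5):
    """Share of the LESS frequent outcome (payment fields, in these data)
    that the model classifies correctly at the given cutoff."""
    rare = 1 if y.mean() < 0.5 else 0
    pred = (p_hat >= cutoff).astype(int)
    return float((pred[y == rare] == rare).mean())
```

For example, with predictions (0.6, 0.4, 0.3, 0.2) and outcomes (1, 0, 0, 0), the coefficient of discrimination is 0.6 − 0.3 = 0.3, and the single payment field is classified correctly, so the count-based measure equals 1.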
Although the goodness-of-fit measures indicate that the treatment models are moderately successful in their ability to predict which fields receive payments, superior predictions are not a requirement for obtaining unbiased estimates of the ATT. The essential characteristic necessary for a treatment model is that the estimated propensity scores are order-preserving of the true propensities. That is, as long as the CIA is satisfied, the estimated propensity scores will reflect a monotonic transformation of the true propensity scores, and ATT estimates will be unbiased but might have some efficiency loss (Imbens 2004).
6. Estimates of Additionality
Table 3 presents our estimates of additionality for conservation practices and nutrient management plans. Column (1) reports the optimal bandwidths that minimized the mean squared error of the estimates under the kernel-matching estimator of equation [4]. Column (2) reports the proportion of payment fields where the practice was adopted. Column (3) reports the weighted average estimated counterfactual based on the kernel-matching estimator.
Additionality Estimates for Conservation Practices and Nutrient Management Plans
For comparison, column (4) reports the weighted proportion of all nonpayment fields where the practice was adopted—a naïve (nonmatched) estimate of the counterfactual,
For every practice except filter strips, the kernel-matching estimates are larger than the nonmatched mean estimates for the nonpayment fields, indicating that estimates of additionality would be biased upward if we did not control for differences between payment and nonpayment fields. Column (5) reports our estimates of the ATT: the difference between the estimated means for payment fields (column 2) and matched nonpayment fields (column 3). Finally, columns (6) and (7) report the percentile-based 95% confidence interval for the ATT estimate.
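The mechanics of the kernel-matching counterfactual and the resulting additionality share can be sketched as follows. This is a simplified illustration, not the paper's equation [4]: the Epanechnikov kernel, the bandwidth, and the toy data are assumptions, survey weights are omitted, and every payment field is assumed to have controls within the bandwidth.

```python
import numpy as np

def kernel_matching_additionality(y_treat, p_treat, y_ctrl, p_ctrl, h):
    """ATT and additionality via kernel matching on propensity scores.

    Each payment field's counterfactual adoption probability is a
    kernel-weighted average of nonpayment-field outcomes, with weights
    declining in propensity-score distance (Epanechnikov kernel,
    bandwidth h). Assumes each treated field has controls in-bandwidth.
    """
    cf = np.empty(len(p_treat))
    for i, pi in enumerate(p_treat):
        u = (p_ctrl - pi) / h
        w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
        cf[i] = np.dot(w, y_ctrl) / w.sum()  # matched counterfactual mean
    att = y_treat.mean() - cf.mean()         # column (2) minus column (3)
    return att, att / y_treat.mean()         # additionality = ATT / adoption rate

# hypothetical adoption indicators and propensity scores
y_t = np.array([1.0, 1.0, 1.0, 0.0])  # adoption on payment fields (rate 0.75)
p_t = np.array([0.5, 0.5, 0.5, 0.5])
y_c = np.array([0.0, 1.0, 0.0])       # adoption on nonpayment fields
p_c = np.array([0.4, 0.5, 0.6])
att, additionality = kernel_matching_additionality(y_t, p_t, y_c, p_c, h=0.2)
```

Expressing additionality as the ATT divided by the treated adoption rate is our reading of how columns (2)–(5) relate; in practice the bandwidth h would be chosen to minimize mean squared error, as reported in column (1).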
We find very high levels of additionality for the three off-field structural practices (filter strips, 98%; riparian buffers, 96%; and field borders, 95%). These estimates suggest that payments are effective in leveraging adoption where it would not have occurred without the payment. We also find high levels of additionality for the two soil conservation structures (terraces, 68%; and grass waterways, 75%), although our estimates are lower than for off-field structural practices. These practices may be adopted without payment by farmers and landowners seeking to protect land from soil erosion and productivity loss. Additionality is also high for nutrient management plans: payments appear to have leveraged adoption on 92% of the fields that received payments. This result refers only to the existence of a written nutrient management plan. Written plans, however, do not necessarily indicate that nutrient management practices are actually being implemented. Finally, conservation tillage has the lowest level of additionality, at 47%.
The additionality estimates in column (5) of Table 4 show how nutrient management practices vary (or do not vary) between fields that did and did not receive nutrient management payments. Because there is no fixed set of practices that must be included in a nutrient management plan, the adoption or application rate on fields where nutrient management payments were received (column 2) can vary; in Table 4, for example, the rate of soil testing on payment fields is not necessarily equal to one. We expect nutrient management payments to increase soil testing, so the ATT estimate is expected to be positive. In line with that expectation, we estimate that nutrient management payments nearly double the percentage of fields where soil nitrogen testing is done. Although our ATT estimate for soil phosphorus testing is also large, it is not significantly different from zero at the 95% confidence level. This result may reflect the fact that soil phosphorus levels tend to change more slowly over time than soil nitrogen levels, so there may be less need for phosphorus testing.
Additionality Estimates for the Effect of Nutrient Management Payments on Nutrient Management Practices
Column (5) of Table 4 also reports our estimates of the effect of nutrient management payments on the proportion of fertilizer nutrients applied to corn in the fall before planting and by broadcast without incorporation. Fall application of nitrogen fertilizer before a spring-planted crop would rarely, if ever, be allowed in a nutrient management plan supported by conservation payments, so an estimate at or near zero is expected in column (2), and a negative (or at least nonpositive) ATT is expected. We find that farmers are significantly less likely to apply nitrogen fertilizer in the fall on payment fields than on matched nonpayment fields: farmers apply 11% of total nitrogen in the fall on matched nonpayment fields but only 2% on payment fields. Likewise, broadcasting any fertilizer without incorporation is unlikely to be allowed by a nutrient management plan supported by payments, so the same pattern of estimates is expected. We estimate that nutrient management payments decrease the percentage of nitrogen applied by broadcasting without incorporation by 20 percentage points. Although we also find that these payments decrease the percentage of phosphorus applied by broadcasting without incorporation by 15 percentage points, that estimate is not significant at the 95% confidence level.
Finally, column (5) of Table 4 reports our estimates of additionality in nutrient application rates. Nutrient application rates appear largely unaffected by nutrient management payments, with two notable exceptions: payments have decreased phosphorus application rates on corn fields and increased nitrogen application rates on wheat fields. As already noted, these payments do not require a reduction in nutrient application rates, and there is evidence to suggest that application rates can increase with the development of a nutrient management plan (Genskow 2012).
7. Robustness Checks
For each of our treatment models, we check the robustness of the CIA. Although there are common methods for assessing the validity of the CIA, the extent to which treatment model misspecification can bias ATT estimates is debated in the literature (Dehejia 2005a, 2005b; Smith and Todd 2005a, 2005b; Zhao 2008; Fitzenberger, Lechner, and Smith 2013; Lee 2013). In covariate balancing, t-tests are conducted for each of the covariates in Z to test for similarity between the means of the matched treatment and control groups (Rosenbaum and Rubin 1985). If the tests indicate a high level of imbalance between the two groups, that is, a large number of covariates with statistically significant differences in means, the researcher can attempt to reduce the imbalance by including interactions of variables and higher-order terms in the treatment model.
We report the results of the covariate balance tests for each treatment model in Appendix Tables A6 and A7. For the full sample, the balance tests suggest that the means are well matched for most but not all covariates. In the models for riparian buffers, grass waterways, terraces, and conservation tillage, t-tests indicate that all covariates are balanced. The same is true for the nutrient management models for corn and soybeans. Nearly all of the covariates balance in the remaining models: all but three for filter strips, all but two for field borders, all but three for nutrient management for wheat, all but three for nutrient management for barley, and all but two for overall nutrient management.
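The balance check described above amounts to one two-sample t-test per covariate. The following is a minimal sketch, assuming unequal-variance (Welch-style) t-statistics and hypothetical matched samples; the covariate name and data are illustrative only.

```python
import numpy as np

def balance_t_stat(x_treat, x_ctrl):
    """Two-sample t-statistic (unequal variances) for the difference in
    a covariate's mean between matched treatment and control fields."""
    v_t = x_treat.var(ddof=1) / len(x_treat)
    v_c = x_ctrl.var(ddof=1) / len(x_ctrl)
    return (x_treat.mean() - x_ctrl.mean()) / np.sqrt(v_t + v_c)

# hypothetical matched samples for one covariate (e.g., field size, acres)
x_t = np.array([10.0, 12.0, 11.0, 13.0])
x_c = np.array([10.5, 11.5, 12.0, 11.0])
t = balance_t_stat(x_t, x_c)
balanced = abs(t) < 1.96  # no significant imbalance at the 5% level
```

In practice this statistic would be computed for every covariate in Z, and a model with many significant differences would be respecified with interactions or higher-order terms, as described above.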
8. Conclusion
To estimate additionality in agricultural conservation programs, we used data from the ARMS (USDA-ERS 2009–2012) in a PSM model. Our results include practice-specific estimates of the ATT for a number of individual off-field structural practices, soil conservation structures, conservation tillage, and nutrient management practices. These results represent the first systematic evaluation of U.S. voluntary payment programs for agricultural conservation at a national scale. Our analysis focused on practice adoption because the effect of agricultural production on environmental quality is difficult to observe or measure (Smith and Weinberg 2004).
Our results suggest that additionality is highest for practices that have high up-front cost, little or no on-farm benefit, or both. Estimates of additionality are very high (≥95%) for the adoption of conservation off-field structural practices: filter strips (98%), riparian buffers (96%), and field borders (95%). These practices may require payments because they take land out of crop production (which is costly) and function largely as a barrier to keep sediment and nutrients from leaving the farm (which provides little on-farm benefit). Our results are consistent with those of Mezzatesta, Newburn, and Woodward (2013) in the sense that they also find high additionality for practices that take land out of crop production or otherwise impose costs while providing little on-farm benefit in the short run. For example, they estimate that additionality is greater than 80% for filter strips.
Soil conservation structures also have relatively high levels of additionality. Our estimates indicate additionality of 68% for terraces and 75% for grass waterways. While up-front costs of installing these practices can be high, they will (with proper maintenance) continue to control soil erosion for many years. Landowners who control erosion will eventually realize the benefit of higher soil productivity.
In contrast, conservation tillage, which has the lowest estimated additionality among the practices we studied (47%), can be profitable in the short run. Our data show that conservation tillage is frequently adopted without payment support, implying that reducing or eliminating tillage operations reduces production costs by enough to offset any additional cost or yield reduction that might be associated with less tillage. Nonetheless, our estimate is considerably larger than the 25% estimate reported by Mezzatesta, Newburn, and Woodward (2013). We hypothesize that this difference arises because conservation tillage provides a clearer benefit to farmers in the portion of Ohio studied by Mezzatesta, Newburn, and Woodward than to farmers overall. The adoption rate for conservation tillage in the Mezzatesta, Newburn, and Woodward study was 58%, whereas the overall adoption rate estimated from our data is 42%.
Our analysis also shows very high additionality for nutrient management plans overall (92%) and for individual crops: corn (89%), soybeans (85%), wheat (98%), and barley (91%). These estimates, however, apply only to the existence of a plan, not to the application of practices specified by the plan. We estimated the effect of nutrient management payments on several nutrient application practices that are likely to be part of a nutrient management plan. Using the corn data, we found evidence that payments reduce fall application of nitrogen and broadcast application of nitrogen without incorporation. These practices would seldom, if ever, be allowed under a nutrient management plan supported by conservation payments. Our results also suggest that payments increase the use of soil testing (all crops). We find little evidence that nutrient management payments affect nutrient application rates, with two notable exceptions. Our results may, however, suffer from a lack of information about the rate of nutrient application allowed under the nutrient management plans. Data on practices specified in individual nutrient management plans could improve these estimates.
While our research represents a significant step forward in our understanding of additionality in voluntary conservation payment programs, future research could benefit from more complete information on conservation practice adoption and producer attitudes toward and awareness of environmental quality. For most of the practices we considered, our data provide only a binary indication of adoption, which may not fully capture the extent or intensity of practice adoption. Conservation tillage, for example, includes a range of practices including mulch-till, strip-till, and no-till. A continuous variable, such as the Soil Tillage Intensity Rating, could help to more accurately define differences in tillage across payment and nonpayment fields. Likewise, knowing the specifics of individual nutrient management plans could help refine estimates of additionality for individual nutrient management practices (e.g., application rates, timing, and method).
There is also evidence to suggest that environmental attitudes and awareness are important factors in conservation practice adoption and program participation. Previous studies have found both to be positive and significant determinants of conservation practice adoption (see Prokopy et al. 2008; Baumgart-Getz, Prokopy, and Floress 2012). Positive attitudes about and awareness of environmental protection may help explain why some farmers adopt conservation practices, even ones that offer little on-farm benefit, without payment support. In the context of PSM, information on attitudes and awareness may also help refine propensity scores and define better counterfactual estimates.
Further research is also needed to understand how estimates of average additionality could help improve conservation program design and delivery. To increase the level of environmental gain that can be realized through voluntary conservation payments, information on additionality must be considered along with environmental benefits and costs. Simply shifting funds to practices with higher average additionality does not necessarily imply greater environmental gain. Supporting a low additionality practice could yield more overall environmental gain than supporting a high additionality practice if the low additionality practice yields larger environmental gain per dollar of cost on fields where assistance leads to additional adoption. For example, it may be more cost-effective to support a practice that is 50% additional over a practice that is 100% additional, if the environmental gain per dollar of cost of the former is at least twice as large as that of the latter. Likewise, reducing the payment rate for a low additionality practice could have the perverse effect of further reducing additionality. Lower payments could mean that farmers who would have adopted at the original payment level choose not to adopt at the lower payment level. So, the lower payment could mean the loss of additional environmental gain that would have been captured at the original payment level. Meanwhile, the lower payment would have no effect on participation or adoption by farmers who will adopt without support and, therefore, provide no additional environmental gain.
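The cost-effectiveness comparison above can be made concrete with a small worked example; the function and all of the numbers below are hypothetical.

```python
def additional_gain_per_dollar(additionality, gain_per_acre, cost_per_acre):
    """Expected additional environmental gain per program dollar: only
    the additional fraction of supported acres yields gain that can be
    attributed to the payment."""
    return additionality * gain_per_acre / cost_per_acre

# practice A: 100% additional; practice B: only 50% additional, but its
# gain per dollar where adopted (25/100) is 2.5x that of A (10/100)
a = additional_gain_per_dollar(1.0, 10.0, 100.0)  # 0.100 units per dollar
b = additional_gain_per_dollar(0.5, 25.0, 100.0)  # 0.125 units per dollar
```

Under these assumed numbers the 50% additional practice delivers more additional gain per program dollar, illustrating why additionality alone, without environmental benefits and costs, cannot rank practices.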
It may also be possible to modify existing conservation program benefit-cost indices to incorporate estimates of average additionality. Claassen et al. (2014) suggest that estimates of average additionality could be used to modify the expected benefit in the benefit-cost indices used to rank conservation program applications for acceptance. For example, a 50% additional practice would out-rank a 100% additional practice only if it delivers at least twice as much environmental gain per dollar of cost. Given sufficient competition for program enrollment, this approach could make it more likely that high additionality practices are enrolled, but only to the extent that they are more cost-effective (accounting for additionality) than lower additionality practices.
Acknowledgments
The views expressed are those of the authors and cannot necessarily be attributed to the Economic Research Service or the U.S. Department of Agriculture. We thank John Horowitz for helpful comments on earlier drafts and Ryan Williams for assistance with data. We also acknowledge the helpful comments of two reviewers.
Footnotes
This article was prepared by a U.S. government employee as part of the employee’s official duties and is in the public domain in the United States.
↵1 Annual spending data are obtained from yearly budget summaries from the Office of Budget and Policy Analysis, U.S. Department of Agriculture (available at www.obpa.usda.gov/budsum/budget_summary.html).
↵2 We use the term environmental gain to refer to any outcome that improves or could improve environmental quality, for example, reductions in soil erosion or nutrient and pesticide runoff and the provision of wildlife habitat.
↵3 The survey is administered jointly by the USDA Economic Research Service and the USDA National Agricultural Statistics Service. Fields are selected annually to represent the target crop and are not included in the survey for more than one year.
↵4 Our measure of nutrient management plan adoption is based on an ARMS survey question that asks whether the operator has a written nutrient management plan. While written plans are easily verifiable, actual implementation of practices on the field is less easily verifiable. In comparison, questions for soil conservation structures, buffer practices, and conservation tillage ask respondents about the adoption of actual practices. Soil conservation structures and buffer practices are easily verified; conservation tillage is more difficult to verify because it depends on residue cover at planting.
↵5 A necessary condition for this assumption to hold, allowing us to identify the mean impact of treatment on the treated, is E[Yi0|Zi,D = 1] = E[Yi0|Zi,D = 0].
↵6 This is a one-sided support and is considered sufficient for satisfying the common support when estimating ATT (Imbens 2004). We tested the sensitivity of our results by further restricting nonpayment fields to have propensity scores no lower than the minimum propensity score of the payment fields. Across the different practices, additionality estimates for the fraction of adopters are no more than 2 percentage points lower than the rates we report in Table 4.
↵7 Many soil conservation structures were in place before the beginning of the current producer’s tenure because these practices have been promoted and cost-shared by the USDA since the 1930s.
↵8 We lose some observations because not all fields have geographic coordinates.
↵9 Mean estimates are produced using survey weights from the ARMS surveys (2009–2012) and are representative for the U.S. population of corn, wheat, barley, or sorghum fields.
↵10 Farmers who receive incentive payments for conservation tillage and nutrient management are not obligated to continue using the supported practice after incentive payments end. We do not know whether practices have been “un-adopted.” To the extent that supported practices were discontinued, however, these fields could have been considered payment fields. If we could identify them, the probability of practice adoption on payment fields could be less than 1 and our estimates of additionality for management practices could be overstated. This issue does not apply to farmers who received cost-sharing on a structural practice, because they are obligated to maintain the practices for their full useful life, which can be as long as 25 years.
↵11 In the USDA-NRCS Conservation Effects Assessment Project cropland reports, nitrogen application rates of 1.4 times crop uptake in corn and 1.6 times crop uptake in wheat are considered the maximum agronomic rates consistent with good nutrient management. In a study of Wisconsin farmers, Genskow (2012) found that some farmers decreased fertilizer application rates while others increased them after developing a nutrient management plan. CEAP cropland assessment reports can be found at https://www.nrcs.usda.gov/wps/portal/nrcs/main/national/technical/nra/ceap/.
↵12 We were able to construct these variables using detailed contract data from the NRCS (for EQIP) and the Farm Service Agency (for CRP). These variables were calculated by creating county summary variables directly from NRCS and FSA contract data. Data on NRCS conservation programs can be requested using directions found at www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1045976.pdf. County-level summaries of CRP acres and payments can be found at www.fsa.usda.gov/programs-and-services/conservation-programs/reports-and-statistics/conservation-reserve-program-statistics/index.
↵13 MLRAs are geographical associations of large areas defined by common geology, climate, water, soils, biological resources, and land use. Definitions are maintained by the NRCS. In cases where there were two or more MLRAs within a county, the MLRA with the largest area in the county was assigned. For data on MLRAs and EQIP-funded practices, contact NRCS. Directions for requesting NRCS data can be found at www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1045976.pdf.
↵14 We considered weather variables such as temperature and precipitation in an initial analysis but found these variables to be statistically insignificant in their ability to explain treatment status after controlling for state effects.
An alternative approach in the PSM literature for controlling for differences that might exist across a stratum such as states is to run separate treatment models for each state, restrict matches to be within states, and then average estimated additionality levels across states. We attempted this with our data but found that the small number of treated observations in our sample led to several instances of perfect multicollinearity between our payment indicator variable and some of our binary explanatory variables when using state-level treatment models, which restricted the usable observations for estimation. This was also the case for some practices when we considered the spatially more aggregate Farm Resource Regions (USDA-ERS 2000).
↵15 A probability threshold for this measure is selected so that the fraction of observations predicted to receive payments is approximately equal to the observed fraction of observations receiving payments. The percentage of treated predicted correctly is then the fraction of those predicted payment observations that actually received payments (Wooldridge 2009, 581).