Background/Question/Methods The justification and targeting of land protection often require reliable measures of the expected cost of conservation to landowners and conservation funders. However, data on conservation costs are rarely available at the right scales and resolutions, so conservation studies often have to rely on proxies that have not been validated against observed costs. This talk presents novel insights from efforts to develop high-resolution estimates of fair market value for private lands in the contiguous United States and to examine the extent to which these estimates correlate with observed costs of public land acquisitions for conservation. Estimation and calibration draw on a training dataset of 6 million land sales derived from large-scale, high-dimensional parcel datasets and Zillow's real estate database ZTRAX, spatial tree-based ensemble regressors, and validation data from public land acquisitions for conservation.
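The workflow described above (a tree-based ensemble regressor evaluated with spatially held-out data) can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the talk's actual pipeline: the covariates, grid-cell blocking rule, and model settings are all assumptions for demonstration.

```python
# Minimal sketch: spatial block cross-validation of a tree-based ensemble
# regressor, assuming scikit-learn. Data are synthetic stand-ins, NOT the
# ZTRAX/parcel training data described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n = 2000
# Synthetic parcels: coordinates plus two illustrative covariates
lon = rng.uniform(-125, -67, n)
lat = rng.uniform(25, 49, n)
X = np.column_stack([lon, lat, rng.normal(size=n), rng.normal(size=n)])
# Synthetic log land value: spatial trend + covariate effect + noise
y = 0.05 * lon + 0.1 * lat + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

# Spatial blocks from a coarse grid, so each test fold is a set of
# spatially held-out cells rather than randomly scattered parcels
blocks = np.floor(lon / 10).astype(int) * 100 + np.floor(lat / 10).astype(int)

model = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                              random_state=0)
scores = cross_val_score(model, X, y, groups=blocks,
                         cv=GroupKFold(n_splits=5), scoring="r2")
print(round(scores.mean(), 3))
```

Blocking by grid cell (rather than random splits) matters because land values are spatially autocorrelated; random cross-validation would leak nearby sales into the test fold and overstate predictive skill.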
Results/Conclusions High-resolution land value estimates predict the observed conservation costs of >4,000 public land acquisitions with up to 8.5 times greater accuracy than previously available proxies. Studies using coarser cost proxies underestimate conservation costs, especially in the expensive tail of the distribution; in recent work, this has led to policy budgets being underestimated by factors of up to 37.5. Recent advances in machine learning help reduce the spatial prediction error of tree-based ensemble methods and support the delimitation of the geographic regions within which land value predictions are expected to be valid. More accurate cost accounting will help policy makers acknowledge the full magnitude of contemporary conservation challenges and can improve the targeting of public ecosystem service investments.
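The idea of delimiting the regions where predictions are expected to be valid can be illustrated with a simple dissimilarity check in covariate space: prediction locations whose covariates lie far from anything seen in training are flagged as outside the model's applicable range. The distance measure, scaling, and cutoff below are illustrative assumptions, not the specific method used in the work presented.

```python
# Illustrative sketch: flag prediction locations whose covariates are
# dissimilar to the training data, so model output there is suspect.
# All names, the scaling, and the cutoff rule are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 4))            # standardized training covariates
X_new = np.vstack([
    rng.normal(size=(95, 4)),                  # points within the training range
    rng.normal(loc=8.0, size=(5, 4)),          # points far outside it
])

def min_dist(A, B):
    # Minimum Euclidean distance from each row of A to any row of B
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1)

# Dissimilarity index: distance to the nearest training point, scaled by
# the typical nearest-neighbor distance within the training set itself
pairwise = np.linalg.norm(X_train[:, None] - X_train[None, :], axis=2)
nn_train = np.sort(pairwise, axis=1)[:, 1]     # skip self-distance of 0
di = min_dist(X_new, X_train) / nn_train.mean()

# Illustrative cutoff: well beyond the training set's own scaled spread
threshold = 3 * np.quantile(nn_train / nn_train.mean(), 0.95)
outside = di > threshold
print(int(outside.sum()))
```

Parcels flagged this way would be masked out of the land value maps rather than reported with unsupported predictions.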