
**Session: Meta-analysis in Ecology: Challenges, Advances, and Successful Applications**
# OOS 1-5 - Cutting effect size down to size: Decline effects and regression to the mean in ecological meta-analyses

**Background/Question/Methods**

The scientific evidence base on any given topic changes over time as more studies are published. These changes occur for two reasons. First, there might be non-random, directional changes over time in the scientific evidence base. In particular, if studies finding large effect sizes (e.g., large differences between treatment and control means) tend to get published before those finding small effects, the net result will be a non-random decrease over time in the estimated magnitude of the mean effect size (a “decline effect”). If decline effects are common, then meta-analyses will provide a biased guide to management decisions, and to the allocation of future research effort. Second, there will be regression to the mean. Even if effect sizes are published in random order, the first few published effect sizes on any given topic will provide a very imprecise estimate of the true mean effect size. As additional effect sizes are published, the estimated mean effect size will become more precise, and will tend to shrink towards its true value. Using a compilation of 466 ecological meta-analyses, I test for non-random, directional changes in mean effect size over time, and quantify regression to the mean.
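The regression-to-the-mean argument can be checked with a small simulation (all numbers here are illustrative assumptions, not values from the compilation of 466 meta-analyses): draw effect sizes in random order, so there is no decline effect by construction, and compare how far the estimated mean sits from the true mean after a few studies versus after many.

```python
import random
import statistics

random.seed(0)

TRUE_MEAN = 0.2   # hypothetical true mean effect size (assumption)
SD = 1.0          # hypothetical between-study spread (assumption)
N_STUDIES = 50
REPS = 1000

# Effect sizes are "published" in random order, so any apparent trend
# in the running mean is pure regression to the mean, not publication bias.
early_err = late_err = 0.0
for _ in range(REPS):
    effects = [random.gauss(TRUE_MEAN, SD) for _ in range(N_STUDIES)]
    # Estimated mean effect size after the first 3 studies vs. after all 50.
    early_err += abs(statistics.fmean(effects[:3]) - TRUE_MEAN)
    late_err += abs(statistics.fmean(effects) - TRUE_MEAN)

print(f"mean error after 3 studies:  {early_err / REPS:.3f}")
print(f"mean error after 50 studies: {late_err / REPS:.3f}")
```

The early estimate is far less accurate on average, so the estimated mean effect size is expected to drift toward its true value as studies accumulate, even when publication order is completely unbiased.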

**Results/Conclusions**

Decline effects are rare in ecological meta-analyses. Only ∼5% of ecological meta-analyses exhibit non-random, directional trends in mean effect size over time, usually, although not always, in the direction of decline. Rare cases of decline effects remain important to identify and rectify, but ecologists should not overgeneralize from them. Decline effects in ecology are notable precisely because they are rare. In contrast, regression to the mean is ubiquitous. Many ecological meta-analyses report imprecise, implausibly large estimated mean effect sizes. Those means would likely shrink strongly towards zero if more studies were published. I explore several ways of estimating the appropriate amount of shrinkage.
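One standard way to estimate shrinkage of an imprecise mean toward zero, which is not necessarily among the estimators explored in the talk, is empirical Bayes: treat the true mean effect size as drawn from a zero-centered prior, so the posterior mean pulls the observed mean toward zero in proportion to its imprecision. A minimal sketch, with hypothetical numbers throughout:

```python
def shrink_toward_zero(observed_mean: float, se: float, prior_var: float) -> float:
    """Posterior mean under a Normal(0, prior_var) prior on the true mean,
    given an observed mean with standard error se (empirical Bayes shrinkage)."""
    shrinkage = prior_var / (prior_var + se ** 2)
    return shrinkage * observed_mean

# An imprecise meta-analytic mean (large SE) shrinks strongly toward zero...
imprecise = shrink_toward_zero(1.5, se=1.0, prior_var=0.25)   # ~0.3
# ...while a precise one (small SE) barely moves.
precise = shrink_toward_zero(1.5, se=0.1, prior_var=0.25)     # ~1.44
print(f"imprecise mean shrinks to {imprecise:.2f}; precise mean to {precise:.2f}")
```

The key design point is that the shrinkage factor depends only on the ratio of prior variance to sampling variance, which is why large means estimated from a handful of studies are the ones expected to shrink most.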


Organized Oral Session

Monday, August 15, 2022

2:30 PM – 2:45 PM EDT

Location: 520E

Jeremy W. Fox

University of Calgary

Calgary, Alberta, Canada
