Odum School of Ecology, University of Georgia Athens, Georgia, United States
Background/Question/Methods Meta-analysis helps to answer pressing ecological questions and identify knowledge gaps by quantitatively summarizing information from primary studies. But the quality of the meta-analytic methods can affect the accuracy and robustness of those answers. We reviewed the literature and generated new data to evaluate current practices in meta-analysis and to highlight areas that need improvement. We focused on issues related to ‘execution’ (e.g., weighting effect sizes by study precision) and ‘reporting’ (e.g., reporting the inclusion/exclusion criteria), which are key elements of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) criteria. We first compiled information from 14 reviews of meta-analyses to obtain a list of the most common criteria used to evaluate execution and reporting. To compare information across reviews, we used the percentage of meta-analyses in each review that complied with each criterion. We also inspected all the meta-analyses available from one of the review papers (Pappalardo et al., 2020) to extract new information on ten additional criteria, including a deeper analysis of the recognition and treatment of non-independence.
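The weighting-by-precision criterion mentioned above can be illustrated with a minimal sketch of inverse-variance weighting, the standard fixed-effect pooling rule; the effect sizes and variances below are made-up example values, not data from the study.

```python
# Illustrative sketch of inverse-variance weighting (example values only,
# not from the abstract): each study's effect size is weighted by the
# reciprocal of its sampling variance, so more precise studies count more.

def pooled_effect(effects, variances):
    """Fixed-effect pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]          # weight = 1 / variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)                 # variance of the estimate
    return pooled, pooled_var

# Example: three hypothetical studies; the most precise study dominates.
effects = [0.5, 0.3, 0.8]
variances = [0.01, 0.04, 0.25]
estimate, variance = pooled_effect(effects, variances)
```

The pooled estimate lands closest to the effect with the smallest variance, which is the behavior the execution criterion is checking for.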
Results/Conclusions We found good compliance with some reporting criteria, such as providing the list of references, specifying the meta-analytic model, and identifying the software used; but we found low compliance with other reporting criteria, such as including details on the literature search or providing the analytic code. In general, there was lower compliance with execution criteria such as conducting a multifactorial analysis of moderators (vs. multiple single-factor analyses), exploring temporal changes in effect sizes, controlling for phylogenetic non-independence, testing for publication bias, and conducting sensitivity analyses; in contrast, most papers explored the possible causes of heterogeneity. Although most meta-analyses reviewed by Pappalardo et al. (2020) (n = 96) included multiple effect sizes per study, only 63% of them acknowledged some type of non-independence. Most often (n = 43), the non-independence that was acknowledged was related to the design of the original experiment (e.g., a common control used for different treatments) and how the data were collected. Acknowledging non-independence from other sources of correlation (e.g., multiple experiments per publication) was less common (n = 23). Providing specific training and encouraging reviewers and authors to follow the newly developed PRISMA-EcoEvo checklist (O’Dea et al., 2021) can improve the quality of ecological meta-analyses.
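One simple way to handle the multiple-effect-sizes-per-study problem noted above is to aggregate effect sizes within studies before pooling across studies, so correlated within-study effects do not count as independent data points. The sketch below is a generic illustration under that assumption, not the authors' analysis; the study IDs and values are hypothetical.

```python
# Illustrative sketch (hypothetical data): averaging effect sizes within
# each study before pooling, a basic remedy for within-study non-independence.

from collections import defaultdict

def aggregate_by_study(records):
    """records: iterable of (study_id, effect_size) pairs.
    Returns one mean effect per study, so each study contributes a
    single (approximately independent) data point to the meta-analysis."""
    by_study = defaultdict(list)
    for study_id, effect in records:
        by_study[study_id].append(effect)
    return {sid: sum(es) / len(es) for sid, es in by_study.items()}

# Example: study "A" contributes two correlated effect sizes, "B" one.
records = [("A", 0.25), ("A", 0.75), ("B", 0.5)]
# aggregate_by_study(records) -> {"A": 0.5, "B": 0.5}
```

Averaging discards within-study information; multilevel or cluster-robust models are the more complete treatment, but the aggregation step shows the core idea of collapsing dependent effects to one value per independent unit.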