2023-24 Seminars and Recordings
November 17, 2023
Robust Bayesian meta-regression: Model-averaged moderation analysis in the presence of publication bias
-
Speaker: František Bartoš, University of Amsterdam
-
Description: Meta-regression constitutes an essential meta-analytic tool for investigating sources of heterogeneity and assessing the impact of moderators. However, existing methods for meta-regression have limitations that include inadequate consideration of model uncertainty and poor performance under publication bias. To overcome these limitations, we extend robust Bayesian meta-analysis (RoBMA) to meta-regression (RoBMA-regression). RoBMA-regression allows for moderator analyses while simultaneously taking into account the uncertainties about the presence and impact of other factors (i.e., the main effect, heterogeneity, publication bias, and other potential moderators). We offer guidance on how to specify prior distributions for continuous and categorical moderators and introduce a Savage-Dickey density ratio test to quantify the evidence for and against the presence of the effect at different levels of categorical moderators. We illustrate RoBMA-regression in an empirical example and demonstrate its performance in a simulation study. We implemented the methodology in the RoBMA R package. Overall, RoBMA-regression presents researchers with a powerful and flexible tool for conducting robust and informative meta-regression analyses.
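For context on the Savage-Dickey test mentioned above: in its general form, for a point null hypothesis H_0: theta = theta_0 nested within H_1, the Bayes factor equals the ratio of the posterior to the prior density at theta_0 under H_1. The identity below is this standard general form only; the moderator-level version introduced in the talk is the speaker's extension, so its exact formulation may differ.

\mathrm{BF}_{01} \;=\; \frac{p(\mathcal{D} \mid H_0)}{p(\mathcal{D} \mid H_1)} \;=\; \frac{p(\theta = \theta_0 \mid \mathcal{D}, H_1)}{p(\theta = \theta_0 \mid H_1)}

Under this identity, evidence against a null effect at a given moderator level corresponds to a posterior density at theta_0 that falls below the prior density at that point.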
-
Video Recording
October 20, 2023
The logic of generalization from systematic reviews of intervention effects to policy and practice contexts
-
Speaker: Julia Littell, Bryn Mawr College, Graduate School of Social Work and Social Research
-
Description: Systematic reviews and meta-analyses (SRMAs) of controlled studies of intervention effects are potent tools for generalized causal inference, but the logic of generalization from SRMAs to diverse policy and practice contexts is woefully underdeveloped. Using recent SRMAs of two widely disseminated psychosocial interventions as examples, I explore the logic of generalization from these SRMAs through three perspectives: 1) probability theory and representative sampling, 2) principles for generalized causal inference, and 3) common rubrics used by reviewers and clearinghouses. I show that, because they are based on nonprobability samples of studies that in turn relied on nonprobability samples of programs and participants, SRMAs can produce pooled estimates that are not representative of any larger sets of studies, programs, or people. Application of Shadish, Cook, and Campbell’s (2002) principles for generalized causal inference is hampered by insufficient descriptive data and risks of bias in impact evaluations. Common rubrics used to formulate generalizations from systematic reviews are not well supported by theory or evidence and tend to overestimate the generalizability and applicability of prominent interventions. Results of systematic reviews are widely misinterpreted as evidence that can be easily generalized and applied to diverse populations and settings. SRMAs can be used to test claims about the generalizability of treatment effects and to identify directions for further research that would support stronger generalized causal inferences and better applications. Their usefulness in developing new insights into issues of generalizability and applicability may be compromised by limitations of available data. Additional work is needed to articulate principles and best practices for formulating generalizations based on results of SRMAs.
-
Video Recording
September 22, 2023
Designing for Epistemic Uncertainty in Research Synthesis
-
Speaker: Alex Kale, University of Chicago, Department of Computer Science
-
Description: Summarizing what can be learned from bodies of scientific literature requires difficult judgments about which study results can be meaningfully compared, and whether it makes sense to aggregate evidence in a meta-analysis. Numerous tools for assessing quality of evidence offer guidance on identifying sources of epistemic uncertainty, such as common threats to internal or external validity of study results. However, existing software for systematic review and meta-analysis does little to emphasize how epistemic uncertainty should inform analytic choices for synthesizing findings. I present MetaExplorer, a prototype web application designed to provide a guided process for reasoning about epistemic uncertainty in meta-analysis. I also summarize findings from interviews with research synthesis methodologists and practitioners in biomedical science, education, computer science, and statistics. This work highlights the cognitive pitfalls, technical hurdles, and inconsistent standards across research communities that pose challenges to addressing epistemic uncertainty in research synthesis. I reflect on opportunities for future software development and invite the audience to join me in discussion.
-
Video Recording