ARGIE — Analysing the Reliability of Government Impact Evaluations
Government departments across the UK spend hundreds of millions of pounds each year on impact evaluations — quantitative studies that measure the effects of policy interventions. These evaluations are deeply embedded in the policymaking process, with many billions of pounds of public spending resting on their results.
Recent metascience research has shown that quantitative findings in many academic disciplines are sensitive to the analysis decisions researchers make: which data to use, how to measure outcomes, which model to estimate, and so on. Different research teams analysing the same data frequently produce different results. If the same problem affects government evaluations, policymakers cannot be confident that the conclusions of any single evaluation are robust.
ARGIE addresses this through three complementary approaches. First, a many-teams analysis in which independent research teams reanalyse data from existing government impact evaluations. Second, a survey-based analysis soliciting defensible analysis choices from a broad pool of researchers. Third, a multiverse analysis simulating the full universe of possible analysis strategies for a given evaluation.
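The multiverse idea can be illustrated with a toy sketch: enumerate every combination of defensible analysis choices and compute the effect estimate each one implies. The data, choice sets, and estimator below are invented for illustration and are not drawn from any ARGIE evaluation.

```python
import itertools
import statistics

# Toy outcome data for treated and control units (invented for illustration).
treated = [4.1, 5.3, 2.2, 6.0, 5.5, 12.0]   # includes one large outlier
control = [3.0, 2.8, 3.5, 2.9, 10.5, 3.1]   # includes one large outlier

# Two defensible analysis choices an evaluator might face.
CHOICES = {
    "estimator": ["mean", "median"],     # how to summarise each group
    "outliers": ["keep", "drop_top"],    # whether to trim the largest value
}

def summarise(values, estimator, outliers):
    """Summarise one group under a given pair of analysis choices."""
    vals = sorted(values)
    if outliers == "drop_top":
        vals = vals[:-1]                 # drop the single largest value
    return statistics.mean(vals) if estimator == "mean" else statistics.median(vals)

# Enumerate the full "universe" of specifications and the effect each implies.
results = {}
for spec in itertools.product(*CHOICES.values()):
    estimator, outliers = spec
    effect = summarise(treated, estimator, outliers) - summarise(control, estimator, outliers)
    results[spec] = round(effect, 2)

for spec, effect in results.items():
    print(spec, effect)
```

Even in this tiny example the estimated effect varies with the analyst's choices (here from 1.55 to 2.35), which is exactly the sensitivity a multiverse analysis is designed to expose.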
The project partners with social researchers in the Cabinet Office and aims to translate findings into practical guidance for evaluators, including contributions to the Magenta Book, the UK government's core resource for evaluation practice.
If you would like to participate as an analyst in the many-teams component, you can sign up here.
Evidence Exchange (EvEx)
The Evidence Exchange is a new national network designed to connect civil and public servants with university researchers across the UK. The consortium, led by Cambridge's Centre for Science and Policy, is developing UK-wide infrastructure to enable more universities and research organisations to offer "Policy to Research" opportunities for government professionals.
Key initiatives include a Digital Campus to promote academic-policy exchange schemes and courses, and professional development programmes to equip public and civil servants with the skills needed for effective evidence-informed policymaking. The project aims to build the civil service's internal capacity both to create and consume high-quality research, and to empower policymakers to draw on the UK's world-class research base.
As part of the UCL team, I contribute to the design of evaluation and evidence-use components of the programme, drawing on my experience at both UCL and the Cabinet Office's Evaluation Task Force.
Evaluation Task Force, UK Cabinet Office
I am a Specialist Evaluation Advisor at the Evaluation Task Force in the Cabinet Office. In this role, I advise on the design and implementation of impact evaluations across government departments, contribute to cross-government evaluation standards, and work to strengthen the connection between academic evidence and policy decisions.
I am also a member of the government's Evaluation and Trial Advice Panel, where I advise on the use of experimental and quasi-experimental methods in policy evaluation.