Research impact in the community-based health sciences: an analysis of 162 impact case studies submitted to the 2014 Research Excellence Framework (REF)
The problem
The 2014 REF required UK higher education institutions to produce impact case studies (see http://results.ref.ac.uk/Results). My research questions were: what kinds of impacts were submitted to REF subpanel A2 (Public Health, Health Services Research and Primary Care); what models and mechanisms were invoked to account for those impacts; and were these evidence-based?
The approach
In a theoretical phase, I searched the literature for relevant models and approaches to impact. In an empirical phase, I considered all 162 impact case studies submitted to subpanel A2. Using an Excel spreadsheet, I extracted data on types of research undertaken, impact activities, measured impacts and assumed mechanisms of impact. I also undertook an in-depth interpretive analysis of five case studies selected as examples of good implementation practice.
Findings
Literature review: Most impact assessment follows a ‘logic model’, depicting a linear link between upstream research and later impacts, mediated and moderated by linkages formed (or not) between researchers, funders and potential users. Social scientists reject such models as deterministic and mechanistic when in reality (they say) the research-impact link is organically emergent, indirect (dependent on ‘enlightenment’ of policymakers), and largely unpredictable. They favour process descriptions of interactions and activities over the futile pursuit of metrics. Critical researchers prefer participatory (action research or ‘Mode 2’) models in which research and impact are co-created through cross-sector collaborations.

Empirical findings: Of the 162 impact case studies, the overwhelming majority described quantitative, university-led research studies – especially randomized trials and meta-analyses – in which the research-impact link was depicted unproblematically as direct and linear. The commonest impact described (in more than two-thirds of cases) was influencing a clinical guideline; half documented a change in clinical practice. Yet only around a quarter provided firm evidence of an improvement in morbidity, and a tiny fraction documented a clear reduction in mortality. Only around one in ten described any qualitative component (usually a minor aspect of mixed-methods research), and only one described a predominantly participatory study design.

Encouragingly, more than half the case studies described strong and ongoing linkages with policymakers, but only about a quarter described targeted knowledge translation activities. In 40 case studies, no active efforts by the research team to achieve impact were described at all. Models of good implementation practice, whether quantitative or qualitative, were characterized by researcher enthusiasm, institutional commitment and a proactive, interdisciplinary approach.
Consequences
Findings are consistent with (but do not prove) the conclusion that the way we measured impact in the REF privileged ‘hard’, quantitative, university-led studies at the expense of qualitative and participatory designs (whose impacts tend to be more diffuse and difficult to demonstrate). This has implications for multi-stakeholder research collaborations such as CLAHRCs, which are built on non-linear models of impact.
Credits
- Trisha Greenhalgh