In this post, Dr Katherine Smith and Dr Ellen Stewart discuss the research impact agenda in social policy.

Academics are increasingly encouraged and incentivised to seek research impact beyond the academy. Researchers working in and around social policy – an academic discipline founded in explicit pursuit of ‘real world’ impact – might be assumed to have a head start here. But in our new article in the Journal of Social Policy, we suggest that social policy’s discussions of the research impact agenda have so far been remarkably muted. We argue that there is an urgent need to redress this for at least two reasons. First, our analysis of impact-related guidance, published debates in other disciplines, REF impact case studies and interviews with academics suggests there are reasons for concern about aspects of the current impact architecture. Second, the policy-focused expertise available within social policy offers valuable opportunities to consider how the current architecture might be reshaped.

Our interviews with 52 academic researchers concerned with health inequalities capture the complexity and divergence of researchers’ perspectives on, and experiences of, research impact over the decade in which this agenda has emerged in the UK. Everyone we spoke to in this field of research wanted to influence policy, but people had very different views about how best to achieve this. Some researchers were comfortable working closely with policymakers and often understood this work as a fundamental responsibility. These researchers largely welcomed the new prominence of impact, which they felt sought to reward and recognise their preferred mode of working. However, even these researchers expressed concerns about aspects of the current impact system, and other researchers, including early career researchers, were much more fearful and wary of the potential ‘impacts of impact’. Collectively, we identified three broad sets of concerns about the impact agenda and its implications for academic practice.

First, many interviewees worried that the research impact agenda might encourage and reward what Ben Baumberg Geiger has described as ‘bad’ impact. This included the possibility that REF structures encourage researchers to seek impact for (potentially misleading) single studies, rather than promoting wider bodies of research. While the ESRC’s guidance states that “you can’t have impact without excellence”, many interviewees felt that impact without excellence was entirely possible and not yet sufficiently guarded against. Others recounted instances where their work had been reinterpreted, and sometimes completely misinterpreted, by research users in troubling (sometimes ethically problematic) ways, and expressed concern that neither the REF nor UK research councils do much to acknowledge that impact may not always be desirable.

Second, even researchers supportive of the aims of the research impact agenda expressed concerns about whether its current architecture captures impact in a meaningful way. Many described the disconnect between their knowledge of the complex ways in which evidence influences policy and the experience of ‘playing the game’ of depicting much more straightforward impact: one interviewee wryly remarked that one achievement of the impact agenda has been to make “people lie more convincingly on grant applications”. Figure 1 illustrates that the most far-reaching kinds of research impact are often the most difficult to track demonstrably, whereas less ambitious forms of impact lend themselves to a clearer audit trail.

Figure 1: ‘Impact ladder’ – significance versus demonstrability


Finally, researchers highlighted some troubling consequences of the research impact agenda. These included the risk of overloading busy policymakers with ever more research and the potential for impact reward systems to reinforce traditional academic elites (those who already have networks and credibility with senior policy actors). This imbalance arises both because of the timing of key opportunities for ‘impact’ (e.g. evening and weekend events) and because, as several interviewees noted, the demands of achieving impact are not being met by commensurate reductions in other workloads. Our analysis of impact case studies submitted to the REF 2014 Social Policy & Social Work panel supported these concerns: the case studies were overwhelmingly led by Chair-level individuals. Around a third of our interviewees discussed the challenge of maintaining critical, intellectual independence while trying to align their research ever more closely with policymakers’ concerns. The ‘fudge’ that several of our interviewees described resorting to involved phrasing policy recommendations in strategically vague ways, softening perceived criticism and (as one put it) “bend[ing] with the wind in order to get research cited”.

Many of these concerns will be familiar to colleagues from debates in other disciplines (for example Greenhalgh & Fahy, Pain et al., Back), but also, perhaps, from discussions in the offices, staff rooms and corridors of the UK’s universities. Our goal in writing this piece is to contribute to these debates and, especially in this hiatus between REFs, to encourage social policy colleagues to make suggestions for more constructive approaches to encouraging and rewarding research impact. The current architecture is not a ‘done deal’: researchers who care about the relationships between research and policy must try to improve how we recognise and reward research in the ‘real world’.
