Revisiting the Scottish National Performance Framework

by Ken Gibb

 

'The purpose of the Scottish Government is to focus government and public services on creating a more successful country, with opportunities for all of Scotland to flourish, through increasing sustainable economic growth.'

In 2007 the first minority Scottish National Party Government established the Scotland Performs framework, based on a core national purpose (2016 version above), five strategic objectives, a series of (currently) 16 high-level national outcomes, and a set of 55 national indicators that operate at a high level but can also drill down to clusters of indicators within specific sectors. This approach has been widely acclaimed internationally (it was initially, in part, the product of drawing on a similar model in the Commonwealth of Virginia in the United States), but it has also undoubtedly attracted sceptics, critics and critical friends.

The Scottish Government is now consulting on the framework, its outcomes and indicators, and is undertaking a large-scale stakeholder discussion exercise to support this process. Last week they started with an academic roundtable in Glasgow co-organised by What Works Scotland [link]. The session was held under the Chatham House Rule and involved an historical contextualisation of the origins of the framework, a presentation on the outcomes approach to public policy by Ailsa Cook (shortly to be published by WWS), and a detailed discussion of the structure, format, uses and functions of the framework. Below are my personal reflections on the meeting.

The framework sits at the heart of the so-called Scottish approach to public policy, one that stresses the pursuit of agreed high-level outcomes consistent with the national purpose, and the translation of these objectives down to local level through agreements with each community planning partnership across the country. It is also about a decisive shift to prevention, stressing partnership working and co-production, community empowerment, inclusion, and the breaking up of departments and silos in the way Government is structured and led. The touchstone document for all this is the 2011 report of the Christie Commission on the future delivery of Scotland's public services.

One person commented that we know, for all the critique that may justifiably exist, that Scotland is ahead of the curve on this accountability-outcomes-performance nexus of public policy. How do we now go forward to work better with the complexity of governance and public service reform rather than adding to it?

A first point that came out of the discussion was an exploration of the implications of the different and not necessarily consistent elements of the national purpose. Economic growth, inclusion and sustainability all feature, and in normal circumstances they may well involve trade-offs: increasing one may come at the expense of the others. So how do you determine the weight to be attached to each element, and how does that accord with societal preferences? This quickly moved into a conversation about Kenneth Arrow and social welfare functions in economics, and the wider appeal of Sen's capability approach (which is the underlying normative framework used in much of the work of What Works Scotland).
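To make the trade-off concrete (a purely illustrative sketch of my own, not something proposed at the roundtable): a simple weighted social welfare function might take the form W = aG + bI + cS, where G, I and S stand for growth, inclusion and sustainability and a, b and c are the weights attached to each. Framed this way, the national purpose implicitly asks what those weights should be, and Arrow's impossibility result reminds us that there is no uncontroversial way of aggregating individual preferences into a single agreed set of weights.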

A second theme was that, while there was a perhaps surprising degree of consensus around the table for an outcomes-focused approach (recognising that there remains little rigorous evidence at a national level about the impact such an approach has on wellbeing), there was much more concern about the relationship between outcomes and the indicators that act as performance proxies. As one commentator noted, there is a world of difference between attributing performance to a conscious service or intervention approach and recognising that it may contribute to it (and that this contribution is located in a credible theory of change).

The critique of performance indicators in general is well known: cream-skimming, parking, only counting 'what can be counted', focusing on the indicator rather than the broader outcome or purpose, and the scope for a wide range of other perverse incentives that undermine a service or intervention. The meeting also discussed the need for consistently rigorous, generalisable, valid and reliable evidence and operational indicators with which to make meaningful judgements. There is often quite a gap between the outcome statement and the indicators in terms of specificity and measurability.

This would seem to make a case for, first, a greater investment in the evidence and data audit required to build better indicators and, second, a comprehensive attempt to ensure minimum indicator quality. On the latter point, I have always taken the view that there is nothing intrinsically wrong with performance indicators, or with the use of sharper incentives, or indeed (as came up in the discussion) with payment-by-results mechanisms: what matters is the appropriateness of their design, careful assessment of how they are used, and attention to unintended consequences.

Perhaps this suggests that Government might consider creating an independent review group that could support the performance team and comment on, and propose amendments to, the indicators, evidence and data used. Academics and independent researchers could play a potentially valuable role here (and the potentially complementary relationship between quantitative measures and qualitative evidence on the ground was stressed by different speakers). This could be an opportunity, from the top of Government down, to evangelise the use of evidence in accounting for government and public service performance against desired outcomes.

A third element of the story is the fit between local and national approaches. With single outcome agreements, and now with local outcome improvement plans, local community planning partnerships sign up to specific goals which nest within the national performance framework. On the one hand, this provides a clear place-based representation of these ideas in localities all over Scotland; on the other, it brings with it the danger of compounding, at the level of the local authority and below, the performance indicator problems and over-zealous focus on indicators discussed above.

There were other useful points highlighted. First, make distributional or social justice outcomes and indicators more explicit and benchmark them more consistently against other nations (in the way, for instance, that economic productivity performance is measured against OECD quartile scores). Second, presentationally, the refreshed set of national outcomes should be discussed, and form part of the public policy discourse, in their own right. This should be kept quite distinct from the mechanism that seeks to use best-practice theory of change and credible analytical evidence (valid, reliable and generalisable) to create high-quality indicators of the journey towards the outcomes; unlike at present, those indicators should nonetheless be mapped on to the outcomes they seek to measure.

At the end of the roundtable I said I thought it had been a valuable exercise, on the criteria that I had learned a lot and that we had produced a genuinely multi-disciplinary conversation: economist shall speak unto sociologist, and so on. I think the Scottish Government team also felt there was genuine value from the day, and I wish their endeavours well.

 
