Beyond Balanced Scorecard: New Insights for Public Sector Agencies

Peter Ryan,
Corporate Performance Manager,
Christchurch City Council, New Zealand

Introduction

Around the world a large percentage of successful companies use some form of Balanced Scorecard. The science of building and executing the scorecard in the private sector is now widely understood.

One reason for this is that the bottom line of a company is fairly easy to describe. No matter how vast or complex a private sector organisation is, its annual or quarterly bottom line can be expressed quite simply.

Some government agencies work in narrow regulatory fields. Their success or failure is easy to understand, and can also be expressed in fairly straightforward terms.

The same is not true of government agencies with diverse services. Diverse services mean, quite literally, many bottom lines. The services on offer may differ so widely that aggregating them makes no sense.

So far, attempts to roll up diverse bottom lines into a single number, or a small handful of themes, have not proven convincing. Local government is a classic example: the community may love the city’s parks, but despise the quality of the water supply. How can aggregating a series of these results provide meaning? How can it avoid concealing key issues?

Most people understand what local government does, because roads, waste, planning, libraries, recreational facilities, parks, water supply and so forth have a material bearing on our daily lives. Yet local government has not proven fertile ground for successful scorecard rollouts.

If that seems an overstatement, consider this. Around the world the number of local governments implementing scorecards and related systems runs into the tens of thousands. Many have been running these programs for years, yet comparatively few can be cited as best practice case studies.

Awards are not the sole arbiter of quality, but it is worth noting that in the past ten years only two cities in the world have received the Balanced Scorecard Collaborative Hall of Fame award for successful scorecard implementation (Charlotte City Council, USA, and the City of Brisbane, Australia). During the same period scores of private sector agencies have been so awarded.  Are city governments around the world inept? Or is there a better explanation for this phenomenon?

To highlight the differences between a successful implementation and a failed one we must look to examples. On the ‘worst practice’ side, consider New Albany, a medium-sized city. It is a mythical place, but it serves to collect and highlight mistakes made by various local government scorecard implementations around the world.

By contrast Christchurch City Council, New Zealand, is a real case study and a practical example of a positive approach. Christchurch is a city government with 2,500 staff and an annual budget of $300 million. I must confess an involvement in this implementation; however, the credit for what has been achieved belongs solely to the organisation itself.

Between the two it should be obvious to the reader what works…and what doesn’t.

Background

New Albany

In the first instance the New Albany scorecard initiative was raised by HR. What followed was a debate between the finance section and the corporate planning team as to who would run the implementation. In the end a newcomer, the strategic planning department, won the battle, and a small group was put together to make the rollout happen.

Several years later, after exhaustive lists of KPIs had been developed, the scorecard program had little or no traction. The CEO and executive team were quietly surprised that the scorecard gave them so many numbers, and so little meaning.
Nobody wanted to wind up the program – it would seem embarrassing to do so – but nobody was interested either. After so much initial fanfare it had become mere infotainment, a kind of corporate ornament no one could bear to throw away. 
How could this happen?
 
From the outset this organisation understood performance management to be a list of measures designed to ‘keep an eye on things’. Its leaders did not grasp the link to strategy, or understand the price of the programme: that it means changing how business is done, and shifting organisational culture.

Turf wars were fuelled by this misunderstanding. Policy and planning staff resented what they saw as an incursion into their field. HR felt that the framework was too mechanistic. Many finance staff resented what they saw as a superficial view of financial information. All continued to work and report independently of the scorecard program, which was undermined as a result.

Christchurch

There was no misunderstanding in this case. The performance system was introduced by the CEO and executive team with an express intention: its task was to change how the organisation worked, and move its culture forward. 
Its philosophy was based around delivering business results, not monitoring. The scorecard was immediately linked to key systems: finance, executive targets, and staff performance reviews. Executive performance plans were converted to scorecard perspectives and then based on scorecard KPIs. These links were then quality assured and independently audited.

Any legacy material which would compete for staff attention (such as local, ad hoc business and operational plans) was immediately discontinued.

IT: Friend or Foe?

New Albany

This organisation had no business intelligence system, and much of its information was spread across the finance and HR systems, as well as various ad hoc spreadsheets. At the very beginning of the scorecard rollout the IT section was brought in to help purchase an application to support the scorecard. Tenders were quickly called. The application which met the criteria best did so by virtue of its comprehensive functionality and competitive price – on the face of it, a good deal. 

That advanced functionality proved irresistible to New Albany’s IT section, who promoted every feature to the user interface. The planning section then took longer than expected to draft suitable KPIs, as they wanted the system to be ‘just right’ before it was launched.

This all proved a burden in daily life. The system was greatly delayed, and when it was finally launched, proved too complex for most senior managers to use. It made them feel inept. Due to operational pressures many failed to attend the required training course. Others logged in once and, baffled by a bewildering array of dials, meters and ‘spaghetti and meatball’ diagrams, never logged in again. Key results remained un-entered.

The scorecard practitioners then sent terse messages to managers in pursuit of missing information. These tasks were grudgingly relegated to junior staff. Even where the information was made available, it was never seen by decision makers.

The application also proved more expensive than imagined. It was not purpose built for balanced scorecard use, and was essentially aimed at private sector companies. Its architecture and reporting functions were heavily geared towards financial results.

Naturally, its parent company reconfigured the application to place customer and stakeholder results at the top of its scorecard logic chain, but the modifications were costly in terms of consultant time. Further costs were driven by trying to link the new application to the existing finance system and data warehouse, a task which was only partially successful.      

The intention was for the whole organisation to use the system regularly. However, the recurrent licence fee for the whole organisation proved quite expensive. Limiting access to a smaller group would save money, but would also reduce user numbers.

The ‘bargain’ system suddenly presented an unpalatable choice – much more expense, or much lower visibility. Worse still, there was growing confusion in the minds of many staff that the IT system and the balanced scorecard programme were the same thing.

For the City of New Albany, grappling with cultural change had been imperceptibly replaced by grappling with IT implementation. 

Christchurch

This organisation recognised that it would need to store information, but saw building a strong performance framework as the first priority. This meant getting a sharper focus on organisational strategy, outcomes, outputs, and accountabilities.

Christchurch also had no business intelligence system. With a strong performance framework in place, it then took three months for the organisation to:

• build high quality performance measures and targets across all services
• put operational tasks behind each performance measure to ensure delivery
• link both to executive and staff performance plans

Once these key performance building blocks were well developed, the organisation chose an application for its simplicity of use. All complexity was kept out of the user’s view (within the data model) while the user interface was built to have a deceptively simple look and feel.

The overarching ethos was this: ‘keep it very simple - let users drive complexity as they need it.’

The scorecard information was then loaded into the application. All information was transparent to all staff via intranet. Systems training was minimal. Many staff successfully used the system with no training at all.
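The article does not spell out how these building blocks were stored, so the following is only an illustrative sketch, in Python, of the kind of simple data model implied above: a measure carries a target, an accountable manager, and the operational tasks that sit behind it. All class names, field names and example values are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """An operational task placed behind a performance measure to ensure delivery."""
    description: str
    owner: str       # staff member whose performance plan carries this task
    due_date: str    # e.g. "2006-06-30"

@dataclass
class Measure:
    """A performance measure with its target and supporting tasks."""
    name: str
    target: str                   # the target as agreed wording, kept simple here
    accountable_manager: str      # links the measure to an executive/staff plan
    tasks: List[Task] = field(default_factory=list)

# Example: one hypothetical parks measure with the tasks that sit behind it.
mowing = Measure(
    name="Sports turf maintained to agreed standard",
    target="95% of inspections passed",
    accountable_manager="Parks Operations Manager",
    tasks=[
        Task("Let seasonal mowing contract", "Contracts Officer", "2006-09-30"),
        Task("Complete quarterly turf inspections", "Parks Supervisor", "2006-12-31"),
    ],
)

print(f"{mowing.name} -> {mowing.target} "
      f"({len(mowing.tasks)} supporting tasks, owner: {mowing.accountable_manager})")
```

The point of a structure like this is simply that every measure is tied, in one place, to its target, its tasks and its accountable owner, which is what allows the user-facing view to stay simple.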
 
The Measurement Maze 

New Albany 

New Albany had no clear set of outcomes for the city, the ‘desired end state’ so beloved by academic commentators. It had a series of positive but rather vague statements that could apply to any city of a similar size. There was no mention of tensions or tradeoffs between outcomes, merely an implication that the city should try to be good at everything.

From the beginning there was an ongoing debate, often quite fierce, about what the organisation’s performance measures should be. Many of the departments argued flatly that their pre-scorecard measures should not be changed. The performance team recognised that many of these measures were poor, were designed to conceal poor performance or to shore up claims for further budget, or simply could not be measured at all. But units remained firmly wedded to them.

There was insufficient support at the executive team level to overturn this status quo. This was because senior executives by now perceived the scorecard program to be a passive observer of the organisation, rather than a re-shaper.

The fact that measures underwent constant churn was also a reflection of the organisation’s lack of direction. Lacking a clear cut vision and strategic direction, no supporting measures could be sifted from the infinite number of possibilities, and so measures swarmed like bees in a jar. 

Altering measures from year to year did not add meaning, and also destroyed any hope of creating trend lines or developing a benchmarking program. Ultimately there were even fewer results available under this scenario than previously.

There was also a strong push to weight all measures and to arrive at aggregate scores, which could in turn be used to derive a combined organisational score. This consumed a great deal of staff time, but proved ultimately fruitless. The complexity of weighting each measure for city services bogged the program down, and the resulting aggregations often concealed meaning, risks or performance issues.

Oversimplification through contrived scoring had created its own complexity. 

Christchurch

Executives and staff first worked through a model of how the council needed to change if it planned to become a high performance organisation. This straightforward model was to become known simply as The Big Picture. (Figure 1)


Figure 1: The Big Picture

By setting clear measures of desired outcomes at the top of this cascade, it became possible to sift supporting measures quite easily. The presence of a ‘true north’ meant that many of the usual measurement debates were minimal, or non-existent. The outcomes for the city are wider than the responsibilities of Christchurch City Council, and therefore require the work of many agencies to come to fruition.

Christchurch established an extensive monitoring program for these city-wide outcomes, which is shared with other agencies and published on the web as a resource.

Having established desired outcomes for the city, developing strategies - and scorecards to drive these strategies - also became much simpler.

The Big Picture model also proved valuable in helping staff understand how their tasks were arrived at, what they contributed to, and why.

The focus was on communicating alignment in a straightforward way.

Scorecard Structure

New Albany 

One of the early tenets of BSC philosophy was that a good scorecard program should address two fundamentally different questions:

• are we doing the right things?
• are we doing them right?

New Albany, like almost all government agencies, was structured along departmental lines. A scorecard and set of KPIs for each department seemed like a natural, inevitable approach. In fact no other options were ever explored.

While these silo-based scorecards could answer the operational question ‘are we doing them right?’, they proved unable to address the second, more strategic question. Again, this reinforced the organisation’s growing perception of the scorecard as a monitoring tool.

New Albany’s silo-based views of performance were invariably dominated by outputs…typically, quantities of service delivered, process times, unit costs and so on.

But for a city the bottom line – the ‘right things’ - can only be understood by looking at city-wide outcomes. To use a simple example, how clean is the city’s air? Depending on how air quality is performing (good, bad or indifferent) then questions around what mix of tasks the local government should be doing (fines for backyard fires, installing bike paths, planning for industry, promoting public transport and car pooling) start to make sense.

Even in the small example cited above, it is obvious that the activities which drive a genuine outcome have their origins in different parts of the organisation. That is why purely silo-based views of performance struggle to answer strategic questions like ‘are we doing the right things?’ 
 
Without a desired outcome as ‘true north’, silo-based outputs tend to be somewhat arbitrary and chaotic.

New Albany struggled and ultimately failed to achieve alignment. 

Christchurch

To answer the two key scorecard questions, Christchurch built two forms of scorecard.

The first sets a city-wide outcome at the top of its scorecard and then gathers all supporting outputs, irrespective of which organisational silo they might come from. These two levels (outcomes and their supporting outputs) form the customer perspective.

From there the normal scorecard approach applies, with key finance, process and workforce measures supporting this layer.

This strategic form of scorecard makes answering the question ‘are we doing the right things?’ fairly straightforward. It is the ability to compare and contrast the two classes of information – outcomes and contributing outputs - which gives this approach value.

The second form of scorecard runs on departmental lines. It provides silo leaders and their staff with a traditional, departmental view of their contribution to organisational outcomes and outputs, and is the administrative mechanism for cascading and aligning targets to all staff.
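The two forms of scorecard can be thought of as two groupings of the same underlying KPI records. The sketch below is a hypothetical Python illustration (the KPIs, outcomes and departments shown are invented, not Christchurch's actual measures): tagging each KPI with both the outcome it supports and the department that delivers it lets the strategic, outcome-based view and the traditional departmental view be generated from the same data.

```python
from collections import defaultdict

# Hypothetical KPI records, each tagged with the city-wide outcome it supports
# and the department (silo) that delivers it.
kpis = [
    {"kpi": "Winter air quality exceedance days", "outcome": "Clean air", "department": "Environmental Policy"},
    {"kpi": "Kilometres of cycleways built",      "outcome": "Clean air", "department": "Transport"},
    {"kpi": "Bus patronage growth",               "outcome": "Clean air", "department": "Transport"},
    {"kpi": "Library visits per capita",          "outcome": "Strong communities", "department": "Libraries"},
]

def group_by(records, key):
    """Group KPI records under a chosen key ('outcome' or 'department')."""
    grouped = defaultdict(list)
    for record in records:
        grouped[record[key]].append(record["kpi"])
    return dict(grouped)

# Strategic view: outputs gathered under each outcome, regardless of silo
# ("are we doing the right things?").
print(group_by(kpis, "outcome"))

# Departmental view: the same outputs arranged by silo
# ("are we doing them right?", and the basis for cascading targets to staff).
print(group_by(kpis, "department"))
```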

Christchurch holds the view that neither outcome- nor silo-based scorecards can, by themselves, answer the two questions posed by Robert Kaplan and David Norton at the very beginning of the BSC phenomenon.

Two different questions require two different answers.    

The Value Proposition

New Albany

After reading various publications and attending some conferences, New Albany decided to stick with the orthodox view of a not-for-profit scorecard. They realised that private sector companies needed to put finance at the top of the scorecard value chain, but that a different situation applies to most government agencies.

So their customer perspective went to the top of their scorecard value chain. There was general agreement that this was a good, sensible and obvious thing to do.

However, it wasn’t long before this approach began to cause angst at many levels of the organisation. Services and projects which were widely known to be failures showed up on this form of scorecard as successes.

One very large IT project was a classic example: the project was on track and delivering the intended service to the community more or less on time. Consequently its scorecard KPI looked successful.

However, everyone knew that the cost of the project was millions of dollars over budget. It was widely regarded as an embarrassing reflection on the executive team, so its status as a ‘green light’ on the corporate scorecard raised eyebrows. It also raised cynical questions about the worth and integrity of the scorecard model.
 
Christchurch

Christchurch City saw the value proposition differently. To them, the value delivered by one of their many ‘bottom lines’ must be understood as more than how well it is delivered, or how happy the community is with it. Their view is that service is only one half of the equation, and that the cost of that service is also important.

Their argument is that a bottom line delivered by a government agency is just like buying a house. The buyer wants to know what they are getting, but also what it costs. A lovely house for half a million dollars is much better value than buying exactly the same house for twice that amount.

Few people buy a house, no matter how lovely, without asking its cost.

House buyers are usually so aware of the value proposition when they look at homes that they can tell almost without asking what the going price of a property should be. If they can’t, they can always hire specialists, who are known (not surprisingly) as valuers.

So Christchurch made sure that in their customer perspective the performance of a service was accompanied in every instance by the cost of that service. The philosophy was ‘put the bang and the buck together’. Scorecard targets which spelled out both service delivery and cost were then created, and these were built over a ten year time frame.  

That granted executives and staff a powerful new view, quite different from seeing customer results in one perspective of the scorecard and quite possibly a different form of financials in another. For example, if the cost of a service grew and its performance didn’t, its value proposition could be easily understood and questioned.
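A minimal sketch of the ‘bang and the buck together’ idea follows, using invented figures rather than Christchurch data. It simply pairs each service’s performance movement with its cost movement and flags any service whose cost is growing while its performance is not.

```python
# Each service in the customer perspective carries both a performance result and
# its cost. Figures are hypothetical: (last year, this year) pairs.
services = {
    "Water supply": {"performance": (78, 77), "cost_m": (30.0, 36.0)},
    "City parks":   {"performance": (88, 91), "cost_m": (22.0, 22.5)},
}

for name, result in services.items():
    perf_change = result["performance"][1] - result["performance"][0]
    cost_change = result["cost_m"][1] - result["cost_m"][0]
    # Rising cost with flat or falling performance puts the value proposition in question.
    flag = "QUESTION VALUE" if cost_change > 0 and perf_change <= 0 else "ok"
    print(f"{name}: performance {perf_change:+d} pts, cost {cost_change:+.1f}M -> {flag}")
```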

Doing this meant making some changes to the way the finance system worked and restructuring how this system viewed services. It also meant understanding inflation projections and other long term planning tools in much more detail than is usual in a public sector agency. Finally, projecting strategic and operational performance targets, costs and accountabilities over the ten year time frame demanded courage and conviction. It is a cathartic exercise for a local government agency.

These value propositions were then published to the community in great detail and opened for consultation. Several thousand submissions were received from citizens and government agencies, with hundreds being presented at hearings with the elected representatives. This enabled Christchurch to formulate a final, well balanced version of the value proposition for all its services.

This process helped to reinforce a constant theme of the scorecard rollout: that funds provided by the community are given on the understanding that they are paying for a defined standard of service, in exactly the same way as a builder is hired to build a certain type of house for a certain sum.

Varying the type of house, or charging more, is not considered good value for a contracting job, nor should it be for a government agency. Christchurch calls this ethos their ‘contract with the community’.

Below the value proposition contained in the customer perspective, Christchurch retained an orthodox scorecard structure, which focuses on the financial, process and workforce enablers required to deliver on those customer value propositions. 
 
The Folly of Interim Results 

New Albany

Initially, the scorecard team set quarterly targets for customer measures, in exactly the same way that a company would set quarterly targets for financial results at the top of their scorecard value chain.

This quickly proved untenable for some measures: quarterly measurement, for example, greatly increased the cost and effort required to conduct customer satisfaction surveys across the city.

This approach was abandoned, but the team continued to set quarterly targets for other customer performance measures. This too resulted in difficulties. 

Operational staff quickly objected that many forms of service delivery could not be planned by simply dividing an annual target into four milestones. On examination, delivery of a typical local government service turned out to be non-linear. Maintaining the grass in the city’s parks was heavily concentrated into three months of the year, for example.

Worse still, this peak period moved from year to year depending on rainfall, temperature and other seasonal factors. Service delivery during the year turned out to be both non-linear and often unpredictable, subject to climatic, economic and other genuine variables.

Many managers objected to requests for interim results, and the difficult mathematics and convoluted technical logic surrounding each instance enabled many to argue successfully that an interim measurement was either impossible, or too costly to calculate.

New Albany’s focus on interim results was solely restricted to numbers and percentages, a crude echo of the traditional company financial model. Unfortunately for them, this proved constricting and difficult to implement. Worse still, when a numeric or percentage-based milestone did prove feasible, it was immediately relegated to a junior officer to provide and therefore had little strategic impact.

Christchurch

As an organisation with a history of measurement, Christchurch had some understanding of the difficulty city managers face in presenting interim service results during the course of the year.

After some debate, they opted to proceed down a quite unusual path. Where an interim measure could be reasonably calculated as a percentage or number, those results would be expected. But where such a calculation proved difficult, the manager concerned became responsible for providing a judgement call (in green, red or amber) as to whether or not progress towards the annual target was on track.

As an option, the manager could provide notes on scenarios and preferred options.

The repercussions of this unusual step took some time to become clear. First, asking managers to effectively sign off a personal judgement call each quarter removed any argument that an interim call could not be made. By definition managers are paid to know what is going on and to make such calls.

Secondly, only the manager concerned could make this entry on the system. This meant that delegating the judgement call became less likely. The executive team now has access to complete quarterly data, which additionally ensures that managers focus on their targets and must make judgement calls on progress.

The knowledge that they will ultimately be faced by a concrete result (a number or percentage) at year end and the sheer transparency of their results throughout the organisation means that there is little incentive to varnish interim results.
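The quarterly judgement-call mechanism can be reduced to a simple rule: only the accountable manager may record the green, amber or red call, optionally with notes. The Python below is a hypothetical illustration of that rule under assumed names, not a description of Christchurch's actual system.

```python
from datetime import date

VALID_CALLS = {"green", "amber", "red"}

class InterimResult:
    """Quarterly judgement call on progress towards an annual target.

    Sketch only: where a numeric interim result is impractical, the accountable
    manager (and only that manager) records a traffic-light call, optionally
    with notes on scenarios and preferred options.
    """

    def __init__(self, measure: str, accountable_manager: str):
        self.measure = measure
        self.accountable_manager = accountable_manager
        self.entries = []

    def record_call(self, user: str, call: str, notes: str = "") -> None:
        # Enforce the rule that the judgement call cannot be delegated.
        if user != self.accountable_manager:
            raise PermissionError("Only the accountable manager may record the call.")
        if call not in VALID_CALLS:
            raise ValueError(f"Call must be one of {sorted(VALID_CALLS)}.")
        self.entries.append({"date": date.today(), "call": call, "notes": notes})

result = InterimResult("Grass maintained to agreed standard", "Parks Operations Manager")
result.record_call("Parks Operations Manager", "amber", "Wet spring has delayed the first cut.")
print(result.entries[-1]["call"])
```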
 
Where Are They Now?

New Albany 

The business system supporting the scorecard received little fanfare in its rollout and progressively fell from the organisational view. The team supporting the program also faded from view.

The scorecard had little real effect on strategy or key decision making. Critically, it failed to have any significant impact on setting the annual budget. The implementation was not stopped, but it began a long slow slide into organisational obscurity. Other approaches to performance sprang up and are being piloted in various parts of the organisation. All are encountering the same issues that faced the scorecard. 

Christchurch

Outcomes, strategies and plans have now been converted to scorecard KPIs, targets, budget and accountabilities. These were identified by staff and then formally resolved by the elected representatives.

These drivers do not operate on a one, two or three year timeframe. They are set over a ten year period, a very unusual step in municipal government.

Every scorecard target is further supported by a defined set of tasks, accountabilities and due dates, which in turn form the basis of staff performance agreements. 

Despite its status as a newcomer to the scorecard field, Christchurch has recently been nominated for the Society of Local Government Managers Excellence Award, to be judged in late 2006.   

Conclusions

In most fields, implementation generally fails because of basic mistakes. The same holds true of the Balanced Scorecard.

Orthodox views of the scorecard and performance management require some careful consideration before being used in not-for-profit agencies, especially those offering quite diverse services. The principles remain the same, but deploying them effectively requires imagination.

New Albany might be a mythical place, but long term observers of the BSC will recognise many of its characteristics and choices as horribly familiar.

Christchurch represents a good (but by no means perfect) example of a successful implementation. The organisation still has a long way to go, but has the right foundations in place to meet its objectives.

To achieve this it has had to reject much of the dogma surrounding the scorecard. It has gone back to basic principles - and avoided basic mistakes - before creating new ways forward. 

Perhaps, in the final analysis, that simple truth is the most valuable lesson of this case study.