Leading measures

What to measure may be the most important management decision an organisation makes

Measurement. It sounds boring and technical, and that’s how most companies treat it. In fact, as made clear in a fascinating (and frightening) recent event on ‘results-based management’ run by the consultancy Vanguard, what to measure may be the single most important management decision a company makes.

For an indication of why, take the case of a typical local authority child protection department which operates to two standard measures. For children at serious risk, it must carry out a fast initial assessment of 80 per cent of cases within seven days. For a full core assessment, the standard is 35 days. The department meets both standards; under the widely-used ‘traffic-light’ signalling system (red-amber-green) it rates a green, so managers judge that no further action on their part is necessary.

Now look at the same department through a different measure: the end-to-end time taken to do the assessment, from first contact to completion. The picture that emerges is very different. The urgent assessment predictably takes up to 49 days, with an average of 18.5, while the 35-day assessment takes an average of 49 days, but can equally take up to 138. Worse, the clock for the core assessment doesn’t automatically start when the initial assessment finishes but only when it is formally opened. So the true end-to-end time for the 35-day assessment is anything up to 250 days. ‘Now tell me Baby P and Victoria Climbié were one-offs,’ says Vanguard consultant Andy Brogan, who gathered the data, grimly. ‘They weren’t – they were designed in.’

So how could the department have been meeting its standards? Consider what has happened to the department’s purpose. The imposition of the government-mandated measures, plausible but arbitrary (why seven days? why 80 per cent?), has shifted the de facto purpose from assessing and protecting children to meeting the standard within officially laid-down parameters – which it does by recategorising, shutting and reopening cases as permitted by the guidelines.

No learning takes place, because these measures are not about learning but ‘accountability’, in this case to government. Remember, management thinks that because it is meeting the standards, no further action is necessary. It’s not far from here to Mid Staffs, where Sir Robert Francis was stumped to ascribe blame because everyone met their targets and thus covered themselves (which is what accountability really means). In the end he could only attribute the failings vaguely to ‘the culture’.

The end-to-end measure, on the other hand, throws light on how well the department is meeting its purpose. Learning takes place. The workplace conversation is no longer about how to meet the standard but about what accounts for variation and how to save time in assessments to make children safer. Contradicting the traffic lights, action is urgently needed. As the process is repeated, improvement becomes continuous.

Momentous conclusions ensue from looking at measurement this way. The ‘why’ of measurement (purpose) precedes the ‘what’. If the measures are not related to real purpose, the measures become the purpose, and better ones signal improvement that is dangerously illusory.

The bottom line is that measures can be used either for accountability (outcomes, targets) or for learning (purpose-related, commonly end-to-end times or total not unit costs) – but not both. Yes, this is our old friend Goodhart’s Law (which says that the moment a measure is used to manage by it loses its validity as a measure) in a different guise. It’s management’s uncertainty principle. Accountability measures can’t be used for learning and improvement because a) they don’t say anything useful about what works and why, and b) as in the child protection department, the story they tell is a false one. Measures for learning and improving, on the other hand, dial down the need for external ‘accountability’, since they cause people to respond directly to the customer or person in need.

Importantly, the choice of measure affects many other aspects of organisation, including structure. An organisation using outcomes-based measures like targets and service levels to manage performance naturally adopts devices designed for accountability such as incentives, functional organisation, outsourcing, shared services and separate front and back offices – ‘dangerous idiocies’, in Brogan’s words, that effectively blind managers to what is really going on in their organisation. When things go wrong, it is not because people are wicked, stupid or uncaring; it’s because they are working in a system where data is constructed not for learning and improving but for holding people to account.

The horrible results of bad measures, from banks that bankrupt societies to hospitals killing their patients, are all around us. Given the evidence that hitting the target so often misses the point, why is the stranglehold of the ‘dangerous idiocies’ so complete? One reason, argues the Newcastle University researcher Toby Lowe, is that the problems are conceived of as technical challenges that are capable of solution, for instance by sharpening sticks and carrots and increasing accountability. The result is an ever sharper focus on doing things better that shouldn’t be done at all. Jeremy Hunt’s proposed criminal sanctions for hospitals that fiddle their mortality figures fall straight in this category.

‘Having lost sight of our purpose, we redoubled our efforts’, as W. Edwards Deming sardonically summed up this process half a century ago. It’s time for a change. A choice has to be made. Either we go on using accounting and accountability measures to manage performance in a predetermined way, in which case we shall continue to be surprised when the things they bring about go spectacularly wrong; or we switch to measures that, as Brogan says, ‘help and act on the causes of variation in performance so that we can connect actions with consequences.’ That’s what changes management from a succession of hunches to a systematic, scientific endeavour; that’s why measurement is a leadership issue, not a technical one.
