This isn’t an abstract problem. Targets can kill

MRSA, Baby P, now Stafford hospital. The Healthcare Commission’s finding last week that pursuing targets to the detriment of patient care may have caused the deaths of 400 people at Stafford between 2005 and 2008 simply confirms what we already know. Put abstractly, targets distort judgment, disenfranchise professionals and wreck morale. Put concretely, in services where lives are at stake – as in the NHS or child protection – targets kill.

There is no need for an inquiry into the conduct of managers of Mid Staffordshire NHS Foundation Trust, as promised by Alan Johnson, the health secretary, because, contrary to official pronouncements, Stafford is exceptional only in the degree and gravity of its consequences. How much more evidence do we need?

Stafford may be an extreme case but even where targets don’t kill, they have similarly destructive effects right across the public sector. Targets make organisations stupid. Because they are a simplistic response to a complex issue, they have unintended and unwelcome consequences – often, as with MRSA or Stafford, that something essential but unspecified doesn’t get done. So every target generates others to counter the perverse results of the first one. But then the system becomes unmanageable. The day the Stafford story broke last week, the Daily Telegraph ran the headline: ‘Whitehall targets damaged us, says Met chief’, under which Sir Paul Stephenson complained that the targets regime produced a police culture in which everything was a priority.

Target-driven organisations are institutionally witless because they face the wrong way: towards ministers and target-setters, not customers or citizens. Accusing them of neglecting customers to focus on targets, as a report on Network Rail did just two weeks ago, is like berating cats for eating small birds. That’s what they do. Just as inevitable is the spawning of ballooning bureaucracies to track performance and report it to inspectorates that administer what feels to teachers, doctors and social workers increasingly like a reign of fear.

If people experience services run on these lines as fragmented, bureaucratic and impersonal, that’s not surprising, since that’s what they are set up to be. Paul Hodgkin, the Sheffield GP who created NHS feedback website Patient Opinion (www.patientopinion.org.uk) notes that the health service has been engineered to deliver abstract meta-goals such as four-hour waiting times in A&E and halving MRSA – which it does, sort of – but not individual care, which is what people actually experience. Consequently, even when targets are met, citizens detect no improvement. Hence the desperate and depressing ministerial calls for, in effect, new targets to make NHS staff show compassion and teachers teach interesting lessons.

Hodgkin is right: the system is back to front. Instead of force-fitting services to arbitrary targets (how comforting is hitting the MRSA target to the 50% who will still get it?), the place to start is determining what people want and then redesigning the work to meet it.

Local councils, police units and housing associations that have had the courage to ignore official guidance and adopt such a course routinely produce results that make a mockery of official targets – benefits calculated and paid in a week rather than two months, planning decisions delivered in 28 days, all housing repairs done when people want them. Counterintuitively, improving services in this way makes them cheaper, since it removes many centrally imposed activities that people don’t want. Sadly, however, the potential benefits are rarely reaped in full, partly because of the continuing need to tick bureaucratic boxes, and partly because, in the current climate of fear, chief executives are loath to boast of success built on a philosophy running directly counter to Whitehall orthodoxy.

The current target-, computer- and inspection-dominated regime for public services is inflexible, wasteful and harmful. But don’t take my word for it: in the current issue of Academy of Management Perspectives, a heavyweight US journal, four professors charge that the benefits of goal-setting (ie targets) are greatly oversold and the side-effects equally underestimated. Goal-setting gone wild, say the professors, contributed to both the Enron collapse and the present sub-prime disaster. Instead of being dispensed over the counter, targets should be treated ‘as a prescription-strength medication that requires careful dosing, consideration of harmful side effects, and close supervision’.

They even propose a health warning: ‘Goals may cause systematic problems in organisations due to narrowed focus, increased risk-taking, unethical behaviour, inhibited learning, decreased co-operation, and decreased intrinsic motivation.’ As a glance at Stafford hospital would tell them, that’s not the half of it.

The Observer, 22 March 2009

Business as usual while the foundations crumble

HOW FAR have we got in rethinking the management of banking and financial services? Almost nowhere, was the verdict emerging from a recent workshop held by the Centre for Research on Socio-Cultural Change (Cresc) in London. The session gathered a rich cross-section of politicians, bankers, academics and commentators who sharply challenged the official analysis of the crisis.

It may be, as The Guardian’s Larry Elliott suggested, that we are still in the fourth, ‘panic’ stage of the crunch – following the bubble phase (‘it’s different this time’), denial (‘don’t worry, the fundamentals are sound’) and acceptance (‘more serious than we thought, but well placed to recover’). Yet underneath all the frenetic activity, the remarkable thing is not how much underlying assumptions have changed, but how little.

For make no mistake: the tectonic plates are shifting. On the one hand, as Professor Mick Moran of Manchester University made clear, the crisis has fatally holed the grand project of the past three decades to shrink democratic control of the economy and deposit it in the hands of the technocracy. The edifice built on an independent central bank, independent regulatory agencies and a business-friendly regime for the markets is tottering. With the technocrats in retreat, economic problems are pushing back into the political and democratic domain: ‘politics is flooding back’.

Yet none of this is reflected in either the institutional or the technical reactions to the crisis. Institutions charged with managing the response, such as government investment managers UK Financial Investments and the Shareholder Executive, remain independent agencies run by the usual Treasury/City suspects. The banks may be effectively nationalised, but governance is still at arm’s length and has no aim but orderly exit. Shareholder value is still the discourse.

In other words, business as usual. But as other presentations demonstrated, it was business as usual that got us into this mess in the first place. An investment banker acknowledged that three of the miscalculations that caused the meltdown – neglect of liquidity, staggering concentration of risk, and failure to allow for the business cycle – were management errors of the most glaring kind that he was at a loss to account for.

For a second academic speaker, Ismail Erturk, however, the explanation was plain: ‘The problem is shareholder value.’ He argued that this concept, much favoured by the business-friendly financial regulators of the grand project, had driven an ‘unsavoury revolution’ in the banks that damaged the interests of borrowers and depositors and showed itself to be ultimately incompatible with banking’s basic utility function.

In retail, the banks turned themselves into mass marketers selling fee-earning financial products that could promptly be removed from the books by securitisation, while the investment banks switched their focus from corporate services to proprietary trading on their own account. Both sidestepped none-too-onerous regulation to build up formidable levels of leverage. Each of these models has now unravelled.

Erturk’s conclusion is stark and far-reaching. As long as shareholder value prevails, some kind of defensive separation of trading from basic banking functions is essential. More positively, the Cresc researchers propose a remutualisation of retail banking: a gradual euthanasia of shareholders, and a substitution of bonds for equities, giving investors ‘predictability and security of returns on a class of paper whose quality could be second only to government bonds’.

Other workshop participants were quick to extend the diagnosis from the banks to publicly quoted companies in general. If – as it is now becoming permissible to suggest – shareholder value is indeed the problem, then, as Einstein said, ‘the significant problems that we face cannot be solved at the level of thinking we were at when we created them’. A wholesale recasting of today’s unfit-for-purpose corporate governance becomes another urgently necessary response. In short, we are a very long way from business as usual.

Of course some people argue that the situation is now so bad that preventing a future crisis takes a distant second place to getting things moving again. One inhabitant of the real economy feared that the squeeze would suck so much life out of companies like his that we wouldn’t even care about the possibility of another bubble.

Assuming it doesn’t go that far, the dilemma is poignant. The softer the landing, the more the government will be tempted to shore up the crumbling orthodoxy, making another crisis certain. The worse the depression, the better the chances that Whitehall can be pressurised into a fundamental rethink. Neither prospect is a cheerful one. But as the Obama team keeps repeating: ‘Never waste a good crisis.’

The Observer, 15 March 2009

Cutting the payroll means unhappy dividends

HAPPINESS HARDLY seems at the top of the management agenda when the financial world is falling apart. But, as participants at a seminar on ‘Recession: health and happiness’, organised by the Economic and Social Research Council, heard last week, it probably should be.

Hard times put a premium on real priorities. One of the founding assumptions (and justifications) of conventional economics is that money CAN buy me love, or at least wellbeing: and if wellbeing increases with wealth, GDP growth is obviously of cardinal importance. But in many countries over the past half century, soaring levels of crime, deprivation, depression and addiction to alcohol and drugs seem to have consumed much of the increases in happiness that ought to have accrued from steadily rising living standards.

The Easterlin paradox, as this is called – after American economist Richard Easterlin – has prompted economists such as Richard (Lord) Layard of the LSE and Warwick University’s Professor Andrew Oswald, both speakers at the event, to argue that the aim of public policy should switch from GDP growth to measures that relate more directly to human happiness. As BBC presenter Evan Davis, who chaired the session, pointed out, this is the first recession since the dismal science began taking happiness seriously: an appropriate time to consider the lessons and act on them.

Fear of recession makes everyone less happy. But one finding from the study of the economics of happiness stands out: the devastating effect of unemployment. Ironically, as panellist Melanie Bartlett, professor of Public Management at UCL, pointed out, unemployment was considered so uninteresting in the 1990s that people stopped studying it. Now it is back with a vengeance.

Indeed, so harmful are the consequences – up there with divorce and separation, with the added complication that the effects get worse the longer unemployment continues – that Layard believes government must guarantee jobs for those still out of work after a year, with further state support conditional on acceptance. Welfare-to-work is justified, he argues, by ‘the huge jump in happiness that occurs when people go back to work’. Training takes a clear second place to getting people back into work. Young people will be particularly vulnerable as recession deepens, making guaranteed apprenticeships ‘one of the top five tasks for government’.

If the public sector is obliged to pick up the pieces, the private sector needs to stop creating the debris in the first place. In particular, the kneejerk reaction to get rid of what until yesterday were ‘our greatest assets’ makes no sense either economically or socially. Recall that up until the 1970s, most companies tacitly accepted that they had an obligation to employees, for whom finding a new job was harder and more traumatic than it was for investors to buy and sell their shares. Sacking people was therefore the measure of last resort.

Over the past 30 years of shareholder dominance, however, redundancies have become the measure of first resort rather than last. Yet while shareholders may be temporarily mollified, sackings frequently cast a pall over the survivors, with dire effects on engagement. Lower costs but higher disengagement is not likely to be a winning trade-off in an environment where attracting customers may be key to survival.

The alternative, employee-centric approach is still favoured by many Japanese companies, which often go to extraordinary lengths to avoid lay-offs of permanent staff. Toyota has not laid off full-time workers since 1950; as late as last December, like camera and printer manufacturer Canon, it was committing itself to maintaining lifetime employment, although many agency workers have gone. In the UK, Toyota is discussing with the union alternative approaches to facing the crisis, including work-sharing, shorter hours and pay cuts, as well as voluntary redundancy. Japanese companies often cut dividends first, followed by management bonuses (if any), then pay and working hours, and only then jobs.

There is of course more to this than fairness. Although no one is likely to be made happy in the short term by shorter working hours and lower pay, keeping the maximum number of people on is an obvious expression of confidence in the future. Research by the Engage group suggests that employees take the behaviour of companies under crisis as highly revealing of their real nature: the spirit of the decisions made under pressure will be remembered far into the future.

Paradoxically, the age of economic self-interest has turned out to be as destructive of human happiness as it has of the economy. Conversely, a more inclusive, egalitarian, humanitarian era may benefit not only happiness, but the economy too.

The Observer, 8 March 2009

However good the pay, it doesn’t buy results

IT’S A LAW of management that more is less – and if it’s complicated it’s wrong. On both these scores, nothing embodies management’s current ruinous disarray better than the knots companies are getting themselves into over pay. In a classic case of vanishing returns, in attempting to construct ‘better incentives’ and ‘closer links between pay and performance’, they are expending more and more effort on trying to get right something that cannot, and should not, be done in the first place.

Endless exhortations to ‘do it better’ are, to put it politely, whistling in the wind. Companies get it wrong because it’s impossible to get right. In Jeffrey Pfeffer and Robert Sutton’s forceful plea for evidence-based management, Hard Facts, Dangerous Half-Truths and Total Nonsense, the myths and fallacies surrounding incentives and performance pay are a prime exhibit. As they point out, it’s a hard fact that incentives do change people’s behaviour – but unfortunately that’s the problem. If you pay bankers to dream up fancy new financial products to sell to greater fools, that’s what they’ll do. But it’s total nonsense to expect them to blow the whistle to prevent the products from capsizing the company down the line – that’s not what I’m being paid for, guv.

The Catch-22 – the fatal flaw with all numerical targets and quotas – is that to be understood and acted on, incentives must be simple. But if they are that simple, in any organisation with objectives more multidimensional than a whelk stall, they are simplistic: inadequate to carry the information necessary for the accomplishment of other goals. It’s impossible to specify a simple target for a complex organisation. Hence (thanks for this to a thoughtful reader) the lament of Andy Grove, formerly CEO of Intel, that for every incentive the company devised it had to implement at least one more to mitigate the harmful effects of the first.

Simple incentives make clever companies stupid, like the banks, zapping even the instinct for self-preservation. But complex ones turn them into hotbeds of confusion, envy, fear and loathing, which is no better. Why should some people get bonuses and others not? Why is yours bigger than mine? In any organisation made up of multiple teams and interdependencies, calculating reliable attributions of responsibility for gain or loss is like counting angels on a pinhead. And trying to do it years later, with possible clawbacks depending on it, is a mathematical and legal nightmare.

It’s not even as if money actually satisfies people. As another reader notes, one of the most influential management studies ever – with findings replicated many times over – was carried out by psychologist Frederick Herzberg. Investigating motivation at work, he concluded that although pay and conditions could cause dissatisfaction, the reverse was not true: they didn’t generate satisfaction, which came from factors intrinsic to the job itself (challenging work, recognition, responsibility).

People consistently overestimate the importance of money for others, but for themselves money is more likely to be a dissatisfier than a satisfier. Herzberg’s famous dictum rings as true now as it did 50 years ago: if you want people to do a good job, give them a good job to do.

The best thing to do with pay is therefore to stop forcing it to do things it is incapable of and instead put it back behind the horse, where it belongs. People can then forget about it and get on with the job. That sounds flip, but in fact is serious, because it reverses the usual twisted logic. A bonus, like profits in general, is a consequence, not a precondition, of doing a good job. In a systems view, a guaranteed bonus is a contradiction in terms, as is the idea of paying any bonuses at all if the organisation is loss-making. Performance can only be optimised at the organisation level, so if the latter has done badly as a whole there is nothing to reward.

Incentive systems quickly become institutionalised: that’s part of the problem. It’s what people learn to expect. As Soviet premier Nikita Khrushchev once said: ‘Call it what you will, incentives are what get people to work harder’. But even if that is true, it doesn’t necessarily get you where you want. Financial incentives lead to inequality in rewards – duh, that’s what they’re supposed to do.

For jockeys, loggers and orange pickers (to modify my categorical statement of a couple of weeks ago), that seems to result in higher performance. But it’s death to the co-operation and teamwork on which overall organisational performance depends. From sports teams and university departments to publicly quoted companies, the greater the pay inequalities the worse the results, whether in terms of collaboration, productivity, financial performance or product quality.

The moral of the story is that companies should be very careful what they choose to pay for – because that’s what they’ll get, and nothing else.

The Observer, 22 February 2009

Inside every chief exec, there’s a Soviet planner

THE MOST REMARKABLE thing on show at last week’s banking hearings was the capitalists’ naivety about capitalism – a gullibility that has endangered both of the economy’s major institutions, markets and companies.

Their credulity about markets has been total. In a forthcoming paper, Professor Brendan McSweeney of the Royal Holloway School of Management argues that ‘market-failure denial’ may have actually helped to provoke the economic holocaust. He notes that, by discounting the evident pitfalls of unrestrained purpose and capital-market-driven myopia, assumptions of market infallibility cleared the way for free markets not to self-correct, as believers predicted they would, but to self-destruct.

Likewise self-interest, another article of capitalist faith, was no more effective as a curb on bankers’ capacity to self-harm than on sharks in a feeding frenzy. In the financial Götterdämmerung, hedge funds that had successfully taken down everything else that moved did the same to the banks, only to discover they were destroying the source of the funds they needed to gamble with.

As for companies, the capitalist orthodoxy got it wrong from A to Z. Managers miscalculated risk, misallocated resources and created incentives of such outstanding perversity that they brought the entire global financial system crashing down around them.

How could this happen? One answer is that, in their Tarzan-like celebrations following the Cold War triumph over central planning, the high priests of capitalism neglected to notice the sting that the moribund system had left behind. With exquisite irony, while central planning had been largely discredited at macroeconomic level, at microeconomic level it remained alive and kicking – in their own organisations. Veteran systems thinker Russ Ackoff is not alone in noting that while at the macro level the west is vehemently committed to a market economy, at the micro level almost everyone works in ‘non-market, centrally planned, hierarchically managed’ ones.

The truth is that much conventional management is central planning in western disguise. This is why most companies are zombie-like in their structural and strategic similarity. This is why, too, they are unable to learn. With their faces toward the CEO and their arses towards the customer – in the immortal words of GE’s former CEO, Jack Welch – what would they learn from? No wonder warnings of disaster were suppressed or auto-censored at the banks, or that the only messages heard were those that fitted with the earnings targets that managers managed by. In turn, the learning failure explains why so many companies adhere to the Zimbabwe school of change management – altering course only after ruin, by coup d’etat.

Central planning imposes a huge co-ordination burden – which is why there’s just so much management. If work is fragmented so that people have no direct line of sight to the customer, people have to be driven by signals from above rather than below. So each company has its own little State Planning Committee, a management factory remote from the people doing the real work, where managers devise top-down production schedules, targets, procedures and carrot-and-stick performance management and pay schemes, with complex systems to keep track of it all.

The cost of maintaining the management factory is immense. Consider that the world’s most efficient large conventionally managed corporation, GE, spends 40% – that is, $60bn – of its revenues on administration and overheads. For every direct worker there’s an indirect one to check or ‘manage’ the work. If anything, overhead costs are increasing as work breeds more work. In less effective organisations, of course, hidden indirect costs are much higher.

None of this adds value for the customer. But although management clearly isn’t costless, we weren’t expecting the balance sheet to be negative. Yet while capitalists chortle over the absurdities of Soviet-style central planning – not to mention over-management in our own public sector – nothing in the pantheon of Soviet nonsenses (plants making a single giant steel bar or only left-foot shoes) compares, for destructive firepower, with the complex instruments rolling off the production lines of Wall Street and the City of London. Warren Buffett’s description of derivatives as weapons of mass destruction was exactly right.

As it turns out, the managers of large western corporations have much more in common with the apparatchiks of the command economies than is recognised. The aim may be different, but they share the same conception of management – faith in targets and incentives, separation of ‘management’ and ‘work’ – and, ultimately, the same neglect of customers and product markets. If capitalism is to be saved, don’t expect salvation to come from the capitalists.

The Observer, 15 February 2009

We can’t afford to give bosses a blank cheque

SOMETHING approaching panic is stirring the rarefied atmosphere of Planet CEO. Last week, President Obama did the unthinkable, in effect imposing a maximum wage ($500,000) on top executives of firms that receive ‘extraordinary help’ from the US government.

The Senate is athrob with other proposals. One senator has proposed an Income Equity Act under which pay that is more than 25 times that of the firm’s lowest-paid worker would cease to be tax-deductible. As part of the bail-out, another wants a five-year, 10% surtax on earners over $500,000.

The pay cap is a blunt instrument, and it won’t affect many of those who deserve it most: they have already departed with so much swag that they will never have to work again (one of the many things wrong with present arrangements). It’s a historic moment nonetheless – the moment when the land of the free and the home of the free market decided that enough was enough, and that shelling out $18bn in Wall Street bonuses – that is, payments for performance over and above normal pay – in a year of record losses was way beyond it.

As part of the bail-out, there is reportedly to be a conference to discuss an overhaul of executive compensation. This is a big opportunity, provided it resists predictable calls to ‘leave adjustments to the market’. It was the market that created the problem: top pay has been one of the most egregious market failures of the last 20 years. But what Nassim Nicholas Taleb, author of The Black Swan, calls ‘asymmetric compensation’ (heads I win, tails I don’t lose) is not just a symptom of distortion: by absolving individuals from responsibility for the consequences of their actions, it is also a substantial cause of the meltdown. Unregulated top pay is simply unsustainable.

To root out the perverse incentives with which chief executives’ pay is riddled, Obama’s conference needs to go far beyond the size of the bonus. At the heart of the pay spiral is governance, in the shape of the disastrous ‘agency’ doctrine that demands the ‘alignment’ of managers with shareholder interests through monetary incentives. Agency theory is management’s very own Ponzi scheme. It is a self-reinforcing enrichment device for top managers and privileged shareholders who, in unholy alliance, have combined to loot the company at the expense of employees, customers and, as we now know, society as a whole. Breaking out of the corrupt and self-serving agency model is an essential first step to lasting pay reform.

When managers are paid for service to the company rather than service to shareholders, other things fall into place. There are three interrelated considerations to be taken into account in setting pay: internal fairness, external fairness and what the company can afford. The agency model has caused most companies to ignore the first and even the third, concentrating exclusively on external benchmarks. The result is that ratios of CEO pay to average pay are beyond grotesque. Ratios of 300-plus, as in the US, wreck internal cohesion, which in the long term is organisationally unaffordable. As the bankrupt Wall Street firms amply demonstrate, they are unaffordable financially, too. A better balance between the three is the second essential of pay reform.

The third essential is to diminish – preferably abolish – the role of bonuses in pay setting (there’s nothing wrong with shared retrospective payments, as at John Lewis). The bonus culture is so ingrained it comes as a shock to find – it’s worth spelling it out – that evidence to show monetary incentives improve performance is simply non-existent. On the other hand, studies demonstrating that it is counterproductive are plentiful.

Actually, that may not be so hard to understand. The proposition behind all incentives – ‘do this and you’ll get that’ – is a crude behaviouristic device to secure compliance. It’s how you train a dog. But in a human context it damages intrinsic motivation – the desire to do a good job – and fatally displaces the focus of effort. In the words of Alfie Kohn, whose book Punished by Rewards remains the definitive statement on such matters: ‘“Do this and you’ll get that” makes people focus on the “that”, not the “this”. Do rewards motivate people? Absolutely. They motivate people to get rewards.’ Money doesn’t attract the best; it attracts the greediest. Worse, by insisting that bonuses form the largest part of overall pay, current governance guarantees that the tail wags the dog – in the case of the banks, to bits.

If incentives don’t work, what does? Simple. Pay people well and fairly, Kohn and others recommend – and then get them to think about the job in hand. You’d rather like doctors and nurses to be thinking about your particular condition rather than the bonuses they could notch up by taking your blood pressure or giving you a flu jab. The same goes for executives, too.

The Observer, 8 February 2009

We must decide to keep the red flag flying here

WHAT WAS Sir Fred Goodwin thinking when he committed Royal Bank of Scotland to the fateful £48bn takeover of ABN Amro? And the bankers who piled into sub-prime CDOs and 100%-plus mortgages? As the City’s turkeys come thudding home to roost, one by one, like howitzer shells, an urgent question is how and why previously successful people made such awful decisions – and whether such catastrophes can be avoided in future.

Actually, Goodwin may not have thought about it very much at all. If, as seems likely, he was confident he was on familiar ground, he may have leapt straight to his preferred course with only a cursory consideration of alternatives. And if he did, he wouldn’t be alone.

In textbooks and conventional wisdom, improving decision-making is a matter of better analysis: clearer objectives and more astute discrimination between a range of options. Yet as a timely new book (Think Again, by Sydney Finkelstein, Jo Whitehead and Andrew Campbell) makes clear, rational analysis of this kind plays a surprisingly small part in most decisions, and the emotions a surprisingly large one.

Paradoxically, the problem is not that people are bad at making decisions. It’s that, for most purposes, we are so good at it that we don’t even know how it’s done. Research described in the book shows that the extraordinary human ability to act on the basis of incomplete information in complex conditions can be subverted by the very short-cuts that are at the heart of the processing miracle: pattern recognition and what the authors call ‘emotional tagging’.

Pattern recognition is self-explanatory. The emotional tags attached to the patterns of experience recognised are equally crucial. Emotions actually lead the decision-making process, providing focus and impetus to action – ‘feed, fight, flee or any other of the f-words’ – without which the process would just be data-processing. The trouble is that both short-cuts can be tricked, and since they are unconscious we have no way of recognising when it has happened.

For instance, although the banks’ fall from grace happened after the book was finished, the authors speculate that Goodwin may have been misled by both his heart and his head. His previous experience, notes Whitehead, told him he could create value by buying sprawling rivals and aggressively taking out costs. This approach had worked well in the past, winning him a knighthood to boot.

Past success (which is a neglected hazard, the authors note) may well have reinforced his determination to push ahead, even as the economic storm clouds were gathering. But conditions differed in critical respects from those he was familiar with. In a credit crunch, the balance sheet – both RBS’s own and that of the target – was more important than cost-cutting potential. With the same information, rival Barclays backed off. Not only did RBS go ahead – it amplified the risk by offering cash.

Powerful prejudgments (for example, a strong belief that growth comes from high leverage and sweating capital) and emotional attachments (self-interest, ambition) all pushed in the same direction, apparently making Goodwin blind to the risks inherent in the worsening climate. With variations, similar stories could be told about Dick Fuld at Lehman Brothers, who also misinterpreted his experiences and missed opportunities to change course, and many other actors in the financial drama.

Which raises a second question: why didn’t they shift their views until too late? Reversing the usual emphasis, Think Again argues that, given the unconscious nature of decision-making, it’s all but impossible for individuals to eliminate mistakes through self-correction. Better, in their view, to develop awareness of ‘red-flag conditions’ that signal danger and establish external safeguards that can challenge distorted thinking.

Interestingly, it is errors that seem to be the key teachers here. Having spectacularly miscalculated over the Bay of Pigs, President Kennedy recognised the danger of getting the decision wrong in the Cuban missile crisis and counteracted it with a variety of measures – including setting up a ‘decision group’ to provide challenge and debate, chaired by his brother. Contrast this with Tony Blair’s inability to spot giant red flags over Iraq that were visible to most of the UK population. The invasion was apparently never even voted on in cabinet.

Safeguards at individual and corporate level have their limits. Some of us would argue that the credit crunch is the product of the biggest mistake of all – a lethal compounding of misleading experience, ideological prejudgment and turbocharged self-interest – in the shape of the financial sector’s unshakeable belief in market efficiency. What are the safeguards against such meta-mistakes? Keeping the red flags flying suddenly takes on an unexpected new significance.

The Observer, 25 January 2009

Darwin’s theory turned bosses into dinosaurs

THERE’S A case for saying that the credit crunch is all down to Charles Darwin.

Keynes wrote: ‘The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist.’

Now, technically Darwin is a defunct biologist rather than political philosopher or economist. But his interest in economics was keen, and equally keenly reciprocated. One perceptive interpreter of On the Origin of Species, 150 years old this year, saw it as ‘the application of economics to biology’. As the crowning expression of Victorian individualism, continental writers argue, the theory of natural selection, with its underlying theme of competition and struggle, could only have originated in the laissez-faire England of the period.

Bastardised and coarsened, the concept of ‘the survival of the fittest’ (a phrase only later adopted by Darwin from Herbert Spencer) has powerfully shaped modern business. The robber barons of the early 20th century quickly latched on to the self-serving idea that ‘might is right’ – cut-throat economic competition was the normal state of affairs and the rise to the top of the strongest was part of natural law and the inevitable outcome of history.

This mentality persists, especially in the US, and indeed the idea of the inevitability, and desirability, of individual struggle in weeding out the strong from the weak is what distinguishes Anglo-American from Rhine capitalism. It perfectly informs the ethos of the financial sector over the last two decades, as well as the rise of the Russian oligarchs and the development of the virulent Chinese strain of capitalist competition.

Darwinism endows such phenomena with a veneer of scientific rationale. Republican senators’ reluctance to intervene to prolong the lives of US banks, the chilling belief of City traders in their own superiority, as uncovered in interviews by The Guardian’s Polly Toynbee and David Walker, the self-justifying arguments in favour of stratospheric pay rises for chief executives and cutbacks for the less fortunate – all have uncomfortable echoes of the crude social Darwinism that influenced not only the robber barons but also the far greater 20th-century monsters, Hitler and Stalin.

Natural selection may be, as some argue, the most important idea in human history – the nearest thing to a ‘theory of everything’ to exist. Richard Dawkins, among others, has proposed a ‘universal Darwinism’ – a process of variation, selection and retention that applies to business, social and cultural phenomena as well as biology. In recent years, evolutionary versions of economics, psychology and ecology have all burgeoned.

But while such ideas are genuinely attractive and interesting, as the evolution of evolution ironically demonstrates, for practical purposes natural selection is a devious and treacherous taskmaster. If companies have no inevitable life cycle – some last for months, others for centuries – and don’t reproduce, how does the process work? Darwin himself, as cautious in his research as he was bold in his thesis, would no doubt be aghast at some of the wilder applications of his ideas. His version of evolution is blind: mutation is random, and outcomes are determined by functional improvement.

Companies, on the other hand, are intentional entities, able to strategise towards long-term purpose – taking one step back to take two steps forward in the future, for example. What’s more, no one studying management could possibly argue that ‘progress’ was historically inevitable: indeed, the reverse argument can be made, that bad management is driving out good. As Ricardo Semler, the iconoclastic head of the free-form Brazilian company Semco, observes, most corporate forms are colossally inefficient as well as environmentally disastrous – an evolutionary nightmare. In this situation, there’s no time left for painstaking improvement on an evolutionary scale: only disruptive innovation will do.

Meanwhile, the simplistic ‘might is right’ case has been blown apart by the force of events. However it originated, the credit crunch is the meteorite that is causing the mass extinction of what now can be seen as financial dinosaurs. Suddenly the once mighty are so no longer – in the new credit-starved world investment banks are extinct, by the end of the year most hedge funds will have gone out of business, and even Russian oligarchs are finding food hard to come by.

As Darwin cautioned: ‘It is not the strongest of the species that survive, nor the most intelligent, but the ones most responsive to change.’ Or in the words of Orgel’s second law (after Leslie Orgel, an eminent biochemist of the 1960s – history doesn’t record his first law), ‘Evolution is cleverer than you are.’

The Observer, 18 January 2009

It’s got so horrible that we ought to be revolting

IN RETROSPECT, one of the most remarkable things about the events of 2008 is that there weren’t any. In 1968 the streets of Paris and London rang with protests over the Vietnam war and class solidarity; in 1984 the miners went on strike for more than a year. By contrast, over the past year, banks, jobs and money in colossal quantities have disappeared with barely a murmur of dissent, let alone the explosions of outrage that you might expect.

This apparent fatalism is no doubt partly numbness in the face of figures that are truly incomprehensible. Where the liabilities of high-street banks are multiples of GDP, and a single hedge fund is responsible for write-offs that equal the UK’s defence budget, it’s hard to feel anything other than helpless.

More insidiously, it is also a measure of how completely the message of ‘One Market under God’ (to quote the title of an entertaining and telling polemic by Thomas Frank) has been internalised.

Yet outrage and contempt are sometimes in order, not least to ensure that we don’t get fooled again. Even now, some would argue that the crunch is the result of a bold experiment in financial innovation gone wrong – a mistake, certainly, but justifiable in the sense that, if it had come off, the resulting era of ultra-cheap money would have led to the prospect of capitalist prosperity without end.

Another view would be that the reasons why it nearly came off also meant that it couldn’t – the reliance on personal incentives untrammelled by any wider sense of responsibility left the system permanently teetering on the knife-edge where risk shades into outright fraud. As such, the disasters of 2008 are not an aberration but the culmination of a rewriting of the management project that now leaves many companies with a vacuum at their centre.

What’s been lost over the last three decades is only now becoming clear. Some of the warning signs were already visible in the succession of increasingly frequent panics and scandals of the last decade and a half – Enron, the dotcom boom, LTCM. Less obviously, the last 30 years have seen a steady erosion of balance between stakeholders. While layoffs of staff – ‘the most important asset’ – were once a last resort for employers, they are now the first option. Outsourcing is so prevalent that it needs no justification. And the company’s welfare role is now so attenuated that it barely exists. First to go was the notion of career; more recently, the tearing-up of company pension obligations is another unilateral recasting of the conditions of work – a historic step backwards – that has aroused barely a ripple of objection.

The justification for this behaviour is, of course, the pressure of the market. But this is to disguise a betrayal. As a class, ever since the separation of ownership and management in the 19th century, managers have always occupied a neutral position at the heart of the enterprise – neither labour nor capital, but charged with combining the two for the benefit of both the company and society itself.

Everything changed in the 1980s, however, with the advent of Reagan, Thatcher and Chicago School economists who preached the alignment of management with shareholders in the name of ‘efficiency’. In effect, ‘efficiency’ came to mean short-term earnings to the detriment of long-term organisation-building; what was touted as ‘wealth creation’ was actually ‘wealth capture’, from suppliers, clients and employees as well as competitors, on the grandest scale since the robber barons. Its purest expression was private equity.

Managers never looked back. As late as the 1980s, a multiple of 20 times the earnings of the average worker was perfectly adequate CEO pay. But under the compliant gaze of shareholders and remuneration committees, performance-pay contracts boosted the ratio to 275 times by 2007.

As we now know, ‘performance pay’ was a misnomer, an incentive for financial engineering that has destroyed value on a heroic scale. But it’s not just shareholder value that has suffered. By severing any common interest between top managers and the rest of the workforce, fake performance pay has fatally undermined the internal compact that makes organisations thrive in the long term.

Perhaps the most poignant emblem of this dereliction is the British pub. The pub is the archetypal small business – the simplest, most rooted organisation there is. Pubs have thrived for centuries. But they are now closing at a rate of around 30 a week. Part of this is due to changing social habits. But it is also the case, not to put too fine a point on it, that pubs have been rogered frontwards, backwards and sideways by financial whizzkids who piled them with complex debt and left them desperately underinvested – at the same time extracting exorbitant fees for the privilege. The death of the local is a fitting monument to a bankrupt management model. It’s time to get angry.

The Observer, 11 January 2009