The doctor’s dilemma

Last week (after a three-week wait) I went to see my GP, a senior member of a very busy and ethnically diverse central London practice. Knowing the surgery was scheduled to relocate soon to a big new redevelopment nearby, I asked when the move would happen.

Usually brisk and businesslike, this time she sighed. She looked knackered. ‘We don’t know,’ she said. The practice needs to move, she explained, because it is running out of space in the grand but inconvenient Georgian townhouse it currently inhabits, where in any case the lease runs out in eight years’ time.

The reasons it is running out of space are twofold and very much of our time. The first is that more and more medical care is moving out of hospitals and into the community, and extra space – not to mention GPs, nurses and support staff – is needed to accommodate it.

The second is that another local practice, unable to cope with the mounting pressures, recently closed down (which it is perfectly entitled to do, since GPs are private contractors to the NHS, not employees). Its patients have to re-register somewhere else, and there are financial penalties for a surgery that chooses not to take them on. In effect, our surgery is in a cleft stick: it suffers if it doesn’t take the new patients on, and if it does it needs more GPs and space.

The snag with moving, however, is that rents in the new development are sky high – and the current landlord will invoke a penalty clause if the centre vacates the premises early. To sum up: the cost of the move is £500,000, which the practice doesn’t have. So relocation, and with it the future of our harassed but functioning and proactive practice, is in limbo. In the meantime, the pressures grow: ‘This is why you have to wait a month to see me’. (My sister tells me that her practice, also in London, books no more than a month ahead, so if that month is full, you can’t make an appointment at all.)

‘Anyway, in eight years’ time it will be someone else’s problem,’ the doctor said. ‘I never thought I’d say this, because I love seeing patients! But I’m 54, and there’s no way I shall stay a day after I’m 60. I’m already doing a four-day week, because of the pressure’ (she uses the other day to catch up on paperwork and read the medical journals). ‘My colleagues of my age are all the same.’ The NHS is doing nothing to induce senior GPs to stay on, she says, and fewer and fewer students are choosing to go into general practice – not because it is unattractive in itself, but because of the pressures and diktats raining down from all sides that make it impossible to do the job with any kind of satisfaction.

This is the moment – with the NHS’s stresses and strains all over the front pages and its deficit predicted to rise to £30bn by 2020 – that David Cameron chooses to decree that by the same date all GP practices will open seven days a week. My doctor concedes that better out-of-hours access and possibly Sunday working are desirable, but their arbitrary imposition without discussion or any idea of the demand or resources needed fills her with despair. For many, like the closed neighbouring surgery, it will be the last straw.

In Out of the Crisis, W. Edwards Deming wrote: ‘If you have a stable system then there is no use to specify a goal. You will get whatever the system will deliver. A goal beyond the capability of the system will not be reached. If you have not a stable system, there is no point to setting a goal. There is no way to know what the system will produce: it has no capacity.’ Poor NHS. It’s not just that it has no capacity. Constantly tampered with by ministers and civil servants who have no idea how systems – or the people within them – work, it is being made almost daily more unstable, more difficult to manage. They are making matters worse.

It takes some doing, but we have somehow managed to contrive a situation in which UK public services, instead of pointing forward, simultaneously unite the worst features of Soviet-style central planning and unreconstructed market capitalism: on one side, detailed ministerial micro-management (dictating how services should be set up and run – wrong in principle as well as in the methods chosen, since it stifles desperately needed innovation and locks in today’s Kafkaesque inefficiencies); on the other, private outsourcers and IT consultants who profit from the whole arrangement and whose priority is not citizens but shareholders.

It is a system riddled with waste, inefficiency and conflict of interest bordering on corruption that harassed and dedicated professionals like my doctor manage to make work, more or less, in spite of itself. They have to fight the system to do the right thing. Given the constraints of these system conditions, the wonder is not that there are so many lapses and scandals, but that there are so few. ‘There may not be an NHS in eight years’ time,’ my doctor said quietly as I left.

The power of an idea

[The High Pay Centre is running a series of events looking at the political power of business in the UK. This is the text of my presentation at a lunch on 10 September 2014]

I want to say a few words about the power of ideas – even mistaken ones; indeed, especially mistaken ones. It’s only by understanding how they arose that we can demystify and debunk them.

I’m going to trace the story of why we’re here, why the High Pay Centre exists and why there’s still an unresolved problem with high pay back to 1970, and more precisely 13 September 1970. That’s the date, according to Dominic Barton, global managing director of McKinsey, that capitalism started veering off track, and it was the day that the New York Times published an essay by Milton Friedman called ‘The Social Responsibility of Business is to Increase its Profits’.

‘In a free-enterprise, private-property system,’ Friedman wrote, ‘a corporate executive is an employee of the owners of the business. He has direct responsibility to his employers. That responsibility is to conduct the business in accordance with their desires, which generally will be to make as much money as possible while conforming to the basic rules of the society.’

As it happens, this is actually wrong in almost every particular. In law, managers aren’t employees of shareholders, who don’t own the business. Firms are separate legal entities that own themselves; it is they that employ directors and executives, and to them that the latter owe their fiduciary duty.

It’s hard to credit, but at the time the idea that the purpose of the firm was to maximise returns to shareholders was novel, even revolutionary. Yet by the turn of the century it had the status of ‘holy writ’, more religion than science, as one writer put it. The purpose of the corporation, it was claimed, had been settled once and for all, and it was only a matter of time before the rest of the world fell in line with the US.

It is a truly remarkable story, even more so since how it happened has nothing to do with whether it was right or not, and all to do with institutional ambition, opportunism and unintended consequences. As Rakesh Khurana wrote in his wonderful book on US business schools, From Higher Aims to Hired Hands, ‘The development of economic institutions… is not simply a function of their efficiency; rather it often results from the outcome of contests in the legal, political, social, and cultural realms’. It’s about the play of different interests, including those of class. It’s political, not technocratic.

Six years after Friedman, another article, another milestone on the road to hegemony, this one called ‘Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure’, by Michael Jensen and William Meckling, published in 1976 in the Journal of Financial Economics. It’s full of graphs and equations which make it a tough read for non-specialists, but it is the single most cited article in the business literature.

In fact, the graphs and equations are part of the point. Management as a discipline was at that time desperate to establish academic and scientific legitimacy, and in that context the idea of optimising the firm around a single measurable point, shareholder value, was a heaven-sent opportunity for academics to do just that. Of course, what had to be left out of all this was anything to do with the human side of business, notably ethics and intentionality (not to mention things like luck and power), which can’t be mathematically modelled; so what the theory gained in ‘scientific validity’ it lost in common sense. But that’s a whole other, if fascinating, byway of the story.

Jensen and Meckling’s main assertion was that the fundamental problem in corporations was ensuring that self-interested managers focused on maximising value for shareholders rather than attending to their own concerns. It triggered a wave of scholarly theorizing which soon came to dominate the business-school research agenda in the US and UK.

Unlike most management theories, however, shareholder primacy had obvious appeal to other important constituencies as well. Not too surprisingly, the corporate raiders then on the prowl (they’d now be termed ‘activists’) loved the idea, because it seemed to justify their restructuring activities, which they accordingly redoubled. Institutional investors approved too, and so did the most powerful constituency, company CEOs, who soon discovered, and readily acquiesced to, the warming material benefits of having their interests aligned with those of shareholders by tying their pay to the performance of the share price.

The interlocking pieces were then fixed in place by governments and regulators as they proceeded to reshape governance and company law to give shareholders more influence over company boards and make managers more attentive to the share price.

In this way, an ideologically-based programme, purely abstract and with no empirical backing, has wormed its way into every crevice of management, to the point where it, and its assumptions, are not only unchallenged but have become invisible to the naked eye. Even now it’s rare to pass a week without reading in the FT or hearing on the BBC – for example during the Pfizer-AstraZeneca merger talks – someone starting a response, ‘of course shareholders own companies, so it comes down to them in the end.’

(I’m told that in the last review of UK company law, one or two bolder members of the review panel were firmly told that they could come up with any organising principle so long as it was shareholder value. The answer came out as ‘enlightened shareholder value’, a typically British compromise, which still leaves the UK as the most shareholder-friendly jurisdiction in the world.)

Ironically, the review came out in 2006, just when the negative consequences of the four-decade-long practical experiment with shareholder value were beginning to emerge, and just two years before all the worst fears in that regard were confirmed by the financial crash.

It’s now clear that shareholder primacy doesn’t work even in its own terms.

Shareholders are suffering their worst returns since the Great Depression, and Roger Martin has shown that over the whole period since 1970 they have done worse than they did in the post-war years when their interests weren’t put first. The regime doesn’t seem to do companies much good either. The longevity of publicly-quoted companies has tumbled, and their number is dwindling fast. Astonishingly, there are now 50 per cent fewer British and US listed companies than there were 15 years ago.

One particular group has consistently benefited from the shareholder-primacy regime, however: short-term shareholders – the activists (hedge funds) – together with what Thomas Piketty calls the ‘supermanagers’, the corporate elite who since the 1970s (note the date) have come to constitute the largest part of the 1 per cent.

The mechanism that put them there, of course, was shareholder value and agency theory. That was what triggered the ‘revolution in management pay’ that we heard Andrew Smithers describe here a few months ago. Crudely, the revolution consisted in paying them in shares and options to make them think like shareholders, a wheeze that was instantly successful. Shares and options now make up 83 per cent of total top management pay in the US and somewhat less here.

Paying executives like this changed their behaviour, exactly as it was supposed to do. Instead of ‘retaining and reinvesting’ corporate profits, in William Lazonick’s phrase, benefiting all stakeholders, they started to use them primarily to ‘distribute and downsize’, prioritising shareholders. That did indeed push share prices up (and thus their own rewards), but at the cost of R&D and capital investment – with the consequences for corporate health and mortality that I’ve mentioned.

It’s the link with shareholder value, driven by the idea that shareholders own and control companies, that is the hidden mechanism that continues to pull executive pay upward while keeping the pay of everyone else down, irrespective of the effects on the rest of the economy. This is why Smithers said that dismantling the bonus culture that governs managers’ investment decisions is the single most important task facing economic and social policymakers in the world today.

I think he’s right, and that’s why we’re here today. Empirically and intellectually unjustifiable pay is the superstructure; it is based on shareholder value, which in turn rests on the deeply buried foundation of shareholder ownership. We’ve spent four decades vainly trying to make ownership work. It’s time to recognise that it doesn’t – and that the alternative, moreover, is staring us in the face.

The beauty of the corporation is precisely that it isn’t owned, and it’s that which allows it to make the long-term commitments to all its stakeholders that Colin Mayer talks about in his book. Shareholder ownership, the concept launched on the world by Friedman in 1970, is where the demolition work on executive bonuses and shareholder value has to start if serious change is to take place.

In a celebrated passage on the power of ideas, Keynes wrote:

‘The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back… Soon or late, it is ideas, not vested interests, which are dangerous for good or evil.’

I suspect that he’d have written ‘ideas and vested interests’ if he were writing today. He added:

‘The real difficulty in changing any enterprise lies not in developing new ideas, but in escaping from the old ones.’

The sharing economy: not all it’s cracked up to be?

It has a reassuring name, and at first sight the emerging ‘sharing’ or peer-to-peer economy has plenty going for it. Greens as well as mainstream economists are in favour of, as it were, sweating under-used personal or even corporate assets rather than scrapping and churning out ever more of them. And consumers have taken to cheaper alternatives to regular hotels or taxis as enthusiastically as they have embraced no-frills air travel. You can now rent almost anything, from a dog to Jimmy Choos, a party frock or someone else’s driveway to park in.

Yet it’s wise to take a hard look at the bargain that’s being signed up to. A more significant portent of things to come may be the spat between German (and other) taxi drivers and Uber, the San Francisco taxi service in all but name that is spreading like a rash all over the globe. Taxi drivers aren’t the most popular group in the world, and it would be easy to take their protests as the doomed last throw of old-fashioned regulation and restrictive practice in the face of the individualised internet economy. Indeed, in a contest between the regulated Hackney carriage and a crowd of digitally-enabled freelancers in Mercs there can only be one winner.

But what’s actually going on here? As with so much on the internet, appearances are deceptive. The truth is that the internet doesn’t so much create jobs as eat them, chewing up full-time regular employment, swallowing much of the sustenance and spitting out what’s left as part-time, freelance mini-jobs. Thus, while a tiny number of founders and full-time staff at Uber or Airbnb will become millionaires (or more) when their companies float, the average San Francisco host rents out a room 58 times a year for a total of $9,300, while a car owner nets $250 a month from RelayRides.

Don’t get me wrong. This is not nothing, and the technology that enables it, and many other forms of cooperation, is stupendously potent. But let’s not kid ourselves – these are not jobs. Hosts and renters aren’t employees: they foot their own costs and insurance – and then compete with each other and with full-time employees of conventional companies to keep prices down. For those at the centre who hoover up the value created, this is a business model of stunning brilliance. For those delivering the actual service, not so much. It provides a hobby and pin money, not financial or emotional security. In this light Linux and Wikipedia, icons of the sharing economy, take on a rather less glowing significance, more important as harbingers of a coming no-wage economy than as the marvels of human generosity and cooperation that they undoubtedly are. For a less friendly example of where this leads, consider Amazon’s Mechanical Turk, an internet marketplace (or ‘digital sweatshop’, take your pick) where Workers, or Turkers, can volunteer for menial tasks that computers can’t yet complete, for payment of around a dollar an hour.

In fact, the freelancification of the economy is already well under way. Much of the UK employment growth that the coalition boasts of is accounted for by freelancers earning much less than a full-time wage – and less than they used to. For many, freelancing is not a choice: there’s no point in waiting for traditional companies because they don’t do job creation any more, and any capital investment they carry out is more likely to cull jobs than generate new ones. Larry Summers, the former US Treasury Secretary, fears the west faces ‘secular stagnation’ (ie structural, not cyclical); the McKinsey Global Institute reckons that 140m knowledge workers could be displaced by smart machines (including, incidentally, those eager Uber drivers if, as one would wager, Google’s driverless cars come to pass). Wage inequality, as the World Economic Forum and even ratings agencies like Standard and Poor’s are beginning to fret, will continue to increase, leading to political and social stresses and protests, further dampening growth – if not far worse.

The rise of the misnamed sharing economy, in other words, is just one more indication that the free-market new world order as summed up in the ‘Washington consensus’, which once appeared the embodiment of the capitalist end of history, is now cracking up. Its monolithic certainties now resemble nothing so much as those of the Soviet communist regime – frozen, lumbering, out of synch and time – before it fell apart in 1989.

The paradox, of course, is that, as the burgeoning sharing economy demonstrates, all the technology to enable dispersed, local, more democratic ways of working already exists. 3-D printing and other new techniques mean that physical production is likely to go the same way as services. For most intents and purposes the tyranny of economies of scale, an essential element of the old new world order, is lifted – unequivocally a good thing. What’s lacking is the institutional imagination that would enlist machine intelligence to amplify and reinvent work rather than kill it, as now. This is a political as much as an economic issue; if a quarter of the energy that now goes into job-eating Silicon-Valley start-ups were channelled into it, we would be well on the way. As Aditya Chakrabortty wrote in a recent article on why cleaners in New York earn three times more than their counterparts in London (it’s here – read it): ‘One thing ties together good jobs and crap jobs: both are produced not only by economics but by politics too’. Unless that effort is made, the sharing economy will come to look like just another example of Orwell-speak: sharing for the 99 per cent, but not for the 1 per cent playing a different game called winner-takes-all.

Rotherham: management failure multiplied 1400 times

Appalling as the Rotherham sexual abuse revelations have been, their awfulness and the immediate desire for heads to roll shouldn’t blind us to a scandal within a scandal. Sexual exploitation of children evokes a special kind of horror, but for purposes of prevention it is no different to domestic violence or any other kind of serious neglect. Like them it is preventable. The most important thing about Rotherham (or Rochdale, or Blackburn, or Oxford) is not the ethnicity and culture of the culprits, it is that it is in a direct line from Victoria Climbié, who was tortured to death in 2000, and Baby Peter Connolly in 2007 – a graphic illustration of the same management failures, multiplied 1,400 times.

At their heart is a Fordist concept of public services driven by fear, risk aversion and obsession with cost, all of which magnify the factors they are trying to control.

All post mortems of public service shortcomings find the same collective failure of understanding that serves to amplify demand rather than reduce it. Users are treated as separate incidents or episodes, leading to repeated assessments, referrals and opening and closing of cases without ever solving them. Families can be on a council’s books for years at direct and indirect costs of hundreds of thousands of pounds and end up no more stable, sometimes less so, than before. Because agencies work in separate silos, no one joins the dots, red lights are missed and opportunities for intervention are passed up.

For all social workers’ groaning caseloads, few of the cases they see are actually new. Almost everything that comes across their desk is a manifestation of repeat or ‘failure’ demand (demand caused by a failure to do something or do something right the first time) – and it typically emanates from a small number of families, some of whose dysfunctions are registered by separate agencies but none of which is ever seen in the round. Whatever the form the demand takes – mental or physical health problems, violence, truancy, drug and other abuse, antisocial behaviour, crime – that or other aspects of the chaotic family life of which it is a symptom will be known to one agency or another. In exploratory work on organised crime by Greater Manchester Police, officers were astonished to find that gang members, far from being undetectable masters of crime, were well known to other agencies, if not the police, and exhibited many of the characteristics of other dysfunctional families. Organised crime, said one police officer working on the project, was just one more symptom of out-of-control lives and of incoherent responses to them by the community and public services. Sexual abuse is part of the same syndrome: as it now emerges, spouses and children of the abusers will likely have come to the notice of the police or council services – and have a high probability of suffering similar horrors if they aren’t rescued in time.

Why do these things keep happening? One problem, says Joanne Gibson, a senior consultant at consultancy Vanguard, which works across many public services including child protection, is the kneejerk reaction of politicians, media, policymakers and policy implementers to treat it as a people issue: the assumption being that risk can be reduced by controlling and monitoring what frontline social workers are doing. As a result the work is driven by process and bureaucracy designed to meet the hierarchy’s need to be accountable, not the needs of children or families.

‘Time and again when we study these systems end to end there is a catalogue of system, not people, failures,’ says Gibson. Social workers’ attempts to engage are hampered by an inflexible, form-driven process that prevents them from taking the time to understand the family context and history. Because the work is fragmented, no one gets underneath the surface narrative the family has invented to get into (or out of) and navigate the system – ‘no one really knows the child or family’.

This is compounded by a regime of thresholds (rationing) born of perceived financial pressures that has the perverse effect of keeping vulnerable people out of the system until abuse has happened, making rescue or a return to stability that much harder. The result is a game of pass-the-parcel with people who are already bouncing around the system, leading to the perception that demand is rising. But underlying demand is stable; what is going up is demand created by the system itself. As usual, attempts to manage by cost rather than purpose just push costs up.
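As a purely illustrative aside (not drawn from Vanguard’s work: the numbers and the ‘resolution rates’ below are invented), a toy simulation makes the distinction concrete. Hold underlying demand constant and vary only how often a problem is actually resolved when it first presents, and the volume of contacts the system records changes dramatically.

```python
import random

def simulate(months=36, new_problems_per_month=10, p_resolve=0.2, seed=1):
    """Toy model: 'underlying demand' is the fixed number of genuinely new
    problems arising each month; 'recorded demand' is every contact the system
    logs, including re-presentations of problems it failed to resolve earlier."""
    random.seed(seed)
    open_problems = 0                                # unresolved problems still bouncing around
    for month in range(1, months + 1):
        open_problems += new_problems_per_month      # genuinely new work (constant)
        contacts = open_problems                     # every open problem re-presents this month
        resolved = sum(random.random() < p_resolve for _ in range(open_problems))
        open_problems -= resolved
        if month % 12 == 0:
            print(f"month {month:2d}: underlying demand {new_problems_per_month}, "
                  f"recorded contacts {contacts}")

print("low resolution rate (fragmented, threshold-driven system):")
simulate(p_resolve=0.2)
print("higher resolution rate (early, in-the-round intervention):")
simulate(p_resolve=0.7)
```

With the low resolution rate the recorded workload settles at several times the underlying demand; with the higher one it stays close to it. The extra ‘demand’ is manufactured by the system’s own failure to resolve things first time.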

By the time that families are officially ‘troubled’ it may, brutally, be too late to get them back; containment may be all that’s realistically left. As pioneering councils like Portsmouth, Stoke and Bromsgrove are discovering, the earliest possible intervention is essential to get lives back on track while it’s still doable – and that principally involves spending time to understand families in the round, in their context, not the producer’s. On the basis of work with an admittedly small number of needy families, Stoke is finding that a ‘rebalance me’ approach of prevention and understanding need in context reduces levels of dependency (and thus demand) across a spectrum of services. This isn’t a cost, it’s a human investment – and it has the side-effect of potentially saving millions.

Could this catch on? In 2011, Eileen Munro, a respected social work academic, published a report on child protection that laid bare many of the faults of the defensive, rule-bound current regime and recommended a shift away from targets and statutory guidance towards a child-centred approach that emphasised learning and local innovation. Frustratingly, says Gibson, although there is lip-service to the report at the centre, little has so far fed through to the front line where managers paralysed by fear of recrimination are even more reluctant to trust professional judgment, particularly when austerity measures are slashing staff levels and ramping up caseloads.

As in all public services, the issue is not one of resources but system design. ‘If there is to be accountability and something held to account, then it should be the system and its current design,’ says Gibson. ‘Blaming people publicly will just reinforce the risk aversion that inhibits people on the front line from doing the right thing. The result is a whole load of new bureaucracy and inspection that effectively locks down the service and by forcing skilled caring people to comply with it increases the risk of yet more young people losing their lives’.

How outsourcing backfires

It’s hard to conceive of it now, but in the late 1960s and 1970s Whitehall and the public sector knew as much about IT as the private sector did – and in some areas more.

Then began a process which would later come to be popularised as outsourcing.

Roll forward 40 years, and here is John Manzoni, head of the government’s Major Projects Authority, describing the result. The wave of outsourcing that gathered pace in the 1990s, said the head of a body that monitors £500bn of public projects, has left Whitehall bereft of the ‘critical skills’ needed to procure and manage such projects. Execution and delivery were not ‘well-developed muscles’ in Whitehall, said Manzoni, who lamented that as a result of the outsourcing reflex, the government had lost crucial expertise in IT and technology and now lagged ‘five to eight years’ behind industry in these fields.

When outsourcing began to take off in the 1980s, it was sold as a simple win-win transaction. By focusing on what they did best everyone would benefit, as would the economy as a whole as the use of resources was optimised. ‘Outsource everything except your soul!’ exhorted the excitable Tom Peters.

From the outset, it was clear that it wasn’t quite as simple as that. For a start, outsourcers had to be not just a bit but massively more efficient for the arrangement to provide for both their profit margin and cost savings for the customer. Too often, contract terms gave no incentive for providers to improve. In the longer term, both the value of what was being given away and the hidden cost for the customer of not having an important process under its control turned out to be higher than expected.

Take the electronics industry. The Faustian nature of the outsourcing bargain was graphically revealed when Asian component makers eventually started eating not only the lunch but the entire body of western computer firms, which discovered too late that in outsourcing ever larger chunks of manufacturing value they had inadvertently given away their soul too. Or aerospace, where, obeying Wall Street’s strictures to minimise its asset base, Boeing outsourced so much of the manufacture of the Dreamliner, the 787, that the complexity of its supply chain outstripped its ability to manage it, causing delays to the launch and cancelled orders.

Or, to bring the story up to date, the UK public sector, whose travails, illustrating all the pitfalls described above, were the subject of Manzoni’s strictures. The e-borders fiasco, where the taxpayer has been landed with a £224m bill in damages and costs awarded to US defence firm Raytheon after a contract to upgrade UK border controls was improperly cancelled, is a timely reminder of the knock-on perils of dismal contract management. Calling the award a ‘catastrophic result’, Keith Vaz, chairman of the Commons home affairs committee, said in so many words that the UKBA didn’t have a clue what it wanted from the project.

The economist Joan Robinson once remarked that the point of learning economics wasn’t to acquire ready-made answers to economic questions, but to avoid being bamboozled by economists who put forward such things. The same is true for technology.

Consider the Department for Work and Pensions’ universal credit scheme, another prominent item on the MPA’s watch list. Curiously, UC doesn’t have a ‘traffic-light’ rating (green, amber-green, amber-red, red) in this year’s assessment. The reason given is that it is a ‘reset’. On examination, a reset turns out to be a variant on a dodge commonly used by service organisations to meet arbitrary targets and service-level obligations: closing a case which can’t be resolved within (say) the target time and then re-opening it as a new one, thus winding the clock back to zero. Opening and closing cases is a common ploy of IT help-desks and other outsourced services where the contractor is paid by volume. No prizes for guessing from which quarter the reset idea is most likely to have come, nor for thinking that those who dreamed it up will not have described it to government as what it is: a fiddle, a scam, a cheat.
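For what it’s worth, the mechanics of the dodge are easy to sketch. The fragment below is a hypothetical illustration (the 30-day target, the case record and the function names are all invented): a service-level report that measures only the age of the current case record stops seeing a breach the moment a stalled case is closed and re-opened as a ‘new’ one, even though the citizen’s real wait continues.

```python
from datetime import datetime, timedelta
from dataclasses import dataclass
from typing import Optional

TARGET = timedelta(days=30)                 # hypothetical service-level target

@dataclass
class Case:
    opened: datetime                        # when this case record was opened
    closed: Optional[datetime] = None

def within_target(case: Case, now: datetime) -> bool:
    """What the SLA report sees: the age of the current case record only."""
    return (now - case.opened) <= TARGET

now = datetime(2014, 9, 1)
original = Case(opened=now - timedelta(days=45))    # the work is really 45 days old
print(within_target(original, now))                 # False: a breach, on paper

# The 'reset': close the stalled case and immediately open a fresh one.
original.closed = now
reset = Case(opened=now)
print(within_target(reset, now))                    # True: the clock is wound back to zero
# Nothing has been resolved; only the measure has improved.
```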

Nor is it likely that consultancies whose livelihood depends on selling copious amounts of IT will suggest to government departments contemplating large-scale change that IT is the last, not the first place to start. IT is often glibly called an ‘enabler’ of change; but if done first, it is the reverse, locking in a design of work (and basis of payment) that are impossible to change subsequently except at enormous cost. Although less in the public eye than the central government projects, this is a growing issue all over the public sector. In local government, a number of shared-service and other outsourcing deals have descended into reciprocal recrimination when the promised benefits failed (as they do) to materialise because the starting design was wrong. In some cases councils have found themselves stuck with deals which they have learned the hard way to be disastrous, because they can’t afford to cancel them.

The prudent way of buying IT (and avoiding more cases like NPfIT, the aborted NHS computerisation programme, where large damages costs threaten to swell the £10bn already sunk in the failed scheme) is to put it last, not first. Computers are the servant, not the master, of change, which begins at the other end: redesigning a service to put humans up front where, unlike computers, they can deal with the variety that human demand comes in. Such redesigns sometimes result in IT having to be removed as a constraint on doing things better; they are always less IT-intensive (and less expensive) than before.

The moral of the story is that in order to outsource a process or service effectively, you need to know how to do it yourself, in every important particular. In which case, of course, you may often conclude that you’ll be doing yourself a financial and strategic favour by doing just that.

The greying of business

‘Like the population, the business sector of the US economy is ageing,’ says a research paper from the Brookings Institution, in an arresting phrase. It reports that firms aged 16 or older now account for 34 per cent of all US economic activity – up 50 per cent in 20 years. The share of all younger firms is correspondingly shrinking. As with jobs, housing and income for individuals, the business advantage is with the old and incumbent. With fewer startups (‘especially disturbing’), entrepreneurs are struggling across all sectors, according to the authors, with potentially unwelcome implications for productivity, innovation (where new firms have accounted for a disproportionate share of disruptive new product categories in the past) and employment.

The Brookings findings chime with other disquieting indications of sclerosis, not to mention mortality. The average life expectancy of firms is falling sharply, according to other research. This means that fewer are getting through the perilous pipeline of youth to become old and established. When they do, fading competitive vim means they have a greater chance of becoming entrenched and obese. If business is getting old and fat, ‘it appears to be getting fat because it is getting old – not the other way around,’ confirms the report. While the Brookings authors couldn’t find a direct link between ageing and consolidation, they did note that consolidation had increased over recent decades. In short, business is not only old and fat, it is also becoming more monopolistic.

Perhaps most striking of all, it is the publicly-quoted company, the central economic actor in the west, which is in steepest decline. In the US and UK, the most stock-market-oriented economies, listed corporations died off like flies during the noughties. The universe of quoted US companies, at a paltry 9,500, is a whopping 50 per cent down from the all-time high in 1998. Although less marked in Asia, the fall is happening worldwide. As the costs of being public go up (regulation, activism, scrutiny) and the value goes down (companies nowadays rarely need outside investment), companies have been going private, not going public, or going bust, in droves. Mergers too have played a part. The quoted company, the engine of capitalism for the last 150 years, is beginning to look like an endangered species.

It wasn’t supposed to happen like this. Ironically, a large part of the collapse of the corporation can be put down to the triumph of the cult of MSV, maximising shareholder value. In June, Harvard Business Review ran a special three-article ‘Spotlight’ section asking, ‘Are Investors Bad for Business?’ Two of the pieces, and a third indirectly, answered ‘Yes’. Basically, Wall Street (and especially hedge funds and other ‘activists’) demands a high return on assets. One way to improve the ratio is to grow revenue and profits (the numerator) organically – but that’s hard and often slow work. Easier and quicker to slash assets (the denominator) by dematerialising, like Nike: outsourcing everything that moves while restricting investment to strictly efficiency-creating measures. Share buybacks, now being implemented in staggering quantities on both sides of the Atlantic, help increase leverage and force up total shareholder returns by the same mathematical tactic. The result is a weird kind of corporate anorexia: behind the apparent obesity, there’s nothing there. Corporations are auto-digesting. Under the impact of their perverse incentives, CEOs and short-term shareholders reap fabulous rewards while innovation and job-creation rates are slumping. Meanwhile, the bulk of retail shareholders, and those reliant on companies for their retirement, are much worse off – the rates of return on assets and invested capital for US companies in 2011 were just one-quarter of what they were in 1965.
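To spell out the arithmetic being described (with invented, back-of-envelope figures): return on assets is profit divided by assets, and earnings per share is profit divided by shares in issue, so shrinking the denominator – by shedding assets or buying back shares – flatters both ratios without any improvement in the underlying business.

```python
# Back-of-envelope illustration of the ratio games described above.
# All figures are invented.

profit = 100.0      # operating profit, $m
assets = 1000.0     # asset base, $m
shares = 500.0      # shares outstanding, millions

roa = profit / assets                    # 10.0% return on assets
eps = profit / shares                    # $0.20 earnings per share

# The hard way: grow the numerator organically.
roa_organic = (profit * 1.10) / assets                 # 11.0%

# The quick way: 'dematerialise', outsourcing a fifth of the asset base away.
roa_slimmed = profit / (assets * 0.80)                 # 12.5%, with no extra profit at all

# Buy back 10% of the shares (often with borrowed money): fewer shares,
# higher EPS and higher leverage, again with the business itself unchanged.
eps_after_buyback = profit / (shares * 0.90)           # ~$0.222, up about 11%

print(f"ROA: base {roa:.1%}, organic growth {roa_organic:.1%}, asset-slimming {roa_slimmed:.1%}")
print(f"EPS: base ${eps:.3f}, after buyback ${eps_after_buyback:.3f}")
```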

In this perspective, the monopolistic, or at least oligopolistic, tendencies at work in so many industries today – banks, retail, oil, automobiles, energy, phones, utilities, internet, to name a few – should be viewed alongside the Brookings findings as further symptoms of a damaged, unbalanced business ecology whose sustainability is now in serious question: in other words, evidence of weakness rather than strength. In one compensating domain, however, giant old companies have undeniably increased their power: politics. With so much vested interest at stake, for monopolists the incentives, and returns, to political lobbying are sky high – which explains why CEOs think it’s more valuable to spend time schmoozing with government officials than selling to customers. Large single-industry interlocutors suit governments too; but as the FT’s Tim Harford points out, such a cosy relationship is conducive neither to a healthy democracy nor to a vibrant economy. A government in cahoots with Google and Facebook on surveillance doesn’t bear thinking about (although for the sake of prudence we should). From the point of view of the health of the economy as a whole, giving massive injections of cash to prolong the existence of wheezy overweight companies like the banks and General Motors now looks even more misguided than it did at the time.

After the south of England was hit by the hurricane of autumn 1987, the felling of 700 mostly mature trees in the famous collection at Kew Gardens was viewed as an irretrievable catastrophe. It has proved the reverse. The knowledge gained in the storm’s aftermath, say keepers at the arboretum, has revolutionised the science of tree planting and conservation, led to renewed plantations and given a vigorous second lease of life to some of the park’s most venerable growths. There’s an obvious lesson there for those whose job is tending the health of business, too.

Everything you know about management is wrong

Sometimes an overnight revelation takes 20 years.

When I started writing a weekly management column for The Observer in 1993, I didn’t have an overall ‘theory’ about management. I knew it was important, that it was about people, and I sensed it was more craft than science. I knew it was about more than shareholder value, and suspected that, as in other fields, short cuts would turn out to be the reverse: organic growth was likely to take us to a better place than growth by acquisition and financial engineering. I wondered if I would run out of subjects.

There were also a number of puzzles, often to do with the glaring gap between reality and the rhetoric. Big companies were clearly essential to developed economies, but despite the comforting prose of their annual reports, why did they have to be such dispiriting places to work? Could tobacco companies and hamburger chains really achieve model citizenship through programmes of corporate social responsibility while their products were indirectly imposing huge costs on the rest of society? What about giant retailers whose ‘efficiencies’ (ie low prices) came at the expense of suppliers (sometimes whole industries) and low-paid employees?

More generally, if managers were the hard-nosed pragmatists and management the empirically-based discipline that convention says they are, why were they doing so many things that were at best ineffective and at worst harmful, even in their own terms? Research repeatedly said that acquisitions mostly destroyed value, but that didn’t stop M&A hitting record levels year by year. Companies that aggressively pursued shareholder value didn’t seem to do better than those that had a purpose other than maximising investor returns, at least in the long term. Intrinsic motivation was much praised when it applied to nurses and carers, whose sense of vocation was used to keep wages at subsistence level; so how come CEOs needed extravagant extrinsic incentives to persuade them to deliver a good day’s work? Especially since no one could find a link between CEO pay and company performance (not, it should be said, for lack of trying). Companies have spent most of the last two decades putting their supposed ‘greatest asset’ out of work, and the financial crisis and its aftermath revealed for all to see just what the financial sector really thought of the customer who is meant to be king. A major disappointment was the New Labour government that came to power in 1997, which instead of making the UK a role model for enlightened public-sector management simply grafted on to public services the industrialised practices that were turning customers off in droves in the private sector.

By the mid-Noughties it was hard not to believe that there was as much wrong with present-day management as right. Writing a weekly column was an extraordinarily compressed education. On the one hand the work brought contact with leading management thinkers and experimenters, and on the other with readers who brought the fads and theories back to a touchstone: never mind the PR, here’s what it was really like to be managed in modern Britain. It was this combination that counselled caution in the face of the triumphalism that accompanied the fall of communism and later what was optimistically billed as ‘the Great Moderation’. Scepticism was of course vindicated in spades by the spectacular implosion of the financial system in 2008: we were right, management didn’t do what it said on the tin, and now, knitting together what I had sensed before, I thought I could see why, although I struggled to express it.

But although I had most of the pieces, the final epiphany only came later. It arrived in three parts. One was at a conference in Brussels last February, put on by an enterprising Czech-based NGO, the Frank Bold Society, on The Purpose of the Corporation. The briefing included a memo which set out the legal position in black and white: across jurisdictions, as a matter of law, shareholders don’t own the corporation, and directors’ fiduciary duty is to the company with which they have a contract. So in brief, shareholder capitalism, and the whole theory of corporate governance that has evolved to sustain it, including the assumptions about human nature and behaviour that it is supposed to control, is based on a myth.

The second ‘aha’ moment was at a Vanguard conference on health, some of the profound findings of which I wrote about here. One of them was that the thinking that would make the difference between a manageable and unmanageable NHS was not inherently difficult: it was just different. So different, in fact, that the existing management worldview couldn’t be modified to incorporate it – change could only come if that worldview was replaced. That helped to explain why initial resistance to the ideas was so strong.

The third element was an invitation to a workshop put on by the alumni of the Open University’s Systems Thinking in Practice course. The aim of the event was to give support and sustenance to systems thinkers who, for the reasons outlined above, could easily find themselves isolated and discouraged at work. I had expected to be interested and stimulated by the occasion, but it turned out to be rather more than that. Slightly unwillingly I found myself participating in an exercise designed to draw the lessons from a situation where systems thinking had helped in the past and consider how to apply them again in the future.

Bingo! Suddenly, reflecting on my trajectory, I could see what had been staring me in the face all along. It’s a system, stupid. The management apparatus that has been developed in business schools and university economics and finance departments to further shareholder value and control is all of a piece, from governance, through the measures and techniques used, right down to performance management on the shop or call-centre floor. If the organising principle of shareholder primacy can’t be justified, it’s not an accident that so much of management designed around it is ‘wrong’ – the surprise would be if any of it were right.

Here’s the reason why management gets inexorably more complicated and regulation more intrusive, as ever tighter rules are drafted to deal with its increasingly harmful side-effects, whether for individuals, society or the planet. Management has become its opposite. It doesn’t solve problems, it creates them. And the companies it animates are a Frankenstein’s monster.

Everything you know about management is wrong. Literally.