People conceptualise strategy in different ways – value chains, five forces, Bowman clocks, parenting matrices, business model canvas – that sort of thing.
For what it’s worth, I tend to visualise a strategic position as a kind of network of conditions, inter-linked in different, complex ways, and constantly evolving and mutating. Some of those conditions are the activities, culture, ideas, policies and resources of the enterprise and its stakeholders. Others are determined by the immediate consumer and competitive environments. Others still comprise exogenous factors, sometimes lumped together as the ‘macro-environment’. We might refer to those conditions which are intended or designed as ‘architecture’, and the others as ‘situational’, although this can lead us astray. In any case, the linkages between conditions, in and of themselves, also represent strategic conditions, hinting at the layers of complexity that we’re potentially dealing with.
Other enterprises have their own webs of strategic conditions, and these topologies are constantly jostling against and cross-interfering with each other. Competition, of one kind or another.
A strategy, then, can be viewed as a premeditated adjustment to the configuration of strategic conditions (which is why the idea that a strategy comes with some pre-determined timescale, the ‘five year plan’, is wrong-headed).
Because strategic effectiveness is closely correlated with the coherence and mutual alignment of architectural and situational conditions, it will normally be possible to aggregate them (and their inter-linkages) into ‘meta-conditions’, what Michael Porter refers to as ‘higher-order strategic themes’. As long as we don’t oversimplify things in the process, this can be the key to bringing the complexity of strategy into the realm of the manageable. Not oversimplifying can be very hard.
We do this all the time in other contexts. Once we’ve understood and built the highly complex hardware and firmware configurations that make up a computer, we can program at the much more manageable and ‘higher’ level of software. This is in a sense an easier situation to cope with than business strategy, because the hardware and firmware platform doesn’t generally mutate spontaneously. All the same, if we make a functional change to the hardware platform, we’ll have to think long and hard about the potential knock-on effects at other layers of the system design.
A strategic ecosystem is much more organic than a Turing machine. All the same, we are constantly looking for workable means to unify and simplify the analysis without losing critical insight. Perhaps a better analogy might be the way in which virologists can work with the behaviour of viruses at the cellular and aggregate levels, given a knowledge of their micro-biology, but have to be constantly alive to the potential for mutation. And an important and relevant insight here is that, although mutation is essentially a random event, that is not the same as saying that all outcomes are unpredictable. It is unlikely (but not impossible) for example that Ebola could become airborne by mutation, since this would involve a whole host of coordinated adaptations. For a bacterium to become antibiotic-resistant is evidently much simpler.
Okay so far?
Let’s go back to those exogenous conditions. There are a lot of them. So again, various tools and models try to make the big wide world somehow accessible to analysis. The most basic, which most Suits have at least a passing familiarity with, is the PEST (or STEP, or PESTLE, or PESTEL…) classification scheme. In and of itself, of course, this isn’t so much a model as a list of stuff with subtitles. It’s a bit like the basic ‘SWOT’ matrix – which isn’t really a matrix at all, but a list with four subtitles laid out in a box shape. In order to derive useful insight we have to crash things together. So a SWOT matrix starts to become useful when we try matching ‘strengths’ to ‘opportunities’, and so on, a format sometimes referred to as a ‘TOWS’ matrix. (At The Strategy Exchange we refer to this, in a rare flight of whimsy, as a ‘SoWOT? Matrix’.)
When it comes to PEST we can do a similar cross-matching exercise, for example by crashing exogenous influences together with the ‘five forces’ to begin thinking about potential impacts on industry structure, or with a pool of competitors to try to get to grips with potential asymmetric effects which might change the balance (or basis) of competitive advantage.
But that doesn’t get us away from the fact that there are still an awful lot of influences to consider. The problem, as Robert M. Grant puts it in ‘Contemporary Strategy Analysis’, is to distinguish the vital from the merely important.
There seems to be a broad consensus that the best way to do this is to group exogenous conditions into ‘themes’ (observable confluences of related influences, which often cross PEST boundaries) or ‘scenarios’ (the same, but contingent in nature, to allow for ‘what if?’ thinking). Johnson et al (‘Exploring Corporate Strategy’) refer to ‘drivers of change’, while Rumelt (‘Good Strategy, Bad Strategy’) talks about ‘waves of change’.
Well okay, I get the ‘change’ bit, but neither of these descriptions, to me, captures the sort of dynamics that managers have to deal with. Perhaps a better paradigm would be to think in terms of three types of thematic change: ‘cascades’ (multiple influences that mutually reinforce to create an avalanche effect); ‘trajectories’ (cause-and-effect trend evolution); and ‘discontinuities’ (unexpected inflexion points, or ‘cusp catastrophes’). So social media would be an example of a cascade, Moore’s law a trajectory and the iPhone a discontinuity, I suppose. In this article we’ll look at discontinuities. A later blog will examine cascades and trajectories.
Of the three, discontinuities are famously the most difficult to deal with, almost by definition. Rumsfeld’s ‘unknown unknowns’ and Taleb’s ‘black swans’ are both lazily vague ideas. We can break this category down into (a) events which are thought to be impossible, but which happen anyway – a failure of extant theory that causes previous understanding to be overturned (‘theory failure’); (b) transformational events which are believed to be possible in principle, but which typically either evade current efforts or technologies, or are dependent on an uncertain triggering event (‘singularities’); and (c) material events which are perfectly possible, but are simply improbable, or at least ignored as such by society (‘tail risks’).
We don’t have to stumble across faster-than-light travel to experience an overturning exception to what was previously thought to be knowledge. Arguably the global financial crisis of 2008-2009 was at least in part precipitated by a failure of certain economic doctrines, notably market efficiency and the underlying independence of individual price movements. So even this category of discontinuity isn’t altogether inaccessible to reason – provided that we are aware of and willing to question the key assumptions, theories, consensuses and paradigms that underpin our strategic positioning (in the realms of politics, legislation, economics, society, geophysics, theology, technology, aesthetics, philosophy, health, neurochemistry, ecology, and so on).
‘Weakly superhuman’ artificial general intelligence may or may not be just around the corner, and may or may not lead to a rapid upscaling in machine intelligence that leaves humanity trailing in its wake, or even spawn a neo-Nietzschean species of ‘transhuman’ – you’ll have to talk to Ray Kurzweil about that one. But viable nuclear fusion would certainly be utterly and permanently transformational for humanity, and researchers have been frantically limping towards this singularity for decades. Some of the research driven by advances in gene sequencing may have a similarly colossal impact.
Yet other singularities potentially arise from such diverse cusps as a UK referendum on exit from the European Union, peak oil, irreversibility of climate change and so on. These may well be very uncertain, but no one can claim that they weren’t foreseeable.
Most managers are intuitively or explicitly familiar with the idea of a ‘normal distribution’ or bell curve – the idea that outcomes tend to cluster around an average or ‘expectation value’, and that as we move away from the mean, the probability of an ‘outlier’ event rapidly becomes vanishingly small. Outliers can be ignored.
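To put numbers on ‘vanishingly small’: under a normal model the tail probability can be computed directly from the complementary error function (a standard identity, sketched here just to show how fast the tails collapse):

```python
import math

def normal_tail(k):
    """P(|Z| > k) for a standard normal variable Z: erfc(k / sqrt(2))."""
    return math.erfc(k / math.sqrt(2))

for k in (1, 2, 3, 6):
    # roughly 32% at 1 sigma, 0.27% at 3 sigma, about 2e-9 at 6 sigma
    print(f"P(|Z| > {k} sigma) = {normal_tail(k):.2e}")
```

At six sigma you’d expect an outlier about twice in a billion trials – which is exactly why, under a genuinely normal model, ignoring outliers looks so safe.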
Except in two circumstances. The first is where, although the event has low probability, its impact is potentially disproportionately massive. The second is where we aren’t really dealing with a normal distribution in the first place.
The validity of the normal model depends, amongst other things, on the independence of outcomes. Once events start to become correlated, the normal model breaks down. One result can be a ‘power law distribution’, much broader and flatter than a bell curve, where tail events are much more likely.
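A minimal Monte Carlo sketch makes the point (the one-factor model, parameter values and loss threshold here are all illustrative assumptions, not anything from the examples above): the average of 100 independent normal outcomes almost never strays far from zero, but give those same outcomes a shared factor and extreme aggregate outcomes stop being negligible.

```python
import random

def portfolio_tail_prob(rho, n_assets=100, trials=10000, threshold=-2.0):
    """Estimate P(average outcome < threshold) when each outcome
    shares a common factor with weight rho (rho = 0 means independent)."""
    hits = 0
    for _ in range(trials):
        common = random.gauss(0, 1)  # the shared factor
        avg = sum(
            (rho ** 0.5) * common + ((1 - rho) ** 0.5) * random.gauss(0, 1)
            for _ in range(n_assets)
        ) / n_assets
        if avg < threshold:
            hits += 1
    return hits / trials

random.seed(42)
independent = portfolio_tail_prob(rho=0.0)  # effectively never happens
correlated = portfolio_tail_prob(rho=0.6)   # extreme aggregates now occur
print(independent, correlated)
```

The averaging is the point: independence makes aggregate tail risk shrink like 1/√n, whereas a shared factor puts a floor under it no matter how many outcomes you pool.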
Prior to the global financial crash, investors in sub-prime mortgage-backed securities, along with the rating agencies, were all using essentially the same model to evaluate risk and price. Based on a Gaussian copula, it was in essence a kind of normal model, with a fudge factor that allowed for some interdependence. But since being prudent meant missing out on big returns, there was a very natural human tendency to err on the side of recklessness. It turned out that the degree of correlation amongst mortgage defaults was very high indeed. The ensuing crisis in the inter-bank market was largely a result of the fundamental failure of the pricing model – suddenly no-one had a clue what their counterparty risk was (except that it was much higher than they’d thought), and lending stopped.
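In the same toy spirit, a one-factor Gaussian copula can be sketched in a few lines (the default probability, pool size and correlation values below are invented for illustration; the real models were far more elaborate). The lesson it teaches is that the assumed correlation is the whole game:

```python
import random
from statistics import NormalDist

def mass_default_prob(rho, p_default=0.05, pool=100, frac=0.5, trials=5000):
    """One-factor Gaussian copula sketch: probability that at least
    `frac` of a pool of loans defaults in the same period.
    Borrower i defaults when sqrt(rho)*M + sqrt(1-rho)*Z_i < c, with
    c chosen so each loan's marginal default probability is p_default."""
    c = NormalDist().inv_cdf(p_default)
    hits = 0
    for _ in range(trials):
        m = random.gauss(0, 1)  # shared 'state of the economy' factor
        defaults = sum(
            1 for _ in range(pool)
            if (rho ** 0.5) * m + ((1 - rho) ** 0.5) * random.gauss(0, 1) < c
        )
        if defaults >= frac * pool:
            hits += 1
    return hits / trials

random.seed(1)
print(mass_default_prob(rho=0.05))  # low assumed correlation: mass default is rare
print(mass_default_prob(rho=0.7))   # high correlation: it stops being rare
```

Each loan still defaults 5% of the time in either case; only the clustering changes. Price the pool with a low rho and a high-rho world will blindside you.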
What happened was a bit obscure and apparently isn’t generally well understood. But a much simpler example of dangerous correlation occurs every time the demand for an asset becomes dependent on what other people are paying for similar assets – a feedback loop that tends to lead to a bubble. It doesn’t take complex derivatives to stoke such a bubble.
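That feedback loop can be caricatured in a few lines (the numbers are arbitrary and this is a toy, not a market model): let each period’s demand chase the previous price move, and watch what happens when the chasing coefficient exceeds one.

```python
def price_path(steps=60, feedback=0.0, start=100.0):
    """Toy price series where demand partly chases the last price move.
    feedback well below 1 damps shocks back towards the fundamental price;
    feedback above 1 lets a small shock compound into a bubble-like run-up."""
    prices = [start, start * 1.01]  # a small initial shock
    for _ in range(steps):
        momentum = prices[-1] - prices[-2]
        prices.append(prices[-1] + feedback * momentum)
    return prices

flat = price_path(feedback=0.2)    # shock fizzles out
bubble = price_path(feedback=1.1)  # shock compounds geometrically
```

The same 1% shock either dies away or grows without bound; nothing about the asset itself has changed, only the degree to which buyers are watching each other.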
Getting to grips with discontinuities necessarily entails taking a contingent approach – ‘what would it mean for our strategy if such-and-such happened?’ And doing this in a meaningful way means having a very clear and well-tested view of what the assumptions underpinning your strategy really are. What is the story you’re telling yourself about the way the world works, and where might it fail?
The next article will take a look at what happens when exogenous developments correlate over time, as cascades and trajectories.