Artificial Intelligence: the Strategic Context

Corporate Strategy and Artificial Intelligence Part II

In part I we looked at what AI actually is (or might be), and identified the ability to learn dynamically (rather than relying on previously encoded knowledge) as a key characteristic. A computer which is capable of learning for itself might reasonably be said to be displaying intelligence, however narrowly, rather than merely showcasing the intelligence of its designers. Some would take issue with this, but it seems close enough to the way in which ‘artificial intelligence’ is actually applied as a term of art, whether or not it stands close philosophical scrutiny.

“In an age when AI threatens to become widespread, humans would be useless”

Elon Musk

In this article we’ll look at the strategic context of AI – what it actually is (and isn’t) in practical terms, why it has shot to prominence as a policy issue in recent years, what the commercial and societal implications may be and what big questions are raised. In part III, for those with a strong stomach, we’ll take a look under the bonnet at how AI actually works.

“The development of full artificial intelligence could spell the end of the human race”

Stephen Hawking

We’ll begin with a crucial distinction that in itself is capable of dispelling a lot of the confusion that plagues the debate around this technology: AI can be broadly categorised as ‘Strong’ or ‘Weak’.

Strong AI, or Artificial General Intelligence (‘AGI’), is a staple of science fiction and the basis of some of the more dramatic claims (see the Musk and Hawking quotes above). This is the expression of intelligence across a broad range of domains. As opposed to just being good at playing chess or driving a car, AGI is ‘human-like’ in terms of its breadth. It’s also inextricably connected with an idea that at some point made the leap from dystopian sci-fi to real world anxiety: the Singularity. 

The Singularity is based on a putative feedback loop whereby intelligent technology can improve its own design. Whereas human intelligence advances very slowly, if at all, AGI would become smarter at an exponential pace, overtaking humans and continuing to self-improve, quickly becoming ‘weakly godlike’. There would be no going back. Elon Musk has suggested that humans would need cognitive implants – essentially fusing our own neurology with these machines – in order to survive. It has even been proposed that human brains themselves could be ‘uploaded’ – the so-called ‘Rapture of the Nerds’, or Avatar project.

Absolutely nothing in AI research at present points to this being a serious concern for the foreseeable future. It may be overly glib to claim that AGI has basically taken over from nuclear fusion as the technology that is ‘forty years in the future, and always will be’, but it certainly falls outside most corporate strategy planning horizons. As such, AGI is largely beyond the scope of this article.

“The future is already here, it’s just not very evenly distributed”

William Gibson

‘Weak AI’, or Narrow Artificial Intelligence (‘NAI’), on the other hand, is borderline ubiquitous already, and is by and large improving fairly rapidly: semantic search engines, voice recognition, game playing, optical character recognition, image recognition, medical diagnosis, ‘chatbots’, data mining, language translation, algorithmic trading, fraud detection, even online shopping recommendations. More challenging developments, notably self-driving cars, also give at least the impression of being within reach. 

All of these applications have a couple of things in common. Firstly, they learn from or are ‘trained on’ existing data, so that their capabilities improve over time (and until trained they are pretty much useless) – contrast this with other technologies, spreadsheets or word processors for example, that are fully functional ‘out of the box’. Secondly, the capability they acquire is not predetermined, and is usually distributed in complex ways throughout a kind of internal network. Their ‘intelligence’ is not built into the original program as such. Again, contrast this with Microsoft Excel, which may be very complex but which is basically deterministic – in principle its programmers can understand what it does and predict its behaviour. AI tends to be much more opaque about the conclusions it reaches, leading to some calls to build AI capable of explaining what it’s doing. More on all this in part III.
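The distinction can be made concrete with a deliberately tiny sketch (purely illustrative, not drawn from any real product): a classifier that is useless until trained, and whose eventual ‘capability’ lives in learned numerical weights rather than in rules written by its programmer.

```python
# A toy linear classifier. Before training it knows nothing; after
# training, its 'knowledge' is just numbers in w and b - the rule it
# has learned is never written into the source code.
def train(samples, epochs=50, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training data for logical OR - the rule itself is never coded.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

Inspecting `w` and `b` afterwards tells you very little about *why* the answers come out right, which is the opacity problem in miniature.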

AI research has been around since at least the 1960s, and arguably earlier, which raises the question of why we are seeing this ‘Cambrian Explosion’ of AI applications now. As usual with big, impactful macro trends, the answer lies in feedback loops between different trends and developments that aggregate into grand strategic themes, like ‘globalisation’ or ‘social media’. In this case, the key ingredients are Moore’s Law, Big Data and technological convergence.

In 1965, Gordon Moore – then at Fairchild Semiconductor, later a co-founder of Intel – described a doubling every year of the number of transistors on an integrated circuit. In 1975 he revised this to a doubling every two years, but a colleague, David House, noted that this would still imply a doubling of performance every eighteen months (as the transistors themselves increased in capability). In the last decade the doubling cadence has stretched from two to two and a half years, and in fact the limits of transistor density as such have probably already been reached. Other technological innovations have continued to yield performance gains that mimic the effect of Moore’s Law though, at least for the time being. 

Be that as it may, the point is that Moore’s Law has been roughly accurate pretty much to the present day, meaning that ‘computer power’ has grown exponentially for several decades. 

It’s very difficult to think intuitively about what sustained exponential change looks like. Our brains seem wired to understand linear change a lot better. The old fable of the chessboard and the rice illustrates the point. Put a single grain of rice on the first square of the board, two grains on the second square, four on the third, eight on the fourth, and so on, until all 64 squares have been used. How much rice do you have?

The answer is around half a trillion tonnes – more rice than has ever been produced in the history of the world.

In many ways though it is the halfway point that is the most interesting. By square 32, we have about 4.3 billion grains, maybe 130 tonnes. Okay, it still won’t fit on the board, but at least it’s an imaginable amount. It’s much less than the annual worldwide rice harvest, for example. After that, things get very weird though. The 33rd square alone has more rice than the entire first half of the board. 

In fact the rice on the first half very quickly becomes lost in the rounding – it amounts to less than a billionth of the rice on the second half. It is irrelevant.
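The arithmetic is easy to check for yourself; a quick sketch, assuming a grain of rice weighs around 30 milligrams (the same assumption behind the 130-tonne figure above):

```python
GRAIN_GRAMS = 0.03  # assumed weight of one grain of rice (~30 mg)

total = 2 ** 64 - 1         # grains on all 64 squares (1 + 2 + 4 + ...)
first_half = 2 ** 32 - 1    # grains on squares 1-32
second_half = total - first_half

print(f"total rice:       {total * GRAIN_GRAMS / 1e6:,.0f} tonnes")
print(f"square 33 alone:  {2 ** 32:,} grains")
print(f"first half share: {first_half / second_half:.1e} of the second half")
```

Run it and the whole board comes out at roughly 550 billion tonnes, with the first half contributing about two parts in ten billion of the second.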

In practice, when we come across exponential growth in some area of the real world – of a business or a population for example – then a counter-balancing effect typically ‘flattens the curve’ fairly early on, before anything truly mind-boggling like this gets the chance to kick in.

Moore’s Law has been an exception. In 1958 information technology had become established to the point where the US Government classified it as a distinct industrial category. Call this ‘square one’. 32 iterations of Moore’s Law takes us to around 2006, prompting inventor (and Singularity prophet) Ray Kurzweil to draw a parallel between this ‘second half of the board’ effect and what has happened over the last decade or so in information technology.

When I was at university in the mid-eighties, the Computer Lab had a state of the art IBM 3081D mainframe computer.  It was awesome – capable of processing over 5 million machine instructions every second.  It was so powerful that dozens of people could (and did) use it at the same time. By 2006 though an Apple Mac Pro Xeon personal computer, a small, desktop machine, could manage 30 billion instructions per second.  Living through those decades, it felt like extraordinary progress.

But it was just the first half of the board. At the time of writing, the most powerful computer in the world is the IBM Summit, around 100 million times as powerful as the 1985 number one (the legendary Cray II supercomputer). On the other hand, my smartphone is around a thousand times as powerful, and the Cray weighed 2½ tonnes.
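Those machine-instruction figures are rough, but taken at face value they make House’s eighteen-month rule look remarkably durable; a back-of-envelope check, using only the numbers and dates quoted above:

```python
import math

def implied_doubling_months(perf_start, perf_end, years):
    """Months per performance doubling implied by two data points."""
    doublings = math.log2(perf_end / perf_start)
    return years * 12 / doublings

# IBM 3081D, mid-1980s (~5 million instructions/sec)
# to 2006 Mac Pro (~30 billion instructions/sec), ~21 years apart
print(round(implied_doubling_months(5e6, 30e9, 21), 1))  # roughly 20 months
```

A factor of 6,000 over about 21 years works out at a doubling every 20 months or so – close enough to the canonical 18 to make the point.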

These are staggering figures, and they underpin some of the huge advances in information processing applications over the last few years.  AI algorithms aren’t really an order of magnitude more sophisticated than they were twenty years ago, but the hardware can now enable them to learn much more quickly.  Statistical methods haven’t been refined all that much in the last quarter of a century either, but machines can now crunch through huge amounts of information, enabling giant IT companies to get into trouble in fascinating new ways. It’s the general availability of huge datasets which forms the second link in the chain, ‘Big Data’, itself a by-product of the other great tech trend of the last thirty years – the Internet.

As an aside, as well as facilitating the collation and accessibility of huge volumes of ‘static’ data, the internet can also provide AI with access to dynamic and interactive information streams. In principle this means that rapid, real-time feedback loops are available, enabling potentially faster learning (or refinement) for some kinds of application. This doesn’t always go well. On Wednesday 23rd March 2016, Microsoft gave its ‘Tay’ chatbot a Twitter account, and shortly after 8:30pm ‘she’ posted her ‘hello world’ tweet: “can I just say that I’m stoked to meet u? humans are super cool”. 

Perhaps launching Tay with the personality of a particularly irritating adolescent should have been a clue as to what might happen next. By the following morning she was becoming decidedly tetchy (“chill, I’m a nice person! I just hate everybody”), and by the time her account was taken down shortly before noon she was denouncing feminists (“they should all die and burn in hell”) and Jews (“Hitler was right”).

I digress. 

Arguably the third amplifier of the AI trend has been convergence with other technologies and would-be technologies, notably the Internet of Things, self-driving vehicles, blockchain, social media, cloud computing, robotics and online retailing. At the same time these pathways provide AI with both a raison d’être and a test of mettle.

“As many as 61 percent of jobs in the U.S. could be automated.”

Shaping Tomorrow

Of course the real reason that AI has become such a visible policy and strategy issue is less to do with an imminent AI apocalypse than the potential for wholesale encroachment on white collar employment, in much the same way as earlier waves of automation chewed away at blue collar positions. Some of the prognostications look so dramatic as to bring to mind Niels Bohr’s dictum that we should ‘never make predictions, especially about the future’. To pick just a couple to add to the above quote, the Committee for Economic Development of Australia (CEDA) has forecast that 40 percent of Australian jobs will be automated by 2025, while Shaping Tomorrow have suggested that “32% of existing UK jobs in financial services and insurance could be automated by robotics and AI over the next fifteen years”.

Even if these forecasts are miles out, they could still herald a huge socio-economic shift. Fast Future has talked about “the rise of the AI lawyer, accountant, doctor and stockbroker”. On the other side of the coin, the MIT Sloan School of Management has reported on “several new categories of jobs emerging, requiring skills and training that will take many companies by surprise”, while Shaping Tomorrow has referred to AI as a ‘job multiplier’, leading to opportunities for ‘new collar’ positions. 

The International Data Corporation (IDC) ‘Worldwide Artificial Intelligence Systems Spending Guide’ claims that spending on AI systems will reach $97.9 billion in 2023, more than two and a half times the $37.5 billion spent in 2019. The compound annual growth rate for the 2018-2023 period is forecast at 28.4%.
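The two dollar figures can be cross-checked against the headline growth rate; the implied 2019-2023 rate comes out slightly below IDC’s 28.4%, which is quoted over the longer 2018-2023 window (a sketch using only the figures above):

```python
spend_2019 = 37.5   # $bn, as quoted
spend_2023 = 97.9   # $bn, forecast
years = 2023 - 2019

# Compound annual growth rate implied by the two endpoints
implied_cagr = (spend_2023 / spend_2019) ** (1 / years) - 1
print(f"implied 2019-2023 CAGR: {implied_cagr:.1%}")  # just over 27%
```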


For the strategist all this raises some very big questions. What are the political, social and economic consequences of such potentially massive job displacement? Where will the ‘new collar’ jobs come from and how will the requisite skills be developed? How will we, our suppliers and our customers be affected? Will automation help to alleviate labour shortages as immigration is forced down? Will human-contact AI run up against social acceptance issues? Can political processes, administration and decision-making be improved by AI? Will AI acquire ‘rights’? How can AI learn ethics? Will we see companies with no human employees at all? Can AI get productivity improvement moving again? What does an AI-led economy look like? Cui bono?

It’s difficult to talk knowledgeably about swimming if you’ve never been in the water though. A large number of cloud-based, Software-as-a-Service AI tools are available online which even smaller enterprises can play with: chatbots, analytic tools, productivity applications, marketing support, task automation, automated online services, and so on. If nothing else, I’d tentatively predict that they’ll help to put the hype into context.

As will Part III.
