“Open the pod bay doors please Hal…”

Artificial Intelligence and Corporate Strategy, Part I – What is AI?



“Artificial Intelligence (also AI) (noun)

The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Oxford English Dictionary

The OED’s definition gives us a clue as to why getting a grip on what Artificial Intelligence actually is can be tricky.  Is a spreadsheet program AI?  I guess most would say no, but it certainly emulates a number of human cognitive tasks.  How about a chess program?  Views differ, but history has tended to show that when a task once thought of as being in the province of AI research becomes routine, it ceases to be categorised as AI.



“In an age when AI threatens to become widespread, humans would be useless.”

CNBC 13th January 2017, Elon Musk: Humans must merge with machines or become irrelevant in AI age

Trojan horses

We all know and accept that technologies have been replacing human labour for thousands of years, but technologies have also been replacing human cognitive tasks for thousands of years.  Most people are at least familiar with the Iliad, the story of the siege of Troy, written by Homer over two thousand years ago.  Except that it wasn’t written by Homer: no one is absolutely certain that Homer actually existed, but if he did he didn’t write the Iliad.  Possibly for generations the Iliad was passed on in epic rendition by elite bards performing extraordinary feats of memory, until someone eventually wrote it down.  And from that point onward the bards were essentially obsolete.  Actually, once it was written down, it’s also likely that the story became far more widely known, and provided employment for generations of actors, echoing in eternity until it found Brad Pitt and grossed half a billion dollars.

Pen and ink may not fit the definition of AI, but the technology certainly superseded and improved on human cognitive functions.  On the other hand, it gave rise to new categories of human endeavour that greatly increased ‘employment’ in the broadest sense.



“The development of full artificial intelligence could spell the end of the human race.”

Professor Stephen Hawking, BBC News 2nd December 2014

Goodbye, Professor Higgins

In the 1960s, an AI researcher at MIT called Joseph Weizenbaum developed a program dubbed ‘Eliza’.  Eliza emulated a psychotherapist, following a fairly simple set of rules through which it would seem to empathise and would ask searching questions.  Essentially it would simply play back the responses it got from its human ‘patient’ in the form of statements of sympathy or requests for more information.  Now, by current standards Eliza was very simple, and we probably wouldn’t classify it as AI anymore, but the amazing and unexpected thing is that people actually found themselves valuing their therapy time with Eliza – even when they knew it was just a computer program (in some cases, especially when they knew).
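
To see just how simple the trick was, here is a minimal sketch of an Eliza-style exchange, written in Python.  The patterns, pronoun reflections and canned responses are illustrative assumptions of mine, not Weizenbaum’s original script, which used a far richer set of keywords and ranked decomposition rules.

    import re

    # Eliza-style rules: match a pattern in the patient's statement, reflect
    # first-person words back as second-person, and return it as a question.
    # (Illustrative only -- not Weizenbaum's original script.)
    REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        # Swap e.g. "ignored by my boss" -> "ignored by your boss"
        return " ".join(REFLECTIONS.get(word.lower(), word)
                        for word in fragment.split())

    def respond(statement):
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please, go on."  # default request for more information

    print(respond("I feel ignored by my boss"))
    # -> Why do you feel ignored by your boss?

A handful of rules like these can sustain a surprisingly plausible conversation, which is exactly the effect described above.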

In any case, there’s a pattern here, one we noted above.  Stuff that we would once have classified as AI we tend to reclassify as just ‘technology’ once it’s been ‘done’.  We keep moving the goalposts.  Or maybe there’s a bit more to it than that.



“By the end of 2017, businesses will have spent $12.5 billion on AI and cognitive systems, according to a recent report from IDC. This is a significant 59.3% increase from 2016 to 2017. During the next three years, as interest in these technologies grows, the market could see a compound annual growth rate (CAGR) of 54.4%, with revenues reaching about $46 billion by 2020.”

Enterprise Cloud News 4th April 2017, AI, Cognitive Spending Soaring to $12.5B in 2017


AI and I

Sometime in the late 1970s, had you peered through the window of a red brick two-up-two-down, about a mile from what is now the National Coal Mining Museum for England (and what was then Caphouse Colliery), you might have seen a younger me fiddling about with a huge stack of empty matchboxes and different coloured buttons, trying to get them to play tic-tac-toe.

Fast forward a few years to the mid-eighties and the same ‘me’, in full-on mad professor mode, could be found hacking away in the computer lab at Cambridge on one of the most powerful mainframe computers in the world (although an order of magnitude more feeble than my current phone), developing a system for intelligently debugging computer programs by allowing the user to ask questions about what their program was actually doing.

Neither of these was really AI, but one was much closer than the other.  Which one? – dicking about with the matchboxes of course.  We’ll come back to this later.

Shouldn’t AI be able to… ‘think’?

I get the sense that the nub of our struggle to define what we mean by AI is not so much that it becomes a mundane technology once achieved, but that once we can ‘see under the bonnet’ it’s fairly clear that it’s not ‘thinking’ in the human sense of the word.  Whatever the human sense actually is.  One pioneering computer scientist, with an alarming number of consonants in his name, would disagree, however…



“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”

Edsger Dijkstra

So that’s that.

A proposed solution

If there is to be a cookie-cutter definition of Artificial Intelligence, it probably isn’t to do with ‘thinking’, which we’re unclear about in any case, but with learning.

A computer which is capable of learning for itself (technically, through ‘reinforcement learning’) can reasonably be said to be displaying intelligence, for all practical purposes, rather than merely showcasing the intelligence of its designers.  Machine learning is also closely associated with the idea of ‘neural networks’, a technology explicitly modelled on the neural architecture of animal brains.
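
To give a flavour of what ‘learning for itself’ means in practice, here is a minimal reinforcement learning sketch, written in Python and in the same spirit as the matchbox machine mentioned earlier: a single ‘box’ of coloured beads, one colour per move, where winning moves earn extra beads and losing moves forfeit them.  The toy game and every name in it are my own illustrative assumptions, not a definitive implementation.

    import random

    MOVES = ["rock", "paper", "scissors"]
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    # One "matchbox": start with three beads of each colour (one per move).
    beads = {move: 3 for move in MOVES}

    def choose():
        # Draw a bead at random -- well-rewarded moves dominate the draw.
        pool = [move for move, count in beads.items() for _ in range(count)]
        return random.choice(pool)

    def reinforce(move, won):
        # Add a bead after a win, remove one after a loss (never below one).
        beads[move] = max(1, beads[move] + (1 if won else -1))

    # Play 200 rounds against an opponent who always chooses rock
    # (draws are counted as losses, to keep the sketch short).
    for _ in range(200):
        move = choose()
        reinforce(move, won=(BEATS[move] == "rock"))

    print(beads)  # 'paper' now holds almost all the beads

Nothing in that loop encodes the right answer; the designer supplies only the learning rule, and the preference for ‘paper’ emerges from experience – which is precisely the distinction between displaying intelligence and merely showcasing the intelligence of the designers.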

Arthur C. Clarke once said that “Any sufficiently advanced technology is indistinguishable from magic”.  Machine learning certainly sounds as though it fits this dictum in spades.  But actually, machine learning isn’t as intractable as you might think.  We’ll address this in the next article, beginning with a closer look at those matchboxes.