Earlier this month I had the pleasure of speaking at the Truphone offices, kicking off a discussion on the impact of artificial intelligence on the future of work. This is a summary of the notes I made ahead of the talk, including some additional points that I left for the open discussion; it hopefully complements the Agile Elephant write-up.

Working With Technology

Any discussion around AI in the workplace has to start with Robert Williams. That might be a name that is familiar to you, but I suspect it is not. Let me add a date to that name: 25th January, 1979. You will find him in the Guinness Book of Records, and if that seems odd, that’s because it is: Robert Williams was the first person to be killed by a robot.

Much has been done to improve the safety of working with robots; most robots today operate within safety cages that humans are excluded from. It is only more recently that robots and humans have started to work in shared spaces, most notably at companies like VW. When I mention that last point it often results in a sharp intake of breath. Our concerns about robots these days are quite different from those of the early days: misapplication and job destruction are the most commonly expressed worries.

Technology, historically, has been introduced into industries in a remarkably gradual fashion. All change “appears quickly, happens slowly and rarely completes”. Where technology has been, or has the potential to be, detrimental to humans, it has always operated within some sort of “safety cage” – it has been constrained and partial. With artificial intelligence, that containment is simple enough to achieve, at least for embodied AI like robots. But what happens when the intelligence is built into our communications? What fences can, or should, be put in place to reduce the impact of AI decision-making in society? In the case of market trading, there have been very few controls on when and how AI can behave, and yet it accounts for the majority of the trading volume on many markets today. This isn’t to say that AI technology is somehow inherently bad or risky, but rather to make the point that the impact and dangers of technology are often underestimated or overlooked, and move faster today than they have done historically.

AI Coming for Your Jobs

There have been numerous articles in recent months about AI making millions of people redundant, including from many people who should (or do) actually know better. Pause for a minute to think about where we are: yes, computing power continues to grow exponentially, and we have come a long way from the days of IBM’s wardrobe-sized RAMAC, with its 5 MB hard drive, in 1956. A few terabytes of storage today cost little more than a takeaway meal, and computers are now being given away free on magazine covers. Let’s put that into the context of the different technologies and approaches within AI. Emulating the human brain (and more specifically the neurons that make up the brain) has been a scientific pursuit for many decades.

Two years ago researchers from the RIKEN HPCI Program for Computational Life Sciences carried out the largest neuronal network simulation to date. They used the K computer, a Japanese supercomputer that would quite happily fill a football pitch, to simulate 1.73 billion nerve cells connected by 10.4 trillion synapses, using 82,944 processors. That sounds rather impressive, until you realise that it took 40 minutes to complete the simulation of 1 second of activity, and that 1.73 billion nerve cells is 1-2% of the number of nerve cells in a human brain.

We are a long way from having enough computing power to emulate a human brain. Even allowing for Moore’s law, it will be a good few decades before we have that sort of computing power available, and that doesn’t account for the curious thing about simulating the brain: the closer we get to it, the more complex it turns out to be. The RIKEN simulation used about 24 bytes of memory per synapse (about 1 petabyte of computer memory in total); emerging theories in neuropsychology suggest that there may be quantum processes within the brain’s structure, which would push the computing requirements off the current scale.
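To put rough numbers on that gap, here is a back-of-the-envelope sketch. It takes the figures above at face value and adds two assumptions that were not part of the talk: a commonly cited estimate of roughly 86 billion neurons in a human brain, and compute capacity doubling every two years under Moore’s law, with cost scaling naively linearly in neuron count.

```python
import math

# Figures from the RIKEN/K computer simulation described above.
SIM_NEURONS = 1.73e9      # neurons simulated
SLOWDOWN = 40 * 60        # 40 minutes of compute per 1 second of activity

# Assumptions (not from the original talk).
BRAIN_NEURONS = 86e9      # commonly cited estimate for a human brain
YEARS_PER_DOUBLING = 2    # rough Moore's-law cadence

scale_up = BRAIN_NEURONS / SIM_NEURONS   # ~50x more neurons needed
total_gap = scale_up * SLOWDOWN          # naive combined shortfall vs real time

doublings = math.log2(total_gap)
years = doublings * YEARS_PER_DOUBLING

print(f"Neuron scale-up needed : {scale_up:.0f}x")
print(f"Real-time slowdown     : {SLOWDOWN}x")
print(f"Combined gap           : {total_gap:,.0f}x")
print(f"Doublings required     : {doublings:.1f} (~{years:.0f} years)")
```

On those crude assumptions the combined shortfall is around 120,000×, or roughly 17 doublings – a little over three decades of Moore’s law, consistent with the “good few decades” above, and that is before any extra per-synapse complexity is accounted for.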

Technical issues aside, there is an economic issue: computers are still relatively expensive. You might spend $300–$500 on a laptop that will last a couple of years, then another $100–$200 a year on electricity to power it (at today’s energy prices). Already, before factoring in the costs of support, repairs and software, we have run up an annual bill that is greater than the average salary of a third of the planet. Basic human thinking is remarkably affordable; computers will have to work hard to deliver the same sort of economic value.
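To make the annual bill explicit, here is that arithmetic spelled out, using the illustrative figures above and assuming the electricity cost is per year:

```python
# Annualised cost of basic computing, hardware and power only,
# using the illustrative figures from the paragraph above.

laptop_cost = (300, 500)       # purchase price range, USD
laptop_life_years = 2          # assumed useful lifetime
power_per_year = (100, 200)    # electricity, USD per year

low = laptop_cost[0] / laptop_life_years + power_per_year[0]
high = laptop_cost[1] / laptop_life_years + power_per_year[1]

print(f"Annual bill (before support, repairs, software): ${low:.0f}-${high:.0f}")
# -> roughly $250-$450 a year
```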

Thinking Outside of the Box

Of course, AI is coming for your “Jobs” in other ways: your Steve Jobs iPhone is imbued with Siri, while Windows phones have Cortana and Android devices have Google Now. Emulation – directly aping the workings of the human brain – isn’t the only approach. We can simulate the functions we are interested in, for example speech recognition, reading or decision-making. This works well where we can clearly define the desired inputs and outputs, and the processes are consistent and repeatable. Simulation falls short when the parameters change or are unclear (try speech recognition when you have a cough or a cold).

The challenge with the world of work, at least the knowledge-driven work that is common in the industrialised world, is that its workings are remarkably poorly understood. May I present exhibit 1: Management Science. As anyone who has worked with business process automation software will tell you, figuring out and documenting what goes on inside a business is incredibly hard. Yes, there are the formal processes and mechanisms, but there are also “informal processes” that tend to dominate the workings of the business.

Organisations are inherently relational, and our brains use some amazing tricks to manage relationships. Whenever we talk to someone, our brains are running a simulation of their own – a simulation of the other person’s mind. Theory of mind, as it is known in psychology, looks at how our brains work to understand the brains of others. We don’t just think about what the other person is thinking about; we can think about what we think the other person thinks about what we are thinking about. These mental gymnastics are a fundamental requirement for human communication; no wonder your phone struggles.
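As a purely illustrative toy – no real AI system models beliefs as strings, but the combinatorial nesting is the point – the recursion is easy to state in code and quickly becomes hard to track:

```python
# Toy illustration of nested theory-of-mind statements.
# Hypothetical names and claim; illustrative only.

def nested_belief(agents, depth):
    """Build an 'A thinks that B thinks that ...' statement of a given depth."""
    if depth == 0:
        return "the project is on track"
    speaker = agents[depth % len(agents)]
    return f"{speaker} thinks that {nested_belief(agents, depth - 1)}"

for level in range(1, 5):
    print(f"Level {level}: {nested_belief(['Alice', 'Bob'], level)}")

# Humans handle three or four levels of this nesting comfortably;
# a machine has to represent every level explicitly.
```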

Re-record Not Fade Away

Replacing humans isn’t the most likely way that AI will impact the workplace. It is more likely that AI will augment or subvert what we do. AI is already being used to help medical practitioners with decision-making (in expert systems), to reduce the risk of traffic accidents (in collision-avoidance systems), and in a host of other applications. There will undoubtedly be places where AI subverts jobs – making some old processes irrelevant – just as CDs brought an end to jobs in record pressing plants (although not entirely, as it turned out) – and areas where artisanal sensibilities will win over automation (think of the universal ‘unlove’ for the robots that say: “there is an unexpected item in the bagging area”). There will be new skills to learn, and workforce transitions, which leads me on to the greatest mystery about technology in the workplace.

I Wasn’t Expecting That

Technology has been pushing into the workplace for at least half a century, yet none of us seem to be basking in hours of additional leisure time, and while unemployment in many countries is uncomfortably high, it hasn’t risen dramatically. This is an economic mystery. Investing in technology should improve worker efficiency, which should increase worker output, reducing the need for workers. But that isn’t what has happened. This is best stated as Erik Brynjolfsson’s “productivity paradox”. At the start of the 1990s Brynjolfsson noticed that IT-driven increases in productivity were not quite what they should have been. Robert Solow, the Nobel laureate economist, put it this way: “You can see the computer age everywhere but in the productivity statistics.” So what happened as a result of all that investment in technology during the 70s, 80s and 90s? That turns out to be a very good question, especially if you are an academic researcher. The mystery deepens the more you look at it. From communications to robots, investments in technology have not resulted in mass reductions in the requirement for workers. My personal observation is that this looks like a classic case of reverse causality: as workers have become more skilled (with improved access to education), they have driven investments in technology that have led to improvements in output quality, and efficiency in the use of raw materials.

Not investing in technology is not an option. Technology is required to keep pace with increasing complexity and demands for quality (whether from regulatory requirements, e.g. increased safety and quality legislation, or from product demands – I still haven’t used all of the features of most of the products I own). AI will increasingly be required for us to do our jobs at the level that is expected. Take the case of technology in surgery: it hasn’t put scores of surgeons out of work; it has led to improved survival rates (for the patients!) and increased numbers of procedures being performed. Technology can help to reduce human error – rather than us “working in the cage” with the technology, increasingly AI will provide a safety cage for the day-to-day work that we do, and tools to manage the volume of information we have to deal with. AI is incredibly effective at pre-processing and categorising information (which helps us in our work here at SocialOptic).
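As a minimal illustration of that kind of categorisation – a generic sketch using scikit-learn, not the tooling we actually use at SocialOptic – a few lines are enough to route short messages into categories:

```python
# Minimal text-categorisation sketch: TF-IDF features feeding a
# naive Bayes classifier. Training data is a tiny, made-up example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Invoice attached, payment due Friday",
    "Quarterly budget review meeting moved",
    "Server down, users cannot log in",
    "Password reset request failing",
]
labels = ["finance", "finance", "support", "support"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Users report they cannot log in"]))  # -> ['support']
```

Naive Bayes is used here only because it is a compact, well-understood baseline; any classifier behind the same pipeline interface would do.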

Go To the Future

Yes, the robots are coming, and yes, AI is coming, but the most likely outcome is not the destruction of jobs; it is the creation of new kinds of jobs. There are still many, many problems to solve in the world, and increasingly there are not enough humans to solve them. There is even AI in the elevators in our workplaces, but it isn’t coming to take our jobs – it is coming to enable us to deal with the increasing challenges we face in performing them.