The Alexacist

I was recently amused by reports of spontaneous eruptions of demonic laughter coming from people’s Amazon Echo devices. It seems Amazon believes that giving Alexa the ability to laugh makes its assistant seem more human. Similarly, the latest version of the Google Assistant now punctuates its sentences with “ums” and “ers”.

Personally, it all feels a bit wrong. Like some cheap horror film where the Internet of Things possesses a house and the connected devices start harrowing their users. The lights flicker on and off in random colours, Hive turns the thermostat up to Hades-hot, the connected fridge keeps ordering quinoa and almond milk, and unwanted pizzas are delivered at all hours (Hawaiian, of course). All accompanied by Alexa’s ghostly laughter.

We humans need to take back control. We need to avert this digital demonic possession. We need to call in the IT Gurus to perform some sort of Alexacism to help us ensure the preservation of humanity in a machine-led world.

I’m being playful, of course, but there are some serious points lying at the heart of this. Firstly, why are we trying to make machines more human? So many dystopian Sci-Fi futures involve human-looking, human-sounding automatons threatening the existence of humankind. So why would we want to submit to some sort of Technological Determinism and allow that to happen? Only God supposedly creates in his image: maybe our desire to do the same is some manifestation of a God complex.

Secondly, why do we need automatons with two arms and two legs trying to perform human tasks and make human decisions? We can already do those for ourselves. What we need Robotics and Machine Learning to do is help with the tasks our minds and bodies are not that good at; to perform the mechanical and automated tasks far more efficiently than we can… and leave the human stuff to us.

The best-designed machines do one task brilliantly. My washing machine is far better and quicker at washing clothes than I am. It doesn’t need arms or legs and, by focusing on the one task, it enables me to keep my clothes clean and get back valuable time for other things. It gives me back control of my life with just one touch of a button. It does one thing well: I don’t need it to do more.

And it’s the same for Machine Learning (a specific branch of the field saddled with that horribly unhelpful term “Artificial Intelligence”). Machines are fantastic at recognising patterns and extrapolating from them to make calculated decisions far quicker and more efficiently than the human brain. Humans’ role is to direct the machine; to tell it which patterns to recognise and what to use them for.

What machines are terrible at is Intuition.

The split-second decisions our Limbic brain makes by reading thousands of subtle signals and combining them with layers of experience. Decisions we don’t fully understand how or why we make, like reading a room full of people, or telling sarcasm from an earnest comment. If no foreigner can truly understand what a Brit means when they describe something as “Interesting”, how can we expect a machine to? If we don’t ourselves understand the processes and patterns by which we make intuitive, human decisions, how can we expect to programme a computer to do the same?

And this is where the dystopian nightmares begin. When the robots decide to make “evil” decisions that do not benefit their human overlords. But the reality is that machines are neither good nor evil. Morality is a human construct and, like politics, philosophy or creativity, a deeply complex and nuanced topic. Let’s leave these sorts of things to the humans and learn how to get machines to assist us in these pursuits through doing what they do well.

Machines only become “bad” by learning from us.

Whether a machine is deliberately programmed by a malevolent agent, unwittingly given the biases and world views of a Silicon Valley software engineer, or simply picks up on our biases, habits and subtly nuanced behaviour and replicates those patterns, the responsibility lies with us.

And this is not to mention the new biases and behaviours we are teaching our children by allowing them to bark commands at a subservient assistant (with an all-too-human, female voice and name).

So let’s stop this before it goes any further. Let us start defining what jobs machines can do for us. Let us harness the amazing power of Automation for our benefit. Businesses and governments need to focus on the humans that matter and ensure they use technology to advance and enhance their human behaviours and human pursuits.

For companies, that means using technology to be more customer-centric: to delight their customers, the lifeblood of their businesses, and enable them to thrive. The same goes for their employees: companies must understand how Machine Learning and Automation can enhance their workforces, so that people feel fulfilled and valued in their roles, providing the human parts of their industry.

We need to understand the mechanical tasks that a machine can do best and delineate them from the tasks that only a human can and should perform. And the (very human) role of leadership is to integrate these sets of tasks so that humans lead full and meaningful working lives, reaping the benefits of the efficiencies that the machines working alongside them bring to them and to their customers.

And in this way, businesses and governments can also ensure they benefit the other humans that matter: the wider community, or society… or Humanity.

So let’s take back control. Let’s harness the machines to serve and enhance humankind, leave humanity to the humans, and avoid creating the future that so many fear. After all, only humans would recognise the irony in being wiped out by the very tools we created to enhance and improve everybody’s lives in the first place.

Daniel Solomons is a leader, facilitator and orchestrator of behaviour change programmes that help organisations thrive in a digital age. With over twenty years’ experience in the advertising and media world, he most recently led the Google Digital Academy, a team at Google that drives transformation and behavioural change through ground-breaking, immersive, educational programmes and workshops for advertisers, agencies and Googlers across EMEA. Daniel left Google to set up Byte Behaviour and works as an associate for Impact.