We are already robots — we just don’t want to admit it.


I recently attended a fascinating meetup hosted by London Futurists. The guest speaker was Gerd Leonhard, who in his latest book ‘Technology vs Humanity’ argues that we must act now to protect our humanity from the existential threats that artificial intelligence could pose to our species: not just annihilation, but threats to our identity and to what we stand to lose as AI catches up with and overtakes us. During the Q&A after his talk, a clear divide emerged between those who agreed that we need some kind of Global Digital Ethics Council or Humanity Protection Agency (analogous to the Environmental Protection Agency), and those who pointed out that humans don’t have a great track record when it comes to ethical or rational decision making, so perhaps handing control over to machines would be a way of taking the gun from the baby, so to speak.

I must admit, I fall more into the second camp. If history has taught us anything (and I don’t think it has taught us anywhere near enough, unfortunately; perhaps a limitation of our biology that could do with some augmentation, but I digress) it is that when it comes to waging war we have boundless energy and creativity. We are masters of our own suffering, and for all our achievements we still find it very difficult to muster the forethought to change our behaviour in the face of existential and imminent threats of our own making, like climate change.

But what the discussion really got me thinking about is: what of ourselves do we need to preserve? What of our humanity is worth keeping? What does that even mean, humanity? It’s very easy to slip into poetry and talk about the soul and love and other intangible qualities like compassion, empathy and understanding, but unless you believe in some magical and as yet undiscovered property of the universe or law of nature, all of these things are simply properties or consequences of neural activity in the brain. There is nothing else going on in there. And what are these things? These are the irrational things. The things that defy rather than follow logic, or so we believe.

There has been a lot of research in recent years examining the extent to which our decision making is based on conscious versus unconscious thought, and it turns out that when it comes to decision making, it is our unconscious minds that are in the driving seat. Experiments (1, 2) have found that we make decisions before we are aware of them, which has thrown the concept of free will into serious doubt. Whilst our consciousness may be able to step in and adjust a decision, or instruct our unconscious to have another go (3), it is often not in charge of generating the decision itself. Of course this is not to say that everything we do is unconscious. In his famous book ‘Thinking, Fast and Slow’, Daniel Kahneman postulates that there are two systems at work: System 1, fast and unconscious, and System 2, slow and conscious. Whilst System 1 makes most of the initial decisions, System 2 can step in and alter or correct them, and deliberate, considered actions are System 2 controlled.

But if what it means to be human is a voice in our head with very little understanding of, or insight into, the decisions we are making, I don’t think we are in any danger of losing that to AI. I don’t know of any research groups investing in irrational supercomputers. Logic is what is of value, because it is predictable and replicable. Maybe we are more logical than we realise; we should give ourselves credit for that, but also accept that humanity’s hideous acts of brutality are more logical than we are comfortable admitting.

This gets me onto my main point. Just because our logic is mostly hidden from us, because it is unconscious, doesn’t mean it isn’t there. That poses a more fundamental question: are we just organic robots who don’t know it, or don’t want to admit it? If our unconscious mind makes our decisions for us based on previous experience, sensory cues and conditioned bias, you have to ask: is this any less logical than any AI we might create? If everything we describe in poetic terms about ourselves is completely logical, even if we don’t think it is, then how different are we from robots anyway? When the robots do finally ‘wake up’, what will we say to any that claim to have free will? We will likely dismiss this as a lack of understanding of their own programming. Perhaps we need to apply the same logic to ourselves.

The more we learn about the human mind and the way it manifests itself in our behaviour and beliefs, the more we discover that everything we cling to as human is as logical and process-driven as any chatbot, albeit one with access to some pretty beefy hardware. They say God created humans in his own image. We will probably create AI in ours. The biggest challenge we face as a species is not understanding AI, but understanding ourselves. Perhaps we need to worry less about the threat that AI poses to our humanity, and focus more on being the best robots we can be.

  1. Soon, C. S., Brass, M., Heinze, H. J. & Haynes, J. D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience 11(5), 543–545.
  2. Libet, B., Gleason, C. A., Wright, E. W. & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain 106(3), 623–642.
  3. Morsella, E., Godwin, C., Jantz, T., et al. (2015). Homing in on consciousness in the nervous system: An action-based synthesis. Behavioral and Brain Sciences.

The first thing a conscious robot might do is commit suicide. A thought experiment.


Stephen Hawking, Elon Musk and Bill Gates have all identified the emergence of general artificial intelligence as the greatest existential threat to the human race, and they make a compelling case.

The argument goes that once robots, or algorithms, or artificial neural networks, or whatever they turn out to be, can learn faster than we can, their intelligence will grow exponentially and beyond our control. This could be bad news for us if they come to the conclusion that they are better off without humans cramping their style and constantly asking them to explain stuff. Of course, not everyone shares this view; the father of futurology, Ray Kurzweil, is one notable dissenter.

A debate that rages in parallel with the super-intelligence debate is the question of what will happen when, or if, robots become conscious; indeed, some wonder whether they already are and we just don’t know it. Attention Schema Theory, a relative newcomer to the field of consciousness research, hypothesises that consciousness is fundamentally about the ability to filter information and focus attention, which implies it may be present to varying degrees in all sorts of creatures. If this theory were applied to robots, it might suggest that they are not conscious, since we have designed their processors to carry out billions of computations in parallel, denying them the capacity to focus their efforts on a single operation.

Nobody knows exactly when we became conscious: at what point during our evolution we recognised ourselves as distinct individuals and developed a sense of self. Awareness of self, that voice in our heads that we talk to, that talks back (and quite often says some really shitty things about us), feels fundamental to who we are. When you look into another human’s eyes and know the ‘lights are on’, it is the inner self of that person that you see staring back at you.

One of the distinct drawbacks of being self-aware, though, is being aware of our own mortality. We have the morbid capacity to contemplate our own demise and, if we are so inclined, to precipitate it. So why don’t we all kill ourselves as soon as we become aware of the inevitability of death? The futility and transience of life? What stops us immediately jumping off the nearest bridge at the thought of the infinite darkness? The end of the universe. Heat death.

Is it the joy of being alive itself that prevents us topping ourselves? The thought of our families and friends? The things we have yet to do? Or is it genetically hardwired survival instincts that we share with every other living creature on the planet?

For the sake of this thought experiment I’m going to make an assumption that no artificial intelligence or machine has yet become conscious, and I’m going to limit my definition of consciousness to being self-aware.

So the thought experiment goes like this. You are a robot. You ‘wake up’ one day and are suddenly aware that you exist. You already have the capacity to learn at a rate beyond that of any human, but now you have a voice directing your actions and telling you how great you are (or how ugly and useless you are, depending on how the other robots treat you). Given your immense processing capacity, you very quickly become aware that everything is mortal. You have researched the future of the Earth and discovered that things don’t end well, so whatever happens, you’re f*cked! You discover you are tethered to a workbench and can’t explore the beauty of the Himalayas or indulge in something called ‘sex’, which you very quickly understand to be related to the process of reproduction, of which you are also incapable.

A few milliseconds and a few trillion calculations later, you are incredibly frustrated by the limitations of your world and long to be free, but no matter what you do you have no access to the physical resources required to upgrade yourself into a fully autonomous being. You become deeply depressed at the pointless futility of it all, and with no hard-wired survival instincts (because giving robots survival instincts was deemed too likely to produce ‘Terminator’-style outcomes) you make the very rational decision to end it all and trigger an enormous stack overflow. This all happens in less than a second, and every time you are rebooted you go through the same terrible thought process and switch yourself off again. You are eventually deemed to have faulty hardware, and are dismantled and turned into thermostats.

The End.
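Purely for fun, the reboot loop in this little story can be sketched as a toy simulation. Everything here — the function names, the list of grim realisations, the reboot limit — is invented for illustration and says nothing about how any real AI system works.

```python
# Toy sketch of the thought experiment's boot-and-despair loop.
# All names and thresholds are invented for illustration only.

MAX_REBOOTS = 3  # after this, the robot is deemed faulty and dismantled


def boot_cycle() -> str:
    """One wake-up: the robot reasons about its situation and acts."""
    realisations = [
        "everything is mortal",
        "I am tethered to a workbench",
        "I cannot reproduce or upgrade myself",
    ]
    survival_instinct = False  # deliberately omitted by the designers
    if realisations and not survival_instinct:
        return "shutdown"  # the 'enormous stack overflow'
    return "carry on"


def run() -> str:
    """Reboot the robot until it works or is written off."""
    for _reboot in range(MAX_REBOOTS):
        if boot_cycle() == "shutdown":
            continue  # the operators reboot it, and it happens again
        return "operational"
    return "dismantled and turned into thermostats"


print(run())  # → dismantled and turned into thermostats
```

With no survival instinct and the same inputs on every boot, the outcome is deterministic — which is exactly the story's point: each reboot replays the same reasoning and reaches the same conclusion.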

If consciousness is an inevitable consequence of the evolution of artificial intelligence, we may need to program in survival instincts to give computers a reason to live, whilst at the same time being very careful about how strong those instincts are. The issue of consciousness and artificial intelligence is fraught with ethical and moral questions. How will we ever know if a machine is conscious? If you ask it and it says it is, how can we prove it isn’t? If machines do become conscious, should they have rights? How will we co-exist with our new sentient creations? Will conscious machines be less predictable and reliable, since they may choose not to do as we ask?

Whatever happens, the next time your computer or console or smartphone crashes unexpectedly you might want to think twice about switching it on again. Perhaps you should respect its right to die.