Can Computers Be Conscious?

The sudden explosion of powerful, easily accessible AI models such as GPT-3 has taken an old theoretical debate about brains, computers, and the mind, and made it relevant. I’ve seen this question being debated all over the internet, in pubs, between lectures (well, I do study computer science) and just about everywhere else. It’s a question that is very easy to ask and just about impossible to answer: if I can have a chat with an AI model, discussing thoughts, feelings, and opinions, then does this mean that it’s conscious?

This question of course has profound implications, both for our understanding of machines and of ourselves. For example, we might have to start asking questions about the ethics of AI: if a machine learning model can think, feel and experience just like us, then is it right of us to “put it to work” for services like ChatGPT?

The argument for machine consciousness relies on the idea of neural networks—the mathematical structures that AI models like ChatGPT are built on. The idea behind these is that a digital input is fed through a network of connections, which combine and recombine it in complex ways to produce an output that responds to our input. This is similar to how our brain works, with a massive network of neurons processing sensory input and memories—hence the name “neural network”. The argument then goes that if AI models work in such a similar way to the human brain, surely that suggests they could, at least in principle, be conscious.
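
To make the analogy concrete, here is a minimal sketch of such a network in Python with NumPy. The layer sizes, the random untrained weights and the `TinyNetwork` name are all made up for illustration; a model like GPT-3 has billions of carefully trained weights, but the basic picture of numbers flowing through layers of weighted connections is the same.

```python
# A toy feed-forward "neural network": an input vector flows through layers
# of weighted connections, each neuron summing its inputs and passing the
# result through a simple activation function.
import numpy as np

rng = np.random.default_rng(seed=0)

def relu(x):
    # The neuron only passes on a signal when its summed input is positive.
    return np.maximum(0.0, x)

class TinyNetwork:
    def __init__(self, n_inputs, n_hidden, n_outputs):
        # Random weights stand in for the connection strengths that a real
        # model would learn from data.
        self.w1 = rng.normal(size=(n_inputs, n_hidden))
        self.w2 = rng.normal(size=(n_hidden, n_outputs))

    def forward(self, x):
        hidden = relu(x @ self.w1)  # combine the inputs into hidden features
        return hidden @ self.w2     # recombine the features into an output

# Feed a made-up "digital input" through the network and read off the output.
net = TinyNetwork(n_inputs=4, n_hidden=8, n_outputs=2)
print(net.forward(np.array([0.5, -1.0, 2.0, 0.1])))
```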

My opinion on this comes from a thought experiment: let’s say I’ve got a giant piece of paper and I’ve drawn your entire neural network—the ‘circuit’ that defines how your brain works—onto it. I take a bunch of big wooden counters, put them on each neuron, and start pushing them about to simulate the neural network: in the same way that a neuron in your brain ‘activates’ when it gets enough inputs, so too do our neurons on the paper output a wooden counter when they receive enough wooden counters as input. This process of pushing wooden counters around according to the basic rules of how our neurons are activated will simulate the functioning of your own brain—decision making, processing sensory input, controlling organs and limbs, etc—down to the individual neuron activation.
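
Here is that counter-pushing rule written out as a tiny simulation, just to show how mechanical it is. The wiring, the thresholds and the `push_counter` name are all invented for this example; a real brain has tens of billions of neurons, but each one of them would follow the same kind of counting rule.

```python
# A pen-and-paper brain: each "neuron" is an entry in a dictionary, and
# "pushing a counter" just adds to its tally. When the tally reaches the
# neuron's threshold, it fires and sends a counter on to each neuron it
# connects to.
from collections import deque

# Hypothetical wiring: neuron -> the neurons it feeds into.
connections = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
thresholds = {"A": 1, "B": 1, "C": 2, "D": 1}
counters = {name: 0 for name in connections}

def push_counter(neuron):
    """Drop a counter onto a neuron and propagate any firings it causes."""
    queue = deque([neuron])
    while queue:
        current = queue.popleft()
        counters[current] += 1
        if counters[current] >= thresholds[current]:
            counters[current] = 0          # the neuron fires and resets
            print(f"{current} fires")
            queue.extend(connections[current])

# "Sensory input": drop counters on A and B. C only fires once both arrive.
push_counter("A")
push_counter("B")
```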

The question now arises: if I am doing with my wooden counters and big drawing exactly what your brain does, is the big drawing now conscious? What about the wooden counters?

An artificial neural network is, at best, a simulation of a brain. It can shuffle numbers around and give you the output that a brain would, as can our drawing and wooden counters, but I do not believe that this is enough to give it a conscious experience. Artificial neural networks are, instead, a model for expressing human-like problem solving—maybe not ‘thought’, per se, but something close—without the element of consciousness. And perhaps, therefore, these rapidly evolving technologies will help advance our understanding of the very old, and very hard, problem of consciousness.