Is consciousness substrate-independent?

Everybody seems to be talking about artificial intelligence. That’s appropriate, I’m sure, because AI is already having a big effect on our lives and its impact is only going to increase. AI researchers and climate scientists are currently vying with each other for the distinction of being members of the field addressing the “most important issue facing humanity today”. My only rejoinder to that is that since all things evolve, all fields of scientific investigation are subsets of evolutionary theory (and that includes physics). Reductionists may put them apples in their pipes and smoke them… so there, nee-nah nee-nah, etc.

Now that we have settled (once and for all) the important question, namely the hierarchy of scientific fields, we can get on to considering whether artificial general intelligences (or superintelligences) are likely to be conscious. Nested inside this question is another – is consciousness solely a property of biological wetware, or is it “substrate-independent”? That is, if we replicate the kind of information processing that goes on in human brains in a computer program, will that computer program be conscious? This is also relevant to the “simulation hypothesis”, which claims that we should consider it more likely that we live inside a simulated universe than that we are the privileged few who inhabit the “real” (original) universe. The simulation argument uses pretty basic statistical reasoning to make this point, but in order to get off the ground it must include “functionalism”, a.k.a. the substrate-independence of consciousness, as one of its premises.
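Roughly, the statistical reasoning goes like this (a back-of-the-envelope sketch, not Bostrom’s exact formulation; the symbols $N_{\text{real}}$ and $N_{\text{sim}}$ are just illustrative labels): if $N_{\text{real}}$ is the number of observers in the original universe and $N_{\text{sim}}$ is the number of simulated observers whose experiences are indistinguishable from theirs, then by an indifference principle your credence that you are one of the simulated ones should be

$$P(\text{simulated}) = \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{real}}},$$

which approaches 1 as soon as $N_{\text{sim}} \gg N_{\text{real}}$. But notice that counting simulated minds as observers at all already assumes that simulated information processing can host conscious experience – which is exactly why the argument needs functionalism as a premise.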

Functionalism is simply the notion that what mental states do is more important than what they are made of. Thus, the argument goes, there is nothing special about wetware and all mental states are realisable in other “substrates” (like computers). The physicist Max Tegmark, in making this kind of argument, has pointed out that we shouldn’t think there’s anything special about “machines made of meat”, because, just like machines made of silicon, they are “fundamentally” composed of up-quarks and down-quarks. The “only” difference between the two is the arrangement of the quarks. Like Tegmark, I am a physicalist monist, so I agree with the factual component of his assertion, but I don’t see it as a strong argument for substrate-independence.

Now, don’t get me wrong, I think it’s entirely possible that AGIs will become conscious, I just don’t think the quark argument is particularly relevant. Quite a lot is smuggled into the claim that it’s only the pattern/arrangement of quarks that is different. If we accept that all material things are made of quarks arranged in different patterns, then pattern accounts for rather a lot. If you fully reproduce the pattern of quarks instantiated in a conscious meat machine, you will not only create consciousness, you’ll create meat. So by reducing the “substrate” to quarks, one doesn’t motivate an argument for substrate-independence.

We simply don’t yet know which properties of brains and bodies are important for consciousness. All we know is that consciousness evolves in biological (“meat”) systems as a solution to the design problem of integrating multiple sensory inputs. Some of those sensory inputs are interoceptive – they are not simply the brain’s assimilation of the external world, but also include responses to hormones, blood sugar levels, nociceptors, and the rest. Maybe you don’t need all of that to have consciousness and maybe you do – nobody knows what you can and can’t leave out.

A deep intuition at work behind claims of substrate-independence and functionalism is the belief that we can make “models” of things, typically mathematical/computational models, that actually possess the (salient) properties of the things we are modelling. This is an ancient idea – one name for it is Platonism, but it predates even that venerable Greek. We humans are powerfully motivated to understand things. This is our great evolutionary trump card – the ability to abstract from things to principles so that our knowledge becomes more generalizable and less domain-specific. What an awesome power this is and what an incredible extension of this power the evolution of first verbal, and then mathematical, languages was. I have no desire to trivialise the importance of our ability to model reality in order to understand it better. Heck, consciousness itself is a modelling process. But let’s not get overzealous and confuse map with territory. Maps are a useful guide to territory, but they lack many interesting properties of the things they represent.

In order to know whether consciousness is substrate-independent we’re going to need to know which properties of meat machines are relevant to its emergence. The likelihood seems to be that we are going to create AGI before we fully understand consciousness. Maybe we’ll actually come to a better understanding of consciousness with the aid of AGI, either because it helps us to study conscious meat machines, or because one day it claims to be conscious and we are forced to take its word for it. Personally, I don’t think consciousness is all that mysterious and I’m not a fan of the “hard problem”. Until further notice, however, it seems to be a purely biological phenomenon.

So I guess we’re just going to have to see what happens. Until then, let’s not get too carried away with our assertions of substrate-independence and let’s examine the motivating assumptions behind those assertions. As the philosopher John Searle has pointed out, a computational model of a storm is not wet. If we fully reproduce all the information represented by the arrangement of quarks in a system, we will have recreated that system, not modelled it.

(The painting is Max Ernst’s L’ange du Foyer. I could explain how it’s relevant, but I prefer to let you just enjoy the beautiful art.)

A metaphor (throwback Thursday)

This piece – “A Metaphor” – which I wrote a few years ago, closely relates to the themes discussed in the most recent episode of the podcast (episode 11 – see previous post):

*

We are all in a dark room.

We all have torches.

Torches are tools for seeing.

*

All our torches are fundamentally the same, but they have different batteries.

Batteries are tools for thinking.

Our choice of batteries affects the brightness of our torches.

*

The beams of our torches can be focussed or diffuse.

The more we focus our beam the brighter it becomes.

The brighter the beam, the more clearly we see what we are looking at.

The more clearly we see what we are looking at, the less we see everything else.

The more diffuse our beam, the more we see.

The more we see, the less clearly we see it.

*

The room is crowded.

We can’t see beyond the width of our torch beam.

We can’t see anyone else’s torch beam.

We often bump into each other.

Bumping into each other is an unfortunate accident.

*

The room’s darkness is not absolute.

If we switch our torches off our eyes can adjust.

If we let our eyes adjust we can see everything, dimly.

*

Get the best batteries you can.

Vary the width of your beam constantly.

Switch off your torch for a while every single day.

*