Having a career in science…and what is “science” anyway?

Episode 15 of the Sympathetic People podcast, in which we discuss careers in science and ask questions about what exactly “science” is anyway.

It’s undeniably a problem that scientific research is underfunded. PhD students represent cheap labour for laboratories and universities, but how many of them can find a job when they complete their degrees? How many young researchers ever achieve job security? How does science stack up against other career paths in these respects?

Another core issue we discuss is what “science” really is. Many people seem to believe that science is fundamentally different from the arts and humanities, but “science” is just an outgrowth of empiricism. Empiricism is a branch of epistemology and epistemology is a branch of philosophy – science isn’t clearly separable from philosophy, it’s part of it! Not only this, but differences between the sciences and the arts, or between science and ancient traditions of myth-making and storytelling are greatly exaggerated.

Enjoy, subscribe, like (if you do), and please comment below with your thoughts!

Ouroboros painting by Genevieve Jackson.

Is consciousness substrate-independent?

Everybody seems to be talking about artificial intelligence. That’s appropriate, I’m sure, because AI is already having a big effect on our lives and its impact is only going to increase. AI researchers and climate scientists are currently vying with each other for the distinction of being members of the field addressing the “most important issue facing humanity today”. My only rejoinder to that is that since all things evolve, all fields of scientific investigation are subsets of evolutionary theory (and that includes physics). Reductionists may put them apples in their pipes and smoke them….so there, nee-nah nee-nah, etc.

Now that we have settled (once and for all) the important question, namely the hierarchy of scientific fields, we can get on to considering whether artificial general intelligences (or superintelligences) are likely to be conscious. Nested inside this question is another – is consciousness solely a property of biological wetware, or is it “substrate-independent”? That is, if we replicate the kind of information processing that goes on in human brains in a computer program, will that computer program be conscious? This is also relevant to the “simulation hypothesis”, which claims that we should consider it more likely that we live inside a simulated universe than that we are among the privileged few who inhabit the “real” (original) universe. The simulation argument uses pretty basic statistical reasoning to make this point, but in order to get off the ground it must include “functionalism”, a.k.a. the substrate-independence of consciousness, as one of its premises.

Functionalism is simply the notion that what mental states do is more important than what they are made of. Thus, the argument goes, there is nothing special about wetware and all mental states are realisable in other “substrates” (like computers). The physicist Max Tegmark, in making this kind of argument, has pointed out that we shouldn’t think there’s anything special about “machines made of meat”, because, just like machines made of silicon, they are “fundamentally” composed of up-quarks and down-quarks. The “only” difference between the two is the arrangement of the quarks. Like Tegmark, I am a physicalist monist, so I agree with the factual component of his assertion, but I don’t see it as a strong argument for substrate-independence.

Now, don’t get me wrong, I think it’s entirely possible that AGIs will become conscious, I just don’t think the quark argument is particularly relevant. Quite a lot is smuggled into the claim that it’s only the pattern/arrangement of quarks that is different. If we accept that all material things are made of quarks arranged in different patterns, then pattern accounts for rather a lot. If you fully reproduce the pattern of quarks instantiated in a conscious meat machine, you will not only create consciousness, you’ll create meat. So by reducing the “substrate” to quarks, one doesn’t motivate an argument for substrate-independence.

We simply don’t yet know which properties of brains and bodies are important for consciousness. All we know is that consciousness evolves in biological (“meat”) systems as a solution to the design problem of integrating multiple sensory inputs. Some of those sensory inputs are interoceptive – they are not simply the brain’s assimilation of the external world, but also include responses to hormones, blood sugar levels, nociceptors, and the rest. Maybe you don’t need all of that to have consciousness and maybe you do – nobody knows what you can and can’t leave out.

A deep intuition at work behind claims of substrate-independence and functionalism is the belief that we can make “models” of things, typically mathematical/computational models, that actually possess the (salient) properties of the things we are modelling. This is an ancient idea – one name for it is Platonism, but it predates even that venerable Greek. We humans are powerfully motivated to understand things. This is our great evolutionary trump card – the ability to abstract from things to principles so that our knowledge becomes more generalizable and less domain-specific. What an awesome power this is and what an incredible extension of this power the evolution of first verbal, and then mathematical, languages was. I have no desire to trivialise the importance of our ability to model reality in order to understand it better. Heck, consciousness itself is a modelling process. But let’s not get overzealous and confuse map with territory. Maps are a useful guide to territory, but they lack many interesting properties of the things they represent.

In order to know whether consciousness is substrate-independent we’re going to need to know which properties of meat machines are relevant to its emergence. The likelihood seems to be that we are going to create AGI before we fully understand consciousness. Maybe we’ll actually come to a better understanding of consciousness with the aid of AGI, either because it helps us to study conscious meat machines, or because one day it claims to be conscious and we are forced to take its word for it. Personally, I don’t think consciousness is all that mysterious and I’m not a fan of the “hard problem“. Until further notice, however, it seems to be a purely biological phenomenon.

So I guess we’re just going to have to see what happens. Until then, let’s not get too carried away with our assertions of substrate-independence and let’s examine the motivating assumptions behind those assertions. As the philosopher John Searle has pointed out, a computational model of a storm is not wet. If we fully reproduce all the information represented by the arrangement of quarks in a system, we will have recreated that system, not modelled it.

(The painting is Max Ernst’s L’Ange du Foyer. I could explain how it’s relevant, but I prefer to let you just enjoy the beautiful art.)

A metaphor (throwback Thursday)

This piece, “A Metaphor”, which I wrote a few years ago, closely relates to the themes discussed in our most recent podcast episode (11 – see previous post):


We are all in a dark room.

We all have torches.

Torches are tools for seeing.


All our torches are fundamentally the same, but they have different batteries.

Batteries are tools for thinking.

Our choice of batteries affects the brightness of our torches.


The beams of our torches can be focussed or diffuse.

The more we focus our beam the brighter it becomes.

The brighter the beam, the more clearly we see what we are looking at.

The more clearly we see what we are looking at, the less we see everything else.

The more diffuse our beam, the more we see.

The more we see, the less clearly we see it.


The room is crowded.

We can’t see beyond the width of our torch beam.

We can’t see anyone else’s torch beam.

We often bump into each other.

Bumping into each other is an unfortunate accident.


The room’s darkness is not absolute.

If we switch our torches off our eyes can adjust.

If we let our eyes adjust we can see everything, dimly.


Get the best batteries you can.

Vary the width of your beam constantly.

Switch off your torch for a while every single day.


To what extent do we perceive the world through words?


Episode 11, in which we discuss Carlos Castaneda, Buddhism, Karl Popper, science, fallibilism, magic, reality, and all such fun stuff!

To what extent do we perceive the world through words? Can we escape our verbal preconceptions and experience things “as they really are”? Can we do so and remain functional? We agree that it is healthy and productive to “get outside our selves” and attempt to perceive the world in a way less filtered by our preconceptions, but is all perception ultimately theoretical? Haven’t we evolved to perceive “affordances” – things we can use?

We hope you enjoy the discussion – please leave feedback in the comments section, and subscribe on iTunes.

What is the meaning of life?

Episode 10! In a decimal system that is surely a milestone; in any other system of counting maybe not, but we’ll take it!

Tim reads from Sean Carroll’s wonderful book “The Big Picture“, and the intrepid evolutionary biologists (your hosts) discuss the meaning of life, conflict between science and religion on this topic, meditation, and a bunch of other stuff. It’s fun!

Spoiler alert – we don’t really know what the meaning of life is, but as Bill Fay (he’s a musician, check him out) says “like my old dad said, life is people“…..

Listen! We compel you to do so and hope you enjoy doing so. Subscribe on iTunes! Rate us on iTunes! Comment on the blog! Use exclamation marks excessively!!!

Thanks 🙂

What it’s like to be a philosopher

A: I’ve been thinking about your definition of consciousness. You said it was an “affordance-seeking predictive engine”, which really wasn’t very helpful.

B: Sorry…

A: All good, you did say a bunch of other stuff that was interesting…

B: I do try my best.

A: Well, that’s all I ask. Aaaaaaanyway, I was listening to a podcast about consciousness and the host discussed another definition. I’m sure you’ve probably heard it before, but it seemed intuitively correct to me, so I’m wondering if you have any comments.

B: Fire away.

A: OK, so they said that consciousness is just “what it’s like to be something”. Like, what it’s like to be me is my consciousness, what it’s like to be….

B: A bat?

A: Yeah, they did use that example. They credited it to some philosopher.

B: Thomas Nagel. But really I think it’s just an extension of Descartes.

A: Slow down…

B: Sorry. Nagel wrote a famous and influential essay called “What is it like to be a bat?”. He argued that if it is like something to be something, meaning that something has subjective states, then that something is conscious.

A: Stop saying “something” – what is the relevance of bats?

B: Nagel thought that they were an interesting example because they perceive the world with different senses from us.

A: Ah, like echolocation.

B: Yep – he thought that thinking about the difference between the way a bat perceives a moth with echolocation and the way we do with our eyes would make the difference between the subjective and the objective clear.

A: Right – the moth is objective but our perceptions of it are subjective.

B: That’s the idea. And I agree, it seems intuitive and innocent enough.

A: But?

B: But….well, there are multiple “buts” actually.

A: Are they big?

B: What?

A: Are these big butts? I hope so, I cannot lie…

B: Grow up. Philosophy is serious business.

A: No wonder it’s so tedious. OK OK, go on.

B: It’s not uncommon to hear people who like this definition add that consciousness is also the one thing we can really “Know” (with a capital K, mind you) exists.

A: Actually yeah, that’s exactly what they said….

B: Well it’s part of Nagel’s thesis, which is basically Subjectivism – the only thing we can really know is our own subjective experience. And that is really just a step from Solipsism – “the only thing that exists is my own consciousness”. And this argument about “what it’s like” is also tied up with the idea that consciousness is immaterial, a “mental phenomenon”, that can’t be reduced to a scientific, physical theory. There’s so much philosophical baggage here I hardly know where to start.

A: Well I don’t think all that applies here really, the guy I heard this from is very scientific, he doesn’t even believe in free will….

B: Uh oh, he’s almost certainly a closet dualist in that case. That line of reasoning, even though it might not be explicit, goes something like: “the physical universe is a closed causal system, but our conscious thoughts, or our awareness of them, are clearly mental, thus not physical. We don’t know how the mental could possibly interact with the physical, therefore our thoughts are acausal and we don’t have free will, QED.” A lot of this goes back to Descartes and problems with his thesis.

A: That’s the second time you’ve mentioned him.

B: Well, he has a lot to answer for! Probably his most famous claim, maybe the most famous claim in all of Western Philosophy, is “cogito ergo sum” – I think, therefore I am. I’ve always liked to parody this as “cogito ergo inconditus” – I think, therefore I am confused.

A: Ha. I see that philosophy doesn’t ban humour altogether, it’s just that philosophers aren’t very good at it.

B: Anyway. Descartes’ philosophy was deeply dualistic – he divided the world into “res cogitans”, mental stuff, and “res extensa”, physical stuff. That’s not exactly unique in itself of course because the majority of people in 17th Century Europe were dualists, and he was just formalising that in his own way. There were other options available to him, mind you, there have been many monist philosophers throughout history in both the West and the East, and even Descartes’ correspondent, the Princess….

A: Is this turning into a history lesson or what?

B: Sorry. His claim “cogito ergo sum” was the end result of his experiment with skepticism – he wanted to discover if he had any certain knowledge that he could ground the rest of his philosophy on, and he came up with this Matrix-style thought experiment…

A: Hey cool, the Matrix is cool!

B: Yeah, but not very original…

A: Hold up. You’re telling me Descartes thought about dodging bullets, jumping from building to building and learning martial arts from a computer?

B: No.

A: Soooooo the Matrix is original!

B: My mistake. What Descartes imagined was that there might be an evil demon systematically deceiving his senses. Maybe the world he thought he was perceiving didn’t exist at all, but was just one big deception.

A: That is kinda like the Matrix…

B: And he therefore concluded that the only thing he could really know was that he was perceiving something, but that he couldn’t know what that something really was. So his only certainty was the fact of his consciousness itself. He later decided he was certain that God existed and a bunch of other stuff, but you get the gist.

A: Ah. So what’s wrong with “I think, therefore I am” exactly?

B: Well, we might start by inverting it – “I am, therefore I think”. What Descartes is trying to establish is something fundamental, something given, a foundation from which reasoning can begin. But his choice is arbitrary and egocentric really. He implies that the world is in his head, but he could just as easily have concluded that his head was in the world.

A: Eh?

B: He might have said that the only thing he could be certain of was that there is a world. He might be deceived about that world – maybe it’s just him and the evil demon hanging out in the void and all the rest is illusory, but nonetheless there is a world and he is in it.

A: I’m not convinced that makes much difference.

B: It’s subtle, but it’s like Chaos theory – extreme sensitivity to initial conditions. This very subtle difference in the choice of foundations for a system of philosophy can have a profound impact on the conclusions reached “higher up” the chain of reasoning. Ultimately it’s not surprising Descartes went the way he did because he believed in an immortal soul that was separate from the physical world. He was begging the question – his choice of “foundations” was really constrained by his higher level beliefs from the get go and his exercise in skepticism was purely a technical exhibition.

A: Uh huh. It still makes intuitive sense to me though.

B: Well of course, yeah – it does to most Westerners, which is what I mean by “philosophical baggage”. Our dominant philosophical heritage is dualism, and Descartes himself is a big part of that – his ideas make sense to you because you were raised in a philosophical tradition influenced by his ideas (and the ideas that influenced his ideas).

A: If you say so. I’m not sure you’ve successfully linked this to the “what it’s like” definition yet.

B: That definition comes directly out of this dualist tradition. It is an earlier version of the so-called “hard problem” of consciousness, the question of why we have any subjective experience at all….

A: That problem sounds hard, let’s deal with one at a time please.

B: It’s the same problem really, but OK. Aside from prompting us to do silly things like imagining the subjective experience of a bat from inside the subjective experience of a human (which is allegedly all we can know, remember), the issue is that it privileges subjective experience in the first place. Nagel does a switcheroo when he substitutes “what it’s like to be a bat” for “what a moth is like for a bat”. Like the “hard problem”, this presupposes that there is some core kernel of subjectivity, of consciousness, that is distinct from the contents of consciousness. It also connects with Descartes’ claim that this core kernel is the only thing we can be sure of, since we can be fooled about any of the contents of consciousness. But the thing is, the claim that there is any consciousness without contents is simply assumed!

A: Hmmmmmmmmm. You seem a bit excited about this.

B: You asked – I don’t like the “what it’s like” definition because it’s the same as the hard problem, which is a pseudo-problem that ultimately derives from a Cartesian split of the world into the mental and the physical. I admit that at the level of the definition itself this might not seem obvious, but I think it can lead to a lot of sloppy thinking further down the line.

A: Well, I like it and I think you philosophers get your knickers in a twist about silly things, which is why people prefer watching cute cat videos on YouTube to studying philosophy… check this one out, for example.

B: Pass.

Are gender and sexuality social constructs?

Episode nine of the Permanent Evolution Podcast, a discussion about gender and sexuality and whether or not these are merely socially constructed categories. Do we need labels to function as part of social groups, as part of societies? Words are tools, but what happens when we give them too much power by using them to define our identities?

We speak over each other a couple of times during this episode – sorry, we’re working on that! We’re also working to improve the sound quality. We hope you’re enjoying the content! Please subscribe on iTunes if so.

Thanks for listening!