ANIL SETH ON THE HARD PROBLEM OF CONSCIOUSNESS

By Tony Sobrado

***

The Montréal Review, July 2024
Tony Sobrado: So the ‘hard problem of consciousness’ can be thought of in two ways that essentially overlap: why does a physical brain give rise to subjective, first-person qualitative experiences, or why are some physical things, like brains, conscious while other physical things, like rocks, are not? The issue is experience. Would you agree?

Anil Seth: So yeah, I like that definition. How could any kind of physical processing give rise to any kind of experience, any rich inner life at all? It seems objectively unreasonable that it should, and yet it does. So I like this formulation of the hard problem. It presupposes a view of the universe in which there is stuff - a materialist starting point - and then asks how and why consciousness is part of that picture. To me, that includes the second version of the way you put it: okay, then how and why is it that some things are conscious and other things are not? And of course, opinions on that will vary depending on your metaphysics.

Tony Sobrado: So it is essentially metaphysics, so to speak, and obviously ontology. It makes some initial conceptual sense, and possibly logical sense, but I would like you to expand on how big a problem it actually is. For example, we can get into elements of identicalism, because that has problems for causation as well. But how big is the ‘hard problem’ of consciousness really? That is what I want your immediate thoughts on. Do you really need explanatory bridge principles that close explanatory gaps, as Joseph Levine famously called for? Neuroscientists will say that in a lab like yours we have mechanisms showing that if you manipulate brain state x, it affects consciousness - and that's fine, but that's the ‘easy problem of consciousness’, so to speak.
But someone like Joseph Levine will say: well, okay, pain is brain state x, but we need a bridging principle that explains why this brain state x gives rise to the subjective experience of pain. What do you think of that argument?

Anil Seth: It depends on what you mean by a bridging principle. I certainly think that brute correlations are not enough. My own view is that the hard problem, as you say, makes sense conceptually, logically, and probably nomologically, if we're being technical about it. But whether it makes sense pragmatically is another question. There are other examples of seemingly intractable problems that have become less intractable and faded away. A typical example is vitalism in biology: something that seemed incompatible with the materialistic worldview became compatible with it, and that required two things. First, it required recognizing that the explanatory target is not just one big scary mystery - is it alive, or is it not alive? - because life has a lot of interesting properties. Second, it required certain kinds of bridging principles: ways of explaining, predicting, and controlling these properties on the basis of physics and chemistry, in a generally pragmatic way. So here's a gambit for how best to understand consciousness. It's true that consciousness seems to be the kind of thing that is difficult, if not impossible, to explain in terms of physical events. But this could be for a number of reasons: perhaps we don't have the right explanation yet, or we don't understand matter well enough. Pragmatically, the sensible thing to do is to try to go beyond brute correlation towards explanation, rather than addressing the hard problem head-on. The idea is to establish - theoretically and experimentally - systematic links between material properties and phenomenological properties that have explanatory grip, predictive power, and that we can potentially control and have interventional power over.
This is what I call the real problem of consciousness, and the hope is that as we do this, the sense of how big a mystery the hard problem of consciousness really is will change and diminish, and maybe even disappear altogether. The hard problem might not be solved - it would instead be dissolved.

Tony Sobrado: I like having different interpretations of the bridging principle - that's a good point. Your framework depends on how we define a bridging principle, which you've already partially addressed. At what point do we stop using these overarching bridging principles and settle for something that is sufficient in terms of explanation and prediction - which we will discuss shortly - and ultimately in terms of causation and intervention? But for now, tell me what you think the ‘real problem of consciousness’ is and how it works in explaining consciousness in general, so that we can demystify some of the elements thrown up by the hard problem of consciousness and the explanatory bridge principles.

Anil Seth: I think you can go after the hard problem from two directions. You can try to go beyond brute correlations and make these bridging principles more substantial. Or you can go at it from the other direction, by questioning our intuitions about what would constitute a satisfactory explanation of consciousness. On this second direction, David Chalmers has discussed what he calls the meta-problem of consciousness: why we think there's a hard problem of consciousness in the first place. I think that's a really interesting question, because I suspect that one reason we have the intuition that there's a hard problem is that we're trying to explain 'us'. We are trying to explain the only thing we have direct acquaintance with, which is experience (of course some philosophers will disagree that ‘direct acquaintance’ makes sense).
Anyway, we certainly have a different kind of relationship with experience than we do with other things. If we're trying to explain how a black hole works, we don't expect it to make intuitive sense to us, because we don't have intuitive acquaintance with what a black hole should be like - but we do have acquaintance with consciousness. This means that we may implicitly apply different criteria to what would count as a successful scientific explanation of the phenomenon. Putting things this way brings me a little closer to what's called an illusionist position on consciousness. Indeed, people I know and respect keep trying to tell me that I should just accept that I'm an illusionist, and I resist this, because illusionism is interpreted in different ways. The strong form holds that we're simply wrong that there's anything special about consciousness: once we've explained function, disposition, and behavior, that's it - that's all there is to explain. But there are also weaker versions that I find a little more appealing, on which consciousness exists, but it's not what we think it is. I think this perspective is very likely to be both true and useful. For example, we have already changed our views on what the self is.

Tony Sobrado: The illusionist school is also very interesting in terms of, as you say, interpretation. I too would put you in the bracket of the weaker form of illusionism, just from what I've seen of your work. Daniel Dennett is obviously the pioneer of illusionism, and these days we have Andy Clark and Keith Frankish as well. Their overall framework is that we misrepresent subjective experiences as having some kind of ‘likeness’ quality - phenomenological properties - which I find a bit extreme.
But the weaker form, which I'm aligned to and which bits of your work seem to map onto as well, targets the idea of a unified ‘Cartesian Theatre’, as Dennett described it - something that does not actually exist. Tony or Anil does not sit there in an observational seat watching the world go by. Instead we are a composite of different parts of consciousness, and conscious experience is actually integrated, not unified and complete - the ‘oneness’ is the illusion. That's the weak form of illusionism as I see it, and the one I agree with.

Anil Seth: Exactly. Another example of this perspective is how we think about free will. If we think that in order to explain free will we have to take some kind of libertarian view, where conscious experiences have real causal power in the sense that they make things happen at the micro level that wouldn't happen otherwise, then I think we get nowhere. Part of the challenge in explaining free will is to understand that that's not the right question to ask. Free will is not an uncaused cause. It's a particular way of experiencing action that is generated within the brain and the body. But then some people say: "Oh, you're changing the rules of the game. Now you're explaining something other than the real phenomenon and moving to illusion." Those kinds of approaches are always susceptible to that. It's like, "Oh, you're not really explaining consciousness" or "You're not really explaining free will," at which point I would reply, "Well, actually, I am, because of ABC."

Tony Sobrado: I completely agree with that sentiment, and it goes back to what you said about the meta-problem of consciousness. Why is consciousness a problem? There's something bizarre about why it seems strange to humans - we won't get into the zombie argument here.
But why do we humans, with our own consciousness and internal sensations, think there's something strange about consciousness arising from physical matter? Some of your writings pick that up as well. Here I agree with you that you don't need an explanation of why things are conscious - no more than you need an explanation of why, in physics, entropy dictates that closed systems tend towards disorder. And in cognitive science there's functionalism and evolution, where consciousness serves an evolutionary function: consciousness allows the organism to exist and survive, so we have consciousness because it helps the species survive. But to ask why there is consciousness at a meta level is no different from asking where the laws of nature come from - and then you get into elements of determinism and brute facts. In the same way, you don't need to ask why some things are conscious. But back to your work in the lab: the best way to proceed pragmatically, using scientific models, is to try to figure out how we map our subjective experiences onto ontological mechanisms of the brain. Expand on that.

Anil Seth: The way we try to do this is, first, to distinguish different aspects of consciousness that we might want to explain. These are aspects that can be treated as heuristically separate, though they are not necessarily ontologically separable. One category can be described as global changes in conscious state: sleep, anaesthesia, and so on. Changes in global state are relatively easy to study - we can, for example, track what happens when you go under general anaesthesia. Most people would accept that this is an important difference in terms of consciousness, because you have a system that is conscious in one state but not in another. The challenge is that many things in the brain and body change in transitions like these, not just consciousness.
This means that merely establishing correlations will not be enough. Instead, following the real problem approach, we have to find processes in the brain (and body) that explain and help predict (and control) global differences in the level of consciousness. How to do this? One promising way is to ask what features are common to every conscious experience. Here I go back to people who inspired me a long time ago, like Gerald Edelman and Giulio Tononi, who argued that every conscious experience is unified, and every conscious experience is also informative. Well, if that's the case, then the dynamics and mechanisms of the brain should also have these properties during conscious states, but maybe not during unconscious states. So now we've moved beyond mere correlations, because the concepts of unity and informativeness - when suitably made precise and mathematically applicable - are now doing some explanatory work. We're identifying a kind of homology between a brain property and an experiential property - a bridging principle, to go back to what we were saying earlier. So that's one. The second is conscious content. When we're conscious, we're conscious of something: we're experiencing a multi-modal perceptual scene. Experimentally, conscious content can be manipulated by psychophysical methods so that sometimes you're conscious of a stimulus, while at other times you're not. Perceptual illusions and the like can also be useful here in teasing apart sensory information from perceptual experience in interesting ways. The question here is how we best understand why a particular conscious experience is the way it is, and not some other way. My preferred approach here is the predictive brain - also known as predictive processing, and active inference, among other things.
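As a toy illustration of the predictive-processing idea (a deliberately minimal sketch, not a model from Seth's lab; the function name and parameter values are illustrative assumptions): a single estimate of a hidden cause is revised by gradient descent on precision-weighted prediction errors, so that 'perception' settles on a compromise between prior expectation and sensory evidence.

```python
def perceive(x, prior, pi_sensory=1.0, pi_prior=0.5, lr=0.1, steps=200):
    """Toy one-variable predictive coding: iteratively revise the estimate mu
    of a hidden cause to reduce precision-weighted prediction errors against
    the sensory sample x and the prior expectation. Illustrative sketch only."""
    mu = prior  # start from the prior expectation
    for _ in range(steps):
        err_sensory = x - mu    # bottom-up prediction error
        err_prior = mu - prior  # deviation from the prior belief
        # gradient step on the (precision-weighted) squared-error objective
        mu += lr * (pi_sensory * err_sensory - pi_prior * err_prior)
    return mu

# The estimate settles on a precision-weighted compromise between prior and data:
# with x = 2.0 and prior = 0.0 it converges to (1.0 * 2.0) / (1.0 + 0.5) = 4/3.
```

Raising `pi_prior` relative to `pi_sensory` pulls the settled estimate toward expectation rather than evidence - a crude cartoon of the expectation-versus-input trade-off that predictive-processing accounts emphasize.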
This framework does not tell you why a thing is conscious, but it has the potential to tell you how a particular conscious experience is underpinned by brain dynamics, in a way that can do explanatory work: why the experience of redness is different from the experience of pain, for example. And that brings us to the third set of aspects of consciousness, which is the self. I like to think of the self as a subset of conscious content. What a conscious experience is at any given time typically involves experiences of an external world and also of being a self within that world. Although strictly a subset, it's a very distinctive subset, so I treat it heuristically as a distinct set of aspects. The weak illusionist position here is to say that the self is not the "thing" that's doing all this experiencing; rather, it's another kind of perceptual experience. In my view, the self is a collection of perceptual experiences underpinned by neural predictions - predictions of a certain kind that are rooted in control and regulation of the body. This perspective lets us do all kinds of experiments. For example, we can manipulate sensory feedback about the body and see how that changes the experience of ownership of the body. And we can construct ideas about emotions that see them in terms of predictions about sensory signals coming from the body - a view of emotion as ‘interoceptive inference’ that I started talking about just over ten years ago.

Tony Sobrado: So describing different elements of consciousness, and then seeing how mechanisms in the brain can serve as explanatory and predictive models for those different types, is very important when we talk about what kind of bridge principle is needed.
You've talked about how one type is sufficient, and many people would agree - but someone like Joseph Levine would want to know why the fibers of neural state x cause the experience of pain, or whether a causal model is in itself sufficient. This is what I want to address now. If consciousness and mind aren't entirely physical, then we have big problems to overcome, given the many kinds of interventions and manipulations of brain states and their effects on consciousness. Interventionist theories of causation and predictive models of causation, for example, would say that the brain causes consciousness. So my real question is this: since causation is so important here, as the work in neuroscience presents it, how do you feel about elements of identicalism, and does it raise methodological and metaphysical problems? Your work is really about mechanisms that cause, which is quite different from identicalism. Then again, general causation could be compatible with a weak form of identicalism, which would say that ultimately it's just brain states that cause consciousness anyway, via causal chains; whereas strong forms of identicalism don't really require causation or causal chains at all, because mental state x just is brain state x, in an aggressively reductionist approach. But your work, and other work in neuroscience, does involve causation. This is an ontological dilemma in terms of the mental being married to the physical. We both agree that the mental exists, whereas people like Daniel Dennett, Patricia Churchland, and the eliminativists would say, "No, there's no real mental," so to speak. So from your point of view, does identicalism cause any problems for the overlap of causal models in neuroscience and how that relates to consciousness?

Anil Seth: No, I don't think so. One of the problems here is this idea of causation applied to the relationship between consciousness and the brain.
I think that can be very problematic, because if you take causation in its standard sense, you're already implying a kind of dualism. So I don't tend to talk in terms of the brain causing consciousness, because, as we were discussing earlier, you can then end up asking the wrong questions. This is very clear in the context of free will, where one might be tempted to say something like: "Okay, if the brain causes consciousness, then free will is an aspect of consciousness that causes things in the brain." I think that's the wrong approach. I tend to practice agnosticism about that relationship. To answer your question more directly: while I wouldn't want to specifically endorse identity theory, what I'm doing at least seems compatible with something like it - perhaps a kind of neutral monism, where the mental/conscious and the physical are different aspects of a single underlying reality. Causation comes up in this picture in a couple of ways. One is as a very useful epistemological tool to refine the systematic bridging principles that we're looking for. Remember the ‘real problem’ framing: explain, predict, and control. If brain states can explain, predict, and control phenomenological states, then we are in business. Causation then just becomes a way of establishing or testing these claims about prediction and control. We can use interventional methods - like, as you say, optogenetics or brain stimulation - or analytic methods, like Bayesian causal modelling, to establish causal relationships. As far as I can see, this is a metaphysically neutral approach, and it is the normal way I would think about causation in this framework. The other way in which causality comes into the picture for me is best illustrated by work I've been involved in with colleagues like Adam Barrett, Fernando Rosas, Pedro Mediano, and Lionel Barnett. This work has to do with emergence - another tricky term.
People often talk about emergence in terms of irreducibility. And indeed, a commonly proposed example of emergence as irreducibility is consciousness itself - in the hard problem sense. But there's also a weaker and very interesting sense of emergence: in some systems, it just is the case that the whole is more than the sum of the parts, though not in any spooky way that contravenes ontological reducibility - the whole is still made of the parts. We can also think about consciousness and emergence in this weaker sense.

Tony Sobrado: Causation - that's half the problem here, because causation is so metaphysically loaded, from the days of David Hume to the way it's practiced today in different scientific fields. I had this conversation with Donald Hoffman about the different elements of causation and how they become problematic in cognitive science. You just reeled off a whole bunch of cases where you could say in one sense that the brain causes consciousness, but you're also a little reluctant to say how much causation is involved. Some philosophers of causation would say that as soon as you start talking about 'predictability' - going back to Karl Popper - or about 'manipulability' and 'intervention', then you're already talking about causation, because it can't be anything else: if you can manipulate it, it must be a causal property. Then the second thing is emergence. Top-down causation, which is basically like a feedback loop going one way or the other, is very interesting in general and also for consciousness. What I find interesting about emergence, in terms of the output for consciousness, is the claim that the whole is more than just the sum of the parts. I think that's fine; a lot of people would agree with that.
But ultimately - as someone who has discussed this with philosophers of physics like Craig Callender - I think that even if the output is more than the sum of its parts, that kind of emergence still can't escape a form of reductionism, because at the end of the day it all supervenes on physical properties anyway. But yes, causation is obviously very loaded and complex, and so are the different meanings of emergence.

Anil Seth: In my opinion, any version of emergence that violates supervenience is not the kind of emergence we should be looking for. I am keen to characterise the kind of emergence that is consistent with things or processes existing at higher levels of description, and with these things having causal and explanatory power, while also being consistent with the complete absence of magic. There's something real about a table as a higher-level, macroscopic description of a bunch of atoms. The table is a thing in and of itself, but it remains consistent with physics at other levels of description. One problem here is the philosophical tendency to identify opposite poles of a debate and stick with them: something is either emergent or it's not. But many things are too vague for sharp distinctions of this kind, which is where mathematics and computational modelling can really help. Can we come up with an operational measure of emergence that is precise and graded? If one can do this in a way that's philosophically non-naïve, then I think it delivers a very useful advance: there is then a reasonable way to talk about emergence and downward causation that is testable and can help explain phenomena in the natural world. As one example, I'd offer the measure of ‘dynamical independence’, based on transfer entropy, developed by and with my colleague Lionel Barnett.
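Transfer entropy, on which dynamical independence is based, has a compact definition: the reduction in uncertainty about the next value of one process contributed by the past of another process, over and above the first process's own past. A plug-in estimate for binary sequences with one step of history might look like the following (an illustrative sketch; the function name and toy data are assumptions, and real neural data would need binning or model-based estimators):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy x -> y in bits, with one step of history,
    for binary sequences: how much the past of x reduces uncertainty about
    the next value of y beyond what y's own past already provides."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # (y_next, y_past, x_past)
    pairs = Counter(zip(y[1:], y[:-1]))            # (y_next, y_past)
    y_past = Counter(y[:-1])
    yx_past = Counter(zip(y[:-1], x[:-1]))
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / yx_past[(y0, x0)]         # p(y_next | y_past, x_past)
        p_self = pairs[(y1, y0)] / y_past[y0]  # p(y_next | y_past)
        te += (c / n) * np.log2(p_full / p_self)
    return te

# Toy check: y is a one-step-delayed copy of a random binary stream x,
# so the past of x carries about one full bit about y's next value,
# while the reverse direction carries essentially nothing.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.concatenate(([0], x[:-1]))
```

For jointly Gaussian processes, transfer entropy is equivalent to Granger causality up to a constant factor, which connects this information-theoretic quantity to the prediction-based measure discussed below.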
Then, finally, on causation: my work over the years has been somewhat related to the work of the economist Clive Granger on Granger causality in statistics. This is not really about causality in the sense of physical causation; it's about prediction. If you have two time series, and time series A helps you predict the future of time series B better than knowing the past of B alone, then there is a statistical, predictive sense in which A ‘Granger-causes’ B. Importantly, this is not equivalent to physical causation. It's more like a kind of asymmetric correlation. Normally, when you say things are correlated - A correlates with B, and B correlates with A - it's symmetric, the same thing either way. But Granger causality is a measure of functional connectivity which, unlike correlation, is asymmetric. One of the things we did years ago was to prove that Granger causality is equivalent to transfer entropy in information theory, for the Gaussian case. This means that Granger causality can be thought of as a measure of information flow, and its relationship to causality as a concept is very subtle.

Tony Sobrado: That's a good way to put it - in terms of the gradient of causality. The Granger model of causality specifically is more of an epistemology of causation, in terms of prediction, rather than causation ontologically. It's a prediction theory - a way of getting a likely answer for something - not a claim about the metaphysics of causation.

Anil Seth: Absolutely. An example would be: if in the physical world A is connected to B with a little delay, then you will see Granger causality from A to B, and there may indeed be physical causality. But of course it could instead be that there is a common cause, C, of both A and B, and if you don't measure C you will still see Granger causality between A and B.
But now if you intervene on A, nothing will happen in B, because of the common cause. So even in very simple situations you can set things up to show that these two things - Granger causality and physical causality - are related but not the same, and you will go wrong if you mistake one for the other.

Tony Sobrado: That's a very good point. James Woodward, the philosopher of science, is a major thinker on interventionist theories of causation and on causation in general, and he has a great way of putting some of these sentiments: you can use variables statistically in the data sets of economists, neuroscientists, biologists and so on, and say that they explain something; but whether they actually establish causation is quite a different matter, which also comes down to how we use statistical methods.
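As a closing illustration, the comparison Seth describes - Granger causality as a restricted-versus-full prediction contest, and the common-cause pitfall - can be sketched numerically (a toy sketch with assumed autoregressive series and a single lag, not an analysis from the work discussed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def granger(x, y):
    """One-lag Granger causality x -> y: log ratio of the residual variance
    of predicting y[t] from y[t-1] alone (restricted model) versus from
    y[t-1] and x[t-1] together (full model)."""
    yt = y[1:]
    ones = np.ones(len(yt))
    restricted = np.column_stack([ones, y[:-1]])
    full = np.column_stack([ones, y[:-1], x[:-1]])
    res_r = yt - restricted @ np.linalg.lstsq(restricted, yt, rcond=None)[0]
    res_f = yt - full @ np.linalg.lstsq(full, yt, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

# Case 1: A genuinely drives B with a one-step delay.
a = rng.standard_normal(n)
b = np.zeros(n)
for t in range(1, n):
    b[t] = 0.8 * a[t - 1] + 0.3 * b[t - 1] + 0.5 * rng.standard_normal()
# granger(a, b) comes out large; granger(b, a) comes out near zero -
# an asymmetric relation, unlike ordinary correlation.

# Case 2: an unmeasured common cause C drives A (lag 1) and B (lag 2).
c = rng.standard_normal(n)
a2, b2 = np.zeros(n), np.zeros(n)
a2[1:] = c[:-1] + 0.1 * rng.standard_normal(n - 1)
b2[2:] = c[:-2] + 0.1 * rng.standard_normal(n - 2)
# granger(a2, b2) is also large, even though intervening on A would do
# nothing to B: Granger causality is a predictive relation, and only
# coincides with physical causation when confounders are accounted for.
```

Case 2 is exactly Seth's point: the past of A predicts B only because both echo the unmeasured C, so prediction succeeds where intervention would fail.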
***