Assigning less than 5% probability to ‘cows are moral patients’ strikes me as really overconfident. Ditto, assigning greater than 95% probability. (A moral patient is something that can be harmed or benefited in morally important ways, though it may not be accountable for its actions in the way a moral agent is.)
I’m curious how confident others are, and I’m curious about the most extreme confidence levels they’d consider ‘reasonable’.
I also want to hear more about what theories and backgrounds inform people’s views. I’ve seen some relatively extreme views defended recently, and the guiding intuitions seem to have come from two sources:
(1) How complicated is consciousness? In the space of possible minds, how narrow a target is consciousness?
Humans seem to be able to have very diverse experiences — dreams, orgasms, drug-induced states — that they can remember in some detail, and at least appear to be conscious during. That’s some evidence that consciousness is robust to modification and can take many forms. So, perhaps, we can expect a broad spectrum of animals to be conscious.
But what would our experience look like if it were fragile and easily disrupted? There would probably still be edge cases. And, from inside our heads, it would look like we had amazingly varied possibilities for experience — because we couldn’t use anything but our own experience as a baseline. It certainly doesn’t look like a human brain on LSD differs as much from a normal human brain as a turkey brain differs from a human brain.
There’s some risk that we’re overestimating how robust consciousness is, because when we stumble on one of the many ways to make a human brain unconscious, we (for obvious reasons) don’t notice it as much. Drastic changes in unconscious neurochemistry interest us a lot less than minor tweaks to conscious neurochemistry.
And there’s a further risk that we’ll underestimate the complexity of consciousness because we’re overly inclined to trust our introspection and to take our experience at face value. Even if our introspection is reliable in some domains, it has no access to most of the necessary conditions for experience. So long as they lie outside our awareness, we’re likely to underestimate how parochial and contingent our consciousness is.
(2) How quick are you to infer consciousness from ‘intelligent’ behavior?
People are pretty quick to anthropomorphize superficially human behaviors, and our use of mental / intentional language doesn’t clearly distinguish between phenomenal consciousness and behavioral intelligence. But if you work on AI, and have an intuition that a huge variety of systems can act ‘intelligently’, you may doubt that the linkage between human-style consciousness and intelligence is all that strong. If you think it’s easy to build a robot that passes various Turing tests without having full-fledged first-person experience, you’ll also probably (for much the same reason) expect a lot of non-human species to arrive at strategies for intelligently planning, generalizing, exploring, etc. without invoking consciousness. (Especially if your answer to question 1 is ‘consciousness is very complex’. Evolution won’t put in the effort to make a brain conscious unless it’s extremely necessary for some reproductive advantage.)
… But presumably there’s some intelligent behavior that was easier for a more-conscious brain than for a less-conscious one — at least in our evolutionary lineage, if not in all possible lineages that reproduce our level of intelligence. We don’t know what cognitive tasks forced our ancestors to evolve-toward-consciousness-or-perish. At the outset, there’s no special reason to expect that task to be one that only arose for proto-humans in the last few million years.
Even if we accept that the machinery underlying human consciousness is very complex, that complex machinery could just as easily have evolved hundreds of millions of years ago, rather than tens of millions. We’d then expect it to be preserved in many nonhuman lineages, not just in humans. Since consciousness-of-pain is mostly what matters for animal welfare (not, e.g., consciousness-of-complicated-social-abstractions), we should look into hypotheses like:
first-person consciousness is an adaptation that allowed early brains to represent simple policies/strategies and visualize plan-contingent sensory experiences.
Do we have a specific cognitive reason to think that something about ‘having a point of view’ is much more evolutionarily necessary for human-style language or theory of mind than for mentally comparing action sequences or anticipating/hypothesizing future pain? If not, the data of ethology plus ‘consciousness is complicated’ gives us little reason to favor the one view over the other.
We have relatively direct positive data showing we’re conscious, but we have no negative data showing that, e.g., salmon aren’t conscious. It’s not as though we’d expect them to start talking or building skyscrapers if they were capable of experiencing suffering — at least, any theory that predicts as much has some work to do to explain the connection. At present, it’s far from obvious that the world would look any different than it does even if all vertebrates were conscious.
So… the arguments are a mess, and I honestly have no idea whether cows can suffer. The probability seems large enough to justify ‘don’t torture cows (including via factory farms)’, but that’s a pretty low bar, and doesn’t narrow the probability down much.
To the extent I currently have a favorite position, it’s something like: ‘I’m pretty sure cows are unconscious on any simple, strict, nondisjunctive definition of “consciousness”; but what humans care about is complicated, and I wouldn’t be surprised if a lot of “unconscious” information-processing systems end up being counted as “moral patients” by a more enlightened age.’ … But that’s a pretty weird view of mine, and perhaps deserves a separate discussion.
I could conclude with some crazy video of a corvid solving a Rubik’s Cube or an octopus breaking into a bank vault or something, but I somehow find this example of dog problem-solving more compelling:
I’m not sure how you go from “So… the arguments are a mess, and I honestly have no idea whether cows can suffer” to “I’m pretty sure cows are unconscious on any simple, strict, nondisjunctive definition of ‘consciousness.’” Perhaps the latter refers to some complicated thing beyond suffering, but at present there seems to me to be a big tension here. And even a 5% probability like the one you mention should make nonhumans very important for present and far-future moral consideration.
Agreed, even a 1% or 0.1% probability would suffice. It would take a whole lot for me to approve of putting a human at a 1/1000 risk of being tortured.
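To spell out the expected-value arithmetic behind that (with p as the probability that cows are moral patients, D as a purely illustrative placeholder for the disvalue of torture-level suffering if they are, and c for the comparatively small cost of avoiding the risk):
\[ \mathbb{E}[\text{disvalue}] \;\ge\; p \cdot D \;=\; 0.001 \cdot D \;\gg\; c \quad \text{whenever } D \gg 1000\,c. \]
Nothing in the argument requires pinning p down more precisely than ‘at least one in a thousand’.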
I’m actually an eliminativist about first-person consciousness, so I don’t think the morally relevant kind of ‘suffering’ need be all that related to our conventional concepts of consciousness and subjectivity. But my views are idiosyncratic, so I wanted to give that background up front.
I doubt cows have human-style subjectivity (or human-style whatever’s-actually-going-on-in-the-vicinity-of-what-we-label-‘subjectivity’), but I expect them to have an alien subjectivity-like thing, and I wouldn’t be surprised if a fully developed human morality ends up assigning value to some things in the category ‘alien subjectivity-like thing’.
Though I definitely wouldn’t go so far as to say that e.g. lobsters are necessarily conscious, or necessarily moral patients, just because they exhibit a sensitivity to bodily damage. It might be wise to avoid eating lobsters, but if so it’s because lobster behavior gives indirect evidence for particular cognitive algorithms we think may be valuable, not because the behaviors themselves are what matter. Neither subjective experience nor any crude motor behavior, I’d wager, is what we’ll ultimately end up prizing.
Let’s consider microelectrode studies using verbally competent human subjects. If intensity of consciousness, or even consciousness itself, were a function of computational complexity or meta-cognition, say, then we might predict that microelectrode stimulation of evolutionarily ancient regions of the limbic system would elicit only faint experiences, whereas stimulation of the prefrontal cortex would elicit the most intense experiences.
The opposite seems to hold. The kinds of consciousness associated with solving mathematical equations, generative syntax, higher-order intentionality (etc) distinctive of mature humans are typically faint, subtle and elusive. By contrast, the phenomenology of extreme pain, pleasure and the core limbic emotions that we share with our mammalian cousins can be intense. Depending on the account we give of phenomenal binding, sperm whales, for example, may be more intensely conscious than humans. Cows and pigs, on the other hand, may be no more sentient than prelinguistic toddlers.
On pain of arbitrary anthropocentric bias, our nonhuman cousins deserve to be treated with equal care and respect.
The underlying assumption here is that a sensorily and affectively vivid experience is (for that reason) in some sense ‘more conscious’, or more typical of consciousness. For example, as you reduce how conscious a subject is (or bring them closer to a threshold where consciousness suddenly winks out), you should see the content of their experiences become duller, more faded, and less motivating. A rival prediction would be that as you reduce the degree of consciousness, you increase the vividness of the remaining consciousness. (Or don’t affect vividness at all.)
Why do you think vividness and consciousness go hand in hand? E.g., when I remember times I’ve gone under anesthesia, it’s not obvious to me that my experiences got fainter and fainter — if anything, my experiences were sometimes extra vivid or salient, like the hallucinations some people see when they’re falling asleep.
But maybe anesthesia and sleep are poor models for ‘transitioning to unconsciousness’, since some part of your brain might still be consciously dreaming (or our memories might be unreliable).
If you consider the role performed by vividness, it is surely to act as an attention-attracting system for an executive system that is easily capable of getting distracted. For instance, if memories were as vivid as present experience, a subject might get mentally lost in the past, to the detriment of their survival. Likewise, what pain is “for” is to act as a non-maskable interrupt.
Interrupt to what? One of the points I want to make is that vividness, and phenomenality in general, is not all there is to consciousness. A vivid, attention-grabbing sensorium only makes sense in relation to another system that has a number of options to direct its attention towards. Vividness is then a weighting mechanism.
I see no reason to suppose that levels of consciousness vary moment-by-moment with levels of vividness, not least because phenomenality is not the whole story. However, I would expect the capacity for vivid phenomenality to be better developed in organisms with more cognitive ability, just because they are more readily distractible. Thus, I can support the common intuition that lower organisms, e.g. invertebrates, have relatively faint qualia, if any.
Robby, I’m struggling here. In your reply to Jacy, you say that you’re an eliminativist about first-person consciousness. But in your reply above, you’re acknowledging that first-person consciousness is real. My guess is that, in some sense, we’re talking past each other?
When we’re talking about the philosophy of consciousness, I’ll be strict and say that I think something in our concept of subjectivity and/or phenomenal richness is deeply flawed, in a way that suggests phenomenal consciousness is an illusion. (Though other kinds of consciousness, like access-consciousness, exist — e.g., we can be access-conscious of phenomenal consciousness, or talk about phenomenal consciousness, whether or not we’re phenomenally conscious.)
When we’re talking about the science of consciousness, and how it’s neurally implemented, I’m less focused on phenomenal consciousness, so I’m happier to talk about ‘experience’ and ‘pain’ with the acknowledgment that they’re placeholders for functional concepts. Chalmers and I don’t disagree about any fact regarding the structure or dynamics of brains, including their evolutionary and developmental history and their cognitive dynamics. So the fact that we disagree about the ‘fire’ hidden within those physical states, the ‘fire’ that has no effect on the distribution of neurons or their evolution or genetics, can be completely bracketed.
You’re suggesting a specific time when you think that a physical process evolved (for physical reasons), the neural correlate of what you call ‘binding’. Since the exact same process occurs (for the exact same reasons) in Zombie World, we can have a productive conversation about when and how ‘binding’ evolved, without talking much about the details of the Hard Problem. We could have this conversation just as productively even if I turned out to be a zombie; since being a zombie doesn’t affect what words I say, I can contribute just as much, and have just as well-calibrated intuitions, regardless of whether I’m phenomenally conscious. Whether I’m a zombie or not also can’t affect my beliefs about animal welfare. (Though whether I believe I’m a zombie can make a difference re my views of animal welfare. Ditto, whether a zombie believes she’s a zombie is important for what diet she selects.)
If first-person consciousness can’t change which arguments I make, we can mostly set it aside, or swap in some neural correlate that does affect which arguments I make.
> I’m happier to talk about ‘experience’ and ‘pain’ with the acknowledgment that they’re placeholders for functional concepts.
Which functional concepts? After all, if you could write out a convincing seeRed(), then you would have an acceptable answer to the Hard Problem, and we would all go home.
> we can have a productive conversation about when and how ‘binding’ evolved, without talking much about the details of the Hard Problem.
Since binding is the binding of conscious experience, you can’t have much of a conversation about it on the basis that there is no conscious experience.
> We could have this conversation just as productively even if I turned out to be a zombie; since being a zombie doesn’t affect what words I say, I can contribute just as much, and have just as well-calibrated intuitions, regardless of whether I’m phenomenally conscious.
A functional duplicate is a functional duplicate, so it will produce the same responses to the same stimuli. A functional duplicate of yourself without qualia, a zombie, will function the same… and a functional duplicate of yourself without physical properties, made of ghostly soul stuff alone, an np-zombie, will function the same.
From the fact that your np-zombie duplicate functions without physical properties, you would not infer that you yourself work without physical properties. Functional duplication explains why behaviour is the same, but not why anything happens at all. A function is an abstract concept and needs a concrete implementation to do anything. The fact that some set of properties is not needed by (the implementation of) one of your functional duplicates does not mean they are not needed by you, so the claim that qualia are epiphenomenal because of zombies, if that is how you are arguing, does not follow.
Alternatively, the fact that functions need implementations means you can’t deal with every possible issue at the functional level… if that is how you are arguing.
Let’s say that we undergo merely illusory agony, hear illusory melodies and feel illusory jealousy, and so forth. On an ontology of eliminativist materialism, there shouldn’t be any of this illusory phenomenal “seeming” either. A seeming oasis in the desert may turn out to be a mirage; but the mirage itself isn’t illusory. Such “seeming” phenomenology can’t be derived from a fundamental physics of fields devoid of phenomenal properties.
On the other hand, if Strawsonian physicalism is true, then a zombie world is physically impossible because such a thought-experiment misconstrues the intrinsic nature of the physical. A world can’t simultaneously be physically identical to our world and yet lack its defining physical nature. P-zombies would be conceivable only if we assume that consciousness doesn’t disclose the intrinsic nature of the physical, i.e. that Strawsonian physicalism is false.
Again assuming Strawsonian physicalism, the intrinsic nature of the stuff of the world isn’t hidden – not all of it, at any rate. Rather, the self-intimating phenomenal “fire” in the equations is what one’s mind-brain instantiates: it gives us causal efficacy. The challenge for the Strawsonian physicalist – and indeed any kind of reductive physicalist – is to explain how phenomenal binding is possible. Strawsonian physicalism is not animism. Fortunately, this very difficulty leads to a testable empirical prediction about whether Strawsonian physicalism is true – albeit a prediction that most neuroscientists would find intuitively absurd. [http://www.physicalism.com]
Regarding ‘seeming’: the easy response is that we mean two different things by ‘seeming’: we have a phenomenal / subjective concept of ‘seeming’, and we have a behavioral / functional / cognitive concept of ‘seeming’. Using the latter concept, we can coherently say that the ground ‘seems far away’ to a bat, even if we don’t think bats are subjectively aware of anything. Likewise, we can say that things functionally seem certain ways to zombies, even though ‘the lights aren’t on’ subjectively. My claim is that we functionally seem to have phenomenal experiences (including phenomenal seemings), but we don’t have actual phenomenal experiences of any sort. Only if we equivocate between these two senses of ‘seem’ does it sound as though I’m making the inconsistent assertion, ‘we phenomenally seem to have phenomenal experiences (including phenomenal seemings), but we don’t have actual phenomenal experiences of any sort’.
But that’s a fairly superficial point. I think you might have in mind a deeper criticism, a demand for some explanation of how functional seeming can so convincingly simulate phenomenal seeming. The best argument against eliminative physicalism is that it appears introspectively absurd. One way or another, we seem (on initial reflection) to have an unmediated, inerrant, primitive grasp on a stream of subjectively conscious data. Defending eliminativism from that objection, and giving a convincing theory of cognition that can account for the (functional) semblance of phenomenal consciousness, is a very large challenge. I have various ideas for how to make progress on that challenge, but I want to acknowledge that this is a good objection to my view, and doesn’t rest on equivocation or wordplay.
Regarding zombies: the ‘fire’ within the equations of physics does not logically supervene on the equations themselves. That is, it’s logically possible for there to exist a world where functionally identical equations hold true, but the ‘fire’ is phenomenally different (e.g., an inverted qualia world) or nonphenomenal (e.g., a zombie world). Zombies and inverts are conceivable not in the sense that they reproduce our brains’ quiddities / ‘fire’ while changing our p-consciousness; they’re conceivable in the sense that they reproduce our brains’ mereological structure and dynamics while swapping in a different set of quiddities (or no quiddities at all, if that’s a possibility).
With that clarification, all my above points hold. It’s immaterial whether or not we accuse zombies of being ‘unphysical’ for harboring an alien fire within their (otherwise identical) physics equations. The points about supervenience are what really matter, and we ought to be able to taboo words like ‘physical’ and ‘material’ and have exactly the same substantive discussion.
First, thanks for the fair-minded discussion of “seeming”. So if we acknowledge the existence of a stream of subjectively conscious data, can reductive physicalism be saved? Or is there some “element of reality” not captured in the formalism of physics – QFT or its successor? If Strawsonian physicalism is true, then the solutions to the master equation yield the values of qualia – bound instances of which each of us instantiate. Empirical adequacy is essential. Yes, it would be (very) nice to have some kind of notional cosmic Rosetta stone allowing us to “read off” the values of qualia from the solutions to the equations – and incidentally rule out inverted qualia in the process. But contra Chalmers, what’s lacking is our understanding of how to do so rather than something missing from the formalism itself.
In my view, both of you are granting too many of Chalmers’s premises. This forces the choice between odd quantum theories (or even stranger options such as dualism) or eliminativism. But the choice is a false one.
Don’t let philosophers and theologians define subjectivity. Rather than talk of whatever’s-actually-going-on-in-the-vicinity-of-what-we-label-‘subjectivity’, save some breath and say subjectivity.
If logical possibility, or conceivability, is just about what can be asserted without apparent contradiction, then it is no guide to possibility. It can be supposed without apparent contradiction that Hesperus is a different planet from Phosphorus, but there is no such possibility. On the other hand, if logical possibility or conceivability is supposed to be richer than assertibility-without-apparent-contradiction, then it is not obvious that zombies are actually conceivable. As long as conceivability is about our concepts and their internal logical relations, deriving metaphysical conclusions from what one finds “conceivable” is a dangerous game.
“If logical possibility, or conceivability, is just about what can be asserted without apparent contradiction, then it is no guide to possibility.”
Prima facie conceivability isn’t synonymous with ‘logical possibility’, but the two are evidentially linked. Being able to assert something without apparent contradiction is Bayesian evidence for its logical possibility — more so the easier you’d expect to be able to demonstrate a contradiction. If ‘I can’t seem to consistently imagine setting something on fire without changing its physical composition’ is at least a decent reason to think that fire is physical, then ‘It seems I can consistently imagine adding conscious experiences to a system without changing its physical composition’ is at least a decent reason to think that consciousness is nonphysical.
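To put that evidential link in Bayesian terms (a schematic gloss of my own, not anything the conceivability literature commits to): let L be ‘the scenario is logically possible’ and E be ‘we looked for a contradiction and found none’. Then
\[ \frac{P(L \mid E)}{P(\lnot L \mid E)} \;=\; \frac{P(E \mid L)}{P(E \mid \lnot L)} \cdot \frac{P(L)}{P(\lnot L)}, \]
with P(E | L) close to 1 and P(E | ¬L) shrinking the more thorough the failed search for a contradiction. The harder a genuine contradiction would have been to miss, the stronger the update toward possibility.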
“It can be supposed without apparent contradiction that Hesperus is a different planet from Phosphorus, but there is no such possibility.”
Once you know enough about Venus, it stops being conceivable that the two are different objects. Someone in the ancient world trying to argue that the two could turn out to be identical wouldn’t need a full theory or conclusive proof; they could just tell a reasonable story in which a sequence of events culminates in our discovering the identity. A similarly internally reasonable schematic story for consciousness would solve the hard problem. That wouldn’t be considered an unreasonable evidential demand for any other macro-level phenomenon.
Chalmers talks about this in http://consc.net/papers/conceivability.html and http://consc.net/papers/analysis.pdf.
I’m still struggling to read (or, for the first link, re-read) Chalmers, but I keep thinking the way to determine possibility (other than logico-mathematical possibility) is by doing science.
It would be going too far to say we’ve already discovered the identity for some conscious states/processes, but we’ve already discovered the *neighborhood*. It’s just that we’re having trouble believing it.
What Chalmers demands, I believe, is a physical description of brain processes that will transparently bring to mind the way (e.g.) pain feels, so that we will say, “*of course* that would be painful for the organism, now I see it!” That’s never going to happen: the reflexive nature of identification of phenomenal properties precludes it (see http://www.jenanni.com/papers/Doublemindedness.pdf ) But a very large family of physicalist theories *predicts* that it won’t happen. A successful prediction cannot be used to refute the theory that makes it.
I am no fan of arguments from conceivability, but rejecting them isn’t that much help to physicalism, since anti-physicalism can be argued in other ways, e.g. Mary’s Room.
Showing that ‘phenomenality is functionality’ is possible is some way from showing that it is actual, or even independently motivated. It’s bullet-biting.
I try to find any reasonable doubt that cows can suffer, but I just can’t… Certainly they can feel it if they’re hit hard enough.
Suppose you design a robot to make a sad face every time you strike it, or to emit a plaintive cry and stroke the place where it was hit. If you’re like me, you’ll feel at least some empathy for the robot; but that’s no reason to think the robot is conscious. Our empathy circuits are hardwired for modeling other adult humans. They give plenty of false positives when we leave that domain, and they could easily give plenty of false negatives.
I think the basic error here is to assume that because non-human animals can emit a complex response to a pain state, they must have a first-person point of view on the world and valenced joy and suffering states, like we do. But this depends heavily on your theory of consciousness, of what that first-person awareness is there for. If it’s something relatively high-level, like complexly modeling other agents or reasoning with abstraction, then it’s perfectly possible that you can have complicated cognition about bodily damage without experiencing anything.
As an intuition pump, imagine your computer ‘noticing’ damage to its hardware and giving complex error messages in response. There’s more reason to think that cattle suffer than that existing computers do (because of homology and common ancestry), but the point is that our immediate empathy reaction, triggered by human-like moans but not by pop-up windows, is unlikely to just-by-coincidence be perfectly calibrated to philosophy-of-mind accuracy.