Is ‘consciousness’ simple? Is it ancient?

 

Assigning less than 5% probability to ‘cows are moral patients’ strikes me as really overconfident. Ditto, assigning greater than 95% probability. (A moral patient is something that can be harmed or benefited in morally important ways, though it may not be accountable for its actions in the way a moral agent is.)

I’m curious how confident others are, and I’m curious about the most extreme confidence levels they’d consider ‘reasonable’.

I also want to hear more about what theories and backgrounds inform people’s views. I’ve seen some relatively extreme views defended recently, and the guiding intuitions seem to have come from two sources:


 

(1) How complicated is consciousness? In the space of possible minds, how narrow a target is consciousness?

Humans seem to be able to have very diverse experiences — dreams, orgasms, drug-induced states — that they can remember in some detail, and at least appear to be conscious during. That’s some evidence that consciousness is robust to modification and can take many forms. So, perhaps, we can expect a broad spectrum of animals to be conscious.

But what would our experience look like if it were fragile and easily disrupted? There would probably still be edge cases. And, from inside our heads, it would look like we had amazingly varied possibilities for experience — because we couldn’t use anything but our own experience as a baseline. It certainly doesn’t look like a human brain on LSD differs as much from a normal human brain as a turkey brain differs from a human brain.

There’s some risk that we’re overestimating how robust consciousness is, because when we stumble on one of the many ways to make a human brain unconscious, we (for obvious reasons) don’t notice it as much. Drastic changes in unconscious neurochemistry interest us a lot less than minor tweaks to conscious neurochemistry.

And there’s a further risk that we’ll underestimate the complexity of consciousness because we’re overly inclined to trust our introspection and to take our experience at face value. Even if our introspection is reliable in some domains, it has no access to most of the necessary conditions for experience. So long as they lie outside our awareness, we’re likely to underestimate how parochial and contingent our consciousness is.


 

(2) How quick are you to infer consciousness from ‘intelligent’ behavior?

People are pretty quick to anthropomorphize superficially human behaviors, and our use of mental / intentional language doesn’t clearly distinguish between phenomenal consciousness and behavioral intelligence. But if you work on AI, and have an intuition that a huge variety of systems can act ‘intelligently’, you may doubt that the linkage between human-style consciousness and intelligence is all that strong. If you think it’s easy to build a robot that passes various Turing tests without having full-fledged first-person experience, you’ll also probably (for much the same reason) expect a lot of non-human species to arrive at strategies for intelligently planning, generalizing, exploring, etc. without invoking consciousness. (Especially if your answer to question 1 is ‘consciousness is very complex’. Evolution won’t put in the effort to make a brain conscious unless doing so is strictly necessary for some reproductive advantage.)

… But presumably there’s some intelligent behavior that was easier for a more-conscious brain than for a less-conscious one — at least in our evolutionary lineage, if not in all possible lineages that reproduce our level of intelligence. We don’t know what cognitive tasks forced our ancestors to evolve-toward-consciousness-or-perish. At the outset, there’s no special reason to expect that task to be one that only arose for proto-humans in the last few million years.

Even if we accept that the machinery underlying human consciousness is very complex, that complex machinery could just as easily have evolved hundreds of millions of years ago, rather than tens of millions. We’d then expect it to be preserved in many nonhuman lineages, not just in humans. Since consciousness-of-pain is mostly what matters for animal welfare (not, e.g., consciousness-of-complicated-social-abstractions), we should look into hypotheses like:

first-person consciousness is an adaptation that allowed early brains to represent simple policies/strategies and visualize plan-contingent sensory experiences.

Do we have a specific cognitive reason to think that something about ‘having a point of view’ is much more evolutionarily necessary for human-style language or theory of mind than for mentally comparing action sequences or anticipating/hypothesizing future pain? If not, the data of ethology plus ‘consciousness is complicated’ gives us little reason to favor the one view over the other.

We have relatively direct positive data showing we’re conscious, but we have no negative data showing that, e.g., salmon aren’t conscious. It’s not as though we’d expect them to start talking or building skyscrapers if they were capable of experiencing suffering — at least, any theory that predicts as much has some work to do to explain the connection. At present, it’s far from obvious that the world would look any different than it does even if all vertebrates were conscious.

So… the arguments are a mess, and I honestly have no idea whether cows can suffer. The probability seems large enough to justify ‘don’t torture cows (including via factory farms)’, but that’s a pretty low bar, and doesn’t narrow the probability down much.

To the extent I currently have a favorite position, it’s something like: ‘I’m pretty sure cows are unconscious on any simple, strict, nondisjunctive definition of “consciousness”; but what humans care about is complicated, and I wouldn’t be surprised if a lot of “unconscious” information-processing systems end up being counted as “moral patients” by a more enlightened age.’ … But that’s a pretty weird view of mine, and perhaps deserves a separate discussion.

I could conclude with some crazy video of a corvid solving a Rubik’s Cube or an octopus breaking into a bank vault or something, but I somehow find this example of dog problem-solving more compelling:

Loving the merely physical

This is my submission to Sam Harris’ Moral Landscape challenge: “Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in under 1,000 words. (You must address the central argument of the book—not peripheral issues.)”

Though I’ve mentioned before that I’m sympathetic to Harris’ argument, I’m not fully persuaded. And there’s a particular side-issue I think he gets wrong straightforwardly enough that it can be demonstrated in the space of 1,000 words: really unrequitable love, or the restriction of human value to conscious states.

____________________________________________________

My criticism of Harris’ thesis will be indirect, because it appears to me that his proposal is much weaker than his past critics have recognized. What are we to make of a meta-ethics text that sets aside meta-ethicists’ core concerns with a shrug? Harris happily concedes that promoting well-being is only contingently moral,¹ only sometimes tracks our native preferences² or moral intuitions,³ and makes no binding, categorical demand on rational humans.⁴ So it looks like the only claim Harris is making is that redefining words like ‘good’ and ‘ought’ to track psychological well-being would be useful for neuroscience and human cooperation.⁵ Which looks like a question of social engineering, not of moral philosophy.

If Harris’ moral realism sounds more metaphysically audacious than that, I suspect it’s because he worries that putting it in my terms would be uninspiring or, worse, would appear relativistic. (Consistent with my interpretation, he primarily objects to moral anti-realism and relativism for eroding human compassion, not for being false.)⁶

I don’t think I can fairly assess Harris’ pragmatic linguistic proposal in 1,000 words.⁷ But I can point to an empirical failing in a subsidiary view he considers central: that humans only ultimately value changes in conscious experience.⁸

It may be that only conscious beings can value things; but that doesn’t imply that only conscious states can be valued. Consider these three counterexamples:

(1) Natural Diversity. People prize the beauty and complexity of unconscious living things, and of the natural world in general.⁹

Objection: ‘People value those things because they could in principle experience them. “Beauty” is in the beholder’s eye, not in the beheld object. That’s our clue that we only prize natural beauty for making possible our experience of beauty.’

Response: Perhaps our preference here causally depends on our experiences; but that doesn’t mean that we’re deluded in thinking we have such preferences!

I value my friends’ happiness. Causally, that value may be entirely explainable in terms of patterns in my own happiness, but that doesn’t make me an egoist. Harris would agree that others’ happiness can be what I value, even if my own happiness is why I value it. But the same argument holds for natural wonders: I can value them in themselves, even if what’s causing that value is my experiences of them.

(2) Accurate Beliefs. Consider two experientially identical worlds: One where you’re in the Matrix and have systematically false beliefs, one where your beliefs are correct. Most people would choose to live in the latter world over the former, even knowing that it makes no difference to any conscious state.

Objection: ‘People value the truth because it’s usually useful. Your example is too contrived to pump out credible intuitions.’

Response: Humans can mentally represent environmental objects, and thereby ponder, fear, desire, etc. the objects themselves. Fearing failure or death isn’t the same as fearing experiencing failure or death. (I can’t escape failure/death merely by escaping awareness/consciousness of failure/death.) In the same way, valuing being outside the Matrix is distinct from valuing having experiences consistent with being outside the Matrix.

All of this adds up to a pattern that makes it unlikely people are deluded about this preference. Perhaps it’s somehow wrong to care about the Matrix as anything but a possible modifier of experience. But, nonetheless, people do care. Such preferences aren’t impossible or ‘unintelligible.’⁸

(3) Zombie Welfare. Some people don’t think we have conscious states. Harris’ view predicts that such people will have no preferences, since they can’t have preferences concerning experiences. But eliminativists have desires aplenty.

Objection: ‘Eliminativists are deeply confused; it’s not surprising that they have incoherent normative views.’

Response: Eliminativists may be mistaken, but they exist.¹⁰ That suffices to show that humans can care about things they think aren’t conscious. (Including unconscious friends and family!)

Moreover, consciousness is a marvelously confusing topic. We can’t be infinitely confident that we’ll never learn eliminativism is true. And if, pace Descartes, there’s even a sliver of doubt, then we certainly shouldn’t stake the totality of human value on this question.

Harris writes that “questions about values — about meaning, morality, and life’s larger purpose — are really questions about the well-being of conscious creatures. Values, therefore, translate into facts that can be scientifically understood[.]”¹¹ But the premise is much stronger than the conclusion requires.

If people’s acts of valuing are mental, and suffice for deducing every moral fact, then scientifically understanding the mind will allow us to scientifically understand morality even if the objects valued are not all experiential. We can consciously care about unconscious world-states, just as we can consciously believe in, consciously fear, or consciously wonder about unconscious world-states. That means that Harris’ well-being landscape needs to be embedded in a larger ‘preference landscape.’

Perhaps a certain philosophical elegance is lost if we look beyond consciousness. Still, converting our understanding of the mind into a useful and reflectively consistent decision procedure cannot come at the expense of fidelity to the psychological data. Making ethics an empirical science shouldn’t require us to make any tenuous claims about human motivation.

We could redefine the moral landscape to exclude desires about natural wonders and zombies. It’s just hard to see why. Harris has otherwise always been happy to widen the definition of ‘moral’ to encompass a larger and larger universe of human value. Since we’ve already strayed quite a bit from our folk intuitions about ‘morality,’ it’s honestly not of great importance how we tweak the edges of our new concept of morality. Our first concern should be with arriving at a correct view of human psychology. If that falters, then, to the extent science can “determine human values,” the moral decisions we build atop our psychological understanding will fail us as well.

____________________________________________________

Citations

¹ “Perhaps there is no connection between being good and feeling good — and, therefore, no connection between moral behavior (as generally conceived) and subjective well-being. In this case, rapists, liars, and thieves would experience the same depth of happiness as the saints. This scenario stands the greatest chance of being true, while still seeming quite far-fetched. Neuroimaging work already suggests what has long been obvious through introspection: human cooperation is rewarding. However, if evil turned out to be as reliable a path to happiness as goodness is, my argument about the moral landscape would still stand, as would the likely utility of neuroscience for investigating it. It would no longer be an especially ‘moral’ landscape; rather it would be a continuum of well-being, upon which saints and sinners would occupy equivalent peaks.” -Harris (2010), p. 190

“Dr. Harris explained that about three million Americans are psychopathic. That is to say, they don’t care about the mental states of others. They enjoy inflicting pain on other people. But that implies that there’s a possible world, which we can conceive, in which the continuum of human well-being is not a moral landscape. The peaks of well-being could be occupied by evil people. But that entails that in the actual world, the continuum of well-being and the moral landscape are not identical either. For identity is a necessary relation. There is no possible world in which some entity A is not identical to A. So if there’s any possible world in which A is not identical to B, then it follows that A is not in fact identical to B.” -Craig (2011)

Harris’ (2013a) response to Craig’s argument: “Not a realistic concern. You’d have to change too many things — the world would [be] unrecognizable.”

² “I am not claiming that most of us personally care about the experience of all conscious beings; I am saying that a universe in which all conscious beings suffer the worst possible misery is worse than a universe in which they experience well-being. This is all we need to speak about ‘moral truth’ in the context of science.” -Harris (2010), p. 39

³ “And the fact that millions of people use the term ‘morality’ as a synonym for religious dogmatism, racism, sexism, or other failures of insight and compassion should not oblige us to merely accept their terminology until the end of time.” -Harris (2010), p. 53

“Everyone has an intuitive ‘physics,’ but much of our intuitive physics is wrong (with respect to the goal of describing the behavior of matter). Only physicists have a deep understanding of the laws that govern the behavior of matter in our universe. I am arguing that everyone also has an intuitive ‘morality,’ but much of our intuitive morality is clearly wrong (with respect to the goal of maximizing personal and collective well-being).” -Harris (2010), p. 36

⁴ Moral imperatives as hypothetical imperatives (cf. Foot (1972)): “As Blackford says, when told about the prospect of global well-being, a selfish person can always say, ‘What is that to me?’ [… T]his notion of ‘should,’ with its focus on the burden of persuasion, introduces a false standard for moral truth. Again, consider the concept of health: should we maximize global health? To my ear, this is a strange question. It invites a timorous reply like, ‘Provided we want everyone to be healthy, yes.’ And introducing this note of contingency seems to nudge us from the charmed circle of scientific truth. But why must we frame the matter this way? A world in which global health is maximized would be an objective reality, quite distinct from a world in which we all die early and in agony.” -Harris (2011)

“I don’t think the distinction between morality and something like taste is as clear or as categorical as we might suppose. […] It seems to me that the boundary between mere aesthetics and moral imperative — the difference between not liking Matisse and not liking the Golden Rule — is more a matter of there being higher stakes, and consequences that reach into the lives of others, than of there being distinct classes of facts regarding the nature of human experience.” -Harris (2011)

⁵ “Whether morality becomes a proper branch of science is not really the point. Is economics a true science yet? Judging from recent events, it wouldn’t appear so. Perhaps a deep understanding of economics will always elude us. But does anyone doubt that there are better and worse ways to structure an economy? Would any educated person consider it a form of bigotry to criticize another society’s response to a banking crisis? Imagine how terrifying it would be if great numbers of smart people became convinced that all efforts to prevent a global financial catastrophe must be either equally valid or equally nonsensical in principle. And yet this is precisely where we stand on the most important questions in human life. Currently, most scientists believe that answers to questions of human value will fall perpetually beyond our reach — not because human subjectivity is too difficult to study, or the brain too complex, but because there is no intellectual justification for speaking about right and wrong, or good and evil, across cultures. Many people also believe that nothing much depends on whether we find a universal foundation for morality. It seems to me, however, that in order to fulfill our deepest interests in this life, both personally and collectively, we must first admit that some interests are more defensible than others.” -Harris (2010), p. 190

⁶ “I have heard from literally thousands of highly educated men and women that morality is a myth, that statements about human values are without truth conditions (and are, therefore, nonsensical), and that concepts like well-being and misery are so poorly defined, or so susceptible to personal whim and cultural influence, that it is impossible to know anything about them. Many of these people also claim that a scientific foundation for morality would serve no purpose in any case. They think we can combat human evil all the while knowing that our notions of ‘good’ and ‘evil’ are completely unwarranted. It is always amusing when these same people then hesitate to condemn specific instances of patently abominable behavior. I don’t think one has fully enjoyed the life of the mind until one has seen a celebrated scholar defend the ‘contextual’ legitimacy of the burqa, or of female genital mutilation, a mere thirty seconds after announcing that moral relativism does nothing to diminish a person’s commitment to making the world a better place.” -Harris (2010), p. 27

“I consistently find that people who hold this view [moral anti-realism] are far less clear-eyed and committed than (I believe) they should be when confronted with moral pathologies — especially those of other cultures — precisely because they believe there is no deep sense in which any behavior or system of thought can be considered pathological in the first place. Unless you understand that human health is a domain of genuine truth claims — however difficult ‘health’ may be to define — it is impossible to think clearly about disease. I believe the same can be said about morality. And that is why I wrote a book about it…” -Harris (2011)

⁷ For more on this proposal, see Bensinger (2013).

⁸ “[T]he rightness of an act depends on how it impacts the well-being of conscious creatures[….] Here is my (consequentialist) starting point: all questions of value (right and wrong, good and evil, etc.) depend upon the possibility of experiencing such value. Without potential consequences at the level of experience — happiness, suffering, joy, despair, etc. — all talk of value is empty. Therefore, to say that an act is morally necessary, or evil, or blameless, is to make (tacit) claims about its consequences in the lives of conscious creatures (whether actual or potential).” -Harris (2010), p. 62

“[C]onsciousness is the only intelligible domain of value.” -Harris (2010), p. 32

Harris (2013b) confirms that this is part of his “central argument”.

⁹ “Certain human uses of the natural world — of the non-animal natural world! — are morally troubling. Take an example of an ancient sequoia tree. A thoughtless hiker carves his initials, wantonly, for the fun of it, into an ancient sequoia tree. Isn’t there something wrong with that? It seems to me there is.” -Sandel (2008)

¹⁰ E.g., Rey (1982), Beisecker (2010), and myself. (I don’t assume eliminativism in this essay.)

¹¹ Harris (2010), p. 1.

____________________________________________________


What is a self?

This is a revised version of an IU Philosophical Society blog post.

At the Philosophical Society’s first spring meeting, I opened with a methodological point: Semantics matters. Misunderstanding is everywhere, and it is dangerous. If we don’t clarify what we mean, then we’ll never pinpoint where exactly we non-verbally disagree.

But the importance of semantics doesn’t mean we should fetishize which particular words we use. Just the opposite: In analyzing what we mean, we frequently discover that the world doesn’t neatly break down into the shape of our linguistic categories. We may have one word (“monkey”) where there are really two or three things, or two words (“electricity” and “magnetism”) that pick out the same phenomenon in different guises. Thus we talked about the value of “Tabooing your words”, of trying to find paraphrases and concrete examples for terms whose meaning is unclear or under dispute.

This is of special relevance to discussions of the self. People mean a lot of different things by “self”. Even if in the end those things turn out to be perfectly correlated or outright identical, we need to begin by carefully distinguishing them so that we can ask about their relatedness without prejudging it.

For example: David Perry noted that many classical Buddhist texts denied the existence of a self. But what they actually denied was what they called ātman, which some people have translated as “self”. Even had they written in English, for that matter, it wouldn’t necessarily have been obvious which ideas of “self” they had in mind — and, importantly, which they didn’t have in mind.

What are some of the concepts of “self” that we came up with? I lumped them into five broad categories.

1. Thing

When we say “That’s an ugly coat of paint, but I like the house itself,” we don’t have the same thing in mind as when we say “I have a self”. It may seem trivial to note that objects in general can themselves be called “selves”; but this has real relevance, for example, to the Buddhist critique of “self”, which really does generalize to all objects — for early Buddhists, humans lack a “self” for much the same reason chariots lack a “self”, because they aren’t things in quite the way we normally take them to be.

Some things, of course, may be more intuitively “selfy” than others. The idea of discrete organic selves, or organisms, is applied to everything from viruses to humans. In this biological sense, I am my body, even though my body can change drastically over time.

Two troubling questions arise here, and they’ll recur for our other ideas of “self”. First, can my concept of myself as an organism be trumped by other (say, more psychological) conceptions? If not, then if my brain were turned into a sentient machine, or if my body perished while my soul lived on, I would not survive! Some ghostly or robotic impersonator would survive, while the “real me” perished with my body. Could that be right? Or is the “real me” something more abstract? Second, why does the question of which “me” is “real” feel like it matters so much?

2. Persona

By “self” or “person” we sometimes mean the specific things that make you who you are. We mean someone’s personality, character, life-experiences, social roles, and so on. As the Stanford Encyclopedia article on selfhood notes:

We often speak of one’s “personal identity” as what makes one the person one is. Your identity in this sense consists roughly of what makes you unique as an individual and different from others. Or it is the way you see or define yourself, or the network of values and convictions that structure your life. This individual identity is a property (or set of properties). Presumably it is one you have only contingently: you might have had a different identity from the one you in fact have. It is also a property that you may have only temporarily: you could swap your current individual identity for a new one, or perhaps even get by without any.

3. Subject

We may also have a more generic idea in mind — a “self” as a subject of experience. But this too conflates several ideas.

First, there’s the idea of an experiencing subject, an experiencer. At a minimum, this could be whatever directly brings experiences about. But does this causal notion adequately incorporate our intuition of a self that “undergoes” or “has” its experiences? What would we have to add to turn an experience-generator into an unconscious self? And if some brain region or ectoglob can be “me”, where do we draw the line between the parts of the world that are me and the parts that aren’t?

Jonathon, for one, voiced skepticism about there being any fact of the matter about the dividing line between Me and Everything Else. Some philosophers even reject the very idea that a self exists “outside” or “behind” experience.

But even so, there remains the distinct idea of an experienced subject. Our self isn’t just hidden behind our experiences; it’s also indicated within them. Thus we can speak of experiences that are “self-aware”, in different ways and to different extents. This ranges from the self-awareness of explicit thoughts like “I am getting rained on!” to primitive perceptual impressions that a certain hand is Me while a certain chair is Not Me.

At the outer edge of this category, David Perry raised the idea of a bare “phenomenological” subject, which I took to be the perspectivalness or subject-object structure in experience. Here our discussion became very murky, and David Beard expressed some skepticism about the possibility of disentangling this idea from the very idea of consciousness.

In general, we had a number of difficulties reconciling the philosophical method of phenomenology, or describing how things appear from a first-person perspective, with the method of third-person science. Most basically, Neeraj asked, can the fact of first-person experience itself be accounted for in objective, scientific terms? As Briénne put it: Supposing I were an intelligent zombie or automaton, could you explain to me what this thing you call “consciousness” is? This brought us to another way of conceiving a self — behaviorally.

4. Agent

“Self” can be defined in behavioral terms. We generally say that humans and animals can perform actions and deeds, while beaches and kaleidoscopes, metaphors aside, cannot. So agency is an important way of distinguishing persons from non-persons.

Of course, “action” is a vague category. It’s easiest to tell persons and non-persons apart when we’re dealing with intelligent agency, i.e., behaving in a skillful, adaptive, goal-oriented way. We debated whether intelligent behavior can occur in the absence of conscious thought, and if so how we could ever identify subjects of experience based on how they act. Sam noted that we very readily ascribe agency, and perhaps even awareness, to beings based merely on their superficial resemblance to humans and other animals — suggesting that our agent-detecting intuitions are prone to leading us astray.

We might also distinguish deliberative agency, which makes decisions, from rudimentary animal behaviors that possibly lack real “choice.” Even more narrowly, we can ask what gives deliberative agents (or agents in general) free agency. Does social or political freedom, as Nathaniel suggested, inform our concept of “person”? Does psychological or metaphysical freedom help determine whether something is a self in the first place?

This brought to the forefront the important fact that our idea of “self” is not merely descriptive; it is also prescriptive. What things we call “person” is bound up with our values, preferences, and principles. Thus we have to ask how the above ideas relate to moral agency, a being’s responsibility for its own actions. A storm can make bad things happen, but it’s not the storm’s fault. What sorts of things can be at fault?

5. Patient

Just as an agent is something that acts, a patient is something that’s acted upon. Thus, along similar lines, we can ask what beings are moral patients — beings that can be harmed or benefited. And we can ask whether there is a special, narrower category of personal patients — whether, for example, humans or intelligent agents have their own special rights above and beyond those of other sentient beings.

But the normative concepts of self aren’t just about morality. We also need to know what it takes to count as a prudential patient. Or, to ditch the jargon: What does it take for something to be the proper object of my self-interest? What sorts of things can be me, when it comes to my looking out for my own welfare?

The question seems so basic as to be bizarre. But in fact it’s not a trivial matter to figure out why I should care about myself — or, given that I do care about myself, what it takes for a thing to qualify as “me” — or how to go about discovering which things so qualify!

More generally, we can distinguish two questions:

1. What does it take to be a certain kind of self? What makes Bob, say, an agent?

2. What does it take for two things to be the same particular self? What makes Bob at 3:00 am and Bob at 3:45 am the same agent? Why aren’t the two hemispheres of Bob’s brain two different agents?

Thus far, we’ve only begun to address the first of these two questions. And we’ve barely scratched the surface of the normative concepts of self, and of the relationships between the above concepts of agent, patient, subject, and persona. But we’ve made real progress, and we can use the distinctions we’ve drawn as tools for beginning to make headway on the remaining riddles.

For those interested in further reading on these two questions, I recommend John Perry’s A Dialogue on Personal Identity and Immortality, a rousing and very accessible introduction to the philosophy of self.