Sam Harris has argued that we should treat situations as morally desirable in proportion to their share of experiential well-being. In a debate, William Lane Craig objected:
On the next-to-last page of his book, Dr. Harris makes the telling admission that if people like rapists, liars, and thieves could be just as happy as good people, then his “moral landscape” would no longer be a moral landscape. Rather, it would just be a continuum of well-being whose peaks are occupied by good and evil people alike. […] The peaks of well-being could be occupied by evil people. But that entails that in the actual world, the continuum of well-being and the moral landscape are not identical either. For identity is a necessary relation.
I think the real problem here isn’t that it could be moral to make evil people happy. Harris and I gladly bite that bullet. The deeper worry is that, in a world teeming with pathological sadists, torturing a minority might well increase aggregate psychological welfare. Yet it would be absurd to conclude that torturing an innocent in such a world is moral.
This is a perfectly fair argument. But Harris simply responds, “Not a realistic concern.”
Why the lack of interest? Because, I think, any claim that the English-language word ‘good’ means ‘well-being’, picking it out across all possible worlds, is beside the point for Harris.
A world of sociopaths or sadists would be trapped in a valley of the moral landscape. Fixating on a few tiny hills at the bottom of that valley is missing the big picture, which is that the truly moral act would be to cure the world of its antisocial tendencies, not to indulge them. It’s sort of ‘moral’ for a doctor to spend most of her time making delicious pies for her rapidly deteriorating patients. I mean, baking for others is a good deed, right? But it’s immoral on a deeper level if it distracts the doctor from diagnosing or treating her patients. Craig’s example is alien enough to do some violence to an exact identification of ‘good’ with ‘well-being’, but it does nothing to undermine the enterprise of improving psychological welfare, because it misses the landscape for the hills in much the way the baker-doctor does.
So what is Harris’ goal in The Moral Landscape? He seems to want to establish four main theses:
1. Positive experience is what we value.
All the things we care about are instances of experiential well-being.
2. So we should value all positive experience.
Our strongest unreflective desires will be furthered if we come to value such experience in general, however and wherever it manifests. For this binds all of our values together, encouraging us to work together on satisfying them.
3. Morality is about satisfying that universal value.
Since this is the most inclusive normative project we could all legitimately collaborate on, and since it overlaps a great deal with our most rationally defensible moral intuitions, it makes consummate sense to call this project ‘morality’.
4. So science is essential for getting morality right.
The best way to fulfill this valuing-of-experienced-value is to empirically study the conditions for strongly valenced experience.
I’m very skeptical about 1 on any strong interpretation, but I’ll talk about that another time. (EDIT: See Loving the merely physical.) Though Harris places a lot of emphasis on 1, I don’t think it is needed to affirm 2, 3, or 4. Suppose we learn that some people really do value living outside the Matrix, keeping natural wonders intact, promoting ‘purity’, obeying Yahweh, or doing the right thing for its own sake, and not solely the possible experiential effects of those things. Still Harris could argue that, say…
- … those goals form a much less consistent whole than do the experiential ones. Perhaps, for instance, subjective projects come into conflict less often than objective ones because we have separate mental lives, but only one shared physical world.
- … education or philosophical reflection tends to make those goals less appealing.
- … those goals make dubious metaphysical assumptions, in a way experiential goals don’t.
- … those goals depend for their justification on experiential ones.
- … those goals causally depend on experiential ones.
- … those goals are somehow defective variants on, or limiting cases of, experiential ones.
- … those goals are unusually rare, unusually temporally unstable, or unusually low-intensity.
- … those goals are so different from experiential ones that they can’t all reasonably be lumped into a single category.
Some combination of the above conclusions could establish that experience-centered goals form a natural group that should, for pragmatic or theoretical reasons, be discussed in isolation. Once we’ve got such a group, we can then argue that our most prized goals will be furthered if we generically endorse the entire category (2), and that these goals will be further furthered if we reserve ethical language for this category (3). 4 will then fall out of 2 and 3 easily, as an empirical conclusion about the usefulness of empiricism itself.
On my view, then, the real action is in the case for 2 and 3. What is that case?
Why value value?
It’s important to highlight here that Harris doesn’t think everyone already generically values all positive experience. It would be a fallacy to deduce ‘everyone values every positive experience’ from ‘everything that’s valued by anyone is a positive experience’.
[I]n the moral sphere, it is safe to begin with the premise that it is good to avoid behaving in such a way as to produce the worst possible misery for everyone. I am not claiming that most of us personally care about the experience of all conscious beings; I am saying that a universe in which all conscious beings suffer the worst possible misery is worse than a universe in which they experience well-being. This is all we need to speak about “moral truth” in the context of science.
So Harris is proposing that we change our priorities. They should change in pretty much the same way our ancestors’ linguistic, political, and intellectual practices changed to affirm the scientific character and universal value of health.
Why change? Because it will allow us to better collaborate on the things we already care about most. Again, why should we prize health in general, as opposed to caring specifically about the health of certain groups of people, or certain body parts? Why not have medicine focus disproportionately on our right legs, disregarding our left legs almost completely? Well, I suppose there are no unconditional, metaphysically fundamental reasons to value health in general, or to build sciences and social institutions dedicated to understanding and improving it. But it’s simpler that way, and it benefits us both individually and collectively, so… why not?
Valuing every experienced value, in proportion to its intensity and frequency, is egalitarian in spirit. Practically democratic. That doesn’t make it ‘objective’ in any mysterious cosmic sense. But it does make it an extraordinarily useful Schelling point, a slightly arbitrary but stable and fair-minded convention for resolving disputes.
Of course, if we just think of it as an arbitrary convention, without ascribing it any importance — if we ‘mere’ it — then the whole point of the convention will be lost. If no one had any respect for democracy, democracy would dissolve overnight. It may be very important for the practice of valuing value that we adopt moral realism or consequentialism as an absolute law, even if the justification for doing so isn’t so much philosophical first principles or linguistic definitions as our lived, pragmatic concern for our own and others’ actual welfare. Good conventions save lives.
It’s because we do in fact have conflicting desires that it’s important to have a general framework for resolving disputes, and Harris’ is a surprisingly flexible yet sturdy one. On Harris’ view, we do factor values like nepotism and egoism into our calculus, and try to help even sociopaths live a joyful, fulfilling, beautiful life — within limits.
What limits? Simply that it come at no cost to everyone’s joy, fulfillment, and beauty. In that respect, the system is more fair than a democracy, since unpopular values get equal weight; and at the same time less exploitable than one, since that weight is determined by psychological fact, not by popular opinion.
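The contrast between a popularity vote and the intensity-weighted aggregation described above can be made concrete with a toy sketch. (This is my own illustration, not anything from Harris’s book; the names and numbers are invented for the example.)

```python
def democratic_tally(preferences):
    """Each person contributes one vote per value they endorse,
    regardless of how much they care about it."""
    tally = {}
    for person in preferences:
        for value in person:
            tally[value] = tally.get(value, 0) + 1
    return tally

def welfare_weighted_tally(preferences):
    """Each endorsement is weighted by its experienced intensity (0 to 1),
    so a rare but deeply felt value is not simply outvoted."""
    tally = {}
    for person in preferences:
        for value, intensity in person.items():
            tally[value] = tally.get(value, 0.0) + intensity
    return tally

# Three people mildly prefer A; one person cares enormously about B.
people = [
    {"A": 0.2},
    {"A": 0.2},
    {"A": 0.2},
    {"B": 0.9},
]

votes = democratic_tally(people)           # A wins by popularity, 3 votes to 1
weights = welfare_weighted_tally(people)   # but B outweighs A by intensity
```

Under the vote count, A wins outright; under the psychological weighting, the single intense value B outweighs the three mild endorsements of A (0.9 to 0.6). That is the sense in which the scheme gives unpopular values their full weight while leaving nothing for mere opinion-lobbying to exploit.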
So most malign values are squelched or stymied not because they’re intrinsically Evil but because they don’t scale well. They don’t interact in such a way that they form sustainable ecosystems of positively valenced experience. On Harris’ view, we shouldn’t block or assist sadists and war criminals merely because it pre-reflectively ‘feels righteous’ to do so; for our sense of righteousness can go horribly astray. Rather, we should do so because an ecumenical ‘value all values’ project demands it, and because abandoning this meta-value means abandoning our best hope for fully general cooperation between sentients.
What’s on the table is less a moral theory than a humanitarian superproject. Harris reinterprets our language of ‘ought’ and ‘should’ not with the goal of solving Kantian paradoxes but with the goal of defining and motivating a long-term civilizational research program, all while bringing our intellectual drives and traditions into a more intimate conversation with our moral drives and traditions, at the individual as well as the societal scale.
Why call this ‘morality’?
For a person who wrote a book about meta-ethics, Harris is remarkably unconcerned with meta-ethics. He takes note of it only to do a bit of conceptual and rhetorical tidying up. At all times, his sights remain firmly fixed on applied ethics, on politics, on, well, real life.
[T]he fact that millions of people use the term “morality” as a synonym for religious dogmatism, racism, sexism, or other failures of insight and compassion should not oblige us to merely accept their terminology until the end of time.
But if there’s real disagreement here, why speak in terms of ‘ought’ and ‘bad’ at all?
The problem isn’t that those are univocal, clearly-defined terms whose entrenched meanings Harris is flouting. The more realistic worry, rather, is that they’re horribly confused terms with only a limited amount of consistency within and across linguistic communities. Folk morality is a mess. Heck, academic morality is a mess. And folk meta-ethics and folk normative ethics (and their academic counterparts) are particularly confused and divergent — far more so than object-level morality. So if Harris’ goal is to inject some clarity and points of basic consensus into this conceptual cacophony, why enter the fray we call ‘ethics’, with its centuries of accumulated obscurity, at all? Why not just invent a new set of terms for what he has in mind, like ‘flought’ and ‘flad’? Then, stipulatively, we could have our flobligation cake and eat it too. If he did that, you can be sure that you’d see fewer people treating ‘but you’re just defining morality as “the maximization of well-being”’ as an objection.
Although it’s tempting to reboot ethics and start over with a clean slate, I think the risks of completely forsaking the moral conversation are too dire. Moral language is just a language. (What’s ethical remains ethical, whether we call it ‘ethical’ or ‘flethical’, or ‘unethical’, or ‘linoleum’.) But language matters. Our intuitions are language-shaped. Even if we say that ‘florality’ or ‘neuro-eudaimonics’ is far more humanly important and conceptually deep than traditional ‘morality’, people raised on the ‘morality’ lexicon will still reliably misconstrue how high the stakes are, and misconstrue even their own preferences, if we toss out moral language.
Many [highly educated men and women …] claim that a scientific foundation for morality would serve no purpose in any case. They think we can combat human evil all the while knowing that our notions of “good” and “evil” are completely unwarranted. It is always amusing when the same people then hesitate to condemn specific instances of patently abominable behavior. I don’t think one has fully enjoyed the life of the mind until one has seen a celebrated scholar defend the “contextual” legitimacy of the burqa, or of female genital mutilation, a mere thirty seconds after announcing that moral relativism does nothing to diminish a person’s commitment to making the world a better place.
Moreover, our traditional talk of goodness and badness has some very useful features, like its correlation with our deepest concerns and its built-in universality. Certainly we could redefine morality in, say, egoist terms. ‘Justice’ and ‘ought’ could be made to refer to the speaker’s interests, as opposed to the overall interests of sentient beings. But then it would be less useful as a language, since the meanings of the terms would vary from person to person, like pronouns do, and since we already have adequate ways to express personal preferences.
Ethical discourse is our only established way to concisely refer to aggregate preference satisfaction. So streamlining the expression-conditions of this discourse, stripping it of the parochial or metaphysically dubious associations it has in certain linguistic communities, may be a very valuable project if we have a sufficiently important candidate meaning to adopt. Harris thinks that psychological well-being meets that condition.
I’ve emphasized the revisionary nature of Harris’ project, because I want to make it clear why objections like Craig’s are beside the point. Harris’ goal is to provide a framework for thinking and talking clearly about humanity’s most important (i.e., most widely and deeply valued) problems and possibilities. His goal isn’t to provide a novel theory that can ground all our naïve normative intuitions, ordinary prescriptive language, or sophisticated ethical theories, because he thinks that all three of these are frequently useless, internally inconsistent, even outright contentless.
Everyone has an intuitive “physics,” but much of our intuitive physics is wrong (with respect to the goal of describing the behavior of matter). Only physicists have a deep understanding of the laws that govern the behavior of matter in our universe. I am arguing that everyone also has an intuitive “morality,” but much of our intuitive morality is clearly wrong (with respect to the goal of maximizing personal and collective well-being).
At the same time, I don’t want to suggest that Harris’ framework is all that ethically novel or strange. We really do care with unparalleled ferocity about suffering, rapture, beauty, tranquility, and all the other qualities of experience Harris is interested in. And our everyday moral intuitions and conventions really do orbit the distribution of extreme forms of these experiences.
My qualification is that that’s a contingent fact, and it’s not the core reason Harris is so interested in this project. If our moral intuitions had turned out to be consistently detrimental to our psychological welfare, Harris would have advocated the destruction of morality, not its reconceptualization! But, for all that, the conservatism of Harris’ proposal is very much worth keeping in mind. If nothing else, it shows that Harris’ project isn’t as difficult as it might seem. All we need is a small but vocal pool of intellectuals and public figures on our side, just large enough to reverse the current cultural trend towards blind relativism and lame nihilism.
Harris’ aim, then, isn’t to give a fully general semantic theory of what the word ‘good’ means in English, or to provide metaphysical truth-conditions for all our intuitive judgments. It’s to recommend a simple framework for collaborating on issues of deep humanistic import. It’s to repurpose an increasingly unproductive discourse to express the urgency of scientifically inquiring into the nature of anything and everything that matters to us. And then actually doing something about it.
Regimenting our concept of “morality” with simplicity will make it easy to teach and explain the value of value, regimenting it with elegance will make it easy to theoretically and pragmatically defend the value of value, and regimenting it with egalitarianism will ensure that we do not disregard any of the core concerns of any of the beings capable of having concerns. If Harris’ own proposal is not ideal for this aim, still it seems clear that something has to fill the void that is modern ethical thought, lest this void continue to encroach upon the things we love.
- Alexander, Larry & Moore, Michael (2007/2012). “Deontological Ethics”. SEP.
- Alexander, Scott (2011). “The Consequentialism FAQ”. Raikoth.net.
- Harris, Sam (2011). “A Response to Critics”. The Huffington Post.
- Harris, Sam (2011). “Toward a Science of Morality”. The Huffington Post.
17 thoughts on “Moral theory is for moral practice”
Great article, you should write more!
You linked to a video of Shelly Kagan from Yale early in the piece, and I am curious to hear your thoughts on his and Craig’s debate, if you’ve seen it.
Thanks! I plan on it! I have not seen Kagan and Craig’s debate, but I’ll let you know what I think if I watch it. If there’s a particular portion of it you’d like to hear my thoughts on, send me the timestamp. I’ll also be responding to Craig’s criticism of my own work in a few days.
I think you are shortchanging academic metaethics. I don’t think that academic metaethics and normative ethics are “confused” (whatever that means). They are highly technical areas in which people have made technical arguments to justify their positions, and I see no reason to believe that they’re “confused”. What’s interesting is that I’ve gotten much more out of the academic metaethics and normative ethics I’ve read than out of anything of Harris’. From my understanding of Harris, he’s just trying to argue for an objective utilitarian standard based on flimsy neurological evidence. He doesn’t really tackle some of the problems with consequentialism. I’m all for practical ethics, but I just don’t see Harris as really advancing that project at all.
Meta-ethics and normative ethics are enormous fields, so we may just be reading different authors. Send me a few representative papers; I may change my mind!
What do you mean by ‘flimsy’? I find the suggestion that his arguments are relevant but inconclusive strange; depending on what you think he’s arguing for, I think his case should either be pretty empirically airtight, or a complete non sequitur. If you tell me what you think his argument is (or which of the four theses above you’re objecting to), this will be easier to hash out.
He tackles some of the problems. I added a link to a Consequentialism FAQ that addresses a few more. Which further problems do you have in mind?
I suggest anything by Simon Blackburn, Nicholas Sturgeon, and Peter Railton for metaethics, and Peter Singer, Christine Swanton, Dan Russell, and Rosalind Hursthouse for normative ethics (the latter three are focused on virtue ethics, which I think is probably the best normative theory).
“What do you mean by ‘flimsy’?”
I think he overestimates the value of neurological evidence when it comes to morality. Pat Churchland talks about the concerns I have in this podcast: http://www.partiallyexaminedlife.com/2011/07/18/episode-41-pat-churchland-on-the-neurobiology-of-morality-plus-hume%E2%80%99s-ethics/
“Which further problems do you have in mind?”
My problem with consequentialism (which, by the way, I’m much more sympathetic to than deontology) is its value monism. I do not think everything can be boiled down to one value that has to be maximized. I think in any situation there are multiple things that have to be looked at, and a simple utilitarian calculus oversimplifies the situation too much to be really helpful. Morality is hard, and there are multiple things to consider. For instance, I don’t think that there is a right choice in the trolley example. Both options are horrible and will probably emotionally damage the person choosing. But I would find it absurd to call out the person as wrong depending on which action he took. That would be cruel. For me, morality is always going to be tough, and there is not a formula that is going to give us the right answer. We have to struggle through it.
Consequentialism doesn’t entail value monism. All it entails is that evaluation of outcomes is the method by which you derive the most proper behavior(s) and that evaluation can be value pluralistic.
I think if you’re not able to formalize how decisions are justified, e.g., through a formula, then you cannot really claim anything you do as moral, because you haven’t actually defined why any action is good. That said, I think it’s also worth pointing out that having a formula for evaluation does not entail that there is always one best answer. Two or more actions can be equally good. It’s also possible that all possible actions can be equally good.
This is right. Consequentialism has absolutely nothing to do with monism; it can be pluralistic, and deontology and virtue ethics are perfectly capable of being monistic.
I also don’t understand the claim that the trolley problem doesn’t have a correct choice. If you’re right that both choices are equally horrible (which is itself an objective moral claim – which is either true or false), then both choices would be correct. In reality, though, I think the implications of each choice are different enough that it would be very improbable for them to be equally moral choices. The fact that any field of study is extremely complex will never justify claims about phenomena within that field of study by itself.
Yeah, I agree. To assert them equally means there has to be a basis for that (which is why I previously said unless you can define these terms formally you cannot call anything you do moral).
I think there is a simple variant of the trolley problem that further reveals consequentialism, and some valuation of life, as the source. Let’s call it the pathological trolley dilemma. In this one the trolley is barreling down and will kill 1 million people if left on its current track, or you can divert it to another track where it will kill only one. It’s a completely unrealistic scenario, yet I’d be very concerned by anyone who claims either choice to be equally morally good in such a hypothetical.
Now, to be fair to these dilemmas, I think there is also some layer of complexity that makes them less trivial than a number-of-lives evaluation (and the pathological example is meant simply to reduce the influence of any additional complexities). That is, there is something to be said about living in a society where you can be thrown into a scenario to which you had no prior connection, at your expense and without your consent. Living in such a society would tend to put people on edge, because they would be much less able to predict and control their future. So, in trolley dilemmas with small differences in person count, I’d be willing to entertain the argument that switching the track introduces that negative precedent and may make up some of the difference in life count. I think the cost of this precedent also goes up as the person being sacrificed becomes further removed from the situation (e.g., the fat-man variant has a larger negative cost, since the fat man is even further removed from the scenario). However, this kind of complexity does not mean that consequentialism is wrong. Rather, it means that the consequentialist evaluation is simply difficult and requires more investigation, and I think the pathological version of this dilemma helps reveal this property too.
Yeah I agree with that completely I think.
Laurence: Could you state, in a sentence, what thesis you have in mind when you refer to “virtue ethics”? Or what part of virtue ethics you think makes it “the best normative theory”? My concern with virtue ethicists is that they tend to become evasive or defeatist when it comes to answering the most important question in ethics, which is ‘What specific choices should I make?’ My experience so far aligns well with Scott Alexander’s:
If you have articles in mind you think show a different side of virtue ethics, or if you’d like to contest any of the points made in the article, I’d welcome having my mind changed.
– a. As noted above, consequentialism has nothing to do with monism.
– b. There are non-maximizing forms of consequentialism.
– c. Whether a utilitarian calculus is helpful in object-level moral disputes is a distinct question from whether it’s helpful at higher levels. For instance, perhaps utilitarianism is the best way to decide what moral heuristics to use on object disputes; we then don’t do the calculation in real-world action dilemmas, but we do rely on them on a deeper level to figure out what rules of thumb are most likely to actually benefit people. Compare Kant’s consequentialist justification for his deontological theory.
– d. No right choice in the trolley example? That’s a rather extreme hypothesis. What if there were (somehow) ten billion people on one track, and only one person on the other track? Would you still say that there’s no fact of the matter about whether you should redirect the trolley to kill the one person, when ten billion lives are at stake? ‘Both options are horrible’ in no way implies that they’re equally horrible.
– e. ‘Both options […] will probably emotionally damage the person choosing’? Sure. But the four lives are what matter most, not the emotional damage. If it’s emotionally damaging to help others, and you do it anyway, that makes you even more virtuous, since you were willing to put more of your person at risk to do the right thing.
– f. “I would find it absurd to call out the person as wrong depending on which action he took.” – What does that have to do with whether he actually was wrong? Obviously it would be inhumane to get into an ethical argument with someone who’s just witnessed a horrible, traumatic incident. The person needs therapy, not condemnation. But that doesn’t suggest that the person has no moral obligations one way or the other.
– g. “there is not a formula that is going to give us the right answer.” – The concern isn’t that virtue ethics provides no formula that an actual human being can use in all circumstances to come to the right decision. The world is too complex, and humans too limited, for there to be such a formula. The concern is rather that virtue ethics, at least in extreme forms, can’t even acknowledge that there’s a fact of the matter about which situations are better or worse, which discourages people from even trying to reason carefully about tough real-world decisions.
Virtue ethics at best provides no methodology for even beginning to figure out how to “struggle through it”; and at worst it actively discourages people from “struggling through it” by endorsing nihilism about improving human life via making good decisions in moral dilemmas. To put it in virtue-ethical terms: The disposition to reason consequentialistically, at least when deciding what situational heuristics to endorse, is itself incredibly virtuous. (Singer is a case in point. Would he be a more virtuous person if he’d decided never to follow the consequentialist logic where it took him?)
Shelly makes WLC look like an idiot… I don’t remember the context… but when you hear it, you’ll be incredulous, or should I say in disbelief…
He doesn’t completely school him, since they are in large part talking past each other, but certainly Craig is out of his element during the Q&A section and Kagan makes some very smart points (wrt Ultimate meaning vs. meaning, for example.)
Kagan’s presentation was a whole lot of fun to watch, and quite compelling. But I agree with you that it was a fairly superficial victory, though perhaps less superficial than most of Craig’s.
You suggest they’re talking past each other. What topic could they address to move past that? What do you think is the most basic problem, or unanswered question, with the framework Kagan proposes? Likewise, what’s the deepest problem with Craig’s account (bracketing the falsity of theism)? I wrote up my own idea, but I’d like to post it after yours so we can compare mostly independent approaches and conclusions.
For others who haven’t seen the debate: http://www.youtube.com/watch?v=SiJnCQuPiuo
At the time, I remember most of the reviewers complained that Kagan went first at the insistence of Craig, which is considered bad form since he had to anticipate Craig’s arguments, but that didn’t bother me all that much. Luke, in his review at the time (http://commonsenseatheism.com/?p=1810) [how do you hyperlink on this blog?], suggested that Kagan’s problem was that he wasn’t talking meta-ethics while Craig was, but I can’t say I agree. They seem to be talking in this little Ethical field that exists only in Theism-Atheism debates rather than in any easily pegged field like Normative or Meta-Ethics: while they discuss ontological groundings for morality, they are also constantly shifting into Applied Ethics territory (“The Holocaust isn’t *really* wrong…”), and Kagan’s constructivist view of moral ontology is as ill-suited to be called Meta-Ethics as Craig’s modified Divine Command Theory.
Certainly the most glaring problem with Kagan’s framework is that his sketches of Morality don’t have much going for them other than being intuitively appealing. “Don’t harm, do help” may be the most helpful moral axiom for me in day-to-day life, but Craig rightfully asks Kagan why he’s decided to stop the ethical inquiry at that point rather than questioning the Golden Rule. His attempt to use a blend of Rawls and Scanlon is fascinating, and I would love to see someone go into more depth on it, but it’s out of its original element, since Rawls was (roughly) trying to prescribe policy principles, and Scanlon was talking about a much narrower subject: giving a robust and compelling account of what is meant by the word ‘should’, specifically when applied to interpersonal obligations. Neither of these is what Craig is looking for, which is where part of the disconnect comes from. This hypothetical contract might be objective, but is it ‘morality’? What are we even asking when we ask “What is morality?”, and what sort of answer would satisfy us? Craig has an advantage because he can do some sort of trick like “God says *this* is morality, therefore *this* is actually morality,” but Kagan can’t.
Ignoring the Theistic aspect of Craig’s presentation, the largest problem I see is the one Kagan points out during the Q&A, and I’ll quote him because I think it’s a very well made point:
“If you put it as ‘complex nervous systems’ it sounds pretty deflationary. What’s so special about a complex nervous system? But of course, that complex nervous system allows you to do calculus. It allows you to do astrophysics… to write poetry… to fall in love. Put under that description, when asked ‘What’s so special about humans…?’, I’m at a loss to know how to answer that question. If you don’t see why we’d be special… because we can do poetry [and] think philosophical thoughts [and] we can think about the morality of our behavior, I’m not sure what kind of answer could possibly satisfy you at that point.
…I could pose the same kinds of questions of you… So God says, ‘You guys are really, really special.’ How does his saying it make us special? ‘But you see, he gave us a soul.’ How does our having a soul make us special? Whatever answer you give, you could always say… ‘What’s so special about that?’”
At the end of the day, I’d like for both of them to boil down the mess that introducing the words “Objective”, “Categorical”, “Ultimate”, “Meaning”, and “Value” without properly defining them has gotten us into, but the limitations of the format are such that that would be impossible.
I should probably rewatch it, it’s been a while since I’ve seen it, but this was my first introduction to Philosophy (after which I watched Kagan’s Open Yale Course offering) so I have plenty more to say about it, even from memory alone. 🙂
(BTW, perhaps I am being impatient, but you haven’t forgotten about our Universals dialogue, right?)
I like it! I agree with you that starting with contractarianism is fine. Presumably if Luke himself (back in his desirist days) were asked about what makes morality objective and real, he’d begin with a discussion of why morality supervenes on desire (which is part-meta, part-theory), then move on to why that makes morality real, and what sort of realness that is. So I think Kagan’s starting point is great, though I agree with Luke that he should have gone deeper.
Here are the objections I wrote before reading your above comment:
For Kagan, I think the basic challenge is: Explain exactly what you mean by ‘rationality’. The worry is that Kagan may be forced to build morality into his definition of rationality — at an extreme, indifference to Social Contracts and to the Veil of Ignorance is declared ‘irrational’ by fiat. If so, then one will sacrifice both the explanatory value of contractarianism, and its prima facie normative weight. (The latter because condemning immorality for being ‘irrational’ then amounts to condemning immorality for being immoral. ‘Irrational’ adds nothing to the discussion, except a new way to call people mean names.)
For Craig, the basic challenge is: Metaphysically, what makes us ‘objectively obliged’ to obey the will of God? If you just say it’s a brute fact about God’s nature, then you have provided a ‘foundation’ for moral value in the sense of a reification, but you have provided absolutely no explanation for why there is any such value. On the other hand, if you do explain morality in terms of some non-moral property of God, then why can’t the atheist extract that property from a theistic context and use it in his own account? (If it’s something about God’s personhood, then humans are a good candidate for morality’s ground; if it’s something else about God, like its simplicity or cosmogenesis or transcendence, then an entirely impersonal cause or abstractum might suffice.) In fact, the atheist can even steal the ‘brute fact’ approach; why does divinity make it more intellectually satisfying?
And, more worryingly: Epistemically, how could we ever come to know what’s moral, if all the causal properties of Objective Moral Facts are indistinguishable from causal properties of Amoral Facts? (And if there are empirically detectable causal properties that uniquely pick out Objective Moral Facts, why couldn’t a nontheistic system instantiate those properties?)
Re linking: html (a href, etc.) works in comments.
Re universals: Too much patience can be a vice! I’ve recently acquired some new obligations, so I probably won’t be able to give a full, satisfying response to your message until the end of the month. Sorry about that. You can keep e-mailing me in the interim if your views keep evolving. I’ll most likely send you a short historical overview this week to contextualize some of the contemporary debates, because I can see it was a mistake to just start tossing interesting papers at you from wildly different eras without attempting to set the stage first.