In this post, I would like to share an essay on moral philosophy that I wrote last year as part of a philosophy class. Since I didn't reach a coherent conclusion, the title is phrased as a question, somewhat evasively. (Although, after exploring a little further, I am coming to find pragmatic ethics, as in that of John Dewey, an appealing ethical standpoint.)
Without further ado:
In the history of philosophy, one of the great problems of ethics has been that of deriving true value statements from empirical facts: the "is-ought problem". The purpose of this essay is to examine whether normativity is objective, and how (or whether) moral knowledge can be obtained.
Let me start by presenting two different views on human nature from the history of philosophy. Rousseau held that human beings are born altruistic and that society eventually corrupts them, whereas Hobbes defended the opposite view: that humans are born selfish and it is society that teaches them better.1 These competing hypotheses matter for describing human actions, but do they matter when considering how we "should" act?
The word "should" is value-laden and expresses something driven by an intention or purpose, often directed toward a corresponding goal. It goes beyond a simple declaration of facts about how the world "is". For value statements to exist, there must be a subject, a conscious agent, who holds the expressed value; it does not simply appear out of nowhere. Clearly, there is a mental aspect involved in linking facts to values. But does the same fact always relate to the same value for every conscious being that encounters it? Apparently not, as I hope the next section will clarify.
Upon examining different cultures, some of which practice what Western traditions consider "wrong" and morally abhorrent (such as female genital mutilation or mass suicide), there seems to be a conflict between different belief systems about which ways of life are morally favorable. The existence of, and sometimes incoherence among, opinions about right and wrong is perhaps the best argument for moral relativism, although arguably not a relevant defense of normative moral relativism (in the sense that one ought to be tolerant of every traditional practice).
The opposite view has also been taken, for example by utilitarians and consequentialists: that there is a "true" way of reasoning to arrive at the "best" theory of what is right and wrong. The utilitarian philosopher Sam Harris argues that the best way of determining moral worth is to regard the well-being of conscious creatures as "objectively good".2 On my interpretation, he is also saying that this is the only sensible way of arriving at a pragmatic moral truth. A key point in his argument is that "moral view A is truer than moral view B if A entails a more accurate understanding of the connections between human thoughts/intentions/behavior and human well-being." This implies the convenience of being able to provide evidence for or against a moral theory, for example in experimental psychology, assuming an established definition of what is meant by "well-being" (which Harris does not really provide in his book The Moral Landscape) and of the scale on which well-being is to be pursued (e.g. individual or societal, current or potential lives, other species, etc.).
Contemporary experimental psychologists and cognitive neuroscientists (such as Patricia Churchland, Antonio Damasio and Joshua Greene) have examined moral judgment with the aim of explaining how the mind works when making morally significant decisions. According to a review article by Greene, they have found support for a "dual-process theory", according to which utilitarian thinking (e.g. considering the "greater good") is enabled by controlled cognitive processes, while deontological judgment on matters such as individual rights is driven mostly by intuitive emotions.
These investigations could be included in the emerging interdisciplinary field of neurophilosophy, in which neuroscientific evidence is used to better understand issues in the philosophy of mind. Although many philosophers used to dismiss the findings of neuroscience as irrelevant, claiming that "what matters is the software, not the hardware", there is nowadays a growing interest in this intersection with neuroscience.
Arguably, most people would agree that moral traits include aspects such as sympathy, altruism, cooperation and justice. These are thus probably among the traits that researchers searching for the "neural mechanisms of moral cognition" have in mind when they use the term "moral behavior".
But even if there were a well-established intersubjective agreement on which traits are morally right and wrong, would that be a sufficient condition for morality to be objective? Can neuroscientists and experimental psychologists study ethical imperatives without themselves committing a naturalistic fallacy?
Since morality is about separating good from bad, we immediately encounter the problem of judging what is good and what is bad within a scientific description of how the brain works. There seems to be no way around Hume's law, since morality is inevitably, and by definition, concerned with value statements.
Let us examine the implications that would follow from regarding intersubjective agreement as a sufficient condition for moral universality. Here, I think it is useful to make a comparison with other subjects, such as physics and mathematics, which are universal in the sense that even a hypothetical alien species could discover them by observing the universe and deriving the laws of nature (assuming these do not suddenly change). On the other hand, we could compare morality to a subject such as art, which revolves around matters of taste and often involves a personal categorization of what is considered aesthetically pleasing. Is morality merely a "matter of taste"?
I think not, at least if we confine the definition to our own species and examine the genetic (a priori) ways we engage in social interaction and cooperation. Cultural differences aside, we are in general constrained by natural predispositions to interact in certain ways, and also, through reasoning, to arrive at similar conclusions about right and wrong actions. Clearly there are exceptions, but on average, the fluctuations in value systems are relatively small.
Now consider, as a thought experiment, that we created artificial intelligences with consciousness and the same reasoning capacities as humans, but with no emotions or ability to feel pleasure and pain. By what standard would they evaluate the rightness or wrongness of actions involving what we humans call "well-being" or "suffering"?
Although statistical intersubjective agreement would be a reasonable requirement for a universal standard of right and wrong, in practice this agreement would be highly anthropocentric. However, there might be a way to temper this conclusion by defining truth in a pragmatic sense: at present, the world is controlled largely by a human population, and all the relevant moral decisions are thus made by members of our species. Perhaps exploring the value-sources of other potentially conscious beings remains beyond both practical necessity and the limits of contemporary technology. I think it is worth considering, however, that human conceptions of maximum well-being may be something entirely different for other types of intelligence, or even non-existent. Doesn't this somewhat undermine the controversial view of an objective morality?
In my current view, the most important aspect of ethics is its practical function. In a world full of individual, somewhat independently thinking brains, there will inevitably be different opinions on questions of what is right or wrong. Nevertheless, it is how people act on a large scale that shows the effects of these opinions, and thus how well they function in building and maintaining a society. I also think it is most likely impossible to construct a moral code or theory that applies in every situation. Every theory examined so far has been found, by critics, to contain problems and loopholes in certain contexts. It is therefore fortunate that many different opinions exist, so that we can have an ongoing discussion rather than blindly accept and follow what a certain theory says without questioning its validity.
In quite a disheartening way, this conclusion takes us back to the starting point: finding the most realistic theory of ethics. Whether this is done by relating values to facts, or by other means, remains unclear. One of the main reasons I remain unconvinced by Sam Harris's thesis is that it rests on the premise that well-being is intrinsically good and suffering intrinsically bad, which is not arbitrary, but not really self-evident either. Another problem is that of measurement: how is well-being measured? When individuals are simply asked to evaluate the quality of their lives, or are questioned moment by moment, they tend to give different, sometimes contradictory answers.
Trying to measure well-being in a large group of people is even more difficult. Psychologists, economists and followers of Bentham have tried to find a connection between economic welfare and happiness, but it is hard to pinpoint the relevant indicators. One thing does seem certain, though: well-being is not proportional to income beyond a minimum level.
1. The "nature vs. nurture" debate in biology is, I think, similar to the opposing views of Rousseau and Hobbes, and reflects the philosophical debate between rationalism and empiricism, but from an ethics perspective. If every individual human being had the same predispositions in judging moral values, there would probably be no argument about whether morality is determined by nature or enforced by nurture (a priori vs. a posteriori). But since this is evidently not the case (otherwise there would be no debate in ethics), there must exist individual differences. The question of whether these differences are genetic or cultural could be philosophically relevant if the answer would limit or constrain human psychology: not being able to behave in a certain way constrains expectations about how one "ought" to behave. Beyond this, it is unclear how much a thorough examination of the psychology of moral judgment would yield on the subject of "true morality".
2. On his definition, values relate to facts about the well-being of conscious creatures, which is the "most important source of value". Although he admits that most of these relations remain to be discovered by scientists, his thesis asserts that they are discoverable in principle. The well-being of conscious creatures on a larger scale can be metaphorically pictured as peaks on a moral landscape, while the troughs represent the depths of suffering.
Perhaps of further interest: