Is it really possible to “prove” that one value is better, or more right, than another? The much-contested field of moral philosophy is a testament to the difficulty of this task. Yet clinical ethics consultation is predicated on the notion that trained ethicists can offer moral expertise in the form of guidance that helps people make better decisions in medical moral dilemmas. But what is their standard for a “better decision,” and is such a standard even objectively possible? In this piece, I examine the recent work of Dien Ho, Leah McClimans, and Anne Slowther, who attempt to defend a notion of moral expertise that avoids the quagmire of conceptual problems within moral philosophy. I argue that these attempts are unsatisfactory for a variety of reasons.

Dien Ho points out that the lack of agreed-upon metaphysical foundations is not unique to ethics. The debate between realism and anti-realism remains unsettled in physics; the disagreement between natural law and legal positivism remains contested in jurisprudence; and the ontological status of numbers in mathematics is unresolved. Despite such foundational disagreements, these disciplines all manage to proceed unscathed. If, Ho argues, we are to throw out ethics because of unsettled questions in meta-ethics, it seems we would also have to throw out physics, mathematics, and law. [1] Therefore, there must be a way to proceed in ethics despite disagreements in meta-ethics.

While Ho is correct that all disciplines rely on metaphysical foundations that are often left unresolved, it is not clear that the metaphysical disputes in physics or law cause the kind of problems that metaphysical disputes in ethics produce. This is because one can practice science without resolving—in fact, usually without even being aware of—the realism/anti-realism debate. Similarly, we can learn algebra while holding no opinion on the ontological status of numbers. In ethics, however, how one judges many moral claims will be heavily influenced by one’s metaphysics. Whether one believes abortion is wrong often hinges upon a belief in moral truths that are themselves dependent upon a god who establishes such truths. Moral judgments that arise in ethics, particularly the momentous kind we see in clinical cases, will often reach deep into the foundations of one’s worldview. Ethics cannot reasonably proceed detached from these foundations any more than a race can be run without a track. John Stuart Mill makes this observation in Utilitarianism when he notes that the first principles of science often come at the end of analysis, but moral principles must come first. Mill writes that disagreement over the first principles of mathematics occurs “without much impairing, generally indeed without impairing at all, the trustworthiness of the conclusions.” [2] However, with respect to ethics, “the contrary might be expected to be the case with a practical art, such as morals. All action is for the sake of some end, and rules of action, it seems natural to suppose, must take their whole character and colour from the end to which they are subservient.” [2]

Ho argues that we can make normative progress in ethics without appeal to meta-ethics by simply agreeing to resolve moral disagreements through an appeal to reasons. Ho claims that a “default principle” arises in such situations: if A wishes to X, and B wishes for A to refrain from doing X, then B cannot justifiably prohibit A from Xing without sufficient reason [1]. The burden of proof is always on the person trying to prohibit the behavior of another. Ho recognizes that shifting the burden of proof to the one who wishes to prohibit the other’s behavior creates a “permissive bias,” such that where reason does not decide the issue (i.e., there is a “tie”), there is no ground to restrict the behavior. He then fills in what counts as giving a “sufficient reason” in this system by pointing to arguments by parity, which operate on the assumption that we ought to treat similar cases the same. Arguments by parity look for a commonly held moral belief between the two discussants that is similar to the case they currently disagree on. Using the commonly held belief, they can then resolve their disagreement by pointing out that like cases should be treated alike. [1]
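Rendered schematically (the notation here is mine, not Ho’s), the default principle says that for agents $A$ and $B$ and an act $X$:

$$\text{Wants}(A, X) \;\wedge\; \neg\exists R\,\big[\text{SufficientReason}(R, \neg X)\big] \;\Rightarrow\; \text{Permissible}(A \text{ does } X)$$

The permissive bias is visible in the schema: whenever no sufficient reason $R$ is produced, including when the reasons on each side merely tie, the default verdict is permission.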

Ho provides the example of two individuals who disagree about whether eating meat is morally permissible. Because both individuals probably agree that eating dogs is not morally permissible, the vegetarian could argue that because dogs do not differ from cows and pigs from a moral point of view, then—by parity—we ought not to eat cows and pigs. [1] Arguments by parity require 1) that disputants agree that sufficient similarity exists between the two cases and 2) that, even if similarity is established, they share a belief about which direction the resulting conclusion ought to go.
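The underlying inference can be sketched in the same spirit (again, my notation rather than Ho’s): where $C_1$ is the agreed-upon case and $C_2$ the disputed one,

$$\text{Wrong}(C_1) \;\wedge\; \text{Similar}(C_1, C_2) \;\Rightarrow\; \text{Wrong}(C_2)$$

In Ho’s example, $C_1$ is eating dogs and $C_2$ is eating cows and pigs; condition 1) supplies the similarity premise, and condition 2) fixes the direction in which the conclusion is supposed to run.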

The trouble with this strategy is that where it succeeds, it succeeds because the disputants already share a similar metaphysics. Where it breaks down, it breaks down on account of differences in metaphysics. While serving in the Peace Corps, I lived in Kosrae, Micronesia, within a culture that would draw a very different conclusion from Ho’s argument by parity with regard to eating meat. The Kosraeans whom I encountered love to eat their dogs. To them, pointing out that dogs are morally equivalent to cows and pigs counts as a reason to eat dogs—not as a reason to refrain from eating cows and pigs. They run the parity inference in the opposite direction. I had three dogs while living with them for two years, and I was constantly having to convince my host mother not to prepare them for dinner. I eventually lost that fight. Are dogs morally different from cows or pigs? Do dogs have a moral status that should prevent us from eating them? These are metaphysical differences that will make all the difference in arguments by parity. Because metaphysics is still deeply involved, Ho’s strategy does not provide a way to make normative progress without appeal to metaphysics.

Leah McClimans and Anne Slowther point out that moral expertise is often contrasted with the gold standards of medicine and science in attempts to highlight its shortcomings. This is because medicine and science are thought to be epistemically uniform, meaning that experts can run the same experiments and reach similar conclusions. In contrast, disciplines like ethics are epistemically diverse, meaning that experts will often disagree—there is no universally accepted method of moral reasoning that produces consensus among moral experts. McClimans and Slowther argue that the supposed difficulties of moral expertise are not exceptional—medicine and science are now widely regarded as facing the same problems. [3] Ethics, science, and medicine all regularly face a plurality of reasonable choices that require judgment and values to choose among.

To show that medicine is epistemically diverse, they point out that evidence-based medicine was designed to limit the role of physicians’ intuition and experience and to base decisions purely on evidence. The problem is that physicians never have all the information; they must make decisions under uncertainty, and where there is uncertainty, judgment will always enter in. When factors like clinical experience, intuition, and judgment enter in, doctors start making different recommendations given the same information. This is precisely the kind of epistemic diversity that evidence-based medicine sought to avoid. [3]

McClimans and Slowther also point out that the scientific process is epistemically diverse—full of uncertainty and the subjective judgment of individual scientists. Selecting standards of statistical significance, determining what evidence would support or weaken a theory, and deciding when that evidence is sufficient are all points at which judgment must enter the scientific process. [3] The strength of evidence needed will always depend on what one stands to gain or lose from a belief. They quote Richard Rudner, who states, “How sure we need to be before we accept a hypothesis will depend on how serious a mistake would be.” [4]
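A standard decision-theoretic gloss makes Rudner’s point precise (the formalization is mine, not McClimans and Slowther’s): suppose wrongly accepting a hypothesis $H$ on evidence $E$ costs $c_{FP}$, and wrongly rejecting it costs $c_{FN}$. Minimizing expected loss then means accepting $H$ only when

$$P(H \mid E) \;>\; \frac{c_{FP}}{c_{FP} + c_{FN}}$$

The evidential threshold is not fixed by $E$ alone; it moves with a value judgment about how serious each kind of mistake would be.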

In the 1970s, science advising became part of the US policy process, in the hope of grounding policy decisions in objective science. However, the science advice one received seemed to depend heavily on the scientist one solicited, threatening the idea that science was a purely objective business upon which rational policy could be based. Much as in the evidence-based medicine movement, federal agencies responded to expert disagreement by attempting to develop rules for scientific inference that would minimize the subjective judgment of scientists. For example, “an inference guideline might direct scientists to assume that if a chemical causes cancer in rodents, it is likely to cause cancer in humans.” [5] The development of such standards proved controversial and difficult to apply. As a result, subjective judgment on the part of individual scientists remains an indispensable feature of scientific practice.

Attempts to argue that widespread expert disagreement is a normal part of legitimate epistemic practice sometimes threaten to prove too much. The strategy is essentially to argue that because widespread disagreement is prevalent in science, we should not be alarmed by such disagreement in other disciplines: expert disagreement simply comes with the territory of expertise. The trouble with this comparison is that it overlooks the fact that there can be different kinds of expert disagreement. Sometimes widespread disagreement among “experts” should raise red flags. For example, there is certainly widespread disagreement among palm readers—you are likely to get hopelessly vague or contradictory results by taking the same palm to a variety of palm-reading experts. The reason McClimans and Slowther’s view threatens to prove too much is that their account—in the process of legitimizing moral expertise—would also inadvertently legitimize all sorts of disciplines that are widely regarded as illegitimate. This shortcoming shows that more must be said in defense of the claim that disagreement among ethics experts is acceptable simply because there is disagreement among scientific experts.

One trouble with their assessment is that it rests upon a false dichotomy—that between epistemic uniformity and diversity. Although there may be aspects of scientific practice that are epistemically diverse, there is also a mountain of epistemically uniform material in science—and it may be this mountain of epistemic uniformity that sets it apart from disciplines like ethics. Although two biologists may disagree about whether punctuated equilibrium or phyletic gradualism accounts for the way species tend to evolve, they can have that disagreement only because there is so much evidence within evolutionary biology on which they agree without question. So science is neither fully epistemically diverse nor fully uniform; it is always both. In contrast, there is very little in ethics about which substantive agreement exists. If knowledge were shaped like a tree, scientists would have their disagreements in the leaves, and ethicists in the roots. Noting the level at which disagreement occurs within a discipline makes all the difference.

Although I argue that these attempts are troubled, I believe that articulating a notion of moral expertise is vital to the survival of clinical ethics consultation as a discipline.  There is much critical work yet to be done.

References:

1.    Ho, D., Keeping it Ethically Real. J Med Philos, 2016. 41(4): p. 369-83.

2.    Mill, J.S., J.B. Schneewind, and D.E. Miller, The Basic Writings of John Stuart Mill: On Liberty, The Subjection of Women, and Utilitarianism. 2002, New York: Modern Library.

3.    McClimans, L. and A. Slowther, Moral Expertise in the Clinic: Lessons Learned from Medicine and Science. J Med Philos, 2016. 41(4): p. 401-15.

4.    Rudner, R., The Scientist Qua Scientist Makes Value Judgments. Philosophy of Science, 1953. 20(1): p. 1-6.

5.    Douglas, H.E., Science, Policy, and the Value-Free Ideal. 2009, Pittsburgh: University of Pittsburgh Press.

 
