Revised in Defense of Staying the Course


7/29/2019 Revised in Defense of Staying the Course


    Jay Carlson

    In Defense of Staying the Course

    Philosophy of Special Science--Dr. Cameron Buckner

    December 18, 2012


In Defense of Staying the Course: Epistemology of Disagreement Amongst Philosophers

Abstract: Contemporary epistemologists have concerned themselves with the epistemic weight we ought to give to the phenomenon of peer disagreement. One party suggests that the rational response to disagreement with an epistemic peer is to revise one's doxastic states: we should either split the difference with our epistemic peers or suspend judgment on the matter altogether. The other party, known as the "stay the course" approach, suggests that one is permitted to maintain one's antecedent beliefs even in the face of disagreement with epistemic peers. In this paper, I will consider how the psychology of expertise affects the way we approach disagreement among philosophers. This will contribute to an examination of the respective ecological rationality of these strategies within the domain of philosophy. Finally, I hope to defend the claim that the STC-strategy is more ecologically rational in the domain of philosophy.

A recent topic of much discussion in epistemology is what sort of epistemic weight ought to be given to the phenomenon of peer disagreement. Roughly speaking, there are two major strategies for an epistemic agent in a situation of peer disagreement. The first option is that one should revise one's beliefs in order to accommodate the beliefs of one's epistemic peer. There are two distinct species of this strategy. The first is where one attempts to establish a compromise position between the opposing views, as it were, to "split the difference" between them. The second species of this position posits that in a situation of peer disagreement one ought simply to be agnostic on the matter, that is, one should suspend one's beliefs (Kornblith 2009, 52). Both of these strategies suggest revision of belief, whether toward compromise or


agnosticism; thus I will refer to these strategies collectively as the R-strategy.1 The second option claims that one can maintain one's antecedent beliefs in the face of peer disagreement. This option is known as the "Stay the Course" approach (hereafter the STC-strategy). While these discussions could be about disagreements in any area of life, I will focus upon disagreement amongst philosophers. I want to argue that the STC-strategy is still permissible even in cases where a philosopher has reason to believe that her fellow philosopher interlocutor is in a symmetrical relationship to the truth.

    To approach the question of which strategy the philosopher should adopt in a peer

    disagreement situation, we have to examine several preliminary issues. First, what sort of

conditions constitute a peer disagreement situation? To what extent are philosophers

    experts in their domain of inquiry, and does that status alter their responsibilities in these

    peer disagreement situations? Are they exempt from the sorts of biases that plague the

judgments of the average non-philosopher? Defining the philosopher's epistemic

    situation in terms of their expertise allows us to evaluate the ecological rationality of the

    R and STC strategies. It might be the case that the environment in which the

    philosophers conduct their inquiry makes it rational for them to prefer one of these

    strategies to the other.

    What is the initial impetus for these strategies of dealing with disagreement? The

    R-strategy gets its intuitive force from the fact that we are fallible creatures who

    frequently have false beliefs. It also seems plausible that our interlocutors are usually

    more or less on an epistemic par with us as it relates to believing the truth. Other things

    being equal, we usually do not have reason to believe that we have some privileged

1 In previous drafts of this paper, this option was known as the STD-strategy.


    access to truth that other equally bright and reflective people do not. As a result of these

    prima facie considerations, it would seem epistemically arrogant for person A who

believes p to think that some person B who is A's epistemic peer but believes ~p is

    wrong from the outset. A more plausible response in such a situation is that we should

    take a humbler attitude toward our antecedent doxastic states and be open to the

    possibility of revising them.

In the other direction, however, the impetus behind the STC-strategy is that while it might be true that some instances of disagreement should lead one to reconsider one's beliefs, it is not nearly as plausible that disagreement should always warrant reconsidering one's beliefs. After all, we have justifications for many of our antecedent beliefs, and it hardly seems plausible that disagreement by itself would undermine any justification we might have for a belief. Bluntly stated, to simply surrender one's antecedent beliefs at the first sign of dissent seems like an epistemic form of cowardice (Elgin 2009, 57).

An important variable in this discussion is the notion of epistemic peerhood: roughly, that two agents are on level epistemic ground regarding the truth of some proposition. Thomas Kelly describes the conditions for epistemic peerhood as follows:

    [T]wo individuals are epistemic peers with respect to some question if and only

    if they satisfy the following two conditions: (i) they are equals with respect to

    their familiarity with the evidence and arguments which bear on the question, and

    (ii) they are equals with respect to general epistemic virtues such as intelligence,

    thoughtfulness, and freedom from bias. (Kelly 2005, 1745)


    Neither has privileged access or ability that would make them more likely to possess the

truth than their peer. Though few real-world agents will stand in the exactly equal relationship to the truth with other agents that these conditions demand, the conditions nevertheless serve as an epistemic ideal that agents can approximate, seemingly without much loss.

STC advocates will note that there are some fairly uncontroversial conditions where maintaining one's antecedent beliefs in the face of disagreement is surely permissible. If one has reason to believe that one's interlocutor has made a mistake or is simply unreliable on the topic being discussed, then it seems obvious that one has justification in maintaining one's antecedent beliefs and thus reason to discount the epistemic weight of the other person's disagreement. It might even be the case that in such a situation one has an obligation not to revise one's beliefs. It seems plausible to think that even advocates of the R-strategy would accept the justifiability of maintaining one's belief in these situations, because the epistemic peer condition has failed to obtain.

But R advocates would respond in kind that it is surely false that one can disregard another person's opinion simply because they disagree. In the introduction to their volume Disagreement, Richard Feldman and Ted Warfield deny that the mere fact of person B's disagreement with A itself counts as evidence that B is unreliable on the given topic. Such a position seems indicative of the overconfidence that STC advocates would do well to resist (Feldman and Warfield 2009, 5).

    One might find it curious that these epistemic strategies of what one should do in

    a disagreement make no reference to the arguments or evidence that underlie either side.

    The neglect in this discussion of evidence and arguments as relevant for what one should


do in a situation of disagreement leads some to object that these strategies focus on the wrong phenomena for the question of how to respond rationally to disagreement. The proper focus in a disagreement, so this objection goes, is not a matter of who disagrees with whom, nor of tallying how many people are on each side of the debate, but rather what evidence can be marshaled for each position. Thomas Kelly thus argues that how opinions are distributed across the philosophical discipline on a given issue is effectively a sociological observation that does not provide any philosophically relevant evidence about what one should do in situations of disagreement: what one should do in a case of disagreement lies completely on the level of the evidence and arguments for each side (Kelly 2005, 182). If one has reason to think that one's opponent's argument or evidence is somehow deficient, then it is rational to stand one's ground and maintain one's belief; conversely, if one finds the other side's arguments and evidence compelling, then it would be rational to revise one's beliefs in one respect or another. The disagreement itself is therefore either irrelevant or unnecessary for what one should do.

This perspective warrants two responses. First, these strategies start from the assumption that the evidence in a given situation is symmetric, meaning that the evidence is equally balanced on both sides. There are always possibilities that one could break this symmetry: by noticing a hidden inconsistency that one's opponent has not recognized, by developing an argument that they must respond to, and so on. A second possible response to this objection might be that these strategies are epistemic heuristics that allow us to make quick but accurate assessments of what one ought to believe in a given situation, in the same way that Gigerenzer and Todd's "fast and frugal" heuristics allow us to make quick but accurate decisions in certain environments without requiring


unreasonable amounts of calculation (Todd and Gigerenzer 2000, 731). The rationale behind the fast and frugal heuristics is that calculating what is optimally rational to do in a situation solely on the basis of logic and probability theory is very costly, if possible at all, and these heuristics can approximate the satisfactory decision within that particular environment at a fraction of the cost of standard rational calculations. In the same way, these epistemic heuristics could be warranted by the difficulty and cost of calculating what is rational to believe in a given situation and by their ability to approximate a satisfactory doxastic state without the onerous calculation. I hedge this claim with "could" because, to my knowledge, whether these epistemic heuristics regarding disagreement can produce a satisficing doxastic state is an open empirical question awaiting a test.
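To make the analogy concrete, here is a minimal sketch of one of Gigerenzer and Todd's fast and frugal heuristics, "take the best," which decides between two options by checking cues in order of validity and stopping at the first cue that discriminates. The cue names and data below are invented for illustration; this is a sketch of the general technique, not the authors' own materials.

```python
def take_the_best(option_a, option_b, cues):
    """Decide between two options by checking binary cues in order of
    validity, stopping at the first cue that discriminates.

    option_a, option_b: dicts mapping cue name -> bool
    cues: cue names ordered from most to least valid
    Returns 'a', 'b', or 'guess' if no cue discriminates.
    """
    for cue in cues:
        a, b = option_a.get(cue, False), option_b.get(cue, False)
        if a != b:                 # first discriminating cue decides
            return 'a' if a else 'b'
    return 'guess'                 # no cue discriminates; fall back to guessing

# Hypothetical example: which of two cities is larger?
cues = ['is_capital', 'has_major_airport', 'has_university']
city_x = {'is_capital': False, 'has_major_airport': True, 'has_university': True}
city_y = {'is_capital': False, 'has_major_airport': False, 'has_university': True}
print(take_the_best(city_x, city_y, cues))  # -> 'a'
```

The point of the sketch is the cost profile: the decision inspects at most one discriminating cue rather than integrating all the evidence, which is the sense in which such heuristics trade exhaustive calculation for speed.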

An additional feature of disagreement among philosophers is that many of them have some prima facie claim to being experts on philosophical matters. At first blush, we might surmise that two philosophers, being well versed in a particular philosophical vocabulary after continuous study of and reflection on the relevant literature, have each developed a set of relatively stable judgments on some philosophical topic. But is this expertise claim warranted? While I have sketched how philosophers might have a prima facie claim to expertise, several more substantial accounts of expertise are extant. Weinberg et al. (2010) note three possible models whereby philosophers might be appropriately called experts. The impressionistic account of a philosopher's expertise I have given above will need to be replaced with a more exact account of how philosophers are experts, if they even are at all. The overarching goal for Weinberg et al. is to respond to claims like the one given above: that since philosophers are well-versed experts in their


philosophical trade, they are able to make judgments about philosophical matters (judgments usually focused on thought experiments about cases) that have more epistemic heft than those of ordinary folk (Weinberg et al. 2010, 331). The first possible model is that philosophers might have superior conceptual schemata relative to everyday folk theories. On this model philosophers have a special sensitivity to the structure of their domain of philosophical inquiry, such that they are able to pick out the features relevant for their inquiry to which the ordinary, conceptually unladen folk would not be sensitive (Weinberg et al. 2010, 337). Another model is that being well versed in a given domain makes one more likely to make better domain-related judgments. The idea for philosophers is that their grasp of philosophical theorizing makes their judgments on philosophical matters more reliable (Weinberg et al. 2010, 344). The final model is that philosophers might be experts in the sense of knowing how to effectively and economically utilize philosophical techniques and procedures (e.g. intuitions and thought experiments) the way a chess master can utilize the arrangement of pieces to simulate a variety of moves (Weinberg et al. 2010, 347).

Weinberg et al. deny that philosophers can be categorized as experts on any of these models. On the possibility of philosophers having superior conceptual schemata, there is no evidence that philosophers are immune to framing effects that could skew their judgments. Indeed, philosophers utilize a framing effect whenever they try to make subsequent thoughts consistent with an initial judgment or intuition (Weinberg et al. 2010, 340). On philosophers having a superior theory that renders their judgments superior, Weinberg et al. give two objections. First, there is no plausible candidate for a full-bodied philosophical theory that philosophers can point to as the theory in which


they can claim to have achieved expertise. Second, even if they were to produce one, it would still be an open empirical question whether that theory would make the philosophers' judgments more resistant to the biases that affect folk intuitions (Weinberg et al. 2010, 346). The final possibility considered is that philosophers might possess superior procedural knowledge and an ability to extract information from what is given in thought experiments. Here again, there is no candidate for a procedural decision-aiding tool that the philosopher is learning to use, in the way, for example, that logicians learn how to use the rules of formal logic. And even if there were such a tool, it is an open empirical question whether it would give the philosopher a systematic edge in knowing how to pick out the right verdict in a given thought experiment.

One might question whether the expertise of philosophers matters much at all to the issue of the epistemology of disagreement among philosophers. The thought here might be that once we have already stipulated that the disputants approximate some "level playing field" standard of epistemic peerhood, further stipulation that they are experts does not alter the case in any significant way. If Weinberg et al. are correct, however, then little evidence has been given to think that philosophers are not subject to the kind of framing effects that plague non-philosophers in philosophical inquiry. The philosophers' claim to expertise in their field, therefore, is at best unfounded.

This rather skeptical conclusion regarding philosophers' expertise could be relevant to the question of disagreement if it elicited the following inference: if a philosopher is aware that her judgments and those of her opponent might be subject to some framing effect or bias that is distorting their respective views, she might take the presence of a disagreeing epistemic peer as evidence that someone's judgment is being


distorted on the topic being discussed, whether it is hers, her opponent's, or both. Why think such an inference is valid? The implicit premise underlying this inference might be that disagreement between philosophers is indicative of a mistake on someone's part. If it is valid to conclude that disagreement is a reason to think that a mistake has been made, then one of the approaches to philosophical disagreement mentioned above seems particularly tempting: the agnostic form of the R-strategy. If one has reason to believe there is an error somewhere in one's discussion with someone else, it seems plausible that one ought to suspend one's belief to make sure the error does not lie with one's own beliefs.

Another possibility, however, besides this inference from skepticism about philosophers' expertise to the adoption of agnosticism, is that philosophers are experts in a field that is not likely to produce agreement. James Shanteau describes two classes of experts: one kind displays an ability to consistently perform better than a novice at the same task, while the other kind cannot consistently perform better than a novice at the same task (Shanteau 1992, 257). Philosophers might fit better in this latter class, not necessarily because they are less intelligent or careful in their inquiry than the other class of inquirers, but because the environment of the philosophical domain is not as amenable to producing, among other things, substantial instances of agreement. If this is the case, philosophical disagreement might be an expected feature of the environment in which philosophers conduct their inquiry, such that it would not be as appropriate to adopt a form of the R-strategy. To examine which of these scenarios is more plausible, we have to inquire into the ecological rationality of inquiry in the philosophical domain.


In examining the ecological rationality of various philosophical strategies, we are asking questions about the environment of the philosophical domain. We want to examine how the philosopher's capacities are suited to exploiting the structure of the information that is found in the domain of philosophy. Some might question whether ecological rationality is the appropriate measure of strategies within the domain of philosophical inquiry. The immediate task of the proponents of ecological rationality, from Herbert Simon to Peter Todd and Gerd Gigerenzer, is to develop an account of rationality that accommodates the various computational limitations that constrain a subject's ability to make a decision in real-world scenarios like emergency rooms and marketplaces. These limitations seem less pertinent in the more atemporal domain of philosophizing: the philosopher qua philosopher is not pressed by time constraints that necessarily limit her ability to arrive at the optimal conclusion. It bears noting, however, that ecological rationality is not just about the cognitive constraints a subject is under, but also about how the subject makes use of the information in her environment to inform her choice (Todd and Gigerenzer 2000, 730). In the present context, the issue is what decision the philosopher should make given the information she has about her environment, the most immediate piece of which is that someone with a roughly equal likelihood of being right on an issue nevertheless disagrees with her.

What sorts of environmental features are relevant for the acquisition of expertise? Kahneman and Klein note that what distinguishes skilled intuition from biased judgment is that the environment provides access to regular statistical cues about its features (Kahneman and Klein 2009, 520). A task environment has the property of high validity provided that it exhibits stable correlations of cues and


outcomes (Kahneman and Klein 2009, 524). In a similar vein, Shanteau notes that the ability of experts within a given domain to achieve consensus is also a function of the stability and structure of their domain's target: as a target within a domain displays predictable, repeatable feedback, the inquirers into that target are more likely to develop the ability to make accurate judgments about it (Shanteau 1992, 258). Expertise also usually requires proficiency in the use of external decision aids. All of these features of the environment characterize how robust the feedback the inquirer receives is, indicating its usefulness for making predictions about new data. These environmental signals tell inquirers when their judgments are right or wrong, thus providing a useful corrective or check on the subject's theorizing or decision-making (Weinberg et al. 2010, 349).
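The idea of a high-validity environment as one with stable cue-outcome correlations can be made concrete with a small sketch: the ecological validity of a binary cue is the fraction of cue-discriminating pairs in which the cue points to the higher-scoring item. The cue, criterion, and data below are invented for illustration; this is not a measure drawn from Kahneman and Klein's own work.

```python
from itertools import combinations

def cue_validity(items, cue, criterion):
    """Ecological validity of a binary cue: among all pairs of items
    where the cue discriminates, the fraction in which the item that
    has the cue also scores higher on the criterion."""
    correct = discriminating = 0
    for x, y in combinations(items, 2):
        if x[cue] == y[cue]:
            continue                    # cue says nothing about this pair
        discriminating += 1
        favored = x if x[cue] else y    # the item the cue points to
        other = y if favored is x else x
        if favored[criterion] > other[criterion]:
            correct += 1
    # 0.5 (chance level) if the cue never discriminates
    return correct / discriminating if discriminating else 0.5

# Toy "environment": does having a major airport track city size?
cities = [
    {'airport': True,  'population': 900},
    {'airport': True,  'population': 700},
    {'airport': False, 'population': 400},
    {'airport': False, 'population': 800},
]
print(cue_validity(cities, 'airport', 'population'))  # -> 0.75
```

On this toy measure, a validity near 1.0 corresponds to the robust, stable feedback of a high-validity environment, while values near 0.5 correspond to the weak, intermittent cues the paper attributes to the philosophical domain.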

What, then, are some relevant features of the environment of philosophical inquiry? The first thing to note about the philosopher's environment is that it is largely an intersubjective network of ideas and thoughts. The philosopher's environment would thus include the judgments and ideas of all fellow philosophers, past and present. Given the vast, unwieldy expanse of such an environment taken so exhaustively, for present purposes we can narrow the field to just contemporary academic philosophers. Second, what target are philosophical inquirers aiming to capture? The question of what philosophical inquiry targets is itself a massively controversial metaphilosophical topic; some might say that philosophy aims at the analysis of our concepts, while others think that philosophers are aiming to characterize natural kinds of one stripe or another as found in the world around us, and there are many more views besides. Perhaps the safest general characterization of what philosophical inquiry targets is


that there are numerous kinds of phenomena that different philosophical inquiries want to capture, but there probably is no single unifying thing they are all striving to capture. A third feature of this environment is how the collective opinions of these philosophers are distributed: whether they converge at points of consensus, and whether the history of this environment shows a tendency toward such convergences. Here again, the truest answer about the domain seems to be that philosophy does not currently have many substantial points of consensus, nor does its history lead one to think that consensus is forthcoming.

It seems that a common theme of the philosophical environment is widespread disagreement on almost every level. Hilary Kornblith contrasts this persistent disagreement in the domain of philosophy with the more formalized disciplines of logic, mathematics, and decision theory. In contrast to philosophy, Kornblith explains, these latter domains have a track record of eventually producing convergences of opinion so stable that it seems warranted to claim that the converged-upon views are probably true; he cites the Newcomb Problem as a specific example of a problem in decision theory where a stable consensus emerged after an initial stage of disagreement (Kornblith 2010, 40). Kornblith would probably also list the empirical sciences as further examples of domains in which practitioners historically have converged toward consensus. If one is in such a domain, with a track record of producing stable consensuses, then one has reason to think that these consensuses are probably true. Kornblith draws an epistemic conclusion about disagreement within these domains from their tendency to produce consensus: in these domains one cannot rationally maintain a belief that is in disagreement with a given consensus (Kornblith 2010, 43). If one finds oneself in disagreement with the majority in


this kind of domain, Kornblith claims, one ought to revise one's beliefs to accommodate the consensus.2 The strategy Kornblith is suggesting is a form of the R-strategy, though not one that fits cleanly with the R-strategies as I have presented them above: in the cases Kornblith is discussing, one is neither adopting an agnostic stance nor splitting the difference with one's opponents but outright conceding the debate to them. The R-strategy Kornblith recommends is ecologically rational in consensus-conducive domains because in these domains one has reason to believe, ceteris paribus, that an established consensus has a good probability of being true. Thus it would seem to be to one's advantage to revise one's belief toward a belief that has a good probability of being true.

What epistemic conclusion does Kornblith draw about disagreement in philosophy, since it does not have this tendency toward producing consensus in the way that the empirical sciences, math, and formal logic do? He concludes that the rational strategy in a non-consensus-forming domain like philosophy is to suspend one's beliefs (Kornblith 2010, 46). Using the terminology of this paper, we can take Kornblith to be claiming that the agnostic R-strategy is the ecologically rational strategy in domains where one cannot reasonably expect convergence of opinion. His reason for this conclusion seems to be that because philosophy has no history of producing stable consensus, we are not likely to receive strong signals as to when our judgments are on track or not (Kornblith

2 Kornblith's position seems to have the odd feature that anomalists, i.e. those who maintain that a given consensus paradigm is inadequate, are doing so on pain of irrationality. I think he can avoid this problem in the following way. Kornblith qualifies his claim about consensus in a footnote: one can maintain one's position in the face of a consensus if one has discovered an argument that the consensus has not yet considered (Kornblith 2010, 43n). Kornblith's thesis would also presumably not hold if one had reason to believe that one was in the early or middle stages of a paradigm, before a consensus had emerged.


2010, 45). In the absence of any robust feedback from one's environment, Kornblith would claim, we cannot adequately assess whether one's judgments are accurate, and so it is best to adopt agnosticism.

How should one respond to this? Several things could be noted about Kornblith's advocacy of agnosticism in moments of peer disagreement. One might respond that his position entails a rather sweeping form of skepticism, on which one should suspend judgment on many matters. It might well be incoherent, because even Kornblith is willing to admit that the position renders his own philosophical beliefs unjustified (Kornblith 2010, 44). In the present context, however, we are concerned with ecological rationality, so the question is whether the agnostic R-strategy allows the subject to exploit the structure of the environment in which she finds herself. The presumption behind adopting the agnostic R-strategy is that one is taking the safe bet by refraining from making a choice in an environment where the cues are not robust enough to generate much confidence. But that safety is not guaranteed: to remain agnostic on a central question of one's inquiry, or on any controversial question, is precisely to have one's investigation stall out. Furthermore, it might be the case that responding to any cues, even ones that are intermittent, could advance one's inquiry more than simply refraining from believing. On these grounds I think one is justified in adopting an STC-strategy.

How then do I avoid the problem of dogmatism that made the STC-strategy unpalatable at the outset? I think one starting point lies in an ambiguity at the end of Kornblith's article. Previously he had stated that in a peer disagreement situation one should adopt agnosticism, but at one point he describes his position as one of epistemic modesty (Kornblith 2010, 52). This suggests less about a strategy of revising the


content of one's beliefs as revising the confidence with which one holds those beliefs. I think these are separable kinds of strategies. In fact, I think the revision of the confidence of one's beliefs could be wedded to a straightforward STC-strategy: one can stick to one's antecedent belief but downgrade one's confidence in it.

It is worth reflecting on whether Kornblith's gloss on scientific domains of inquiry as consensus-conducive finds support in the way scientists go about disagreements in their domain. One reason disciplines like the sciences are taken to be consensus-conducive is that participants in these domains make predictions about empirical phenomena. Proponents of theory A predict that some phenomenon will occur under certain conditions, while proponents of theory B predict that it will not occur under those same conditions. These theories are then put to the test, and the theory whose prediction is repeatedly demonstrated under the stipulated experimental conditions is taken to be the theory to which one should assent. To give a historical example, when predictions based on Einstein's theory of relativity were repeatedly demonstrated in empirical tests such as Eddington's, an opponent of relativity theory presumably would have been obligated to revise their opinion.

But this simple story glosses over many non-trivial aspects of scientific practice where disagreement manifests itself. Disagreement is not only about what empirical phenomena will arise under some set of conditions. For example, in the debate over whether non-verbal animals display understanding of the unobserved states of conspecifics, the disagreement is not only over what empirical data will arise, but also over what empirical data counts as evidence for or against this claim. What sorts of tasks do animals have to


    perform to demonstrate that they have this capability? Even the semantics of such a claim

    are under dispute (Penn and Povinelli 2007, 731).

Perhaps the characterization of philosophy as a non-consensus-conducive domain is also too sweeping. While one might grant that philosophers' disagreements often seem interminable, there are nevertheless some examples in recent philosophical history that are plausible candidates for genuine progress in philosophy. For instance, Edmund Gettier's thought experiments seemed to generate a broad consensus among the vast majority of philosophers that justified true belief is not sufficient for knowledge. Kornblith dismisses this sort of example as an insignificant exception to the rule (Kornblith 2010, 45). Here he seems to be on more solid ground, since even if we grant that there is general assent about the correct answer to Gettier problems and their theoretical significance, this is a relatively narrow point of consensus.

In this final section, I will examine a formalized approach to adjudicating between the R- and STC-strategies by measuring their relative reliability. Barry Lam examines two ways of measuring the reliability of these approaches. The first is calibration of the subjective credence in one's beliefs to the actual probability that the content of one's beliefs is true. One is considered well calibrated when one's degree of confidence in some belief matches the objective probability that that belief is true. Even if two epistemic agents disagree, they can still be equally well calibrated; for example, one agent might be overconfident in the truth of A while the other is underconfident in the truth of A, with each deviating by the same amount. The second method of measurement, known as Brier scoring, measures the distance of a subject's beliefs about p from the truth-value


of p by taking the squared distance from the truth of the proposition (Lam forthcoming, 4).
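In standard notation (the symbols here are mine, not taken from Lam's paper), an agent S with credence c_S(p) in each proposition p, where the truth-value v(p) is 0 or 1, receives the score

```latex
\mathrm{Brier}(S) \;=\; \sum_{p} \bigl(c_S(p) - v(p)\bigr)^{2}
```

so that lower scores indicate beliefs closer to the truth.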

Lam's argument for the superiority of the R-strategy is as follows. Suppose that two agents, A and B, disagree over some proposition P, but they are equally well calibrated, so that their distances from the truth of P are equal. A and B thus satisfy the epistemic peerhood

condition. How reliable would a hypothetical agent C be, whose constant strategy in every situation is to split the difference between A and B? On the calibration test, the R-strategist C showed improved calibration in only about 15% of the simulations, leading Lam to conclude that the STC-strategy of maintaining one's belief fares better than the R-strategy of splitting the difference (Lam forthcoming, 14). In the case of Brier scoring, Lam

imagines that A and B disagree on two propositions P and Q. A and B are epistemic peers because A's beliefs are distances α and β from the truth about propositions P and Q respectively, while B's beliefs are α and β from the truth about propositions Q and P, respectively. In this case, a hypothetical agent C that splits the difference between A and B necessarily has a lower Brier score, which means that C's position is closer to the truth of the matter. Thus, on a Brier measure, splitting the difference between two positions results in greater proximity to the truth (Lam forthcoming, 17).
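Since the claim here is purely arithmetical, it can be checked with a toy computation (the numbers below are hypothetical placeholders, not Lam's own figures): when A's and B's credences are mirror images of one another, the split-the-difference credence always scores at least as well, and strictly better whenever the two distances differ.

```python
# Toy illustration of Lam's Brier-score claim (hypothetical numbers).
# Suppose propositions P and Q are both true (truth-value 1), and let
# alpha and beta be the two distances from the truth.
alpha, beta = 0.4, 0.1

cred_A = {"P": 1 - alpha, "Q": 1 - beta}   # A: off by alpha on P, beta on Q
cred_B = {"P": 1 - beta,  "Q": 1 - alpha}  # B: the mirror image, so A and B are peers
cred_C = {p: (cred_A[p] + cred_B[p]) / 2   # C: splits the difference on each proposition
          for p in ("P", "Q")}

def brier(credences, truth=1.0):
    """Sum of squared distances between credences and the truth-values."""
    return sum((c - truth) ** 2 for c in credences.values())

print(brier(cred_A))  # approx. 0.17  (= alpha**2 + beta**2)
print(brier(cred_B))  # approx. 0.17  (equal: the peerhood condition)
print(brier(cred_C))  # approx. 0.125 (= (alpha + beta)**2 / 2, strictly lower)
```

Algebraically, A's and B's scores are α² + β² while C's is (α + β)²/2, and the difference between them is (α − β)²/2 ≥ 0, which is why splitting the difference can never do worse on this measure.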

Lam's conclusion seems to resemble the conclusion of the Monty Hall problem: that changing one's choice always increases the odds of getting the prize over sticking with one's original choice. Lam's results purport to show that choosing the R-strategy of splitting the difference performs at least as well as the STC-strategy and sometimes better. On the calibration test, splitting the difference between epistemic peers preserved reliability and proximity to truth; on the Brier test, however, the R-strategy was able not only to preserve these


but to actually improve the chance of approximating the truth over users of the STC-strategy (Lam forthcoming, 17). In another paper, Lam notes that the STC-model is rationally permissible in some cases, namely when one's peer is using a less discriminating metric of reliability, like calibration (Lam 2011, 243).

How helpful is Lam's approach? First, Lam focuses only on the splitting-the-difference form of revision and does not evaluate the agnostic form of revision that Kornblith supports. Kornblith's support for the agnosticism strategy is hardly idiosyncratic, and it is difficult to know whether Lam's overall claim would survive an evaluation of it. Second, Lam's framework seems to require discerning the truth of the matter on some given issue, which is precisely what is often under debate; perhaps his framework is only an a priori setting with artificial numbers, meant to illustrate the comparative merits of the two strategies. Another issue is that there seems to be a conflation of

revising the content of one's beliefs and revising one's confidence in said beliefs. The

rough scenarios Lam utilizes seem to make the same conflation of the revision of one's confidence in a belief with the revision of the content of one's beliefs that Kornblith was guilty of. As was the case with Kornblith, this ambiguity actually confuses a kind of R-strategy with an STC-strategy.

Even if we put aside the technical worries about how to apply Lam's approach to real-life situations of disagreement, there are practical reasons to be wary of Lam's position. He insists that adopting the R-strategy consistently will result in closer proximity to knowledge (Lam 2011, 244). But adopting a simple strategy of accommodating a disagreeing peer seems to move too quickly to the reconciling position. Philip Kitcher notes that there are cognitive goals that are not met by simply revising one's beliefs in the face of disagreement (Kitcher 1990, 20). Pushing


back on critics' objections, sometimes even with pugnacity, allows for a refining of one's positions that benefits the entire epistemic community. These benefits are lost if the inquiry is short-circuited by automatic revision in response to peer disagreement3 in the way that Lam suggests.

As noted earlier, insofar as the strategies for philosophical disagreement can be considered kinds of epistemic heuristics, they have not been subjected to any model or simulation that would verify the claims of ecological rationality that I have presented here. Absent such empirical tests, my tentative conclusion is that the STC-strategy is the more ecologically rational response to philosophical disagreement.

3 One might object that our disposition to revise depends on the level of confidence of one's peer: a highly confident peer would give us a reason to adjust our beliefs, while a less confident peer would not. It seems to me, though, that Lam's scenarios include in the peerhood conditions that peers are either equally confident or at least approximately so (Lam 2011, 3).


    Works Cited

Elgin, C. (2010). Persistent disagreement. In R. Feldman & T. Warfield (Eds.), Disagreement. New York, NY: Oxford University Press.

Feldman, R., & Warfield, T. (2010). Introduction. In R. Feldman & T. Warfield (Eds.), Disagreement. New York, NY: Oxford University Press.

Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515-526.

Kelly, T. (2005). The epistemic significance of disagreement. In J. Hawthorne & T. Gendler Szabo (Eds.), Oxford Studies in Epistemology, vol. 1. New York: Oxford University Press, 167-196.

Kitcher, P. (1990). The division of cognitive labor. Journal of Philosophy, 87, 5-22.

Kornblith, H. (2010). Belief in the face of controversy. In R. Feldman & T. Warfield (Eds.), Disagreement. New York, NY: Oxford University Press.

Lam, B. (2011). On the rationality of belief-invariance in light of peer disagreement. Philosophical Review, 120(2), 207-245.

---. Calibrated probabilities and the epistemology of disagreement. Synthese. Forthcoming.

Penn, D. C., & Povinelli, D. J. (2007). On the lack of evidence that non-human animals possess anything remotely resembling a "theory of mind". Philosophical Transactions of the Royal Society B, 362, 731-744.

Shanteau, J. (1992). Competence in experts: The role of task characteristics. Organizational Behavior and Human Decision Processes, 53, 252-266.

---. (2001). What does it mean when experts disagree? In E. Salas & G. Klein (Eds.), Linking expertise and naturalistic decision making. Mahwah, NJ: Lawrence Erlbaum Associates.

Todd, P. M., & Gigerenzer, G. (2000). Précis of Simple Heuristics That Make Us Smart. Behavioral and Brain Sciences, 23(5), 727-741.

Weinberg, J. M., Gonnerman, C., Buckner, C., & Alexander, J. (2010). Are philosophers expert intuiters? Philosophical Psychology, 23(3), 331-355.