A Coincidence of Heuristic Arguments for Bayesian Epistemology

A lot of the arguments that initially convinced me that something like Bayesian epistemology is right turned out to be heuristic arguments that I now find a lot less convincing. I am still pretty convinced nonetheless. In this post I will talk about two of them, point out how they were convincing in broad strokes but hard to flesh out in detail, and then propose a research project based on them. The research project consists in explaining why these heuristic arguments, which come at the problem from totally different perspectives, end up prescribing the same constraints for normative epistemology.

Argument from Self Location

I’m not sure what the history of arguments like this one is, but I would bet that something like it has been independently arrived at many times.

Premise 1: There are a bunch of ways that absolutely everything could be. (You can think of these ways as really long conjunctions of sentences which imply a definite answer to every meaningful question. You can also think of them as possible worlds.)

Premise 2: Every meaningful claim can be modeled as a disjunction over these ways things could be. (A claim like “Brian is pretty” is a disjunction over all of those ways which imply that Brian is pretty.)

Premise 3: If you were just born and didn’t know anything about what the world was like, it would make sense to think that each of these ways is equally plausible.

Premise 4: If you observe something that is inconsistent with some of these ways that everything might be, it makes sense to eliminate those ways, without changing the relative plausibilities of the ways things could be that are left over.

Premise 5: If you eliminate one way that things might have been, its plausibility should transfer over to the other ways things could have been that are left over.

Premise 6: If you treat plausibility as a number and you use premises 1-5 to distribute plausibility, you get a probability distribution over claims updated by Bayesian conditionalization.

Conclusion: Since premises 1-5 are right, and premise 6 just adds a notational convenience, updating a probability distribution over claims by Bayesian conditionalization is right.
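To make the premises concrete, here is a minimal sketch in Python. The toy world model, with just two atomic claims A and B, is my own invention for illustration, not anything the argument itself specifies. It shows that a uniform plausibility distribution over ways things could be, pruned by observation and renormalized, is just Bayesian conditionalization:

```python
from fractions import Fraction
from itertools import product

# Premise 1: toy "ways everything could be". Each world settles two
# atomic claims, A and B. (A hypothetical stand-in for the really
# long conjunctions in the real argument.)
worlds = [dict(zip("AB", vals)) for vals in product([True, False], repeat=2)]

# Premise 3: before observing anything, every world is equally plausible.
prior = {i: Fraction(1, len(worlds)) for i in range(len(worlds))}

def credence(plaus, claim):
    # Premise 2: a claim is a disjunction over worlds, so its
    # plausibility is the summed plausibility of worlds where it holds.
    return sum(p for i, p in plaus.items() if claim(worlds[i]))

def observe(plaus, consistent):
    # Premise 4: eliminate worlds inconsistent with the observation.
    surviving = {i: p for i, p in plaus.items() if consistent(worlds[i])}
    # Premise 5: redistribute, preserving relative plausibilities.
    total = sum(surviving.values())
    return {i: p / total for i, p in surviving.items()}

# Observe that B is true, then ask how plausible A is.
posterior = observe(prior, lambda w: w["B"])

# Premise 6: the result is exactly Bayesian conditionalization,
# Bel(A | B) = Bel(A and B) / Bel(B).
assert credence(posterior, lambda w: w["A"]) == (
    credence(prior, lambda w: w["A"] and w["B"]) / credence(prior, lambda w: w["B"])
)
```

Drop the renormalization step in premise 5 and you are left with something like the falsificationist picture mentioned below: elimination without any boost to the survivors.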

I still think this is actually a pretty good argument, but it is also definitely a heuristic argument. Premises 1, 3, 4, and 5 are not as obvious as I would like them to be, and they are definitely not uncontroversial. If you remove premises 5 and 6, you get something that looks a lot like falsificationism, which is an alternative formal epistemology with some adherents. This suggests that it is at least not obvious that eliminating some hypotheses through observation should make the ones that are left over more plausible.

Premise 1 says that there are ways that everything could be, but I have never seen a way that everything could be. Even if you say that these are infinite propositions, I have never seen an infinite proposition either, or a finite one for that matter. Working out how premise 1 could be made ontologically innocent is not trivial.

Premise 4 assumes that keeping relative plausibilities constant when you remove an option is the natural thing to do, but there are other measures of the distance between two distributions that you could minimize instead. Showing that this particular distance measure is the one we should minimize, rather than some other one, still needs doing.
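One standard way to make this precise, for what it's worth (the argument above doesn't depend on it): among all distributions that assign probability 1 to the evidence E, conditionalization is the one that minimizes the Kullback-Leibler divergence from the prior.

```latex
% Conditionalization as a minimum-divergence update (a standard
% result, not something the post's argument invokes): among all
% distributions q with q(E) = 1, the KL-closest to the prior p
% is the conditional distribution.
\[
  p(\cdot \mid E) = \arg\min_{q \,:\, q(E) = 1} D_{\mathrm{KL}}(q \,\|\, p),
  \qquad
  D_{\mathrm{KL}}(q \,\|\, p) = \sum_{w} q(w) \log \frac{q(w)}{p(w)}.
\]
```

Minimizing a different divergence, say squared Euclidean distance between the distributions, generally yields a different update rule, so the choice of what to minimize is doing real work.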

Premise 3 has the biggest problems in my opinion, but I’m not going to get into them here. There are alternatives you could use, but I’m not going to get into those here either.

I call this argument the “argument from self location” because it’s kind of like reducing uncertainty about everything to uncertainty about where you are, except instead of being uncertain about your spatial location, you are uncertain about what possible world you are in.

If you knew that you were in one of five different houses, but had no clue which one, you would use a similar method to figure out which house you are in. For instance, if you know that all the red houses have roses in them and none of the blue ones do, and there are only red and blue houses, then if you see roses, you can infer that you are in one of the red houses. Same idea here, but with possible worlds instead of houses.

There are other problems that crop up when you mix normal uncertainty about location with this weird kind of uncertainty about location in possibility space, but I am again not going to get into that here.

The Argument from Dutch Books

Dutch book arguments also had a lot to do with how I was initially convinced. Individual Dutch book arguments are a lot more formal, but the reasoning from the individual conclusions of those arguments to the correctness of Bayesian epistemology is not nearly as watertight as I used to think it was. I won’t outline all of the Dutch book arguments here, but I will give you an example so we can see why I think they fail to justify Bayesian epistemology on their own.

Dutch book arguments justify particular credence constraints by showing that an agent who violates those constraints, and accepts bets in line with its credences, can be made to accept a set of bets that is in combination a sure loss. It is assumed that if your credence in a claim is p, written Bel(claim) = p, then you will buy a ticket that pays out 1 usd if the claim is true at any price of p usd or less, and sell such a ticket at any price of p usd or more. Sometimes, in the context of Dutch book arguments, credences are simply defined as those dispositions to buy and sell bets.

As an example, take the constraint that Bel(A) + Bel(¬A) = 1. Suppose that for some agent these sum to more than 1: Bel(A) + Bel(¬A) > 1. Then you can sell the agent a ticket that pays out 1 usd if A is true for a price of Bel(A) usd, and a ticket that pays out 1 usd if ¬A is true for a price of Bel(¬A) usd. Since Bel(A) + Bel(¬A) > 1 by stipulation, and exactly one of A and ¬A will turn out true, you have been paid more than 1 usd and will pay out exactly 1 usd. The agent makes a sure loss whether A or ¬A turns out to be true.

If an agent’s credences sum to less than 1, then you can buy those same tickets from the agent instead of selling them, and the agent again makes a sure loss. It’s easy enough to see that if the agent’s credences in A and ¬A sum to exactly 1, then you will not be able to make a sure win off of them in this fashion.
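Here is a minimal sketch of the bookie's arithmetic in both directions, assuming the ticket-buying dispositions described above. The incoherent credences are made-up numbers:

```python
from fractions import Fraction

def bookie_profit(bel_a, bel_not_a, a_is_true):
    """Bookie's net profit against an agent whose credences in A and
    in not-A are bel_a and bel_not_a, given how A actually turns out."""
    total = bel_a + bel_not_a
    # The ticket on A pays 1 usd iff A is true; the ticket on not-A
    # pays 1 usd iff A is false. Exactly one pays out either way.
    payout = (1 if a_is_true else 0) + (0 if a_is_true else 1)
    if total > 1:
        # Sell the agent both tickets at its own prices, then pay
        # out on whichever ticket wins.
        return total - payout
    if total < 1:
        # Buy both tickets from the agent instead.
        return payout - total
    return 0  # coherent credences: this recipe yields no sure profit

# Credences summing to more (or less) than 1 lose the same amount
# whether A turns out true or false: a sure loss.
for a_is_true in (True, False):
    assert bookie_profit(Fraction(7, 10), Fraction(1, 2), a_is_true) == Fraction(1, 5)
    assert bookie_profit(Fraction(3, 10), Fraction(1, 2), a_is_true) == Fraction(1, 5)
```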

This fact is then taken to support the claim that credences over exhaustive, mutually exclusive propositions must sum to 1.

My main problem with this argument, and others like it, is that there is, to my knowledge, no good argument that rationality requires accepting the same bets regardless of which other bets have been accepted beforehand. You could argue that it’s convenient to have a formal rule that makes it impossible to be Dutch booked, but here is a recipe for making other formal rules that cannot be Dutch booked: pick any other formal rule, and then add the caveat that one should not accept the last bet in a sequence of bets that forms a Dutch book.
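To make that recipe concrete, here is a minimal sketch (the two-outcome setup and the incoherent credences are mine, for illustration) of an agent whose base rule is to buy any ticket priced at or below its credence, with the caveat that it refuses any bet that would complete a book that loses in every outcome:

```python
OUTCOMES = ("A", "not_A")          # toy outcome space
bel = {"A": 0.7, "not_A": 0.5}     # deliberately incoherent credences

def net(bets, outcome):
    """Agent's total payoff in an outcome; a bet (claim, price) is a
    ticket bought for `price` that pays 1 usd if `claim` is true."""
    return sum((1 if claim == outcome else 0) - price for claim, price in bets)

def sure_loss(bets):
    """A book is a sure loss if its net payoff is negative in every outcome."""
    return all(net(bets, o) < 0 for o in OUTCOMES)

def accepts(history, claim, price):
    # Base rule: buy any ticket priced at or below the credence...
    if price > bel[claim]:
        return False
    # ...with the caveat: never complete a Dutch book.
    return not sure_loss(history + [(claim, price)])

history = []
for claim, price in [("A", 0.7), ("not_A", 0.5)]:
    if accepts(history, claim, price):
        history.append((claim, price))

print(history)  # [('A', 0.7)]: the second ticket is refused, since
                # together the two would lose 0.2 usd in every outcome
```

Offer the same two tickets in the opposite order and the agent keeps the not-A ticket and refuses the A ticket instead. That order dependence is exactly the feature at issue.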

Sure, if the bets had been offered in a different order, someone following an alternative formalism built along the lines I suggest would have taken a different bet, but why is that a problem? Why can’t that be rational? After all, they have a very good reason to accept the bet if it is offered first, but not if it is offered second: they don’t want to be guaranteed to lose money.

Dutch book arguments have an advantage over the argument from self location in that they do the work of establishing conditionalization without depending on anything as substantial as premises 4 and 5. They also don’t have to say anything at all about “ways that everything could be”, which are admittedly quite mysterious. But they rely on the nontrivial assumption that you should not need to consider which other bets you have accepted so far when deciding whether to accept a new bet.

A Remarkable Coincidence

For all of the difficulties with both of these arguments, it’s pretty weird that when you imagine that you are in one of a bunch of possible worlds, and then count the fraction of those worlds that are consistent with everything you have observed so far, the natural formalism you get also happens to be the only formalism for betting that doesn’t let other people sell you combinations of bets on which you are guaranteed to lose money.

It’s actually weirder than that. For some reason, treating possible worlds like different places you might be, and then reducing all kinds of uncertainty to uncertainty about which of these possible worlds you are in, yields the same formalism that you get when you try to figure out a way to avoid accepting bets that you are sure to lose money on, without having to consider which other bets you have already accepted. Why do we need the caveat that the agent does not consider which other bets it has accepted for these to yield the same formalism? Why does eliminating a way things might be have to make the other ways left over more plausible for these to yield the same formalism? Is that even true? How different could you make the assumptions of each argument without changing them into arguments for different formalisms?

Explaining why these very different kinds of heuristic arguments lead to the same formalism would, I suspect, get us a lot of the way to understanding why it makes sense to use that formalism for reasoning under uncertainty.

The natural interpretation of the formalism when you look at it from the perspective of the self location argument is that the probability of a claim represents the fraction of worlds consistent with current observations in which that claim is true. The natural interpretation of the formalism when you look at it from the perspective of dutch book arguments is that the probability of a claim represents the price at which you should be willing to buy a ticket that pays out 1 usd if the claim is true. It doesn’t seem obvious that those quantities should be the same.
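It's easy to check the direction we already know, of course. Under the self location reading, credences are fractions of surviving worlds, and those fractions automatically satisfy the additivity constraint that the Dutch book argument enforces. A trivial sanity check, using the same toy worlds as before:

```python
from fractions import Fraction
from itertools import product

worlds = [dict(zip("AB", vals)) for vals in product([True, False], repeat=2)]
surviving = [w for w in worlds if w["B"]]  # worlds consistent with observing B

def fraction_true(claim):
    # Self location reading: the credence in a claim is the fraction
    # of surviving worlds in which the claim is true.
    return Fraction(sum(1 for w in surviving if claim(w)), len(surviving))

# Dutch book reading: used as ticket prices, these same numbers sum
# to 1 over A and not-A, so the sure-loss recipe finds no profit.
assert fraction_true(lambda w: w["A"]) + fraction_true(lambda w: not w["A"]) == 1
```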

I’m not sure exactly what it would take to make it seem obvious, but it would have to be more than just following the arguments through and showing that the formalisms they yield are the same under a simple transformation. The question is why the arguments yield the same formalism. We already know that they do.

It might seem obvious that these arguments should yield the same formalism if you are already steeped in Bayesian epistemology, but imagine what it would be like to discover that the formalisms are identical for the first time. Imagine that you were motivated to figure out a formalism for deciding what bets to take without reference to the other bets you have accepted so far, and also that you were independently motivated to find a formalism for figuring out which of finitely many possible worlds you might be in. If you found the best formalisms for both and then noticed that they turned out to be exactly the same under the right transformation, I think you would be right to find that surprising.

In any case, the fact that these arguments, which use totally different starting points and interpretations, yield the same formalism suggests to my mind that there is something uniquely reasonable about using probability theory to reason under uncertainty, and suggests it more strongly than the two arguments taken independently do. It’s a heck of a coincidence, and it would be nice to have an intuitive explanation that made it seem less surprising.
