
The Litany of Good

Upon learning something new,
I may decide to act differently.
Upon learning something new,
I may decide to act just the same.
No force compels me to alter my plans but my own judgment.
I need not fear that which can only improve my choices.
There is a theorem due to I. J. Good which is often glossed as “a Bayesian never pays to avoid free information”. In the fine rationalist tradition of interpreting the theorems of formal epistemology as wise aphorisms which compose a lived philosophy, this theorem has not been given the attention it deserves. Although it is definitely part of rationalist orthodoxy that it is unwise to avoid free information, this imperative has rarely been presented as the theorem of decision theory that it is. Even less often has it been given the kind of lively exposition that theorems like the conservation of expected evidence and Bayes’s theorem have enjoyed. I will do my part to change that here.
Good’s theorem is a wonderful piece of formal epistemology, and developing an understanding of it has also proved useful to me in my everyday personal life, as well as in my higher-minded pursuits. The exact character of that usefulness is hard to convey, although I will try to convey as much of it as I can. Along the way, because it will help me convey this usefulness and because it is a neat observation, I will claim that some of the rationality techniques that I and others have found most useful are intimately connected to Good’s theorem. I will explicitly mention three: Split and Commit, Leaving a Line of Retreat, and Simply Locating Yourself. I will show how these three techniques can all be seen as trying to do the same thing, and how that same thing is getting humans to natively implement something that closely rhymes with the main bit of reasoning used in Good’s proof.
We will start with a puzzle proposed by A. J. Ayer, the very same puzzle that motivated I. J. Good to prove the theorem we are interested in. Consider the conservation of expected evidence. It reads:

$$P(H_i) = \sum_k P(D_k)\, P(H_i \mid D_k)$$

where the $D_k$ are disjoint and exhaustive observations, and the various values of $H$ are disjoint and exhaustive hypotheses. It can be stated equivalently using expectation as:

$$\mathbb{E}\!\left[\,P(H_i \mid D)\,\right] = P(H_i)$$
The most straightforward interpretation of this theorem says that a Bayesian’s expected credences after making a new observation are just the same as their current credences. (The theorem’s proof is relatively simple. Deriving it and convincing yourself of its interpretation, if you have not already, may be a useful warm up for what is to come.) Ayer then asks: if every chance of our credence being raised some amount by a test result is perfectly balanced by an appropriate chance of our credence being lowered by a corresponding amount, why should we ever bother running a test? If we on average end up assigning exactly the credence with which we started, what’s the point?
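(If you would rather see this checked numerically than derive it, here is a minimal sketch in Python. The two-hypothesis, two-observation distribution is entirely made up; the point is only that the probability-weighted average of the posteriors lands back on the prior.)

```python
# A minimal numeric check of conservation of expected evidence, using an
# arbitrary made-up distribution over two hypotheses and two observations.

P_H = {"H1": 0.6, "H2": 0.4}            # prior credences over hypotheses
P_D_given_H = {                          # likelihoods P(D | H)
    "H1": {"D1": 0.9, "D2": 0.1},
    "H2": {"D1": 0.3, "D2": 0.7},
}

# Marginal probability of each observation: P(D) = sum_i P(H_i) P(D | H_i).
P_D = {d: sum(P_H[h] * P_D_given_H[h][d] for h in P_H) for d in ("D1", "D2")}

# Posterior for H1 under each observation, by Bayes's theorem.
posterior_H1 = {d: P_H["H1"] * P_D_given_H["H1"][d] / P_D[d] for d in P_D}

# Conservation of expected evidence: sum_k P(D_k) P(H1 | D_k) == P(H1).
expected_posterior = sum(P_D[d] * posterior_H1[d] for d in P_D)
assert abs(expected_posterior - P_H["H1"]) < 1e-12
print(expected_posterior)  # 0.6
```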
Good’s answer is to show that a Bayesian should never pay to avoid new information, because getting such information can only ever improve their ability to manipulate the environment.
The litany with which this post started doubles as a high level sketch of Good’s proof. My aim will be to slowly crank up the formalism knob until we have reached something that resembles Good’s original proof, and then slowly translate the argument back into human native concepts until we have made it back to the litany.
Another way to gloss Good’s theorem is that the value of information (VoI) is always non-negative, so before we get into Good’s proof it might be useful to get a feel for what VoI is in general.
Imagine that you want to take a walk, but you do not know what the weather will be like. You assign a 70% probability that it will be sunny, and a 30% probability that it will rain. You would like to have an umbrella with you if it does rain, but it would be a minor inconvenience to carry it on a sunny day.

We can see here why it is useful to find out the state of the weather before deciding whether to take an umbrella on a walk. Something a lot like the reversal of the order of nodes we see in TurnTrout’s illustration (nature’s state now being upstream of our action, and thereby allowing us to react) is key to Good’s proof.
However, in the real world (as in Good’s proof), we do not get to find out the state of nature itself before we make our decision. Instead we have to rely on information that probabilistically bears on what nature is like. In the case of weather, we often rely on forecasts.
Imagine then that there is a forecaster who tells us only either that it will rain or that it will be sunny, with some conditional distribution $P(\text{forecast} \mid \text{weather})$ under which the forecast is imperfect but informative evidence about the weather. Using Bayes’s theorem and a bit of arithmetic, we also get the probability of each forecast, $P(\text{says rain})$ and $P(\text{says sunny})$, and the probability of each kind of weather conditional on each forecast, such as $P(\text{rain} \mid \text{says rain})$.
We can again represent our initial decision before finding out what the forecast says using TurnTrout’s first tree, since the probability distribution over the state of the weather is not changed before we learn what the forecast said:

As before, since we do not know the state of the weather before our decision, we can only choose either to bring the umbrella or not, with expected utilities $EU(\text{bring umbrella})$ and $EU(\text{leave umbrella})$ computed from our 70/30 credences and our utilities.
Now to represent our decision tree after learning what the forecast says, we add a node for the forecast that precedes our decision, just as we made the weather node precede our decision in the previous scenario. However, since the state of the weather depends on what the forecast said, the probabilities of the weather events in the first four branches of the tree will be different from those in the second four, in accordance with the working out we did above.

Now because we can react to what the forecast says, we are not forced to choose between just bringing the umbrella or not bringing the umbrella, we can actually implement one of four policies. We can:
- Bring the umbrella no matter what.
- Bring the umbrella if the forecast says sunny, but not if it says rainy.
- Bring the umbrella if the forecast says rainy, but not if it says sunny.
- Or not bring the umbrella no matter what.
Of these, the third policy has the highest expected utility, and its expected utility is greater than the expected utility we get by taking the umbrella no matter what, as we would have to if we could not find out what the forecast says before making our decision.

The difference between the expected utility of taking the umbrella only if the forecast says that it will rain and the expected utility of taking the umbrella no matter what is the value of information of finding out what the forecast says. That is the maximum amount of utility you should be willing to pay to find out what the forecast said. More generally, the value of a piece of information is the difference between the expected utility of the best policy you could implement by reacting to the information, and the expected utility of the best action when reacting to the information is not an option.
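To make the value of information concrete, here is a minimal sketch of the umbrella example in Python. The 70/30 prior comes from the setup above, but the utilities and the forecaster’s accuracy are numbers I have assumed for illustration; they are not the ones behind the original figures.

```python
# A sketch of the umbrella example. The 70/30 prior is from the text; the
# utilities and the forecaster's accuracy below are made-up assumptions.

prior = {"rain": 0.3, "sun": 0.7}

U = {                                   # hypothetical utilities U[action][weather]
    "umbrella":    {"rain":   0.0, "sun": -1.0},   # dry, but mildly inconvenienced
    "no_umbrella": {"rain": -10.0, "sun":  0.0},   # soaked if it rains
}

P_forecast_given_weather = {            # hypothetical accuracy P(forecast | weather)
    "says_rain": {"rain": 0.8, "sun": 0.1},
    "says_sun":  {"rain": 0.2, "sun": 0.9},
}

def expected_utility(action, weather_dist):
    return sum(p * U[action][w] for w, p in weather_dist.items())

# Best action with no information: maximize expected utility under the prior.
eu_no_info = max(expected_utility(a, prior) for a in U)

# Best policy with the forecast: for each forecast, update by Bayes's theorem
# and take the best action given the posterior; weight by the forecast's probability.
eu_with_info = 0.0
for f in P_forecast_given_weather:
    p_f = sum(P_forecast_given_weather[f][w] * prior[w] for w in prior)
    posterior = {w: P_forecast_given_weather[f][w] * prior[w] / p_f for w in prior}
    eu_with_info += p_f * max(expected_utility(a, posterior) for a in U)

voi = eu_with_info - eu_no_info
print(f"EU without forecast: {eu_no_info:.3f}")
print(f"EU with forecast:    {eu_with_info:.3f}")
print(f"Value of information: {voi:.3f}")   # never negative
```

With these made-up numbers the best policy is still the third one (umbrella only if the forecast says rain), and the value of information comes out small but positive.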
Long Informal Proof
In Good’s proof, like in our second example, the background environment has three kinds of variables: $A$, $D$, and $H$, which can take values $A_1, \ldots, A_m$, $D_1, \ldots, D_n$, and $H_1, \ldots, H_\ell$. These are interpreted as actions, data, and hypotheses.

$D$ and $H$ are treated as random variables which can take any of several mutually exclusive values. Data are interpreted as observations, experiences, or the results of tests, something like that. Hypotheses are thought of as ways that the world might be. The agent we consider is thought of as having a credence distribution over both of these variables.

$A$ is treated as a set of functions which take a hypothesis as input and return an outcome. The agent also has a utility function over outcomes, $U$, which takes outcomes to real numbers. Outcomes do not have to be interpreted at all; we can think of them as merely a convention that allows us to write the utility of taking action $A_j$ when $H_i$ is true as $U[A_j(H_i)]$.
An expected utility maximizer takes the action with the highest expected utility. The expected utility of a particular action is defined as:

$$EU(A_j) = \sum_i P(H_i)\, U[A_j(H_i)]$$

This is an average over the $U[A_j(H_i)]$ terms. Due to the principle of conservation of expected evidence, each factor of $P(H_i)$ in each term of that sum can be written as:

$$P(H_i) = \sum_k P(D_k)\, P(H_i \mid D_k)$$

This allows us to rewrite the expected utility definition as:

$$EU(A_j) = \sum_k P(D_k) \sum_i P(H_i \mid D_k)\, U[A_j(H_i)]$$

If we expand out the sum iterating over $k$, we can rewrite the expression as an explicit weighted sum of terms which are themselves sums. For action $A_1$, we can write $EU(A_1)$ as:

$$EU(A_1) = P(D_1)\sum_i P(H_i \mid D_1)\, U[A_1(H_i)] + P(D_2)\sum_i P(H_i \mid D_2)\, U[A_1(H_i)] + \cdots + P(D_n)\sum_i P(H_i \mid D_n)\, U[A_1(H_i)]$$
This in turn allows us to represent the expected utility of each available action by arranging the terms of explicit sums like these in a table, as follows:

|          | $D_1$ | $D_2$ | $\cdots$ | $D_n$ |
|----------|-------|-------|----------|-------|
| $A_1$    | $P(D_1)\sum_i P(H_i \mid D_1)\, U[A_1(H_i)]$ | $P(D_2)\sum_i P(H_i \mid D_2)\, U[A_1(H_i)]$ | $\cdots$ | $P(D_n)\sum_i P(H_i \mid D_n)\, U[A_1(H_i)]$ |
| $A_2$    | $P(D_1)\sum_i P(H_i \mid D_1)\, U[A_2(H_i)]$ | $P(D_2)\sum_i P(H_i \mid D_2)\, U[A_2(H_i)]$ | $\cdots$ | $P(D_n)\sum_i P(H_i \mid D_n)\, U[A_2(H_i)]$ |
| $\vdots$ | $\vdots$ | $\vdots$ | $\ddots$ | $\vdots$ |
| $A_m$    | $P(D_1)\sum_i P(H_i \mid D_1)\, U[A_m(H_i)]$ | $P(D_2)\sum_i P(H_i \mid D_2)\, U[A_m(H_i)]$ | $\cdots$ | $P(D_n)\sum_i P(H_i \mid D_n)\, U[A_m(H_i)]$ |
The sum of the terms in the first row of this table is the expected utility of $A_1$, the sum of the terms in the second row is the expected utility of $A_2$, and so on for all $A_j$.
To choose without first observing $D$, an expected utility maximizer considers each available action $A_j$, and chooses the action that maximizes $EU(A_j)$. As shown above, this is the same as looking at the sum over each $\sum_i P(H_i \mid D_k)\, U[A_j(H_i)]$ term, weighted by $P(D_k)$, for each action, and then choosing the action with the highest value. By construction, this is in turn the same thing as finding the row in our table with the highest sum and choosing the action that corresponds to it, since the terms in each row just are the terms of those sums weighted by $P(D_k)$. (If two rows have the same sum, an expected utility maximizer is free to choose either.) This gives us a new formula for the expected utility of choosing without first learning the state of $D$:

$$\max_j \sum_k P(D_k) \sum_i P(H_i \mid D_k)\, U[A_j(H_i)]$$

You can also think of this as a formula for the sum of the terms of the row with the highest sum in our table. (Since there is a sum that starts at $k=1$ and increments $k$, we advance to a term in the next column each time we add a term to our sum, and since that sum is embedded in a max that iterates over the index of actions, we are guaranteed that each term will be from the same row.)
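To make the formula concrete, here is a minimal sketch with made-up numbers: two data values, two hypotheses, and two actions, with every probability and utility an assumption chosen only for illustration.

```python
# A made-up instance of the table: two data values D_1, D_2, two hypotheses
# H_1, H_2, and two actions A_1, A_2. Every number here is an assumption.

P_D = [0.4, 0.6]                 # P(D_k)
P_H_given_D = [[0.9, 0.1],       # P(H_i | D_1)
               [0.2, 0.8]]       # P(H_i | D_2)
U = [[1.0, 0.0],                 # U[A_1(H_i)]
     [0.3, 0.6]]                 # U[A_2(H_i)]

# Table entry T[j][k] = P(D_k) * sum_i P(H_i | D_k) * U[A_j(H_i)].
T = [[P_D[k] * sum(P_H_given_D[k][i] * U[j][i] for i in range(2))
      for k in range(2)]
     for j in range(2)]

# Expected utility of choosing before learning D: the largest row sum.
eu_without_info = max(sum(row) for row in T)
print([[round(t, 3) for t in row] for row in T])   # [[0.36, 0.12], [0.132, 0.324]]
print(round(eu_without_info, 3))                    # 0.48, from the A_1 row
```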
Now consider what policies an expected utility maximizer can execute if they decide to learn the state of $D$ before acting. As in our examples above, finding out the state of $D$ before acting allows the agent to take a different action depending on what $D$ turns out to be.

Imagine that before finding out the state of $D$, action $A_2$ has the greatest expected utility. We will represent that by highlighting the second row in red.

Now imagine that although the second row has the highest sum, the first term of the first row is higher than the first term of the second row, i.e.:

$$P(D_1)\sum_i P(H_i \mid D_1)\, U[A_1(H_i)] > P(D_1)\sum_i P(H_i \mid D_1)\, U[A_2(H_i)]$$

This means that having conditioned on $D_1$, $A_1$ is higher expected utility than $A_2$. If the agent already knew that $D = D_1$, they would prefer to take $A_1$, even though $A_2$ has higher expected utility before they know the state of $D$.
Suppose further that having conditioned on $D_1$, the expected utility of $A_1$ is higher than or equal to that of any other available action. Likewise, conditioning on $D_2$, some action (call it $A_{j_2}$) is the highest expected utility action; conditioning on $D_3$, some action $A_{j_3}$ is the highest expected utility action, and so on. The agent’s best available policy would then start as follows:
- If $D_1$, then do $A_1$.
- If $D_2$, then do $A_{j_2}$.
- If $D_3$, then do $A_{j_3}$.
- etc.
We can represent this policy by highlighting in green the highest expected utility action in each column, conditioned on the value of $D$ for that column.

This suggests an algorithm for finding the optimal policy for the agent if they are allowed to first learn the state of $D$. For each column of our table, find the entry with the highest value. If the value of $D$ turns out to be the value that corresponds to that column, choose the action associated with that maximal entry. (If two entries are tied for highest value within the same column, choose the action associated with either.) This also suggests a formula for finding the expected utility of that policy:

$$\sum_k P(D_k) \max_j \sum_i P(H_i \mid D_k)\, U[A_j(H_i)]$$

We can think of this as the formula for the value of the sum of the maximal term in each column. (Again, as $k$ is incremented, this advances us to an entry in the next column, but since this time the max is embedded within the sum, it returns the max of the column corresponding to that $k$.)
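Continuing the made-up table from the earlier sketch, the two formulas differ only in whether the max over actions is taken before or after summing across the columns:

```python
# The same made-up table as in the earlier sketch: rows are actions A_1, A_2,
# columns are data values D_1, D_2, entries already weighted by P(D_k).
T = [[0.36,  0.12],     # A_1
     [0.132, 0.324]]    # A_2

# Acting before learning D: max of an expectation (the largest row sum).
eu_without_info = max(sum(row) for row in T)           # 0.48

# Acting after learning D: expectation of a max (the sum of each column's
# maximum), i.e. react to each D_k with whichever action is best given it.
eu_with_info = sum(max(col) for col in zip(*T))        # 0.36 + 0.324 = 0.684

assert eu_with_info >= eu_without_info
print(round(eu_without_info, 3), round(eu_with_info, 3))
```

Notice that the best single action in this made-up table is $A_1$ (the best row), but given $D_2$ the best response is $A_2$, which is exactly why the column-wise policy pulls ahead.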
Comparing the formula we got for the expected utility of the agent if they act before learning the state of $D$ to the one we got for the expected utility of acting after learning the state of $D$, we see that they are identical except that the order of the expectation and the maximization is reversed. We are comparing the max of an expectation to the expectation of a max. It follows from Jensen’s inequality that the first is always less than or equal to the second:

$$\max_j \sum_k P(D_k) \sum_i P(H_i \mid D_k)\, U[A_j(H_i)] \;\leq\; \sum_k P(D_k) \max_j \sum_i P(H_i \mid D_k)\, U[A_j(H_i)]$$
But we can use the tables above to prove this result without using Jensen’s inequality. Compare the algorithm referencing our table for finding the highest expected utility action without learning $D$ to the algorithm referencing our table for finding the highest expected utility policy if $D$ will later be known. The first tells us to find the row with the highest sum and pick the action that corresponds to it. That sum is then the expected utility of the best action that can be taken before $D$ is known. The second tells us to find the highest term in each column, and then take the action that corresponds to that highest term if $D$ turns out to have the value that corresponds to that column. The sum of those highest terms in each column is then the expected utility of the best policy available if $D$ will later be known.
In the worst case scenario, the highest term in each of the columns will be a term from the row with the highest sum, in which case acting before learning the state of $D$ and acting after have the same expected utility. This is a case where every column’s entries are dominated by the entry from one and the same row, and so the sum of the row with the highest sum and the sum of the maxima of the columns are the same. In this case the agent chooses the same action no matter what $D$ turns out to be.

However, if there is a term that is higher than the term in the same column from the row with the highest sum, then the action corresponding to that higher term will be chosen as the response if the value of $D$ turns out to be the value that corresponds to that column. This means that the expected utility of finding out the value of $D$ must be higher than the expected utility of not finding it out.
To put the point more visually, a green highlight from left to right which is allowed to meander any which way, so long as it picks exactly one entry from each column, cannot have a lower sum than a red highlight which has to go straight and pick terms from only one and the same row. In the worst case scenario, the green highlight can just pick all of the same terms as the red highlight, in which case their sums will be equal.

As already suggested, similar reasoning shows that the expected utility of acting before finding out the state of $D$ is equal to the expected utility of acting after if, and only if, one would choose the same action no matter what $D$ turns out to be. Otherwise, the green highlight would deviate from the red highlight in at least one column, and so their sums would not be equal. If the green highlight does not deviate from the red highlight, then they have exactly the same terms, and so their sums are equal.
Finally, it is easy to see that the maximum amount of utility you should be willing to pay in order to find out the value of $D$ is just the formula for the expected utility of acting after finding out the state of $D$ minus the formula for the expected utility of acting before finding out $D$’s state. We call this quantity the Value of Information:

$$VoI = \sum_k P(D_k) \max_j \sum_i P(H_i \mid D_k)\, U[A_j(H_i)] \;-\; \max_j \sum_k P(D_k) \sum_i P(H_i \mid D_k)\, U[A_j(H_i)]$$

It follows directly from the inequality demonstrated above, and the fact that the terms of the formula for $VoI$ just are the right and left hand sides of that inequality, that:

$$VoI \geq 0$$
In other words, an expected utility maximizer never pays to avoid information.
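For readers who like empirical reassurance on top of the argument, here is a quick sketch (not a proof) that draws many random tables and checks that the sum of the column maxima never falls below the best row sum:

```python
import random

# For many random tables, the expected utility of the best policy (the sum of
# the column maxima) is never less than that of the best single action (the
# largest row sum), so the value of information is never negative.
for _ in range(10_000):
    m, n = random.randint(1, 5), random.randint(1, 5)    # numbers of actions and data values
    T = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(m)]
    eu_without_info = max(sum(row) for row in T)
    eu_with_info = sum(max(col) for col in zip(*T))
    assert eu_with_info - eu_without_info >= -1e-12      # VoI >= 0, up to float error
print("VoI was non-negative in every random trial")
```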
With Good’s theorem in hand, we are now in a position to answer Ayer’s riddle. Recall the puzzle: if, by the Conservation of Expected Evidence, every expectation of an increase in our credence must indeed be balanced by an equal and opposite expectation of a decrease, then why should we ever bother collecting evidence at all?
Good’s answer is twofold.
First of all, in many cases you should be willing to pay utility to collect evidence, because that evidence will allow you to make better decisions than you otherwise could have.
Second of all, collecting new evidence cannot possibly make you worse off, so why not?
If there is a cost to collecting a piece of evidence, the gain in expected utility may not be worth that cost, but if the collection is itself costless, then collecting the evidence cannot possibly harm us.
Good’s answer to the puzzle seems good as far as it goes, but we know from experience that the second part of Good’s answer is sometimes false for humans. It should not take much introspection to imagine an experiment that you would prefer not to run, but just in case, consider the unpleasantness of getting medical tests for a terminal illness, or the unpleasantness of finding out just what your IQ is, or the unpleasantness of finding out what those other people really think of you, or the unpleasantness of finding out just how horrible some tragedy was or continues to be. Whether it is rational or not, humans do in fact avoid information, both in the lab and in the field.
To give a (perhaps too) personal example, in my case, as someone who is sometimes in polyamorous relationships, I sometimes have to decide how much I want to know about the romantic activities my partners engage in with others, and I sometimes have decided to learn less than I could have. Does Good’s theorem really imply that it is irrational for me to avoid finding out exactly how much more my partner enjoys sex with someone else? I mean, finding out could only improve the quality of my choices, right? Worst case scenario, I decide to just keep doing what I was doing. Alternatively, I find out that I should be doing something else, so what’s the harm?
There are complicated questions here about what norms exactly Good’s theorem imposes on rational human action. It is not clear how to think about the condition that the information must be costless. Is finding out that my partner has better sex with one of their partners costless for me? Well, I wouldn’t have to pay them to tell me the answer; on the other hand, there’s a good chance of them giving me an answer that will make me unhappy. That my happiness can depend (in some special sense of the term “depend”) on the data I receive, or my model of the world, rather than depending exclusively on what the actual world is like, may itself be a violation of Good’s assumptions, and if so, it’s not obvious what I should do about that. Relatedly, humans are basically just not in fact expected utility maximizers. Even if one aspires to be an expected utility maximizer, it is not clear to me that the best way to be an aspiring expected utility maximizer involves obeying the constraints placed on expected utility maximizers in the most straightforward way. Doing so may cause you to get less of what you care about out of your life than you otherwise would have.
One may wonder then why I have spent so much time on Good’s theorem if I am not yet sure what constraints it imposes on agents like us. When I first started writing this post, it was going to include a long list of vignettes wherein it seems reasonable for an agent to avoid a piece of information. It was also going to include an elaborate taxonomy of cases wherein it seems like humans violate the assumptions that make Good’s theorem provable, and, for each taxon, recommend either happily violating the theorem or an appropriate rationality technique for dealing with the temptation to violate it. This taxonomy and its associated imperatives are closely related to the value that I have gotten out of understanding Good’s theorem. Unfortunately, that taxonomy will not fit in this post. I may some day write a post like that, but just in case I do not, I would still like to convey some condensed version of the taxonomy herein, as I think it will help me elucidate the value I have gotten out of understanding this proof.
Attached to Plans Instead of Goals: Split and Commit
Sometimes we humans become attached to a plan, and sometimes as a result, we avoid collecting information that might have us change that plan. For instance, perhaps someone wants to contribute to the world and decides to do so through particle physics, so they study physics in college and grad school etc. They later decide not to take an IQ test, because if their score turns out to be too low, they might decide that it is best to switch to a different line of research.
With Good’s theorem in hand, there is now a more salient perspective from which this fact about human psychology is hilarious, although it points to a disappointingly self-defeating feature of our cognitive designs. Imagine someone in this predicament. Presumably, the reason they came up with their plan was for the sake of the awesome outcomes they thought it likely to bring about. The only reason that new information should make them change their plan is if it turns out to be evidence that the world is unlikely to map their plan to the awesome outcomes they are hoping for. Now, for the sake of their beloved plan, they avoid information that might cause them to learn that their plan does not serve the purpose for which they designed and endorsed it in the first place. “I don’t want to know whether the cat was eating our cheese! If it turns out she was, then I might have to stop setting up these mouse traps. They were very expensive!”
This illustrates one of the major kinds of violations of Good’s theorem that we humans are prone to: we become loyal to our plans rather than the ends for which we designed them, and then avoid information in order to protect the plan. When I find myself in this situation, I remember The Litany of Good, and remind myself of the ends for which I initially designed my plan. I ask myself, would I really still want to go through with this plan even if it did not lead to the ends for which I designed it? If I really still think that it is the best plan available after the new information comes in, then I can just do it anyway, so what do I have to be afraid of?
An even better solution to this problem is to avoid having it in the first place. You can do this by designing and endorsing more than one plan from the outset. This technique is known as “Split and Commit”. The technique was originally designed to deal with some of the consequences of the representativeness heuristic, but some version of it is applicable in a much wider set of cases. It is almost just a straightforward application of the algorithm we used for finding the best policy in the proof above. The advice is simple: consider each decision-relevant hypothesis and formulate the complete plan you would implement if you knew that hypothesis was definitely true. Make sure to keep track of the plausibility of each hypothesis as evidence comes in, and execute the appropriate plan accordingly. It also helps to decide ahead of time exactly what kind of, and how much, evidence you will need in either direction in order to trigger switching from one of your plans to the other.
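If it helps to see the shape of the technique laid out explicitly, here is a minimal sketch in Python, reusing the cheese-and-mousetrap example from above. The hypotheses, plans, likelihoods, and switching threshold are all made-up placeholders, not anything prescribed by the original technique.

```python
# One complete plan per decision-relevant hypothesis, plus a rule, chosen in
# advance, for which plan to run as the evidence shifts. The hypotheses, plans,
# likelihoods, and threshold below are all made-up placeholders.

plans = {
    "cat_ate_cheese":   "retire the mouse traps; cat-proof the pantry",
    "mouse_ate_cheese": "keep setting the mouse traps",
}

credence = {"cat_ate_cheese": 0.3, "mouse_ate_cheese": 0.7}   # prior

likelihood = {                      # hypothetical P(observation | hypothesis)
    "cat_hair_near_cheese":  {"cat_ate_cheese": 0.80, "mouse_ate_cheese": 0.10},
    "droppings_near_cheese": {"cat_ate_cheese": 0.05, "mouse_ate_cheese": 0.60},
}

SWITCH_THRESHOLD = 0.8              # decided ahead of time, before any evidence arrives

def update(credence, observation):
    """Bayes-update the credence distribution on one observation."""
    unnormalized = {h: credence[h] * likelihood[observation][h] for h in credence}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

def current_plan(credence):
    """Stick with the default plan until a rival hypothesis clears the threshold."""
    best = max(credence, key=credence.get)
    if credence[best] >= SWITCH_THRESHOLD:
        return plans[best]
    return plans["mouse_ate_cheese"]

for obs in ["cat_hair_near_cheese", "cat_hair_near_cheese"]:
    credence = update(credence, obs)
    print(f"{obs}: P(cat) = {credence['cat_ate_cheese']:.2f} -> {current_plan(credence)}")
```

The particular switching rule here, stick with the original plan until a rival hypothesis clears a pre-committed threshold, is just one way of deciding ahead of time how much evidence should trigger the switch.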
Splitting and committing is more cognitively expensive than having just one plan, but when the outcomes for which you designed that plan really matter, and it is a long and effortful plan which inertia and sunk costs are likely to make harder to abandon, it is worth it.
I claim that with practice, remembering The Litany of Good becomes automatic, and coming up with several plans ahead of time becomes less and less useful for lower-stakes situations, but practicing split and commit is likely a good way to internalize the reasoning in Good’s proof, and sometimes it is best to explicitly split and commit anyway.
A closely related feature of human cognitive design worth commenting on is that we sometimes worry so much about ex post mistakenly abandoning our plan due to new information that we sacrifice our ability to ex ante correctly abandon our plan. To borrow (and slightly alter) an example from Lara Buchak, perhaps you are considering marrying someone. You think it is possible though unlikely that they may have cheated on you. If they have cheated on you, you do not want to marry them. You could check their shirts for lipstick, which would be evidence that they have cheated on you, however, it would not be conclusive evidence. You decide not to check their shirts for lipstick, since seeing lipstick on one of the shirts could cause you not to marry them even though the lipstick did not actually end up on the shirt through cheating.
(Buchak says this can be rational, and in fact would likely disagree with almost everything I have said in this post.)
Consider how we might try to model this within the setting of Good’s proof. If it is right to decide to marry unless we observe the lipstick, we can express that with the following inequalities:

$$\sum_i P(H_i \mid \text{no lipstick})\, U[\text{marry}(H_i)] > \sum_i P(H_i \mid \text{no lipstick})\, U[\text{do not marry}(H_i)]$$

$$\sum_i P(H_i \mid \text{lipstick})\, U[\text{do not marry}(H_i)] > \sum_i P(H_i \mid \text{lipstick})\, U[\text{marry}(H_i)]$$

where the hypotheses $H_i$ are that the fiancé cheated and that they did not.
If both of these inequalities are true, by Good’s inequality we should look at the shirts, and by the second inequality, if we see the lipstick, we should decide not to marry. It might be that we see the lipstick, and we end up in the unlikely world where the fiancé did not cheat, but the truth of the second inequality implies that possibility is not probable enough to justify marrying anyway given the stakes.
Now perhaps the second inequality is not true, in which case we should not decide not to marry after seeing the lipstick. The second inequality can be false even if the lipstick is evidence that the fiancé cheated. If observing the lipstick does not shift the probability that the fiancé cheated enough to make the second inequality true, then deciding not to marry would be a mistake, and so we should not worry about observing the lipstick making us decide to not marry, since that is not what it would be rational to do in that scenario anyway.
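To make that concrete, here is a minimal sketch with made-up numbers; the prior on cheating, the utilities, and the lipstick likelihoods are all assumptions for illustration.

```python
# A sketch of the lipstick case with made-up numbers: a prior on cheating,
# hypothetical utilities, and a hypothetical likelihood of lipstick appearing.

P_cheated = 0.05
P_lipstick_given = {"cheated": 0.5, "faithful": 0.05}   # evidence, not proof

U = {                                   # hypothetical utilities U[action][hypothesis]
    "marry":      {"cheated": -100.0, "faithful": 50.0},
    "dont_marry": {"cheated":    0.0, "faithful":  0.0},
}

# Posterior probability of cheating after seeing lipstick, by Bayes's theorem.
p_lipstick = (P_lipstick_given["cheated"] * P_cheated
              + P_lipstick_given["faithful"] * (1 - P_cheated))
post_cheated = P_lipstick_given["cheated"] * P_cheated / p_lipstick

def eu(action, p_cheat):
    """Expected utility of an action given a credence that the partner cheated."""
    return p_cheat * U[action]["cheated"] + (1 - p_cheat) * U[action]["faithful"]

print(f"P(cheated | lipstick) = {post_cheated:.2f}")
print(f"EU(marry | lipstick)       = {eu('marry', post_cheated):.1f}")
print(f"EU(don't marry | lipstick) = {eu('dont_marry', post_cheated):.1f}")
```

With these particular numbers the posterior on cheating rises to about 0.34 and not marrying comes out ahead, so the second inequality holds; with a weaker likelihood or milder utilities it would fail, and the rational response to seeing the lipstick would be to marry anyway.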
In more human native terminology, if it would be a good idea at the time to decide not to marry if I find lipstick on the shirt, then I should not now fear that it would be a bad idea to decide not to marry if I find lipstick on the shirt. Whether it is a good idea after I have already looked at the shirt should not depend on whether I decide to look at the shirt or not.
Nonetheless, we often do find ourselves in situations much like this one: deciding not to seek out new information because the opportunity to abandon our plans for good ex ante reasons does not seem like it is worth the possibility that we might abandon our plan when we should not have ex post. It is as if one worries that seeing the lipstick will drive one not to marry against one’s own better judgement. When I find myself in this situation, I remember The Litany of Good, and remind myself that no force can compel me to change my plans but my own judgment.
Cherished Beliefs and Gaps in the Map: Leaving a Line of Retreat
There is another family of features of human cognitive design which sometimes leads us to avoid collecting information. Paraphrasing Anna Salamon: sometimes we have beliefs that are so dear and precious to us that it seems like finding out they are false would be equivalent to the world ending. Stretching this aphorism a bit, it makes good sense from one perspective that we would avoid gaining information about such beliefs, for it seems to us that if we found out that they were in fact false, we might as well be dead. It is as if logical space stops for us at the border of that possibility. It is not just totally pointless to design plans that extend out into those regions of logical space; it is as impossible and incoherent as planning for a contradiction.
One can learn to notice these beliefs in many ways, I’m sure. For me, one of the first signs is that I notice myself hoping that it is not false in a way that interrupts my process of figuring out whether it is false. Every time the question of whether the belief is true comes up in the everyday functioning of my mind, the usual checks “is it really true? why do I think that again?” are interrupted by “Ahhhhh, oh no! Oh my god, it has to be true, of course it is, don’t be silly” or sometimes by something more like “Welp, if that’s false, forget it, so let’s not even waste time worrying here”. Other times the inquiry is interrupted by something more like “Ugh, I just can’t deal with this right now, I’ll think about it later” or even something much harder to notice like “Oh boy lol, uhh wait, what was I thinking about? Ehh, idk, must not have been important, but probably should think about puppies and Minecraft now”.
The doominess associated with rejecting one of these beliefs comes in degrees. Not all of them feel like their falsity is equivalent to the world ending; some feel equivalent to losing a decade, or even just like losing a hundred bucks. What these beliefs have in common is that when we try to assess their plausibility, the badness of the possibility of their falsity in some way makes the process more difficult. In my case it usually feels like being distracted, but I’m sure that there are many cases where the question of the belief’s truth or falsity is never raised to attention at all, and I would not be surprised to find out that there are countless creative ways the human mind ensures that we do not spend too much time assessing such beliefs.
The problem with this is that the negations of these beliefs are not in fact contradictions, and they are not actually equivalent to the world ending; they are normal contingent beliefs. Worse, they are oftentimes the beliefs which are most critically relevant to realizing the ends we most care about.
When I find myself in the grip of one of these beliefs, I remember The Litany of Tarski, and remind myself that I want to believe it if, and only if, it is true. When I find myself avoiding information about one of these beliefs, I remember The Litany of Good, and remind myself that I need not fear that which can only improve my ability to realize the ends I have set out.
There is a technique called “Leaving a Line of Retreat” which was designed for just this problem. The advice is again simple: Pretend, definitely do not consider, but just play pretend that the belief is false. Now just for fun come up with some plans that would make sense in that totally hypothetical scenario. The hope is that by coming up with plans that make sense if the belief is false, the possibility that the belief is false becomes less unthinkable. Likewise, the prospect of receiving information that causes you to reject the belief will become less terrifying. This in turn should make assessing the plausibility of the belief easier, since it is no longer as if all of reality would cease if it turned out to be false.
Cherished Beliefs and Gaps in the Map: Simply Locating Yourself
There is another technique designed for dealing with a similar class of cognitive failures called “Simply Locating Yourself“. This advice is not so simple, since it is really more of a frame shift, but the basic idea is that rather than imagining that there is just one version of you who finds themselves in different parts of possibility space with some probability, you imagine that there are actually many different copies of you who all find themselves in different parts of actual space, but in slightly different, almost identical circumstances.
Imagine that you have agreed to a bet where if a hundred-sided die comes up 1, you lose 1000 usd, but otherwise you win 20 usd. To your horror, you see the die come up 1. It of course feels like a punch in the gut. But now imagine things differently: imagine that the deal was that one hundred copies of you would be made and spread out across one hundred almost identical copies of earth. Ninety-nine of you would be 20 usd richer, and one of your copies would be 1000 usd poorer. Many people find that when they then imagine waking up and checking their bank account to find that it is 1000 usd lower, it doesn’t feel so much like a punch in the gut anymore. Many of us have the intuition that it would feel more like “oh, I guess I’m the copy who is 1000 usd poorer… anyway, what’s next on my agenda?”. After all, you knew that one copy of yourself was going to find themselves in this situation, and you agreed to bring that copy into existence.
You can apply this frame shift to similar effect on those beliefs of yours for which it is too painful to consider the possibility of their falsehood. Consider again someone who has decided that they will make their contribution to the world through particle physics. It’s easy to imagine that they end up with a belief that they are extraordinarily intelligent which they must protect at all costs. They might then fear taking an IQ test. The frame shift suggested by “Simply Locate Yourself” would have them imagine that there already are several copies of themselves which all end up with different scores. One of them ends up with a score of 160, and one of them ends up with a score of 115. If our aspiring particle physicist sees that their score is 115, this is not telling them what the world is like; it is simply telling them where they are. If there were two copies of me, one with 115 and one with 160, I would of course want the different copies to do different things, and one of the first things my copies would do is find out which of the two they are. When I found out that I was the 115 copy, it would not feel like a punch in the gut; it would feel like “oh, I guess I am the 115 copy”, and then I would go about doing the things I want the 115 IQ copy to do.
If I haven’t done a great job conveying the intuition, that is almost certainly my fault, and I encourage reading the original post. But I hope one can see, regardless, how if this frame shift did work, it would help one assess the probability of a protected belief. In a case like that of our particle physicist, the frame shift might even move you from fearing taking an IQ test to enthusiastically deciding that it is the obvious first thing to do. More generally, this frame shift might move you from being willing to pay to not learn a piece of information to being willing to pay in order to collect that piece of information.
Wrapping Up
Now for the punchline. I claim that these techniques, split and commit, leaving a line of retreat, and simply locating yourself, are all actually different ways of doing essentially the same thing. Furthermore, the same thing they are all essentially doing is getting the human mind to implement the algorithm for finding the optimal policy we used in the proof above.
Leaving a line of retreat is really just splitting and committing, but only for that one truly scary possibility, the one that your mind cannot bear to consider might be false. The extra spin that leaving a line of retreat brings is that it asks its user to consider the possibility as a pure hypothetical, since considering it fairly as a hypothesis is too difficult, but it is otherwise the same: you entertain the possibility, and then come up with a plan that makes sense within that possibility. Splitting and committing is different in that it asks you to perform the same plan-forming step for several possibilities instead of just the one, but actually, in the original post, mostly cases with just two possibilities were considered. When you use leaving a line of retreat, presumably you already have a plan; the thing that is missing is a plan for if your protected belief turns out false. One of the most important things that leaving a line of retreat gives you is a backup plan for if that terrible thing that has been worrying you turns out to be true, so you end up with at least two plans anyway.
Similarly, simply locating yourself is really just split and commit with more creative imagery. The simply locate yourself frame shift does not directly involve coming up with any new plans like leaving a line of retreat and split and commit do, but it does make it seem obvious that the right thing to do if you find yourself in a world where some unfortunate fact is true is just whatever the copy of you would do if it found itself waking up in the copy of the earth where that unfortunate circumstance holds. The frame shift literally involves imagining splitting yourself into several copies that find themselves in different circumstances, and then imagining what you would do in each.
Recall the algorithm we used for finding the best policy when you get to find out the state of $D$ in the proof above:

$$\sum_k P(D_k) \max_j \sum_i P(H_i \mid D_k)\, U[A_j(H_i)]$$

We consider each possible value of $D$ and then find the highest expected utility action conditioning on that value of $D$. The best policy is just to take whatever action is highest expected utility conditioned on whatever the actual value of $D$ turns out to be.
This is a lot like considering all the ways the world might turn out to be and designing a plan that makes sense in each hypothetical, all before trying to figure out what the world is in fact like, which is in turn a lot like all three of the rationality techniques we have talked about. They are different in that the algorithm for finding the best policy iterates over possible pieces of evidence and picks an action for each, while the techniques we have looked at mostly ask us to iterate over different possible hypotheses and then pick the best action for each. In this way, these rationality techniques are more like the first decision trees we looked at than the last. It was probably assumed by the authors of these techniques that if you use any of them, you will also later use evidence to keep track of how likely each relevant hypothesis is, and hence of which plan to implement. If so, then the three techniques and the algorithm come out even more similar.
There’s a nice question which I will not attempt to answer here, but I will ask. Why would it be that teaching humans techniques which are essentially just straightforward implementations of the algorithm for finding the best policy also helps them feel more at peace accepting information which they would normally pay to avoid? Leaving a Line of Retreat suggests an answer, which is that the unthinkable possibility has become thinkable, and so finding out that it is true is no longer equivalent to dying, but that doesn’t explain why it works like this for the other two techniques. That’s something I would like to understand better.
You might still be wondering at this point why this post’s name starts with the phrase “Bayesian Self-Trust”. There is one last kind of deficit in human agency which will make the point clear. The second premise in The Litany of Good is sometimes false for humans: sometimes you cannot just do whatever you were planning to do regardless of what information you get.
Imagine that you might have bed bugs. Also, you would like to go to your friend’s party. Now, you could have a professional come over and find out for you whether you do in fact have bed bugs, but if it turns out that you do, then you would not be able to go to the party. Well, actually, you could go to the party, and just not mention the bed bugs, and if your friend asks, you could lie. The problem is that you as a human are not a very good liar. Your facial expressions will be subtly different, your skin will perspire, and your friends may well be able to tell.
Our algorithm for finding the best policy assumes that no matter what evidence you happen to observe, all of the same actions will be available to you. This is not always true for humans. If you were a Bayesian decision agent, you could go to the party and act exactly as you would have if you had not just found out that you have bed bugs; every muscle contraction and expansion would be perfectly calculated to optimize the expectation of your utility function. If the option of acting exactly the same is not available to you, even if you still think it is the best option available, it means that something outside of your best judgment has stepped in and hijacked your agency.
A Bayesian agent cannot, as a matter of near tautology, be bullied by their observations into acting other than how they deem best ex ante, and the same goes for us when our agency is calling the shots and working well. What evidence I happen to observe is no excuse. I need not worry that I will observe something that causes me to act against my better judgement, because observations can only influence my actions through my judgement. My actions flow from my cares and my model of the world, and I am in charge of both. No force outside my own cares and beliefs can coerce me into acting other than I deem best, except of course when one can.
The Litany of Good serves as a reminder that a rational agent never pays to avoid free information, but it also serves as a high-level gloss of the reasoning in Good’s proof. This means you can use it to check any case you find yourself in. Is the first premise false in this case? How about the second? Is it not true that only my own judgment can cause me to alter my plans in this case? If not, then I should have no fear of receiving new information, so what am I really afraid of? The ability and disposition to ask these questions, while seeing how answering no to each of them implies that I have nothing to fear from learning something new, has provided some of the biggest benefits of coming to understand Good’s proof in my case.
The Litany of Good

Upon learning something new,
I may decide to act differently.
Upon learning something new,
I may decide to act just the same.
No force compels me to alter my plans but my own judgment.
I need not fear that which can only improve my choices.