Suppose you’re feeling super torn about whether to, say, order soup or salad. Or maybe whether or not to have children. Something with a seemingly overwhelming list of pros and cons on both sides.

My first bit of insight is that the very fact of the decision feeling agonizing suggests roughly equal expected utilities. Which means you either need to gather more information, or, if you’ve gathered all the information it makes sense to gather and it still feels agonizing, then accept that it doesn’t matter what you choose.

To be clear, it may hugely matter, later, but not in a way you can use to make the decision, now.

An extreme, stylized example is deciding whether to buy a lottery ticket with unusually fair odds. If you buy the ticket and win, it’s completely life-changing. So you can do a bunch of research to figure out what the exact odds are, and work through hypothetical plans to figure out how happy the money would make you. But mostly, if you know that the odds are vaguely fair then either decision — buying the ticket or not — is pretty equally good. Because the information you really need — whether you’ll win or not — just isn’t available.
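To make the lottery arithmetic concrete, here's a tiny sketch. The ticket price, jackpot, and odds are made-up numbers purely for illustration (and utility is crudely equated with dollars):

```python
# Hypothetical near-fair lottery; all numbers are illustrative.
ticket_price = 1.00          # dollars
jackpot = 1_000_000.00       # dollars
p_win = 1 / 1_050_000        # slightly worse than fair odds

# Expected utility (here: expected dollars) of each choice.
eu_buy = p_win * jackpot - ticket_price   # about -0.048
eu_skip = 0.0

print(eu_buy, eu_skip)
```

The two expected utilities differ by pennies, even though the ex post outcomes differ by a million dollars. That gap between "expected" and "ex post" is the whole point: the decisive information (whether you'd win) isn't available.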

It’s often a bit like that for momentous life decisions as well, and it can often, counterintuitively, be perfectly rational to just make a gut decision despite the huge stakes.


Ok, but if that feels too rash and you want to feel more rational, here’s a whole fancy algorithm you can use!

Step zero is to articulate the two options and make sure the decision really boils down to just A vs B. Or A vs B vs C — this all generalizes straightforwardly to any number of options, as long as you’ve enumerated them exhaustively. (Don’t forget that not deciding is a choice, and sometimes the optimal choice. And don’t forget outside-the-box or compromise options.)

But from here on we’ll assume you’ve got it down to just A vs B.

Next, some definitions:

`EU(X)` means the expected utility of choice `X`. That’s how good you expect `X` to be, given what you know now. That’s “expect” in the technical sense of the expectation of a random variable, which is a bit different than just a prediction, or the most likely outcome, but we can sweep that under the rug. Technically `EU(X)` is the expectation of `U(X)`, so the weighted average of the ex post utilities of choice `X` over all the possible ways `X` can play out.

`VoI(i)` means the value of some information, `i`. It’s how you decide how much effort it’s worth to acquire new information. Technically, `VoI(i)` is the difference in `EU` from making the best decision knowing `i` versus the best decision without knowing `i`. It’s bounded above by the difference in the ex post utilities of the options you’re deciding among. For example, if A and B differ by 7 utils then 7 is the highest possible value of information you can have. (And an important corollary: if you wouldn’t actually make a different decision, knowing vs not knowing `i`, don’t bother to find out `i`!)

`cost(i)` is just the cost of acquiring information `i`.
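To make `VoI` concrete, here's a toy calculation. The scenario and all the utility numbers are invented for illustration: a binary fact `i` and two choices whose ex post utilities depend on it.

```python
# Toy value-of-information calculation; all utilities are made up.
# Suppose a binary fact i is true with probability p.
p = 0.5

# Ex post utilities of each choice in each state of the world.
u = {
    ("A", True): 10, ("A", False): 2,
    ("B", True): 4,  ("B", False): 6,
}

# Without knowing i: pick whichever option has the better expected utility.
eu_A = p * u[("A", True)] + (1 - p) * u[("A", False)]   # 6.0
eu_B = p * u[("B", True)] + (1 - p) * u[("B", False)]   # 5.0
eu_without = max(eu_A, eu_B)                            # 6.0

# Knowing i: pick the best option in each state, then average over states.
eu_with = (p * max(u[("A", True)], u[("B", True)])
           + (1 - p) * max(u[("A", False)], u[("B", False)]))  # 8.0

voi = eu_with - eu_without  # 2.0
print(voi)
```

Note the corollary in action: if A beat B in both states of the world, knowing `i` would never change your decision and `voi` would come out to zero.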

And now the algorithm:

```
while (there is more information, i, to gather to refine EU(A) or EU(B)) {
  if (VoI(i) > cost(i)) {
    gather the information i
  }
}
if (still torn between A and B) {
  if (you know someone who cares which you choose) {
    have a decision auction with them!
    if (you find yourself wanting to bid positively for, say, A) {
      go ahead and bid for A
      introspect on why you didn't choose A in the first place!
    } else {
      bid $0 and fully cede the decision
    }
  } else {
    flip a coin and cover up the result of the flip
    if (notice self hoping for either heads or tails) {
      abort the randomization and go with your gut
    } else {
      commit to the result of the coin flip
    }
  }
} else {
  pick the option with greater EU
}
```

In words: Gather information; if it’s still agonizing then have a decision auction or flip a coin, otherwise pick the option with greatest expected utility. The point of the auction and coin flip is just to suss out which way your gut wants to go.
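The information-gathering half of the algorithm can be sketched in runnable form. Here the `voi`, `cost`, and `gather` functions are stand-ins you'd supply yourself, and the "still torn" test is an arbitrary near-equality threshold, not anything from the pseudocode above:

```python
# A rough, runnable sketch of the loop above; not a definitive implementation.
def decide(eu_a, eu_b, info_sources, voi, cost, gather):
    """Gather all positive-net-value information, then compare EUs.

    eu_a, eu_b   -- current expected-utility estimates for A and B
    info_sources -- iterable of available pieces of information
    voi, cost    -- functions mapping a piece of information to utils
    gather       -- actually acquire i, returning updated (eu_a, eu_b)
    """
    for i in info_sources:
        if voi(i) > cost(i):
            eu_a, eu_b = gather(i, eu_a, eu_b)
    if abs(eu_a - eu_b) < 1e-9:  # still torn: EUs effectively equal
        return "flip a coin (or hold a decision auction)"
    return "A" if eu_a > eu_b else "B"

# With no information left to gather and a clear EU gap, just pick:
print(decide(6.0, 5.0, [], lambda i: 0, lambda i: 0, None))  # prints "A"
```

The auction and coin-flip branches are deliberately left as a string here, since their whole point is to happen outside any formal calculation.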

- If one option is scarier, pick it. Why? Isn’t scariness perfectly good Bayesian evidence of badness? It totally is, but given that you’re agonizing about whether to pick it anyway, it must have a lot of corresponding goodness to offset the scariness. And since you’re a human, you’re probably overweighting the fear / anxiety / ugh-factor, etc.