Saturday, November 5, 2016

Game Theory vs The Halting Problem

The usual presentation of the Halting Problem goes something like this. Suppose we have some computer program H, which takes in the code for another program P. H(P) outputs "yes" if P, when run, will eventually stop (a.k.a. "halt"), or "no" if P will run forever.

Then we take the code for H, and use it to build a tricky program called G. G first runs H on itself - it gets H(G). If H(G) is "yes", then G immediately enters an infinite loop and never halts. If H(G) is "no", then G immediately halts. So, when H runs on G, G halts if and only if H(G) is "no". But a correct H must say "no" if and only if the program does not halt - so whichever answer H gives about G, that answer is wrong.

In short, G breaks H. Given any supposed halting-predictor program H, we can easily write a program G which breaks it. Thus there is no perfect halting-predictor program: the halting problem is "undecidable".
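The diagonal construction above can be sketched in code. Here `h` stands in for the hypothetical halting predictor - the whole point of the argument is that no such function can actually be implemented, so it's just a placeholder:

```python
def h(program):
    """Hypothetical halting predictor: returns "yes" if program() halts,
    "no" if it runs forever. The diagonal argument shows no such
    function can exist, so this is only a placeholder."""
    raise NotImplementedError  # the impossible predictor

def g():
    """The "tricky" program: does the opposite of whatever h predicts."""
    if h(g) == "yes":
        while True:      # h predicted "halts", so run forever
            pass
    else:
        return           # h predicted "runs forever", so halt immediately
```

Whatever behavior you try to give `h`, calling `g()` forces a mismatch between prediction and behavior.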

Using the halting problem as a starting point, theoretical CS classes go on to prove all sorts of other problems are undecidable. I'm going to go in a different direction, one which I haven't seen before: I want to argue that the standard halting problem asks the wrong question.

First, let's rewrite the halting problem in game-theoretic terms. Rather than two programs, the game has two players: H and G. G has two choices: halt, or don't halt. H's goal is to predict whether G halts or not; H wins if it says "yes" and G does halt, or if it says "no" and G doesn't halt. G, on the other hand, wants to trick H: G wins if H says "no" and G halts, or if H says "yes" and G runs forever.

Here's a table to show the rules:

              G halts    G runs forever
H says "yes"  H wins     G wins
H says "no"   G wins     H wins

Now, when computer scientists say the halting problem is "undecidable", what they're really saying is that this game has no equilibrium in pure strategies. What does that mean? Well, a "pure" strategy means that H and G always do the same thing. In this case, if H always says "yes", then G always wants to run forever; but if G always runs forever, then H always wants to say "no"; but if H always says "no"... You see the issue. There is no equilibrium in pure strategies, because either H or G will always want to change their choice.
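The best-response cycle can be checked mechanically by brute force over all four pure-strategy profiles (a small Python sketch; the payoff function is just the rules from the table above):

```python
H_CHOICES = ("yes", "no")
G_CHOICES = ("halt", "run")

def h_wins(h_choice, g_choice):
    # H wins exactly when its prediction matches G's behavior.
    return (h_choice == "yes") == (g_choice == "halt")

def is_pure_equilibrium(h_choice, g_choice):
    other_h = "no" if h_choice == "yes" else "yes"
    other_g = "run" if g_choice == "halt" else "halt"
    # A player is stable if deviating wouldn't turn their loss into a win.
    h_stable = h_wins(h_choice, g_choice) or not h_wins(other_h, g_choice)
    g_stable = (not h_wins(h_choice, g_choice)) or h_wins(h_choice, other_g)
    return h_stable and g_stable

# In every profile someone loses and can win by switching:
assert not any(is_pure_equilibrium(h, g)
               for h in H_CHOICES for g in G_CHOICES)
```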

Remember the movie "A Beautiful Mind", about John Nash? Well, Nash's original claim to fame was a proof that any game with finitely many players, each with finitely many choices, has at least one equilibrium. (In fact, for the simple two-player zero-sum case here, von Neumann proved existence of an equilibrium even earlier.) But there's a catch: the equilibrium isn't always in pure strategies. In order to find an equilibrium, players may need to randomize their choices. Strategies with randomized choices are usually called "mixed" strategies.

For the halting problem game above, there is one very simple equilibrium in mixed strategies: H picks "yes" or "no" randomly by flipping a coin, and G chooses whether or not to halt randomly by flipping its own coin. Half the time, the choices will match, and H will win. Half the time, the choices will not match, and G will win. As long as G splits 50/50, H cannot do any better by picking "yes" or "no" more often. As long as H splits 50/50, G cannot do any better by picking to halt or not halt more often.
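The equilibrium claim is easy to verify numerically: against a fair coin-flipper, every strategy (pure or mixed) wins exactly half the time. A quick Python check:

```python
def h_win_prob(p_yes, p_halt):
    # P(H wins) = P(H says "yes", G halts) + P(H says "no", G runs forever),
    # with the two players randomizing independently.
    return p_yes * p_halt + (1 - p_yes) * (1 - p_halt)

# If G halts with probability 1/2, H wins 50% regardless of its own mix:
for p_yes in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(h_win_prob(p_yes, 0.5) - 0.5) < 1e-12

# Symmetrically, if H flips a fair coin, G cannot push H below 50%:
for p_halt in (0.0, 0.3, 1.0):
    assert abs(h_win_prob(0.5, p_halt) - 0.5) < 1e-12
```

Since neither player can improve by deviating from 50/50, the pair of coin-flips is an equilibrium.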

In general, no matter what program H is playing against, whether it's G or any other program, there is always some optimal strategy for H. For some programs, especially programs which aren't actively working against H, a pure strategy will work. For other programs, like G, a mixed strategy is needed. (Technical note: I'm brushing some things under the carpet here, specifically involving jump discontinuities and limits in strategies, but these are the sorts of things which can usually be worked around.)

So let's reformulate a game-theoretic version of the halting problem. The new goal is to write a (nondeterministic) program H, which takes in another (possibly nondeterministic) program P, and returns either "yes" or "no". For any program P, H should select "yes" or "no", possibly randomly, in order to maximize its win rate, i.e. maximize the probability that H correctly guesses whether P halts.
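As a trivial baseline (my own illustration, not a proposal for an optimal H): a predictor that ignores P entirely and flips a fair coin already guarantees a 50% win rate against any program, including the adversarial G above. The open question is whether a cleverer H can do better whenever better is possible.

```python
import random

def baseline_h(program_source, rng=None):
    """Ignore the input program and guess uniformly at random.
    Against any fixed program, this is correct with probability
    exactly 1/2 - the equilibrium value against the adversarial G."""
    rng = rng or random.Random()
    return rng.choice(["yes", "no"])
```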

Is this problem solvable? I don't know yet. Maybe it will turn out that this new halting problem is still undecidable. But the good news is, we're at least asking a reasonable question this time, not just searching for pure-strategy equilibria. And if this game-theoretic halting problem does turn out to be solvable, the consequences would be significant: the undecidability of many other problems is based directly on the halting problem, or proven using the same method, so the same technique should provide a practical approach to a wide variety of other problems.

Finally, for anyone reading this who is familiar with both game theory and the undecidability proof of the halting problem: this should all have been fairly obvious. There is no way that this hasn't been thought of twenty times over. So why can't I find any literature on it? Am I not searching for the right terms? What results are known?


  1. > Is this problem solvable?

    Can you clarify what it means for a probabilistic problem to be solvable? Are you asking, e.g., is there an optimal program H and if so what is it (and possibly, what is the probability that it will give the correct answer)? Or are you asking for an example of H that does better than chance? Or something else?

    Decidability is formally defined in terms of "deciding" whether a string is a member of a fixed language or not (or equivalently, determining whether something is an element of a set). A problem is undecidable specifically if one cannot construct a Turing machine which answers this question correctly for all possible strings. I'm not aware of a standard formulation where you can probabilistically be right or wrong about this membership question, so you'll probably have to explicitly define what that means.

    1. One of the core ideas of this post is that the usual decidability criterion is too strong. One goal is to come up with a more useful solution criterion, i.e. something which is tractable and can somehow handle the self-reference issues which made the original problem undecidable.

      So I don't want to nail down one criterion and say "this is it" just yet, because if that criterion also turns out to be intractable, then I'll still want to look for a new one.

      That said, here's my starting formulation. We have a Turing machine equipped with a read-only, one-way, no-lookahead supply of random bits (informally, this means H can get random bits which G does not know). We want a program H which takes another program G and outputs "yes" or "no"; without loss of generality, we'll assume G has no inputs.

      Each time G is run, it has a well-defined (but possibly uncomputable) probability of halting, which we'll call p_halt(G). H(G) has a well-defined (but also possibly uncomputable) probability of outputting "yes", which we'll call p_yes(G).

      My first suggestion for a solution criterion is that p_halt(G) = p_yes(G) for all G. The question is whether a program H can be written which satisfies this criterion. I'll also weaken it slightly and say that H itself only needs to halt with probability 1, for all G, rather than halting with logical certainty.
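      To make p_halt(G) concrete (a hedged sketch of my own, not part of the proposed criterion): for any fixed step budget, p_halt(G) can be estimated from below by Monte Carlo, counting budget-exceeding runs as non-halting. This doesn't decide anything, but it shows the quantity the criterion talks about is at least approachable empirically.

```python
import random

def estimate_p_halt(program, trials=10_000, budget=1_000, seed=0):
    """Monte Carlo lower-bound estimate of p_halt: runs that exhaust the
    step budget are counted as non-halting, even though some of them
    might have halted eventually."""
    rng = random.Random(seed)
    halted = sum(1 for _ in range(trials) if program(rng, budget))
    return halted / trials

def coin_g(rng, budget):
    """Toy nondeterministic G: each step, halt on heads. True p_halt = 1."""
    for _ in range(budget):
        if rng.random() < 0.5:
            return True   # halted
    return False          # budget exhausted; counted as "didn't halt"
```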

    2. This comment has been removed by the author.

    3. > I'll also weaken it slightly and say that H itself only needs to halt with probability 1, for all G, rather than logically certain halting.

      Sorry I misread this (hence the earlier deleted comment). What's the difference between something happening with probability 1, and something happening with logical certainty?

    4. If you flip a coin until it comes up heads, then stop, that's a process which halts with probability 1. But it's logically possible to get tails forever - the process isn't logically certain to halt.

      So allowing H to halt with probability 1 potentially allows the use of probabilistic methods with that sort of behavior.
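      A quick simulation of that process (my own Python sketch): every sampled run halts, and the number of tails before the first heads follows a geometric distribution with mean 1, even though arbitrarily long runs of tails remain logically possible.

```python
import random

def flips_until_heads(rng):
    """Flip a fair coin until heads; return the number of tails seen.
    Halts with probability 1, but there is no finite bound on how
    long a run can take."""
    tails = 0
    while rng.random() < 0.5:  # model "tails" as < 0.5
        tails += 1
    return tails

rng = random.Random(0)
runs = [flips_until_heads(rng) for _ in range(10_000)]
print(sum(runs) / len(runs))  # average tail count, near E[tails] = 1
```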