Then we take the code for H and use it to build a tricky program called G. G first runs H on its own code, obtaining H(G). If H(G) is "yes", then G immediately enters an infinite loop and never halts. If H(G) is "no", then G immediately halts. So when H runs on G, G halts if and only if H(G) is "no". But the whole point of H is that, for any program, H says "no" if and only if that program does not halt.
In short, G breaks H. Given any supposed halting-predictor program H, we can easily write a program G which breaks it. Thus there is no perfect halting-predictor program: the halting problem is "undecidable".
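The construction of G can be sketched in a few lines of Python. This is illustrative only: `H` is a hypothetical halting-predictor (returning `True` for "yes, it halts" and `False` for "no"), and of course no real `H` exists, which is the whole point.

```python
def make_G(H):
    """Build the adversarial program G from a supposed halting-predictor H.

    H is assumed to take a program and return True ("halts") or
    False ("runs forever").
    """
    def G():
        if H(G):           # ask the predictor about our own code
            while True:    # H said "halts" -> run forever instead
                pass
        else:
            return         # H said "runs forever" -> halt immediately
    return G
```

Whatever `H` predicts about `G`, `G` does the opposite, so `H` is wrong on at least one input.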
Using the halting problem as a starting point, theoretical CS classes go on to prove all sorts of other problems are undecidable. I'm going to go in a different direction, one which I haven't seen before: I want to argue that the standard halting problem asks the wrong question.
First, let's rewrite the halting problem in game-theoretic terms. Rather than two programs, the game has two players: H and G. G has two choices: halt, or don't halt. H's goal is to predict whether G halts or not; H wins if it says "yes" and G does halt, or if it says "no" and G doesn't halt. G, on the other hand, wants to trick H: G wins if H says "no" and G halts, or if H says "yes" and G runs forever.
Here's a table to show the rules:
| H's guess | G halts | G runs forever |
|-----------|---------|----------------|
| "yes"     | H wins  | G wins         |
| "no"      | G wins  | H wins         |
Now, when computer scientists say the halting problem is "undecidable", what they're really saying is that this game has no equilibrium in pure strategies. What does that mean? Well, a "pure" strategy means that H and G always do the same thing. In this case, if H always says "yes", then G always wants to run forever; but if G always runs forever, then H always wants to say "no"; but if H always says "no"... You see the issue. There is no equilibrium in pure strategies, because either H or G will always want to change their choice.
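The cycle above can be checked mechanically. Here is a small sketch (my own naming) that enumerates all four pure-strategy profiles and confirms that in each one, some player would rather switch, so no pure-strategy equilibrium exists:

```python
# H's payoff is 1 if it wins, 0 otherwise; the game is zero-sum, so G's
# payoff is the complement. H wins exactly when its guess matches G's behavior.
def h_wins(guess_halts, g_halts):
    return guess_halts == g_halts

profiles = [(guess, halts) for guess in (True, False) for halts in (True, False)]

def is_pure_equilibrium(guess, halts):
    # H is content if flipping its guess doesn't improve its payoff...
    h_content = h_wins(guess, halts) >= h_wins(not guess, halts)
    # ...and G is content if changing its behavior doesn't improve G's payoff.
    g_content = (not h_wins(guess, halts)) >= (not h_wins(guess, not halts))
    return h_content and g_content

# No profile survives: someone always wants to deviate.
assert not any(is_pure_equilibrium(g, h) for g, h in profiles)
```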
Remember the movie "A Beautiful Mind", about John Nash? Well, Nash's original claim to fame was a proof that any game with finitely many players and finitely many choices has at least one equilibrium. (In fact, for the simple two-player zero-sum case here, von Neumann proved existence of an equilibrium even earlier.) But there's a catch: the equilibrium isn't always in pure strategies. In order to find an equilibrium, players may need to randomize their choices. Strategies with randomized choices are usually called "mixed" strategies.
For the halting problem game above, there is one very simple equilibrium in mixed strategies: H picks "yes" or "no" randomly by flipping a coin, and G chooses whether or not to halt randomly by flipping its own coin. Half the time, the choices will match, and H will win. Half the time, the choices will not match, and G will win. As long as G splits 50/50, H cannot do any better by picking "yes" or "no" more often. As long as H splits 50/50, G cannot do any better by picking to halt or not halt more often.
In general, no matter what program H is playing against, whether it's G or any other program, there is always some optimal strategy for H. For some programs, especially programs which aren't actively working against H, a pure strategy will work. For other programs, like G, a mixed strategy is needed. (Technical note: I'm brushing some things under the carpet here, specifically involving jump discontinuities and limits in strategies, but these are the sorts of things which can usually be worked around.)
So let's reformulate a game-theoretic version of the halting problem. The new goal is to write a (nondeterministic) program H, which takes in another (possibly nondeterministic) program P, and returns either "yes" or "no". For any program P, H should select "yes" or "no", possibly randomly, in order to maximize its win rate, i.e. maximize the probability that H correctly guesses whether P halts.
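To make the reformulation concrete, here is a toy simulation (all names my own) of the one matchup we've analyzed: a coin-flipping H against the adversarial, coin-flipping G. Over many rounds, H's win rate hovers around the equilibrium value of 1/2:

```python
import random

def H_guess():
    # The randomized predictor: True means "yes, it halts".
    return random.random() < 0.5

def G_behavior():
    # The randomized adversary: True means G actually halts.
    return random.random() < 0.5

random.seed(0)  # fixed seed so the simulation is reproducible
rounds = 10_000
wins = sum(H_guess() == G_behavior() for _ in range(rounds))
print(wins / rounds)  # close to 0.5
```

The interesting question is what the optimal H looks like against programs P that are *not* pure adversaries, where it may be able to do much better than a coin flip.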
Is this problem solvable? I don't know yet. Maybe it will turn out that this new halting problem is still undecidable. But the good news is, we're at least asking a reasonable question this time, not just searching for pure-strategy equilibria. And if this game-theoretic halting problem does turn out to be solvable, the consequences would be significant: the undecidability of many other problems is proven either by direct reduction from the halting problem or by the same diagonalization technique, so the same game-theoretic move should yield a practical approach to a wide variety of other problems.
Finally, for anyone reading this who is familiar with both game theory and the undecidability proof of the halting problem: this should all have been fairly obvious. There is no way that this hasn't been thought of twenty times over. So why can't I find any literature on it? Am I not searching for the right terms? What results are known?