Explanation of Computability Theory

Introduction to Computability Theory

My research interests lie in mathematical logic, particularly Computability Theory. That is, I study, in a very abstract sense, what computers can theoretically calculate. Quite surprisingly, it turns out that even with unlimited time and memory there are some perfectly well-defined questions that a computer can't be programmed to answer. The most famous example of this phenomenon is the halting problem. Sometimes computer programs get stuck in infinite loops, never yielding useful output, and one might hope to write a computer program VERIFY(s) that takes as input the code s of some program (we assume s requires no input and can output only a single value) and tells you whether program s ever gets stuck in an infinite loop. Surely there is a fact of the matter about whether any given program s gets stuck in an infinite loop: just run it, and it will either spit out an answer or it won't. But no matter how cleverly you write the VERIFY program, there will always be a program DO_OPPOSITE which contains VERIFY as a procedure and, by examining its own code in memory, runs VERIFY on its own code (the code for DO_OPPOSITE). DO_OPPOSITE then, like a willful child, purposefully does the exact opposite of what VERIFY says it does. So if VERIFY says DO_OPPOSITE gets stuck in an infinite loop, DO_OPPOSITE immediately stops and spits out a result. If VERIFY says DO_OPPOSITE returns an answer, DO_OPPOSITE promptly enters a mindless infinite loop.
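The diagonalization above can be sketched in a few lines of Python. This is only an illustration, not the formal proof: `verify` is the hypothetical halting-checker (in the real argument DO_OPPOSITE gets at its own code via the recursion theorem; here we cheat by passing the function itself to `verify`).

```python
def make_do_opposite(verify):
    """Given any candidate halting-checker `verify` (a function from a
    program to True/False, True meaning "this program halts"), build a
    program DO_OPPOSITE that defeats it."""
    def do_opposite():
        if verify(do_opposite):   # VERIFY says we halt...
            while True:           # ...so spite it by looping forever.
                pass
        else:                     # VERIFY says we loop forever...
            return "halted"       # ...so spite it by halting at once.
    return do_opposite

# Example: a verifier that (wrongly) claims every program loops forever.
pessimist = lambda program: False
# DO_OPPOSITE built against it promptly halts, proving the pessimist wrong.
print(make_do_opposite(pessimist)())  # prints "halted"
```

Whatever answer `verify` gives about DO_OPPOSITE, DO_OPPOSITE does the opposite, so no `verify` can be correct on every program. (The symmetric case, where `verify` answers True, would make `do_opposite` loop forever, which is why we only demonstrate the halting branch.)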

Of course, it's perfectly possible to write a VERIFY program that tells you that some programs will get stuck in infinite loops and certifies others as infinite-loop free. Computer language designers put a great deal of effort into making more and more useful partial versions of VERIFY, but you can never write VERIFY so that it always gives the correct answer for every possible program. Nor is the VERIFY program a special case: frequently one can employ the strategy of doing the opposite of what is expected (called diagonalization) to prove that some problem can't be solved on a computer. Since it quickly gets boring just listing things computers can't do, what we do in computability theory is relate the difficulty of various non-computable problems. For instance, it turns out that if you could ask a magic oracle whether a given program gets stuck in an infinite loop, you could also decide whether a program P eventually enumerates the value n (i.e., returns n on some input), but you still couldn't tell whether P will enumerate infinitely many different values.
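The reduction in that last sentence can be sketched as follows. The names here are assumptions for illustration: `halts` stands for the magic halting oracle (which doesn't really exist), and the `fake_oracle` used in the demo simply runs its argument, so it is only safe on programs that actually halt.

```python
def enumerates(P, n, halts):
    """Decide whether program P ever returns the value n, given a magic
    oracle `halts` for the halting problem."""
    def searcher():
        # Try P on inputs 0, 1, 2, ... and halt the moment some run
        # returns n. (Simplified: a full proof dovetails the runs so
        # that one non-halting input can't block the search.)
        i = 0
        while True:
            if P(i) == n:
                return
            i += 1
    # searcher halts  <=>  P outputs n on some input,
    # so one oracle question settles the enumeration question.
    return halts(searcher)

# Demo with a stand-in "oracle" that just runs the program.
# WARNING: this is only safe when the program really does halt.
def fake_oracle(program):
    program()
    return True

print(enumerates(lambda i: i * i, 9, fake_oracle))  # prints True (3*3 == 9)
```

The design point is that we never inspect P cleverly: we package the question "does P ever output n?" as the question "does this wrapper program halt?" and hand that to the oracle, which is exactly what it means to reduce one problem to another.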

You might wonder why anyone should care about what computer programs that can consult magical oracles are able to accomplish, since such oracles don't really exist. Well, it turns out that whether or not some property can be captured in a mathematical formula with a particular number of quantifiers (e.g., "is there a boy that every girl loves?" would use an existential followed by a universal quantifier) is closely related to the existence of programs that can use certain oracles. In other words, thinking of a problem in terms of writing, or diagonalizing against, a computer program often gives a better way to attack other questions in math and logic.
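The example sentence can be written with its quantifiers made explicit; Loves is a hypothetical predicate standing in for whatever relation is being formalized:

```latex
% "Is there a boy that every girl loves?"
% one existential quantifier followed by one universal quantifier
\exists b \, \forall g \, \mathrm{Loves}(g, b)
```

Counting such quantifier alternations is exactly the "particular number of quantifiers" mentioned above, and each extra alternation corresponds to a more powerful oracle.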

You can find more details about my mathematical research and results in the math section of this website.