Computability and Strong Randomness
Victoria University of Wellington
In the 1960s Martin-Löf devised a formal definition of the notion of randomness for an individual infinite binary sequence. It formalizes the intuition that some sequences are more likely than others to be the result of a sequence of independent coin-flips – despite the fact that all are equally likely from the point of view of probability theory. Martin-Löf used effectively presented, effectively null sets.
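For concreteness, the standard formulation of these effectively presented, effectively null sets (a sketch of the usual textbook definition, not spelled out in the abstract itself) is:

```latex
% A Martin-Löf test is a uniformly c.e. sequence of open sets of
% effectively shrinking measure; the effectively null set is their
% intersection.
A \emph{Martin-L\"of test} is a uniformly c.e.\ sequence
$(U_n)_{n \in \omega}$ of open subsets of Cantor space $2^{\omega}$ with
\[
  \mu(U_n) \le 2^{-n} \quad \text{for all } n,
\]
so that $\bigcap_n U_n$ is an effectively presented null set.
A sequence $X \in 2^{\omega}$ is \emph{Martin-L\"of random} if
$X \notin \bigcap_n U_n$ for every such test.
```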
At the same time, however, he noticed that the notions of computability he used lacked certain closure properties, which made the theory quite delicate. He suggested an alternative approach using a much richer class of tests for randomness – the hyperarithmetic ones. Roughly speaking, these tests use the power of the halting problem and then repeat (taking the halting problem relative to the halting problem, and so on). This gives a pleasing theory of strong randomness, which has recently been studied by Hjorth, Nies, Chong and Yu.
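In standard notation, the "repeating" of the halting problem described above is the iteration of the Turing jump through the computable ordinals; Kleene's theorem then identifies the resulting hyperarithmetic sets with the $\Delta^1_1$ sets. A brief sketch (standard background, not stated in the abstract):

```latex
% Iterating the jump: the halting problem, the halting problem relative
% to it, and so on through the computable ordinals.
\[
  \emptyset' ,\; \emptyset'' ,\; \ldots ,\; \emptyset^{(\alpha)}
  \quad \text{for computable ordinals } \alpha < \omega_1^{\mathrm{CK}}.
\]
% Kleene: a set is hyperarithmetic iff it is computable from some
% iterate of the jump, iff it is $\Delta^1_1$.
A set $A \subseteq \omega$ is \emph{hyperarithmetic} iff
$A \le_T \emptyset^{(\alpha)}$ for some $\alpha < \omega_1^{\mathrm{CK}}$,
iff $A$ is $\Delta^1_1$.
```

Tests presented at this level give rise to the higher randomness notions (such as $\Delta^1_1$- and $\Pi^1_1$-randomness) studied in the work mentioned above.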
I will describe some of the ideas and techniques involved.