Blog Archives
Topic Archive: algorithmic randomness
Structure within the class of K-trivial sets
The K-trivial sets are antirandom in the sense that their initial segment complexity, measured by prefix-free Kolmogorov complexity K, grows as slowly as possible. Since 2002, many alternative characterisations of this class have been found: properties such as being low for K, being low for Martin-Löf (ML) randomness, and being a basis for ML randomness, each of which states in one way or another that the set is close to computable.
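In standard notation, writing $A \upharpoonright n$ for the first $n$ bits of $A$, the definition reads as follows (this formalisation follows the usual one in the literature, e.g. Nies's book, rather than being spelled out in the post):

\[ A \text{ is K-trivial} \iff \exists b\, \forall n\; K(A \upharpoonright n) \le K(n) + b. \]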
Initially the class looked amorphous. More recently, internal structure has been found. Bienvenu et al. (JEMS, 2016) showed that there is a “smart” K-trivial set, in the sense that any ML-random set computing it must compute all K-trivial sets. Greenberg, Miller and Nies (submitted) showed that there is a dense hierarchy of subclasses. Even more recent results with Turetsky combine the two approaches via cost functions.
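For readers unfamiliar with cost functions, here is a sketch of the standard one; the notation is a gloss following the literature, not part of the original post. The standard cost function assigns to a position $x$ and stage $s$ the quantity

\[ \mathbf{c}_{\mathcal K}(x,s) = \sum_{w=x+1}^{s} 2^{-K_s(w)}, \]

and a $\Delta^0_2$ set $A$ obeys it if $A$ has a computable approximation whose total cost of changes is finite; obeying $\mathbf{c}_{\mathcal K}$ is one of the known characterisations of K-triviality.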
Randomness
Is the universe inherently deterministic or probabilistic? Perhaps more importantly – can we tell the difference between the two?
Humanity has pondered the meaning and utility of randomness for millennia. There is a remarkable variety of ways in which we utilize perfect coin tosses to our advantage: in statistics, cryptography, game theory, algorithms, gambling and more. Indeed, randomness seems indispensable! Which of these applications would survive if the universe had no randomness in it at all? Which of them would survive if only poor-quality randomness were available, e.g. randomness arising from “unpredictable” phenomena like the weather or the stock market?
A computational theory of randomness, developed in the past three decades, reveals (perhaps counterintuitively) that very little is lost in such deterministic or weakly random worlds. In the talk, Dr. Wigderson will explain the main ideas and results of this theory.
Randomness and Fourier series
Carleson’s Theorem states that the Fourier series of certain integrable functions converge almost everywhere. I will consider this theorem in the context of computable analysis and show that the points that satisfy Carleson’s Theorem are precisely the Schnorr random points. This work is joint with Timothy H. McNicholl and Jason Rute.
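For reference, here is Carleson's theorem in its classical $L^2$ form (Hunt later extended it to $L^p$ for $p > 1$); the effective version considered in the talk concerns computable functions in this setting:

\[ f \in L^2[-\pi,\pi] \implies S_n f(x) = \sum_{|k| \le n} \hat f(k)\, e^{ikx} \to f(x) \ \text{ for almost every } x. \]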
NERDS on April 2, 2016
The Spring 2016 New England Recursion and Definability Seminar (“NERDS”) will take place on Saturday, April 2, 2016 at Springfield College, in Springfield, MA. Further details and abstracts of talks will be posted on nylogic.org as they become available.
The interplay of classes of algorithmically random objects
We study algorithmically random closed subsets (RCS's) of Cantor space, algorithmically random continuous functions (RCF's) from Cantor space to Cantor space, and algorithmically random Borel probability measures on Cantor space, and especially the interplay between the three. Our main tools are preservation of randomness (PoR) and its converse, no randomness ex nihilo (NREN), which together say that given an a.e.-defined computable map between an effective probability space and an effective Polish space, a real is (Martin-Löf) random for the pushforward measure if and only if it is the image of a real that is random for the domain's measure. PoR and NREN allow us to prove new facts that were previously open and to reprove some known results more simply. Example 1: it was known that every RCS contains a random real and that every random real is contained in some RCS; we reprove this using PoR and NREN. Example 2: it was shown previously that the zero set (that is, the preimage of the sequence of all 0's) of an RCF is an RCS. We reprove this using PoR and, via NREN, obtain the converse, which was previously open: every RCS is the zero set of some RCF. We also prove some miscellaneous results about the individual objects. Example 3: the Lebesgue measure of the range of an RCF is strictly between 0 and 1. This work is joint with Chris Porter.
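In symbols, for an almost-everywhere computable map $\Phi$ from a probability space $(X,\mu)$ to a space $Y$, with pushforward measure $\Phi_*\mu$ (this is a paraphrase of the statement above; the precise effectivity hypotheses are those in the abstract):

\[ \text{PoR:}\quad x \in \mathrm{MLR}_{\mu} \implies \Phi(x) \in \mathrm{MLR}_{\Phi_*\mu}, \qquad \text{NREN:}\quad y \in \mathrm{MLR}_{\Phi_*\mu} \implies (\exists x \in \mathrm{MLR}_{\mu})\; \Phi(x) = y. \]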
Bases for two weak reducibilities
While the standard Turing reduction is the main tool recursion theorists use to compare the computational strength of two sets, comparisons of other types of computational strength give rise to other reducibilities. Many of these reducibilities arise from the study of algorithmic randomness and are strictly weaker than Turing reducibility. In this talk, I will focus on two of these reducibilities, JT- and LR-reducibility, and study them through the lens of sets that avoid being random in a very strong way.
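To fix notation, the LR-reducibility mentioned here has a standard definition: writing $\mathrm{MLR}^B$ for the class of sequences that are Martin-Löf random relative to $B$,

\[ A \le_{LR} B \iff \mathrm{MLR}^B \subseteq \mathrm{MLR}^A. \]

(JT-reducibility is defined analogously in terms of jump traceability; see the literature for the precise definition.)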
A base for randomness is a set A that is Turing computable from some A-random set Z. We can extend this concept to reducibilities other than the standard Turing reducibility: given a weak reducibility ≤W, we say that a set A is a W-base for randomness if A ≤W Z for some A-random set Z. I will fully characterize the JT-bases for randomness and partially characterize the LR-bases for randomness.
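In symbols, taken directly from the definition above:

\[ A \text{ is a } W\text{-base for randomness} \iff (\exists Z)\; [\, A \le_W Z \ \text{ and } \ Z \text{ is ML-random relative to } A \,]. \]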
This work is joint with Keng Meng Ng and Reed Solomon.
NERDS
The Autumn 2014 meeting of NERDS, the New England Recursion & Definability Seminar, will take place on Saturday, October 18 at Assumption College, in Worcester, MA, beginning at 10:30 a.m. The principal organizers are Brooke Andersen, Damir Dzhafarov, and Reed Solomon. All talks are posted on nylogic.org (find NERDS under the “Conferences” tab), and they will take place in the Carriage House building on the campus of Assumption College.
Computability and Strong Randomness
In the 1960s Martin-Löf devised a formal definition of the notion of randomness for an individual infinite binary sequence. It formalizes the intuition that some sequences are more likely than others to be the result of a sequence of independent coin-flips – despite the fact that all are equally likely from the point of view of probability theory. Martin-Löf used effectively presented, effectively null sets.
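Concretely, Martin-Löf's definition can be stated as follows (standard definitions, supplied here for reference rather than taken from the abstract): a ML-test is a uniformly c.e. sequence $(U_n)$ of open subsets of Cantor space with

\[ \lambda(U_n) \le 2^{-n} \ \text{ for all } n, \qquad Z \text{ is ML-random} \iff Z \notin \bigcap_n U_n \text{ for every such test.} \]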
At the same time, however, he noticed that the notions of computability he used lacked certain closure properties, which made the theory quite delicate. He suggested an alternative approach using a much richer class of tests for randomness – the hyperarithmetic ones. Roughly speaking, these tests use the power of the halting problem and then iterate it (taking the halting problem relative to the halting problem, and so on). This gives a pleasing theory of strong randomness, which has recently been studied by Hjorth, Nies, Chong and Yu.
I will describe some of the ideas and techniques involved.
Randomness and ergodic theorems under measure-preserving transformations
Ergodic theorems describe regular measure-theoretic behavior, and a point is said to be algorithmically random if it has no rare measure-theoretic properties of a certain kind. The connections between these two types of regularity have been well studied over the past five years. I will discuss Birkhoff’s ergodic theorem with respect to transformations that are measure-preserving but not necessarily ergodic in the context of a computable probability space. Then I will show that each point in such a space that is not Martin-Löf random fails to satisfy Birkhoff’s ergodic theorem with respect to some computable set and measure-preserving transformation.
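For reference, the classical statement of the theorem in question (in the measure-preserving but not necessarily ergodic case, the limit is a conditional expectation rather than the space average):

\[ \lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} f(T^i x) = \mathbb{E}[\, f \mid \mathcal I \,](x) \quad \text{for } \mu\text{-a.e. } x, \]

where $T$ is a measure-preserving transformation of $(X,\mu)$, $f \in L^1(\mu)$, and $\mathcal I$ is the $\sigma$-algebra of $T$-invariant sets; when $T$ is ergodic the right-hand side is $\int f\, d\mu$.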
This work is joint with Henry Towsner.