Blog Archives

Topic Archive: algorithmic randomness

Dan Turetsky
University of Notre Dame
Dan Turetsky received his doctorate from the University of Wisconsin-Madison in 2010, under the supervision of Steffen Lempp, and has since held positions at Victoria University of Wellington, the Kurt Gödel Research Center in Vienna, and (currently) the University of Notre Dame. He studies many aspects of computability theory.
NERDS: New England Recursion & Definability Seminar
Sunday, November 6, 2016, 1:40 pm
Science Center, Room E104, Wellesley College, Wellesley, MA

Andre Nies

Structure within the class of K-trivial sets

University of Auckland

The K-trivial sets are antirandom in the sense that their initial segment complexity, in terms of prefix-free Kolmogorov complexity K, grows as slowly as possible. Since 2002, many alternative characterisations of this class have been found: properties such as being low for K, low for Martin-Löf (ML) randomness, or a basis for ML randomness, each of which states in one way or another that the set is close to computable.
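
For readers who want the formal definition behind "grows as slowly as possible" (this is the standard one from the literature): a set $A$ is K-trivial if there is a constant $b$ such that
\[
K(A \upharpoonright n) \le K(n) + b \quad \text{for every } n,
\]
where $A \upharpoonright n$ denotes the first $n$ bits of $A$. This is the opposite extreme from randomness: by the Levin–Schnorr theorem, a sequence $X$ is ML-random exactly when $K(X \upharpoonright n) \ge n - O(1)$.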

Initially the class looked amorphous. More recently, internal structure has been found. Bienvenu et al. (JEMS, 2016) showed that there is a “smart” K-trivial set, in the sense that any ML-random set computing it must compute all K-trivial sets. Greenberg, Miller and Nies (submitted) showed that there is a dense hierarchy of subclasses. Even more recent results with Turetsky combine the two approaches via cost functions.
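
For context, a sketch of the cost-function method as it is standardly formulated: a cost function is a computable function $c(x,s) \ge 0$, and a $\Delta^0_2$ set $A$ obeys $c$ if some computable approximation $(A_s)$ of $A$ incurs only finite total cost,
\[
\sum_{s>0} c(x_s, s) < \infty, \qquad \text{where } x_s \text{ is the least } x \text{ with } A_{s-1}(x) \ne A_s(x)
\]
(stages with no change contribute nothing). Nies showed that $A$ is K-trivial if and only if $A$ obeys the standard cost function $c_K(x,s) = \sum_{x < w \le s} 2^{-K_s(w)}$; this is the framework in which the two approaches above are combined.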

Wednesday, November 9, 2016, 5:00 pm
Gerald D. Fischbach Auditorium, Simons Foundation, 160 Fifth Avenue, 2nd floor
Registration is required to attend this talk.

Avi Wigderson

Randomness

Institute for Advanced Study

Is the universe inherently deterministic or probabilistic? Perhaps more importantly, can we tell the difference between the two?

Humanity has pondered the meaning and utility of randomness for millennia. There is a remarkable variety of ways in which we utilize perfect coin tosses to our advantage: in statistics, cryptography, game theory, algorithms, gambling and more. Indeed, randomness seems indispensable! Which of these applications would survive if the universe had no randomness in it at all? Which of them would survive if only poor-quality randomness were available, e.g. randomness arising from “unpredictable” phenomena like the weather or the stock market?

A computational theory of randomness, developed in the past three decades, reveals (perhaps counterintuitively) that very little is lost in such deterministic or weakly random worlds. In the talk, Dr. Wigderson will explain the main ideas and results of this theory.

NERDS: New England Recursion & Definability Seminar
Saturday, April 2, 2016, 3:15 pm
Locklin Hall, Room 232, Springfield College

Johanna Franklin

Randomness and Fourier series

Hofstra University

Carleson’s Theorem states that the Fourier series of certain integrable functions converge almost everywhere. I will consider this theorem in the context of computable analysis and show that the points that satisfy Carleson’s Theorem are precisely the Schnorr random points. This work is joint with Timothy H. McNicholl and Jason Rute.
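
For reference, the classical statement: writing
\[
S_N f(x) = \sum_{|k| \le N} \hat{f}(k) e^{ikx}, \qquad \hat{f}(k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t) e^{-ikt} \, dt,
\]
Carleson proved that $S_N f(x) \to f(x)$ for almost every $x$ when $f \in L^2$, and Hunt extended this to $L^p$ for $1 < p < \infty$. The effective version asks for which individual points $x$ convergence holds for all computable such $f$; the answer, per the abstract, is exactly the Schnorr random points.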

NERDS: New England Recursion & Definability Seminar
Saturday, April 2, 2016

NERDS on April 2, 2016

Springfield College


The Spring 2016 New England Recursion and Definability Seminar (“NERDS”) will take place on Saturday, April 2, 2016 at Springfield College, in Springfield, MA. Further details and abstracts of talks will be posted on nylogic.org as they become available.

NERDS: New England Recursion & Definability Seminar
Saturday, October 17, 2015, 3:10 pm
Carriage House, Assumption College, Worcester, MA

Quinn Culver

The interplay of classes of algorithmically random objects

Fordham University

We study algorithmically random closed subsets (RCS's) of Cantor space, algorithmically random continuous functions (RCF's) from Cantor space to Cantor space, and algorithmically random Borel probability measures on Cantor space, with an emphasis on the interplay among the three. Our main tools are preservation of randomness (PoR) and its converse, no randomness ex nihilo (NREN), which together say that given an a.e.-defined computable map between an effective probability space and an effective Polish space, a real is (Martin-Löf) random for the pushforward measure if and only if its preimage is random for the domain's measure. PoR and NREN allow us to prove new facts that were previously open and to reprove some known results more simply.

Example 1: it was known that every RCS contains a random real and that every random real is contained in some RCS; we reprove this using PoR and NREN.

Example 2: it was shown previously that the zero set (that is, the preimage of the sequence of all 0's) of an RCF is an RCS. We reprove this using PoR and, via NREN, obtain the converse, which was previously left open: every RCS is the zero set of some RCF.

We also prove some miscellaneous results about the individual objects. Example 3: the Lebesgue measure of the range of an RCF is strictly between 0 and 1.

This work is joint with Chris Porter.
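
In symbols, the two main tools above can be stated as follows (a paraphrase of the abstract's formulation): let $T \colon (X, \mu) \to Y$ be an a.e.-defined computable map from an effective probability space to an effective Polish space, and let $\nu = \mu \circ T^{-1}$ be the pushforward measure. Then PoR says that if $x$ is ML-random with respect to $\mu$, then $T(x)$ is ML-random with respect to $\nu$, and NREN says that every $\nu$-ML-random point equals $T(x)$ for some $\mu$-ML-random $x$.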

Jason Rute
Penn State University
Jason Rute received his doctorate in 2013 from Carnegie Mellon University, under the supervision of Jeremy Avigad. After a short stint at the University of Hawaii, he assumed a postdoctoral position at Penn State University. His work involves connections between computable mathematical structure and classical mathematical structure, including algorithmic randomness, reverse mathematics, effective mathematics, quantitative analysis, and metastability.
CUNY Logic Workshop
Friday, March 20, 2015, 2:00 pm
GC 6417

Johanna Franklin

Bases for two weak reducibilities

Hofstra University

While the standard Turing reduction is the main tool recursion theorists use to compare the computational strength of two sets, comparisons of other types of computational strength give rise to other reducibilities. Many of these reducibilities arise from the study of algorithmic randomness and are strictly weaker than Turing reducibility. In this talk, I will focus on two of these reducibilities, JT- and LR-reducibility, and study them through the lens of sets that avoid being random in a very strong way.

A base for randomness is a set A that is Turing computable from some A-random set Z. We can extend this concept to reducibilities other than the standard Turing reducibility: given a weak reducibility W, we say that a set A is a W-base for randomness if A ≤_W Z for some A-random set Z. I will fully characterize the JT-bases for randomness and partially characterize the LR-bases for randomness.
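
To make the template concrete for one of the two reducibilities (using the standard definition of $\le_{LR}$): $A \le_{LR} B$ holds when every set that is ML-random relative to $B$ is also ML-random relative to $A$. Accordingly, $A$ is an LR-base for randomness if
\[
\exists Z \, (Z \text{ is ML-random relative to } A \ \text{ and } \ A \le_{LR} Z).
\]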

This work is joint with Keng Meng Ng and Reed Solomon.

Reed Solomon
University of Connecticut
Reed Solomon is a Professor in the Department of Mathematics at the University of Connecticut, studying computability theory. He received his doctorate from Cornell University in 1998, under the supervision of Richard Shore, and subsequently held postdoctoral positions at the University of Wisconsin and the University of Notre Dame.
NERDS: New England Recursion & Definability Seminar
Saturday, October 18, 2014
Assumption College, Carriage House

NERDS

The Autumn 2014 meeting of NERDS, the New England Recursion & Definability Seminar, will take place on Saturday, October 18 at Assumption College, in Worcester, MA, beginning at 10:30 a.m. The principal organizers are Brooke Andersen, Damir Dzhafarov, and Reed Solomon. All talks are posted on nylogic.org (find NERDS under the “Conferences” tab), and they will take place in the Carriage House building on the campus of Assumption College.

Tuesday, May 13, 2014, 10:30 am
750 Schapiro CEPSR, Columbia University Morningside Campus

Computability and Strong Randomness

Victoria University of Wellington

In the 1960s, Martin-Löf devised a formal definition of the notion of randomness for an individual infinite binary sequence. It formalizes the intuition that some sequences are more likely than others to be the result of a sequence of independent coin flips, despite the fact that all are equally likely from the point of view of probability theory. Martin-Löf's definition uses effectively presented null sets.
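
Concretely, a Martin-Löf test is a uniformly c.e. sequence $(U_n)$ of open subsets of Cantor space with $\lambda(U_n) \le 2^{-n}$, where $\lambda$ is the uniform coin-flipping measure; a sequence is ML-random if for every such test it avoids the effectively null set $\bigcap_n U_n$.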

At the same time, however, he noticed that the notions of computability he used lacked certain closure properties, and that this made the theory quite delicate. He suggested an alternative approach using a much richer class of tests for randomness: the hyperarithmetic ones. Roughly speaking, these tests use the power of the halting problem and then iterate (taking the halting problem relative to the halting problem, and so on). This gives a pleasing theory of strong randomness, which has recently been studied by Hjorth, Nies, Chong and Yu.
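
One step of this iteration looks as follows: a test relative to an oracle $Z$ is a uniformly $Z$-c.e. sequence $(U_n^Z)$ with $\lambda(U_n^Z) \le 2^{-n}$. Taking $Z$ to be the halting problem, then the halting problem relative to the halting problem, and so on through the transfinite, yields the hyperarithmetic tests referred to above.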

I will describe some of the ideas and techniques involved.

CUNY Logic Workshop
Friday, October 11, 2013, 2:00 pm
GC 6417

Johanna Franklin

Randomness and ergodic theorems under measure-preserving transformations

Hofstra University

Ergodic theorems describe regular measure-theoretic behavior, and a point is said to be algorithmically random if it has no rare measure-theoretic properties of a certain kind. The connections between these two types of regularity have been well studied over the past five years. I will discuss Birkhoff's ergodic theorem with respect to transformations that are measure-preserving but not necessarily ergodic in the context of a computable probability space. Then I will show that each point in such a space that is not Martin-Löf random fails to satisfy Birkhoff's ergodic theorem with respect to some computable set and measure-preserving transformation.
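
For orientation, the classical theorem: if $T$ is a measure-preserving transformation of a probability space $(X, \mu)$ and $f \in L^1(\mu)$, then Birkhoff's ergodic theorem asserts that the averages
\[
\frac{1}{N} \sum_{n=0}^{N-1} f(T^n x)
\]
converge for $\mu$-almost every $x$ (when $T$ is ergodic, the limit is $\int f \, d\mu$). A point satisfies the theorem for a given $f$ and $T$ if these averages converge at that point; in the result above, $f$ ranges over indicator functions of computable sets.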

This work is joint with Henry Towsner.

Johanna Franklin
Hofstra University
Prof. Franklin has been an Assistant Professor in the mathematics department of Hofstra University since 2014. She studies algorithmic randomness and recursion theory, with applications in probability and ergodic theory. She received her doctorate from the University of California, Berkeley, under the supervision of Ted Slaman, and has taught at the University of Connecticut, Dartmouth College, the University of Waterloo, and the National University of Singapore.