Blog Archives

These are the items posted in this seminar, currently ordered by their post-date, rather than by the event date. We will create improved views in the future. In the meantime, please click on the Seminar menu item above to find the page associated with this seminar, which does have a more useful view order.
It has long been recognized that the existence of an interpretation of one countable structure B in another one A yields a homomorphism from the automorphism group Aut(A) into Aut(B). Indeed, it yields a functor from the category Iso(A) of all isomorphic copies of A (under isomorphisms) into the category Iso(B). In traditional model theory, the converse is false. However, when we extend the concept of interpretation to allow interpretations by Lω1ω formulas, we find that now the converse essentially holds: every Borel functor arises from an infinitary interpretation of B in A, and likewise every Borel-measurable homomorphism from Aut(A) into Aut(B) arises from such an interpretation. Moreover, the complexity of an interpretation matches the complexities of the corresponding functor and homomorphism. We will discuss the concepts and the forcing necessary to prove these results and other corollaries.
In 1973 Abraham Robinson gave a talk about nonstandard analysis (NSA) at the Institute for Advanced Study. After his talk Kurt Gödel made a comment, in which he predicted that “…there are good reasons to believe that Non-Standard Analysis in some version or other will be the analysis of the future”. One has to admit that in the almost forty-five years since this prediction was made, it has not come true. Although NSA has simplified the proofs of many deep results in standard mathematics and made it possible to obtain new standard results, among them solutions to some long-standing open problems, it has not become a working tool for most mathematicians. When they are interested in some result obtained with the help of NSA, they prefer to reprove it in standard terms. One reason for rejecting NSA is that, as a rule, the job of reproving is not difficult. The other reason is that the transfer principle of NSA, which is crucial for deducing standard results from nonstandard ones, relies significantly on the formalization of mathematics in the framework of superstructures or of axiomatic set theory.
For mathematicians working in ODE, PDE and other areas oriented toward applications, who use at most naïve set theory, these formal languages may be difficult and irrelevant, so they may not feel confident about nonstandard proofs.
In this talk I will present a new version of nonstandard set theory, formulated at the same level of formalization as naïve set theory. I will try to justify my opinion that this is a version in which nonstandard analysis may become the analysis of the future. I will discuss some examples of NSA theorems about the interaction between statements of continuous mathematics and their computer simulations; these are rigorous theorems of NSA, but they cannot even be formulated in terms of standard mathematics. They have a clear intuitive sense and can even be monitored in computer experiments. Nowadays many applied mathematicians share the point of view that continuous mathematics is an approximation of discrete mathematics, not vice versa. This point of view can easily be formalized in the naïve nonstandard set theory above. Since we are not interested in proving classical theorems with the help of NSA, we do not need the standardness predicate or the full Transfer Principle of NSA. This allows us to avoid excessive formalization.
Friday, November 25 is the day after Thanksgiving, so there will be no seminars at the Graduate Center that day.
The bounding, splitting and almost disjoint families are among the most widely studied infinitary combinatorial objects on the real line. Their study has prompted the development of many interesting forcing techniques, among them the method of creature forcing, as well as Shelah’s template iteration techniques.
In this talk, we will discuss some recent developments of Shelah’s template iteration methods, leading to models in which the bounding, the splitting and the almost disjointness numbers can be quite arbitrary. We will conclude with a brief discussion of open problems.
I will give an overview of several hierarchies of forcing axioms, with an emphasis on their versions for subcomplete forcing; in the instances where the concepts are new, their versions for more established classes of forcing, such as proper forcing, are of interest as well. The hierarchies are: the traditional one, reaching from the bounded to the unbounded forcing axiom (i.e., versions of Martin’s axiom for classes other than ccc forcing); a hierarchy of resurrection axioms (related to work of Tsaprounis); and, inspired by work of Bagaria, Schindler and Gitman, the “virtual” versions of these hierarchies: the weak bounded forcing axiom hierarchy and the virtual resurrection axiom hierarchy. I will talk about how the levels of these hierarchies are intertwined, in terms of implications or consistency strength. In many cases, I can provide exact consistency strength calculations, which build on techniques to “seal” square sequences using subcomplete forcing, in the sense that no thread can be added without collapsing ω1. This idea goes back to Todorcevic, in the context of proper forcing (which is completely different from subcomplete forcing).
An axiomatic theory of truth is a formal deductive theory where the property of a sentence being true is treated as a primitive undefined predicate. Logical properties of many axiom systems for the truth predicate have been discussed in the context of the so-called truth-theoretic deflationism, i.e. a view according to which truth is a ‘thin’ or ‘innocent’ property without any explanatory or justificatory power with respect to non-semantic facts.
I will discuss the state of the art concerning proof-theoretic and model-theoretic properties of the most commonly studied axiomatic theories of truth (typed and untyped, both disquotational and compositional), focusing in particular on the problem of the syntactic and semantic conservativeness of truth theories over a base (arithmetical) theory (treated mostly as a theory of syntax). I will relate these results to the research on satisfaction classes in models of arithmetic and, if time allows, to the analysis of some semantic paradoxes.
(The talk previously scheduled for this time has been canceled due to visa issues.)
We consider the notion of so-called intuitive learnability and its relation to so-called intuitive computability. We briefly present and discuss Church’s Thesis in the context of the notion of learnability. A set is intuitively learnable if there is a (possibly infinite) intuitive procedure that for each input produces a finite sequence of yeses and nos whose last answer is correct. We further formulate the Learnability Thesis, which states that the notion of intuitive learnability is equivalent to the notion of algorithmic learnability; the claim is analogous to Church’s Thesis. Afterwards we analyse the argument for Church’s Thesis presented by M. Mostowski (employing the concept of FM-representability, which is equivalent to Turing reducibility to $0’$, or to being $\Delta^0_2$). The argument proceeds along unusual lines: by giving a model of potential infinity, the so-called FM-domains, which are infinite sequences of growing finite models; by separating knowable (intuitively computable) sets from FM-representable (algorithmically learnable) ones via the so-called testing formulae; and by showing that knowable sets are exactly the recursive ones. We indicate which assumptions of Mostowski’s argument (concerning the recursive enumeration of the finite models constituting an FM-domain) implicitly presuppose that Church’s Thesis holds. The impossibility of this kind of argument is strengthened by showing that the Learnability Thesis does not imply Church’s Thesis. Specifically, we exhibit a natural interpretation of intuitive computability – containing a low set – under which intuitively learnable sets are exactly the algorithmically learnable ones, but intuitively computable sets form a proper superset of the recursive sets. Last, using the Sacks Density Theorem, we show that there is a continuum of Turing ideals of low sets satisfying our assumptions.
Therefore, if we admit certain non-recursive but intuitively computable relations (namely, some low relations), we are able to consider expanded FM-domains, and there are continuum many of them. On the other hand, relations FM-representable in such FM-domains are still $\Delta^0_2$. The general conclusion is that justifying Church’s Thesis solely on the grounds of procedures recursive in the limit is by far insufficient.
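The notion of algorithmic learnability used above (equivalently, being $\Delta^0_2$, by Shoenfield’s limit lemma) can be illustrated with a small sketch. This is our own toy example, not material from the talk: a set is learnable when a computable stage-by-stage guessing function eventually stabilizes on the correct membership answer, and only the last answer in the sequence of guesses is required to be correct.

```python
# Toy illustration (not from the talk): "learning in the limit".
# A set A is algorithmically learnable (equivalently, Delta^0_2) iff some
# computable guess(n, s) stabilizes, as the stage s grows, to the truth
# value of "n is in A".

def enumerated_by_stage(s):
    """A toy c.e. set: the multiples of 3, enumerated one element per stage."""
    return {3 * k for k in range(s)}

def guess(n, s):
    """Stage-s guess at 'n in A': say yes once n has been enumerated."""
    return n in enumerated_by_stage(s)

def learn(n, stages=50):
    """A finite sequence of yes/no answers; only the LAST one must be correct."""
    return [guess(n, s) for s in range(stages)]

print(learn(9)[-1])   # True: 9 eventually appears, so the guesses stabilize to yes
print(learn(10)[-1])  # False: 10 never appears, so the guesses stay at no
```

For a computably enumerable set like this one, early guesses may be wrong (the guesses for 9 start out as no), which is exactly what distinguishes learnability from outright decidability.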
Weihrauch reducibility is a common tool in computable analysis for understanding and comparing the computational content of theorems. In recent years, variations of Weihrauch reducibility have been used to study Ramsey type theorems in the context of reverse mathematics where they give a finer analysis than implications in RCA0 and they allow comparisons of computably true principles. In this talk, we will give examples of recent results and techniques in this area.
Since CUNY will follow a Tuesday schedule on Friday, October 14, we will not have any of the usual Friday logic seminars that day. However, there will be a talk by George Metcalfe at 4 pm, described below.
In analogy with the ancient views on potential as opposed to actual infinity, set-theoretic potentialism is the philosophical position holding that the universe of set theory is never fully completed, but rather has a potential character, with greater parts of it becoming known to us as it unfolds. In this talk, I should like to undertake a mathematical analysis of the modal commitments of various specific natural accounts of set-theoretic potentialism. After developing a general model-theoretic framework for potentialism and describing how the corresponding modal validities are revealed by certain types of control statements, which we call buttons, switches, dials and ratchets, I apply this analysis to the case of set-theoretic potentialism, including the modalities of true-in-all-larger-Vβ, true-in-all-transitive-sets, true-in-all-Grothendieck-Zermelo-universes, true-in-all-countable-transitive-models and others. Broadly speaking, the height-potentialist systems generally validate exactly S4.3 and the height-and-width-potentialist systems generally validate exactly S4.2. Each potentialist system gives rise to a natural accompanying maximality principle, which occurs when S5 is valid at a world, so that every possibly necessary statement is already true. For example, a Grothendieck-Zermelo universe Vκ, with κ inaccessible, exhibits the maximality principle with respect to assertions in the language of set theory using parameters from Vκ just in case κ is a Σ3-reflecting cardinal, and it exhibits the maximality principle with respect to assertions in the potentialist language of set theory with parameters just in case it is fully reflecting, Vκ ≺ V.
This is current joint work with Øystein Linnebo, in progress, which builds on some of my prior work with George Leibman and Benedikt Löwe in the modal logic of forcing. Comments and questions can be made on the speaker’s blog.
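As a toy illustration of the modal-logical side (our own sketch, not from the talk): the characteristic S4.2 axiom ◇□p → □◇p is valid on every reflexive, transitive, directed Kripke frame, and for a small finite frame this can be verified by brute force over all valuations.

```python
# Toy sketch (not from the talk): the S4.2 axiom  <>[]p -> []<>p  is valid
# on directed frames. We brute-force this on a small reflexive, transitive,
# directed "diamond" frame with worlds 0 -> {1, 2} -> 3.
from itertools import product

W = [0, 1, 2, 3]
R = {(w, w) for w in W} | {(0, 1), (0, 2), (0, 3), (1, 3), (2, 3)}

def box(truth, w):  # []p at w: p holds at every world accessible from w
    return all(truth[v] for v in W if (w, v) in R)

def dia(truth, w):  # <>p at w: p holds at some world accessible from w
    return any(truth[v] for v in W if (w, v) in R)

def point_two(truth, w):  # the axiom <>[]p -> []<>p, evaluated at world w
    dia_box = any(box(truth, v) for v in W if (w, v) in R)
    box_dia = all(dia(truth, v) for v in W if (w, v) in R)
    return (not dia_box) or box_dia

# Check the axiom at every world under every valuation of p.
valid = all(point_two(dict(zip(W, vals)), w)
            for vals in product([False, True], repeat=len(W))
            for w in W)
print(valid)  # True: directedness of the frame validates .2
```

The key point is that world 3 is accessible from every world, witnessing directedness; dropping it from the frame would break the axiom.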
Affine $n$-space $A_k^n$ and its algebraic equivalent, the polynomial ring $k[x_1,…,x_n]$, are basic and widely studied objects in geometry and algebra, about which we know a great deal. However, there remains a host of basic open problems (like the Jacobian conjecture, Zariski Cancellation Conjecture, Complement Problem, …) indicating that our knowledge is nonetheless quite limited. In fact, the greatest obstacle in solving the above conjectures is our inability to “pinpoint” affine space among all varieties (or $k[x]$ among all finitely generated $k$-algebras): this is the so-called Characterization Problem.
The most recent approach to these problems is via additive group actions on affine $n$-space, which corresponds, on the algebraic side, to the theory of locally nilpotent derivations. Using this, for instance, N. Gupta recently disproved the Zariski Cancellation Conjecture in positive characteristic.
From a model-theoretic point of view, the polynomial ring (in its natural ring language) is quite expressive: in characteristic zero, one can define the integers (as a subset), one can express in general that, say, Embedded Resolution of Singularities holds, etc. Of course, one of the peculiarities of model theory (and probably one of the reasons for its pariah status) is the unavoidable presence of non-standard models. In other words, a characterization problem is never solvable in model theory unless one also allows some non-first-order conditions (e.g., cardinality in categorical theories–but most mainstream mathematicians would not be too happy about that either). But other, more intrinsic problems arise: there are elementarily equivalent fields whose polynomial rings are not elementarily equivalent. So, can we find an expanded language plus a “natural” but non-first-order condition that pinpoints the standard model, i.e., $k[x]$, among the models of its theory? Or even better, since these complete theories will have unwieldy axiomatizations, can we find a (recursive?) theory whose only model satisfying the extra non-first-order condition is $k[x]$?
In view of the recent developments in algebra/geometry, to this end, I will propose in this talk some languages that include additional sorts, in particular a sort for derivations. This is different from the usual language of differential fields, where one studies only a fixed derivation (or possibly finitely many): we need all of them! We also need a substitute for the notion of degree, and the corresponding group $Z$-action as power maps. To test our theories, we should verify which algebraic/geometric properties are reflected in this setup. For instance, affine $n$-space has no cohomology, which is equivalent to the exactness of the de Rham complex, and this latter statement is true in any of the proposed models. Nonetheless, this is only a preliminary analysis of the problem, and nothing too deep will yet be discussed in this talk.
Connections between logic and operator algebras in the past century were few and sparse. Recently, some long-standing open problems on the structure of operator algebras were solved using methods from mathematical logic. I will survey some of these results, with a particular emphasis on applications of set theory.
I will introduce several cardinal invariants associated with analytic P-ideals, concentrating on the ideal of sets of asymptotic density zero. I will give a summary of some recent work on these invariants relating them to the dominating, bounding and splitting numbers, and to some variants of the splitting number. Several fundamental problems remain open and I will try to discuss as many of these as I can.
We say that a class of finite structures for a finite first-order signature is R-compressible for an unbounded function R on the natural numbers if each structure G in the class has a first-order description of size at most O(R(|G|)). We show that the class of finite simple groups is log-compressible, and the class of all finite groups is log-cubed compressible.
The results rely on the classification of finite simple groups, the bi-interpretability of the twisted Ree groups with finite difference fields, the existence of profinite presentations with few relators for finite groups, and group cohomology. We also indicate why the results are close to optimal.
This is joint work with Katrin Tent, to appear in IJM, available here.
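One small ingredient of the intuition behind log-size descriptions can be sketched concretely (our own toy example; the actual proofs use the classification, bi-interpretations and profinite presentations, not this): even naming a single group element x^n succinctly requires care, since writing x^n as a plain product takes n − 1 multiplications, but a straight-line program based on binary exponentiation names it in O(log n) steps.

```python
# Toy illustration (not from the paper): a straight-line program naming x^n
# in O(log n) multiplication steps via binary exponentiation, versus the
# n - 1 multiplications a plain term would need.

def slp_power(n):
    """Return (steps, exponent reached); the exponent should come out as n."""
    steps, cur, exp = [], "x", 1
    for bit in bin(n)[3:]:            # binary digits of n after the leading 1
        steps.append(f"t{len(steps)} = {cur} * {cur}")   # square: exp *= 2
        cur, exp = f"t{len(steps) - 1}", exp * 2
        if bit == "1":
            steps.append(f"t{len(steps)} = {cur} * x")   # multiply: exp += 1
            cur, exp = f"t{len(steps) - 1}", exp + 1
    return steps, exp

steps, exp = slp_power(1000)
print(exp, len(steps))  # reaches exponent 1000 in at most 2*log2(1000) steps
```

The same doubling idea is why exponents, and more generally numeric parameters of a structure, can be encoded by descriptions logarithmic in their magnitude.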
Let G be a countably infinite group and let SubG be the compact space of subgroups H≤G. Then a probability measure ν on SubG which is invariant under the conjugation action of G on SubG is called an invariant random subgroup. In this talk, I will discuss the invariant random subgroups of inductive limits of finite alternating groups. (This is joint work with Robin Tucker-Drob.)