Topic Archive: computer science
Is the universe inherently deterministic or probabilistic? Perhaps more importantly – can we tell the difference between the two?
Humanity has pondered the meaning and utility of randomness for millennia. There is a remarkable variety of ways in which we utilize perfect coin tosses to our advantage: in statistics, cryptography, game theory, algorithms, gambling and more. Indeed, randomness seems indispensable! Which of these applications would survive if the universe had no randomness in it at all? Which would survive if only poor-quality randomness were available, e.g. randomness arising from “unpredictable” phenomena like the weather or the stock market?
A computational theory of randomness, developed in the past three decades, reveals (perhaps counterintuitively) that very little is lost in such deterministic or weakly random worlds. In the talk, Dr. Wigderson will explain the main ideas and results of this theory.
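A classical (and much simpler) precursor to this theory is von Neumann's trick for turning a biased but independent coin into perfectly fair bits. The sketch below is only an illustration of extracting good randomness from a poor source; the theory the talk covers handles far weaker sources than an independent biased coin.

```python
import random

def von_neumann_extract(biased_bits):
    """Turn independent biased coin flips into unbiased bits.

    Read the flips in pairs: 01 -> output 0, 10 -> output 1,
    00 and 11 -> discard. Since 01 and 10 each occur with
    probability p*(1-p), the output bits are exactly unbiased
    whenever the input flips are independent.
    """
    out = []
    for a, b in zip(biased_bits[::2], biased_bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A coin that lands 1 with probability 0.8:
biased = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]
fair = von_neumann_extract(biased)
# The extracted stream is close to 50/50 zeros and ones,
# though much shorter than the input.
```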
We say that a class of finite structures for a finite first-order signature is R-compressible for an unbounded function R on the natural numbers if each structure G in the class has a first-order description of size at most O(R(|G|)). We show that the class of finite simple groups is log-compressible, and the class of all finite groups is log-cubed compressible.
The results rely on the classification of finite simple groups, the bi-interpretability of the twisted Ree groups with finite difference fields, the existence of profinite presentations with few relators for finite groups, and group cohomology. We also indicate why the results are close to optimal.
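Spelled out, one natural formalization of the definition above reads as follows (taking "first-order description" to mean a sentence true of G that determines it up to isomorphism among finite structures, which is our reading of the abstract):

```latex
% C is R-compressible iff every G in C has a sentence phi_G with:
\begin{align*}
 & G \models \varphi_G, \qquad |\varphi_G| \le O\bigl(R(|G|)\bigr),\\
 & H \models \varphi_G \;\Longrightarrow\; H \cong G
   \quad\text{for every finite structure } H.\\[4pt]
\intertext{The two results stated above then read:}
 & \text{finite simple groups: } R(n) = \log n, \qquad
   \text{all finite groups: } R(n) = (\log n)^3.
\end{align*}
```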
This is joint work with Katrin Tent, to appear in IJM.
We consider answering queries over datasources (for logicians: finite relational structures) that are connected to one another by integrity constraints (logical relationships). A relation in a datasource may have multiple means of access with different required arguments, and the means of access within and across relations may differ radically in cost. We present a method for generating the lowest cost query plan, where a plan is a reformulation of the query referencing only the allowable access methods, equivalent to the original query on all datasources satisfying the integrity constraints.

A solution to this problem can be applied in many database contexts, including answering queries using views, optimizing queries using constraints, and querying on top of web services or web forms. It could also be applied to give a more logic-based foundation for traditional database query planning.
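To make "access methods with required arguments" concrete, here is a toy sketch of executing an access-restricted plan. All relation names, access methods, and costs are invented for illustration; the actual system searches over such plans rather than taking one as given.

```python
# Toy model: each relation exposes access methods, each with a set of
# required input positions, a cost, and a lookup function.
# (All names, costs, and returned values here are made up.)
ACCESS_METHODS = {
    "Employee": [({"id"}, 1.0,
                  lambda b: {"id": b["id"], "dept": "R&D"})],
    "Dept":     [({"dept"}, 2.0,
                  lambda b: {"dept": b["dept"], "manager": "Ada"})],
}

def plan_and_run(relations_in_order, bindings):
    """Execute a left-to-right plan: for each relation, pick the
    cheapest access method whose required inputs are already bound,
    call it, and add its outputs to the bindings."""
    total_cost = 0.0
    for rel in relations_in_order:
        usable = [(cost, fn) for req, cost, fn in ACCESS_METHODS[rel]
                  if req <= bindings.keys()]
        if not usable:
            raise ValueError(f"no usable access method for {rel}")
        cost, fn = min(usable, key=lambda t: t[0])
        total_cost += cost
        bindings.update(fn(bindings))
    return bindings, total_cost

# Starting from a bound employee id, the plan accesses Employee to
# obtain the department, then Dept to obtain the manager:
result, cost = plan_and_run(["Employee", "Dept"], {"id": 17})
# result == {'id': 17, 'dept': 'R&D', 'manager': 'Ada'}, cost == 3.0
```

Note that the reverse order `["Dept", "Employee"]` would fail here, since `Dept` requires `dept` to be bound first: respecting access patterns is what distinguishes a plan from an arbitrary reformulation.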
Proof-driven query reformulation generates plans by applying interpolation procedures to a “proof that the query can be answered,” an approach that goes back at least to Craig’s proof of the Beth Definability theorem via interpolation. A few additional ingredients are needed to apply interpolation to database problems, including:
— variations of Beth Definability that allow us to target plans (explicit definitions) of specific forms
— an extension of interpolation that considers the “access patterns” within quantification
— specialized proof procedures for classes of integrity constraints of interest to databases
— cost-based search procedures for concurrently exploring the collection of proofs and plans
The resulting query planning system blends automated deduction, query optimization, and heuristic search.
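The role an interpolant plays can be seen in a toy propositional case: given a valid implication A ⊨ B, a Craig interpolant is a formula I in the shared vocabulary of A and B with A ⊨ I and I ⊨ B. In the database setting the "shared vocabulary" corresponds, roughly, to the accessible relations, which is why the interpolant can serve as a plan. The formulas below are invented for illustration, checked by brute-force truth tables.

```python
from itertools import product

# A uses {p, q}, B uses {p, r}; their shared vocabulary is {p}.
A = lambda v: v["p"] and v["q"]
B = lambda v: v["p"] or v["r"]
I = lambda v: v["p"]   # candidate interpolant over the shared variable only

def entails(f, g, varnames):
    """f |= g : every truth assignment satisfying f also satisfies g."""
    return all(g(v)
               for vals in product([False, True], repeat=len(varnames))
               for v in [dict(zip(varnames, vals))]
               if f(v))

vars_all = ["p", "q", "r"]
assert entails(A, I, vars_all)   # A |= I
assert entails(I, B, vars_all)   # I |= B
```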
This is joint work with Balder ten Cate, Julien Leblay and Efi Tsamoura.