# Computational Logic Seminar

# Syntactic Epistemic Logic and Games III

We discuss the Muddy Children puzzle MC and identify and fix a gap in its standard solution: a completeness analysis is required when the problem is specified syntactically but analyzed by reasoning on a specific model. We show that the syntactic description of MC is complete w.r.t. the model used in the standard solution. We will present two proofs of completeness: a syntactic one, by induction on formulas, and a semantic one, which makes use of bounded morphisms, a special case of bisimulations. We then present a modification of MC, MClite, and show that it is not deductively complete, has no semantic characterization, and cannot be handled by traditional semantic methods. We give a syntactic solution of MClite.

Time permitting, we will venture further into extensive games and their epistemic models.

# Syntactic Epistemic Logic and Games II

Syntactic Epistemic Logic, SEL, suggests making the syntactic formalization of an epistemic scenario its formal specification. Since SEL accommodates constructive model reasoning, we speak of extending the scope of formal epistemology. SEL offers a cure for two principal weaknesses of the traditional semantic approach.

1. The traditional semantic approach in epistemology covers only deductively complete scenarios in which all epistemic assertions of any depth are specified. However, intuitively, “most” epistemic scenarios are not complete. SEL suggests a way to handle incomplete scenarios with reasonable syntactic descriptions.

2. In addition, SEL seals the conceptual and technical gap, identified by R. Aumann, between the syntactic character of game descriptions and the semantic method of analyzing games.

# Syntactic Epistemic Logic and Games I

We argue that the traditional way to specify an epistemic situation via a single model is unnecessarily restrictive: there are numerous scenarios which do not have conventional single-model characterizations: less-than-common knowledge (e.g., mutual knowledge), partial or asymmetric knowledge, etc. Syntactic Epistemic Logic, SEL, suggests a way out of this predicament by viewing an epistemic scenario as specified syntactically by a set of formulas, which corresponds to a characterization via a class of models rather than a single model. In addition, in Game Theory, SEL helps to seal the conceptual and technical gap, identified by R. Aumann, between the syntactic character of game descriptions and the predominantly semantic way of analyzing games.

This will be the first in a series of talks on SEL. We will consider natural examples of epistemic or game-theoretic scenarios which have manageable syntactic descriptions but do not have acceptable single-model characterizations.

# First-order justification logic JT45

Among the different propositional justification logics, JT45 is the justification counterpart of the modal logic S5. Although there is a growing literature on propositional justification logic, there are only a few papers about the first-order version of justification logic, and most of those focus on the first-order version of LP. The goal of this talk is to present a first approach to first-order JT45 (FOJT45). Using Fitting models, we will present a possible-worlds semantics for FOJT45 and thereby establish a Completeness Theorem. We will also discuss some topics related to first-order modal logic, such as the Barcan Formula and the failure of the Interpolation Theorem.

# What is Constructive Axiomatic Method?

The received notion of axiomatic theory, which stems from Hilbert, is that of a set T of propositions (either contentful or non-interpreted, aka propositional forms) with a subset A of axioms, provided with a notion of consequence which generates T from A in the obvious way. I argue that this standard notion is too narrow to be an adequate theoretical model of many mathematical theories; the class of such counter-examples is apparently very large, and it includes theories as different as the geometrical theory of Euclid’s Elements, Book 1, and the more recent Homotopy Type Theory. In order to fix this problem I introduce a more general notion of theory, which uses typing and a generalized notion of consequence applicable also to objects of types other than propositions. I call such a theory a constructive axiomatic theory and show that this particular conception of being constructive indeed captures many important ideas concerning mathematical constructivity found in the earlier literature, from Hilbert to Kolmogorov to Martin-Löf. Finally, I provide an epistemological argument intended to show that the notion of constructive axiomatic theory is more apt to be useful in natural sciences and other empirical contexts than the standard notion.

Disclaimer: The notion of constructive axiomatic theory is not my invention. The idea and its technical implementation are found in Martin-Löf’s constructive type theory, if not already in Euclid. My aim is to make this notion explicit and introduce it into the continuing discussions on axiomatic method and on mathematical and logical constructivity.

# What ‘If’?

Solutions to the paradoxes of semantic self-reference which endorse the unrestricted T-schema (and cognate principles) normally assume that the conditional involved in the schema is a detachable one (i.e., one that satisfies modus ponens). In a paraconsistent setting, there is another possibility: to take it to be the material conditional. In this talk, I will discuss some of the ramifications of this possibility.

# Paraconsistent Justification Logic (II): Quasi-Realization Theorem

Paraconsistent justification logics are justification logic systems for inconsistent beliefs or commitments. In other words, these systems are intended to model the agent’s justification structure, when she has inconsistent beliefs or commitments.

The realization theorem is an important meta-theorem in the discourse of justification logic, and the quasi-realization theorem can be taken to be a major step toward proving it. Last semester, I introduced a paraconsistent justification logic called PJ_b. The main focus of the coming Tuesday’s talk is quasi-realization. It will be proved that the quasi-realization theorem – which holds for standard justification logic systems – also holds for PJ_b.

# Unified Logics

This presentation studies propositional logical systems containing multiple-conclusion rules that accommodate asserted and rejected propositions at the same time.

# Modeling Intuitionistic Epistemic Logic via Provability and Verification (joint work with S. Artemov)

In 1933, Goedel offered a provability semantics for intuitionistic logic IPC by means of a syntactic embedding of IPC into the modal logic S4, which Goedel considered a calculus for classical provability. Goedel’s embedding later became a key ingredient of the BHK-style semantics of classical proofs for intuitionistic logic via the Logic of Proofs. We will establish an embedding/interpretation of Intuitionistic Epistemic Logic IEL into a natural extension (which we suggest calling S4V) of Goedel’s provability logic S4 by a verification modality. Basically, we will explain IEL via classical provability and verification coded in S4V, the way Goedel explained intuitionistic logic via classical provability by interpreting it in S4.

# On non-self-referential realizable fragments

Given the known fact that realizations of the modal logics T, K4, and S4, and of intuitionistic propositional logic IPC (via S4), each require justification formulas that are self-referential, it is reasonable to consider the fragment of each that can be realized non-self-referentially, called the non-self-referential realizable (NSR) fragment. Based on prehistoric graphs of G3-style sequent proofs of modal theorems, we are able to define the loop-free-provable (LFP) fragment of each logic, which turns out to be a decidable subset of the NSR fragment. In this talk, we will explore properties of LFP fragments, their applications in the study of NSR fragments, and other related issues.

# Separation Logic and Friends

Separation Logic is used as an assertion language for Hoare-style proof systems about programs with pointers, and there is an ongoing quest to understand its complexity and expressive power. In parallel, there is much activity in developing verification methods with decision procedures for fragments of practical use.

Actually, there exist many versions or fragments of Separation Logic that can be viewed as fragments of Second-Order Logic, as well as variants of Modal or Temporal Logics in which models can be updated dynamically (as in Public Announcement Logic).

In this talk, after introducing first principles on Separation Logic, issues related to decidability, computational complexity and expressive power are discussed. We provide several tight relationships with Second-Order Logic, Interval Temporal Logic or Data Logics, depending on the variants of the logic and on the syntactic resources available.

Results presented during the talk span from standard ones about Separation Logic to more recent ones, some of them obtained in collaboration with Dr. Morgan Deters (NYU, CIMS).

# Height and Happiness

Non-rigid designators play a major role in modal logics when applied to aspects of natural language. Not only constants but also function symbols can be non-rigid, and this is sometimes essential. I will discuss some examples, formalize them, and provide a semantics. If time permits, I will also present a nested sequent proof system to go with all this.

# Parameters and the Complexity of Modal Satisfiability

The complexity of Modal Logic has been extensively studied. Ladner has established most of what are now considered classical results on the matter, determining that most of the usual modal logics are PSPACE-complete. Furthermore, as Chagrov and Rybakov showed, some of these logics remain PSPACE-complete even for their variable-free fragments. We present parameters of modal formulas whose effect on the complexity of modal logic has been studied – like modal depth – and we identify sources of complexity for modal satisfiability.

A big part of this talk is joint work with Michael Lampis and Valia Mitsou.

# From Intuitionistic Epistemic Logic to a constructive resolution of the Knowability Paradox (joint work with S. Artemov)

We outline an intuitionistic view of knowledge which maintains the Brouwer-Heyting-Kolmogorov semantics and is consistent with Williamson’s suggestion that intuitionistic knowledge be regarded as the result of verification. We argue that on this view A -> KA is valid and KA -> A is not. We show the latter is a distinctly classical principle, too strong as the intuitionistic truth condition for knowledge, which can be more adequately expressed by other modal means, e.g. ~(KA & ~A), “falsehood is not known.” We construct a system of intuitionistic epistemic logic, IEL, codifying this view of knowledge, and prove the standard meta-theorems for it. From this it follows that previous outlines of intuitionistic knowledge, responding to the knowability paradox, are insufficiently intuitionistic: by endorsing KA -> A they implicitly adopt a classical view of knowledge, and by rejecting A -> KA they reject the constructivity of truth. Within the framework of IEL, the knowability paradox is resolved in a constructive manner which, as we hope, reflects its intrinsic meaning.

# Models of Justification Logic III

In this lecture we will review topological models of modal and justification logics.

# On Models of Justification Logic II.

We will introduce a new, slightly more general class of models for Justification Logic and compare them with Mkrtychev, Fitting, and modular models. A Completeness Theorem will be established, examples will be given, and plausible areas of application outlined.

# On models of Justification Logic

This will be a mostly survey talk in which we discuss a variety of models for Justification Logic: arithmetical models (Goedel, Artemov), Mkrtychev models, Fitting models, topological models (Artemov, Nogina), modular models (Artemov), Sedlar’s group models, and some others. We will continue discussing applications.

# Some new directions in Epistemic Logic.

We will review research avenues on which we will focus this semester.

1. Applications of Justification Logic: general theory of logical omniscience, new format of handling inconsistent information, argumentation theory, truth maintenance systems, tracking evidence and belief revision.

2. Syntactic Epistemic Logic. There is some well-known tension between a syntactic description of an epistemic scenario and its semantic analysis based on a specific model. Such analysis requires common knowledge of the model, which does not usually follow from the assumptions. Syntactic Epistemic Logic offers an approach to epistemic situations which does not rely on common knowledge of a model.

3. Generic Common Knowledge. The notion of common knowledge of F, as introduced by Lewis in 1969, refers to any situation in which each finite iteration of ‘agent 1 knows that agent 2 knows that … F’ holds. A similar approach was offered by McCarthy and others. Such a situation is now called Generic Common Knowledge, GCK, of F. The now-dominant notion of Common Knowledge, CK, was coined by Aumann in 1976 as the least of all GCKs of F. There is now a growing number of specific epistemic scenarios which require GCK rather than CK. We are developing the theory of GCK and applying it in place of CK in appropriate situations.
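The GCK/CK distinction can be sketched formally. The following rendering, with E the “everybody knows” operator for agents 1, …, m, is a standard gloss rather than notation taken from the announcement:

```latex
% "Everybody knows F" for agents 1..m:
\[ E\,F \;=\; K_1 F \wedge \dots \wedge K_m F. \]
% G is a generic common knowledge (GCK) of F if it is strong enough to
% reproduce every finite iteration of mutual knowledge:
\[ G \rightarrow E(F \wedge G), \qquad \text{hence} \qquad
   G \rightarrow E^{\,n} F \quad \text{for all } n \geq 1. \]
% Aumann-style common knowledge C F is the least such G, i.e. the
% weakest fixed point:
\[ C F \leftrightarrow E(F \wedge C F). \]
```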

# Quantum Computing: 2nd trip report

# Large-data deduplication problem (a joint session with the Computer Science Colloquium).

Imagine that you have a long list of items, say a hundred thousand items. For example, the items may be client addresses. Some of the addresses are essentially duplicates, distinguished only by “St.” vs. “Street”, or “Bill” vs. “William”, or by small spelling errors, etc. You don’t want to miss any of your clients, and you don’t want to annoy them by sending them multiple copies of your communications. How do you clean up your item list? The problem is ubiquitous and hard. We analyze the problem and describe a fast probabilistic algorithm for it.
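To make the problem concrete, here is a naive deterministic baseline, not the speaker’s probabilistic algorithm: canonicalize each entry and group entries that share a canonical form. The abbreviation table is a made-up illustration; a real system would need fuzzy matching on top of this.

```python
# Naive deduplication baseline (illustrative only, not the talk's algorithm):
# lowercase, strip punctuation, expand a (hypothetical) abbreviation table,
# then bucket entries by their canonical form.
import re

ABBREV = {"st": "street", "bill": "william"}  # hypothetical expansion table

def canonical(entry: str) -> str:
    """Return a normalized key for an entry."""
    words = re.findall(r"[a-z0-9]+", entry.lower())
    return " ".join(ABBREV.get(w, w) for w in words)

def dedupe(entries):
    """Group entries that normalize to the same canonical form."""
    groups = {}
    for e in entries:
        groups.setdefault(canonical(e), []).append(e)
    return list(groups.values())

clusters = dedupe(["12 Main St.", "12 Main Street", "Bill Smith", "William Smith"])
# each cluster collects the spellings of one underlying item
```

This exact-key approach misses genuine misspellings (“Wiliam”), which is where probabilistic techniques earn their keep on large lists.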

# Paraconsistent Justification Logic

In the belief revision literature, there is an approach called belief base revision, in which the belief set is not required to be closed under a consequence relation. According to Sven Ove Hansson, in belief base revision there are two ways to define the revision operator:

revision = expansion + contraction

revision = contraction + expansion

Hansson has shown that these two ways of defining the revision operator do not collapse into the same operator. He also holds that in the first way there is an intermediate inconsistent epistemic state that occurs after expansion. Paraconsistent justification logic is intended to model the agent’s justification structure when the agent is in such an inconsistent epistemic state. My hope is that this logic could help us better model belief revision.
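The two compositions are usually written via the Levi identity and its reversal; in Hansson’s terminology these are internal and external revision. The notation below is mine, not taken from the announcement (B a belief base, + expansion, ÷ contraction):

```latex
% Expansion first, then contraction by the negation ("external revision"):
\[ B \ast_{e} \varphi \;=\; (B + \varphi) \div \neg\varphi \]
% Contraction first, then expansion (the Levi identity, "internal revision"):
\[ B \ast_{i} \varphi \;=\; (B \div \neg\varphi) + \varphi \]
% For belief bases these two operators differ in general; note that the
% intermediate state B + \varphi may be inconsistent, as mentioned above.
```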

In my talk, the motivation will be better clarified. And, a paraconsistent justification logic system will be introduced.

# Structures of Social Proof

“Social proof” means that single agents adopt beliefs, norms, or actions of other agents in an attempt to reflect the correct view, stance, or behavior for a given situation. The structure and modularity of social proof are unravelled, including formal characterizations of derived socio-informational phenomena like bystander effects and cascades. Sometimes social proof may be responsible for information spinning out of control – in very unfortunate ways. Joint work with Rasmus K. Rendsvig.

# Realization Semantically

This talk continues my previous one from October 22. In that talk, I gave justification counterparts for S4.2 and for K4^3, possible-world justification semantics, and a completeness proof. The completeness proof used a canonical model construction. Now I’ll use that canonical model to prove, non-constructively, realization theorems for the two logics. The methodology is the same as in a paper of mine published in 2005, though the presentation has changed somewhat. Primarily my motivation is to explore the range of modal logics having justification counterparts—to discover its extent and limits.

# Reflection Principles Involving Provability and Explicit Proofs

Reflection principles are classical objects in proof theory and the areas studying Gödel’s Incompleteness. Reflection principles based on provability predicates were introduced in the 1930s by Rosser and Turing, and were later explored by Feferman, Kreisel & Lévy, Schmerl, Artemov, Beklemishev, and others.

We study reflection principles of Peano Arithmetic involving both Proof and Provability predicates. We find a classification of these principles and establish their linear ordering with respect to their metamathematical strength.

# J-Calc: Justified Modal Types and Functional Programs

In this talk we give a computationally motivated presentation of J-Calc. J-Calc is an extension of simply typed lambda calculus with justifications and justified modal types.

Initiating from textbook examples of modular programs, we will present the modal type theory of J-Calc as a type system corresponding to programming with modular definitions. More specifically, we will show how justified modality offers a logical reading to separate compilation in modular functional programming.

# Justification Logic Semantics: A Little New, but Mostly Old

Possible world semantics was introduced for justification logic in 2005. Initially it was for the Logic of Proofs, but it quickly extended to “nearby” logics, and more slowly to a wider family. Eight years is a long time, and by now people may be generally familiar with the ideas without having gone through any of the details. I will try to remedy that.

# On the Complexity of Multi-agent Justification Logic Under Interacting Justifications II

We introduce a family of multi-agent justification logics with interactions between the agents’ justifications, by extending and generalizing the two-agent versions of LP introduced by Yavorskaya in 2008. Known concepts and tools from the single-agent justification setting are adjusted for this multi-agent case. We present tableau rules and some preliminary complexity results: in several important cases, the satisfiability problem for these logics remains in the second level of the polynomial hierarchy, while for others it is PSPACE-hard or EXP-hard.

This week we will present the general tableau rules for the family. We look more closely at particular examples and complexity bounds and we identify common sources of jumps in their complexity.

# On the Complexity of Multi-agent Justification Logic Under Interacting Justifications

We introduce a family of multi-agent justification logics with interactions between the agents’ justifications, by extending and generalizing the two-agent versions of LP introduced by Yavorskaya in 2008. Known concepts and tools from the single-agent justification setting are adjusted for this multi-agent case. We present tableau rules and some preliminary complexity results: in several important cases, the satisfiability problem for these logics remains in the second level of the polynomial hierarchy, while for others it is PSPACE-hard or EXP-hard.

# Schroedinger and Turing on the Logic of Life: from the “coding” to the “genesis” of forms.

Schroedinger’s and Turing’s analyses of life phenomena have a twofold aspect. They both follow, first, a “coding paradigm” – of embryogenesis and of human computations and deductions, respectively – and then move towards a more “dynamicist” approach. Schroedinger, in the second part of his 1944 book, hints at biological organization as negentropy – a variant of Gibbs’s dynamical analysis of energy – which we revitalized as anti-entropy; see the references. Turing, after stressing that “the nervous system is surely not a Discrete State machine” (1950), invents the mathematics for a reaction/diffusion process, a “continuous system” (1952), in which chemical matter (hardware with no software) organizes itself along morphogenesis.

We will hint at the paths for thought opened by Turing’s dynamics of continuous deformations at the core of his pioneering 1952 paper, where symmetry breakings are a key component of the bio-chemical processes.

References

Schrödinger, E. What Is Life?, Cambridge University Press, 1944.

Alan M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proc. London Math. Soc. 42, 230-265, 1936.

Alan M. Turing, “The Chemical Basis of Morphogenesis”, Philo. Trans. Royal Soc., B237, 37-72, 1952.

Francis Bailly, Giuseppe Longo. Mathematics and Natural Sciences: The Physical Singularity of Life, Imperial College Press, London, 2011.

Giuseppe Longo, Maël Montévil, Perspectives on Organisms: Biological Time, Symmetries and Singularities, Springer, 2013.

Papers in http://www.di.ens.fr/users/longo:

Giuseppe Longo, “From exact sciences to life phenomena: following Schrödinger and Turing on Programs, Life and Causality”. Information and Computation, 207, 5: 543-670, 2009.

Francis Bailly, Giuseppe Longo. Biological Organization and Anti-Entropy. In J. Biological Systems, Vol. 17, No. 1, pp. 63-96, 2009.

Giuseppe Longo. Incomputability in Physics and Biology. Invited Lecture, Proceedings of Computability in Europe, Azores, Pt, June 30 – July 4, LNCS 6158, Springer, 2010.

Longo G., P. A. Miquel, C. Sonnenschein, A. Soto. Is Information a proper observable for biological organization? Progress in Biophysics and Molecular Biology, Vol. 109, Issue 3, pp. 108-114, August 2012.

# The road from Leibniz to Turing: from syllogisms to computations. Tribute to Alan Turing.

A group of brilliant innovators spanning three centuries were concerned with the nature of human reason. This endeavor led to the all-purpose digital computer. Except for Turing, none of them had any idea that his work might have such a tremendous application to our modern world. Leibniz saw far, but not that far. Boole could hardly have imagined that his algebra of logic would be used to design complex electronic circuits. Frege would have been amazed to find equivalents of his logical rules incorporated into computer programs for carrying out deductions. Cantor certainly never anticipated the ramifications of his diagonal method. Hilbert’s program to secure the foundations of mathematics was pointed in a very different direction. And Goedel, living his life of the mind, hardly thought of application to mechanical devices (M. Davis). However, Turing’s great vision has now been realized.

# Towards Syntactic Epistemic Game Theory

The traditional semantic approach of treating a game as an Aumann structure, though flexible and convenient, is not foundationally satisfactory, due to the assumptions that a given Aumann structure adequately represents the game and that this structure is itself common knowledge among the players.

These assumptions leave a gap between the officially syntactic character of the game description, which often admits multiple models, and the study of a game as a specific model that is somehow assumed to be commonly known. This gap has been largely ignored or covered up by using as examples simple epistemic scenarios with natural models that were tacitly used as definitions of the game instead of the declared syntactic game descriptions. Among others, Aumann found this foundationally unsatisfactory and argued for using what he called ‘syntactic epistemic logic’ for reasoning about games.

In this talk, we outline a systematic approach to epistemic game theory which we suggest calling ‘Syntactic Epistemic Game Theory’, SEGT, consistent with Aumann’s suggestions, that studies games as they are normally described, in their syntactic form. In SEGT, semantic methods should be properly justified from the original game description. As a case study, we offer a SEGT theory of definitive solutions of strategic games with ordinal payoffs.

# Coincidence of Two Solutions to Nash’s Bargaining Problem

John Nash in 1950 gave an elegant solution to the bargaining problem using his somewhat controversial IIA axiom. Ehud Kalai and Meir Smorodinsky in 1975 gave a different solution replacing the IIA condition by their own Monotonicity condition. While the two solutions do not coincide in general, they obviously do so when the problem is symmetric. Are there other cases where the two solutions coincide? Indeed there are and we give a complete characterization.
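For reference, the two solutions under discussion can be stated in their standard textbook forms; the notation below (d the disagreement point, S the feasible set) is mine, not taken from the announcement:

```latex
% Nash (1950): maximize the product of gains over the disagreement point d:
\[ N(S,d) \;\in\; \operatorname{arg\,max}
   \{\, (u_1 - d_1)(u_2 - d_2) \;:\; (u_1,u_2) \in S,\ u \geq d \,\}. \]
% Kalai--Smorodinsky (1975): the maximal point of S on the segment from d
% to the ideal point a, where a_i = \max\{\, u_i : u \in S,\ u \geq d \,\}:
\[ KS(S,d) \;=\; \max \{\, u \in S \;:\; u = d + t\,(a - d),\ t \geq 0 \,\}. \]
```

For a symmetric problem both formulas pick the symmetric Pareto point, which is the obvious coincidence mentioned above; the talk characterizes the remaining cases.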

# Knowledge and the Passage of Time

This talk will discuss how knowledge, nested knowledge, and common knowledge are gained in the presence of clocks and time-bound information. It will complement the previous talk by providing the causal structure underlying knowledge gain, from which the causal structure underlying basic coordination follows.

The talk will be based on joint work with Ido Ben Zvi.

# Realization Implemented

Justification logics are connected to modal logics via realization theorems. These have both constructive and non-constructive proofs. Here we do two things. First we provide a new path to constructive realization proofs, going through an intermediate quasi-realization stage. Quasi-realizers are easier to produce than realizers, though like them they are constructed from cut-free proofs. Quasi-realizers in turn constructively convert to realizers, and this conversion is independent of the justification logic in question. The construction depends only on the structure of the formula involved.

The second thing we do is provide a Prolog implementation of quasi-realization, and quasi-realization to realization conversion, for the logic LP. Many other justification logics can obviously be treated similarly.

Our quasi-realization algorithm, and its implementation, assumes the underlying modal proof system S4 is based on tableaus. Since these may not be familiar to everybody, we provide a sketch of how tableaus work. Then we present our algorithms, our implementation, and a discussion of implementation behavior and design decisions.

We believe our algorithms are simple and straightforward. The original realization algorithm, for instance, needed the entire cut-free proof as input. Our quasi-realization algorithm works one formal proof step at a time. There is, in the literature, another realization construction that works one step at a time, but it requires extensive use of substitution while our quasi-realization algorithm does not. The conversion algorithm is, as noted above, independent of particular justification logics and so only needs to be understood once. It is only here that substitution is needed.

Both the program to compute realizations, and a tech report discussing the algorithm, necessary background, and the implementation, are available on the speaker’s web page:

http://comet.lehman.cuny.edu/fitting/

# On Time, Communication and Coordination

Recent work has shown that applications of the modal logic of knowledge allow a characterization of the interaction between communication and coordination in systems with clocks and timing information. This talk will survey some of these results and the underlying notions and techniques. Time permitting, some issues related to the interaction between time and causality will be discussed. The focus of the talk will be on the interface between logic (or semantic notions) and application, rather than on the properties of the logic itself.

# Justification Logic for Argumentation

Argumentation is the process of giving arguments, counter-arguments, counter-counter-arguments, and so on. In artificial intelligence, there has been research since the late 1980s that tries to approach argumentation theory with formal methods. One very young approach in formal argumentation theory is to study argumentation within the framework of modal logic. Two possible advantages of doing so are: (1) this is a new angle, and looking at argumentation from it might yield new insights; and (2) techniques from modal logic might become available for studying argumentation. In this talk, I would like to provide a different way of connecting argumentation theory and modal logic. More specifically, I would like to try to locate formal argumentation theory within the framework of justification logic. The resulting logic is a justification logic for argumentation.

# Semantic Knowledge Base – a system built in the Association-Oriented Database Model

Semantic Knowledge Base is a new approach to building systems capable of storing and processing complex information, including inference over it. The main assumption is to distinguish concepts (meanings) from terms (natural-language identifiers). The information is held in two modules: an ontological core and a semantic network module. The ontological core is used to build a hypergraph in which each node is a concept and the edges are links between concepts. Semantic networks are used to store more complex information (rules and facts) in predicate-object structures built over previously defined concepts. The system also supports inference by means of modal and hybrid logic, through a custom-built system of quantifiers and the ability to describe dimensions and spaces in which each piece of information is valid or invalid (an extended Kripke possible-worlds model).

The system has been designed in the association-oriented database model (AODB). In this model we assume two main categories: collections and associations. Collections are used to store information in objects holding attribute values, while associations store information about n-ary relations between collections. Associations are considered containers for roles, and each role is considered a permission given to a collection to be part of an association. The model assumes that both collections and associations may form inheritance structures, with virtual, private, and real inheritance of, respectively, attributes and roles. Each structural element of AODB has its own unique role, so its semantics is strictly connected to the grammar of the model. Moreover, this construction makes it possible to unify the conceptual and physical models of the database into one structure. AODB has its own native object storage model. Two languages have also been designed and implemented – the Association Modeling Language (AML) and the Association Query Language (AQL). The latter uses graph templates as queries and returns lists of graphs.

# Computing Bounds from Arithmetical Proofs

We explore the role of the function a+2^x, and its generalizations to higher number classes, in analyzing and measuring the computational content of a broad spectrum of arithmetical theories. Newer formalizations of such theories, based on the well-known normal/safe variable separation of Bellantoni-Cook, enable uniform proof-theoretical treatments of poly-time arithmetics, through Peano arithmetic, and up to finitely iterated inductive definitions.

# Expanding the realm of the idea of da Costa

Non-classical logics that deny ex falso quodlibet are said to be paraconsistent. Since the monumental work of Stanislaw Jaskowski in 1948, a large number of paraconsistent systems have been developed, including systems of da Costa, Nelson, Belnap-Dunn, Priest, Batens, and Schotch-Jennings. In this talk, we focus on the consistency operator, the characteristic connective of da Costa’s systems. We understand the main idea of da Costa to be making explicit, within the system, the area in which one can infer classically. The aim of the talk is twofold. First, we review some results within the framework of Logics of Formal Inconsistency, and then introduce and present some results on the normality operator, which generalizes the consistency operator so that one can deal not only with inconsistency but also with incompleteness. Second, we show that the normality operator may be employed, in a sense to be specified, in developing other paraconsistent logics such as modal logics, Nelson’s systems, and expansions of the four-valued logic of Belnap and Dunn.

# Lost in translation: a critical view of epistemic puzzles solutions

There are two basic ways to specify an epistemic scenario:

1. Syntactic: a verbal description with some epistemic logic on the background and some additional formalizable assumptions; think of the Muddy Children puzzle.

2. Semantic: providing an epistemic model; think of Aumann structures – a typical way to define epistemic components of games.

Such classical examples as Muddy Children, Wise Men, Wise Girls, etc., are given by syntactic descriptions (type 1), each of which is “automatically” replaced by a simple finite model (type 2). Validity of such translations from (1) to (2) will be the subject of our study.

We argue that in reducing (1) to (2), it is imperative to check whether (1) is complete with respect to (2); without this check, solutions of puzzles by model reasoning in (2) are incomplete at best. We have already shown that such a reduction can be justified in the Muddy Children puzzle MC due to its model categoricity: we have proved that MC has the unique “cube” model Q_n for each n. This fixes an obvious gap in the “textbook” solution of Muddy Children, which did not provide sufficient justification for using Q_n.
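The elimination dynamics on the cube model Q_n can be illustrated with a small simulation (a Python sketch under our own conventions, not part of the talk itself): worlds are 0/1 vectors, child i cannot distinguish two worlds differing only in bit i, and each publicly observed round of silence prunes the model.

```python
from itertools import product

def muddy_children(actual):
    """Simulate announcement rounds on the cube model Q_n (a sketch).

    Worlds are 0/1 tuples (1 = muddy); child i cannot distinguish two
    worlds that differ only in coordinate i.  The father's announcement
    removes the all-clean world; after each round of silence, worlds in
    which some child would already have known are eliminated.
    Returns (round number, indices of children who step forward).
    """
    n = len(actual)
    # Father's announcement: at least one child is muddy.
    worlds = {w for w in product((0, 1), repeat=n) if any(w)}

    def knows(i, w, worlds):
        # Child i knows her state in w iff every world she considers
        # possible (same bits except possibly bit i) agrees on bit i.
        mates = {v[i] for v in worlds
                 if all(v[j] == w[j] for j in range(n) if j != i)}
        return len(mates) == 1

    for round_no in range(1, n + 1):
        knowers = [i for i in range(n) if knows(i, actual, worlds)]
        if knowers:
            return round_no, knowers
        # Public silence: everyone learns that nobody knew yet.
        worlds = {w for w in worlds
                  if not any(knows(i, w, worlds) for i in range(n))}
```

With k muddy children the muddy ones step forward in round k, matching the textbook analysis of Q_n.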

We also show that an adequate reduction of (1) to (2) is rather a lucky exception, which makes the requirement to check the completeness of (1) w.r.t. (2) necessary. To make this point, we provide a simplified version of Muddy Children (by dropping the assumption “no kid knows whether he is muddy”) which admits the usual deductive solution by reasoning from the syntactic description, but which cannot be reduced to any finite model.

# On model reasoning in epistemic scenarios

It has long been noticed that the textbook “solution” of the Muddy Children puzzle MC via reasoning on the n-dimensional cube Q_n has a fundamental gap: it presupposes common knowledge of the Kripke model Q_n, which is not assumed in the puzzle description. In the MC scenario, the verbal description and logical postulates are common knowledge, but this does not yield common knowledge of the model.

Of course, a rigorous solution of MC can be given by a formal deduction in an appropriate epistemic logic with updates after epistemic actions (public announcements). This way requires an advanced and generally accepted logical apparatus, certain deductive skills in modal logic, and, most importantly, steers away from the textbook “solution.”

We argue that the gap in the textbook “solution” of MC can be fixed. We establish that MC_n is complete with respect to Q_n, and hence reasoning on Q_n can be accepted as a substitute for the aforementioned deductive solution (given that the kids are smart enough to establish that this completeness theorem is itself common knowledge). This yields the following clean solution of MC:

1. prove the completeness of MC_n w.r.t. Q_n;

2. argue that (1) is common knowledge;

3. use a properly worded version of the textbook “solution.”

This approach seems to work for some other well-known epistemic scenarios related to knowledge. However, it does not appear to be universal, e.g., it does not work in the case of beliefs. In general, we have to rely on the deductive solution which fits the syntactic description of the problem.

The completeness of MC_n w.r.t. Q_n was established by me some years ago; shortly after the corresponding seminar presentation, Evan Goris offered an elegant and more general solution, which will be presented in this talk.

# Geodesic Semantics for Belief Change

Any logical system that models human reasoning needs to address the possibility of error and the subsequent belief change once this error is recognized. The need to deal with error-prone reasoning has only been widely acknowledged in the last thirty years or so; witness the popularity of the AGM postulates for Belief Revision. Despite the variety of syntactic and semantic offerings, all seem to agree that we need to model a concept of minimal change by choosing the epistemic state most similar to the present one. The favorite choice mechanisms are preferential orderings and their generalization, distance maps. Preferential orderings provide satisfactory representation results but fail to model iteration. Distance maps model iteration but fail to provide satisfactory completeness results. In this talk, I will introduce a third semantic approach using geodesic distance (the length of the shortest path on a graph) that lies between the two and combines their best features: geodesic semantics provides satisfactory completeness results, as preferential orderings do, and deals with iteration, as distance maps do. Further, and perhaps more importantly, geodesic semantics offers a novel, more natural representation of similarity via distinguishability.
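The core idea can be made concrete with a toy sketch (illustrative Python; the `revise` selector and the graph representation are our assumptions, not the talk's actual operator): geodesic distance is computed by breadth-first search, and a minimal-change revision selects the candidate states closest to the current one.

```python
from collections import deque

def geodesic_distance(graph, source):
    """BFS distances (shortest-path lengths) from source in an
    undirected graph given as an adjacency-list dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def revise(graph, current, candidates):
    """Hypothetical minimal-change revision: among the candidate
    states (those satisfying the new information), keep the ones
    geodesically closest to the current state."""
    dist = geodesic_distance(graph, current)
    reachable = [w for w in candidates if w in dist]
    if not reachable:
        return set(candidates)  # disconnected: no distance to minimize
    best = min(dist[w] for w in reachable)
    return {w for w in reachable if dist[w] == best}
```

On the path graph a–b–c–d, revising at 'a' with candidates {'c', 'd'} keeps only 'c', the geodesically nearer state.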

# Definitive solutions of strategic games

In his dissertation of 1950, Nash based his concept of solution to a game on the principles that “a rational prediction should be unique, that the players should be able to deduce and make use of it.” In this paper, we address the issue of when Nash’s expectations of a definitive solution hold and whether the Nash Equilibrium (NE) solution concept is a match for such definitive solutions. We show that the existence of an NE is indeed a necessary condition for a definitive solution, and that each NE σ is a definitive solution for some notion of rationality individually tuned for this σ. However, for specific notions of rationality, e.g., Aumann’s rationality, NE is not an exact match for definitive solutions: many games with NE do not have definitive solutions at all. In particular, strategic ordinal payoff games with two or more Nash equilibria, and even some games with a unique NE, do not have definitive solutions. We also show that the iterated dominance approach is a better candidate for Nash’s definitive solution concept than the Nash Equilibrium.
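Iterated dominance, mentioned at the end, is a standard textbook procedure and easy to sketch (illustrative Python; the two-matrix representation of a two-player strategic game is our assumption, not the paper's formalism):

```python
def iterated_strict_dominance(A, B):
    """Iteratively eliminate strictly dominated pure strategies.

    A[r][c] is the row player's payoff, B[r][c] the column player's.
    Returns the indices of surviving row and column strategies.
    """
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        # Row r is strictly dominated if some other surviving row r2
        # yields a strictly higher payoff against every surviving column.
        for r in rows[:]:
            if any(all(A[r2][c] > A[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        # Symmetrically for columns, using the column player's payoffs.
        for c in cols[:]:
            if any(all(B[r][c2] > B[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols
```

In the Prisoner's Dilemma (strategy 0 = cooperate, 1 = defect), the procedure eliminates cooperation for both players, leaving the unique surviving profile (defect, defect).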