
PPS UMR 7126 – Laboratoire
Preuves, Programmes et Systèmes


Type Theory and Realizability Working Group

The working group meets in room 1007, on the 1st floor of the Sophie Germain building.



Upcoming talks 2015–2016

Wednesday, October 14, 2015, 2pm
Danko Ilik
The exp-log normal form of formulas

Logical incarnations of type isomorphism include the notions of constructive cardinality of sets and strong intuitionistic equivalence of formulas. These are challenging to study in the simultaneous presence of functions (exponentials) and sums (disjoint unions, disjunction).

In this talk, I will present a quasi-normal form of types that arises from the decomposition of binary exponentiation into unary exponentiation and logarithm. This normal form can be applied to disentangle the equational theory (beta-eta) of the lambda calculus with sums. By extending the normal form from simple types (propositional logic) to quantifiers, one can also retrieve an "arithmetical" hierarchy for intuitionistic first-order logic. Finally, this suggests a sequent calculus for intuitionistic logic that uses the notation of exponential polynomials and removes the need for most of the invertible proof rules of usual sequent calculi.
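
For intuition (a standard sketch, not part of the abstract): writing $C^A$ for the function type $A \to C$, the "high-school" identities hold up to type isomorphism, and the exp-log normal form reorganizes types along them:

    \[
      C^{A+B} \;\cong\; C^A \times C^B, \qquad
      (C^B)^A \;\cong\; C^{A \times B}, \qquad
      (B \times C)^A \;\cong\; B^A \times C^A.
    \]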



Past talks

Wednesday, September 16, 2015, 2pm
Alexandre Miquel
An axiomatic presentation of forcing (or: forcing for the dummies)

The method of forcing was introduced by Cohen in 1963 to prove the relative consistency of the negation of the Continuum Hypothesis with respect to the axioms of ZFC. Since the relative consistency of the Continuum Hypothesis w.r.t. the axioms of ZFC was already proved by Gödel in 1938, Cohen's result gave a definitive answer to Hilbert's 1st problem. After Cohen's achievement, the method of forcing quickly brought further independence results in set theory, and became an invaluable tool in the study of large cardinals.

Traditionally, forcing is presented as a model transformation, which takes a given model M of ZF (the ground model) and builds a larger model M[G], called a generic extension of M. In this talk, I will show that forcing can be seen as a _theory transformation_ as well, that is: as a transformation taking a particular axiomatization T of set theory (describing some ground model) and building a new axiomatization T^* of set theory (describing the corresponding generic extension). As we shall see, the transformation T -> T^* is surprisingly simple (the axioms of T^* are deduced from those of T using a very simple algorithm), while hiding most technical details of the underlying model-theoretic construction, which makes this presentation well-suited for the "end user".
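
For orientation, here is the shape of the model-theoretic forcing relation that the axiomatic presentation is designed to hide (a textbook sketch for conditions $p, q$ in a poset of forcing conditions, not the talk's notation):

    \[
    \begin{aligned}
      p \Vdash \phi \wedge \psi \;&\iff\; p \Vdash \phi \text{ and } p \Vdash \psi,\\
      p \Vdash \phi \to \psi \;&\iff\; \text{for all } q \le p,\ q \Vdash \phi \text{ implies } q \Vdash \psi,\\
      p \Vdash \neg \phi \;&\iff\; \text{no } q \le p \text{ forces } \phi.
    \end{aligned}
    \]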


Wednesday, September 16, 2015, 3pm (in room 3052)
Nicolas Tabareau
Towards an analogue of Giraud's axiom in HoTT

Abstract:

Giraud's axioms characterize what an (∞,1)-category must satisfy in order to be an (∞,1)-topos. They are therefore important for understanding the internal structure of an (∞,1)-topos, in particular the axiom saying that every "groupoid object" is effective.

Homotopy type theory (HoTT) appears to provide an internal language for (∞,1)-toposes, yet it has no analogue of Giraud's axioms, because we do not know how to say what a groupoid object is in HoTT.

In this talk, I will present an analogue of the Čech nerve in HoTT, which makes it possible to show that the colimit of the Čech nerve of a function f is its image. I will then explain how this result is a first step towards a Giraud axiom in HoTT, and how it connects to the univalence axiom.
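
In HoTT notation, the image in question is the standard truncation construction (textbook definitions, not the speaker's formalization):

    \[
      \mathrm{fib}_f(b) :\equiv \sum_{a:A} (f(a) = b), \qquad
      \mathrm{im}(f) :\equiv \sum_{b:B} \big\lVert \mathrm{fib}_f(b) \big\rVert,
    \]

and the Čech nerve of $f : A \to B$ is the simplicial diagram $\cdots\; A \times_B A \rightrightarrows A$ of iterated pullbacks, whose colimit the talk identifies with $\mathrm{im}(f)$.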


Wednesday, September 9, 2015, 2pm
Andrej Bauer
A sound and complete language for type theory with equality reflection

Type theory with the equality reflection rule, saying that the identity type reflects into judgmental equality, is algorithmically tricky to handle, as it allows us to hypothetically assume that arbitrary types are equal. I shall discuss how we might design a useful language for such a theory that is sound, and hopefully complete.
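
For readers outside the area, the rule in question is (in a standard formulation):

    \[
      \frac{\Gamma \vdash p : \mathsf{Id}_A(a,b)}{\Gamma \vdash a \equiv b : A}
    \]

so any hypothetical proof of an identity, however obtained, silently extends judgmental equality, which is what makes type checking undecidable in general.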



Past talks 2014–2015

Wednesday, April 1, 2015, 2pm
Gabriel Scherer
Which types have a unique inhabitant?

In this talk I will present an algorithm that takes a type of the simply-typed lambda-calculus with sums, and answers that:

(0) it is not inhabited, or that (1) it is uniquely inhabited (and gives the unique program at this type), or that (2) it is not uniquely inhabited (and gives two distinct programs at this type).

The algorithm relies on a novel "saturating" variant of focused intuitionistic logic, which is very close to the existing notion of maximal multi-focusing. Its termination uses techniques of propositional proof search, which needed to be adapted to preserve computational correctness: if you cut the search space too harshly, you may claim a type is uniquely inhabited when it is not.
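
A minimal OCaml illustration of the trichotomy (my example, not the paper's):

    (* 'a -> 'a is uniquely inhabited (up to beta-eta): *)
    let id : 'a -> 'a = fun x -> x

    (* 'a -> 'a -> 'a is inhabited, but not uniquely: *)
    let first  : 'a -> 'a -> 'a = fun x _ -> x
    let second : 'a -> 'a -> 'a = fun _ y -> y

    (* 'a -> 'b is not inhabited at all: no pure total program has this type. *)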


Wednesday, March 11, 2015, 4:30pm, room 3052
Noam Zeilberger
Functors are Type Refinement Systems

This is joint work with Paul-André Melliès, with paper at http://noamz.org/papers/funts.pdf.

The standard reading of type theory through the lens of category theory is based on the idea of viewing a type system as a category of well-typed terms. We propose a basic revision of this reading: rather than interpreting type systems as categories, we describe them as functors from a category of typing derivations to a category of underlying terms. Then, turning this around, we explain how in fact *any* functor gives rise to a generalized type system, with an abstract notion of typing judgment, typing derivations and typing rules. This leads to a purely categorical reformulation of various natural classes of type systems as natural classes of functors.

In the talk I want to motivate and introduce this general framework (which can also be seen as providing a categorical analysis of _refinement types_), and as a larger example give a sketch of how the framework can be used to formalize an elegant proof of a coherence theorem by John Reynolds. If time permits, I will also describe some of the natural questions raised by this perspective that are the subject of ongoing research.


Wednesday, March 4, 2015, 2pm
Sergei Soloviev (IRIT, Toulouse)
Isomorphism of Dependent Products in Type Theory: Specifics and Scientific Context

A full description and a deciding algorithm for isomorphism of types in a system with dependent products are obtained. One may note that systems with dependent products are used in many proof assistants, e.g., Coq and LEGO. The case is compared with the relatively well-studied case of isomorphism of types in simply typed lambda-calculus, with emphasis on certain specific aspects of isomorphism of dependent types (e.g., "non-locality"). Non-trivial connections with group theory (representability of finite groups by automorphisms of types) will be discussed. One of the motivations for this study lies in the role played by type isomorphisms in the Univalent Foundations Program and Homotopy Type Theory.


Wednesday, February 25, 2015, 2pm
Andrew Polonsky
Defining equality by induction on type structure

We discuss progress and challenges in defining extensional equality. Our goal is a natural, type-theoretic construction of this notion in a way that would be faithful to the Curry–Howard philosophy. Concretely, we want to define, for every type A and terms a,a':A, a new type a ≈_A a' which: generalizes intensional Id_A(a,a'); validates function extensionality; and gives clear, exhaustive rules for computing transports over equalities.

Even more concretely, we want a type (a ≈_A a') that validates the axioms of equality given by Coquand on p. 34 of http://www.cse.chalmers.se/~coquand/equality.pdf (Coquand showed that these axioms allow one to derive J).

So far, we have succeeded in reflecting the standard logical relation defined by induction on types back into the type universe, in such a way that the universe is closed under this reflection operator. This gives the right notion of equality for closed terms and types.
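
At function types, the logical relation in question is the usual one (a schematic rendering in the abstract's ≈ notation):

    \[
      (a \approx_{A \to B} a') \;:\equiv\; \prod_{x,x':A} (x \approx_A x') \to (a\,x \approx_B a'\,x')
    \]

and the contribution described above is a universe closed under the operator sending $A$ to $\approx_A$, so that $a \approx_A a'$ is itself a type.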

What remains is to extend this construction to open terms, and to add the Kan filling conditions. In fact, the latter may be obtained by adding a formal "transport operator", for which the computation rules are clear. Proper treatment of open terms is the principal challenge that remains.


Wednesday, January 28, 2015, 2pm
Tomer Libal
Regularity in higher-order unification

Abstract: Huet's pre-unification procedure is the most popular higher-order unification procedure. Nevertheless, it terminates on non-unifiable problems only when the generated matching tree is finite. There are alternatives to the pre-unification procedure which terminate if the branches of these matching trees are finite or regular. These procedures normally have very high complexity and are rarely used in practice. In the talk I will try to describe a framework for combining the stronger "regular" unification techniques with Huet's efficient pre-unification procedure.
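
To see why matching trees branch, consider the classic flex-rigid example (standard, not from the talk): for a metavariable $F$ and constants $f, a$, the equation

    \[
      F\,a \;\stackrel{?}{=}\; f\,a\,a
    \]

has four independent solutions, $F := \lambda x.\,f\,x\,x$, $\lambda x.\,f\,x\,a$, $\lambda x.\,f\,a\,x$, $\lambda x.\,f\,a\,a$, and such branching can recur at every depth of the matching tree, which is why termination guarantees are delicate.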


Wednesday, January 7, 2015, 2pm
Maxime Dénès
Coqonut: a formally verified JIT compiler for Coq

Evaluation of terms plays a crucial role in the performance of proof checking and execution of verified programs, and in the trust one can put in them. Coq today provides various evaluation mechanisms, whether internal, in the kernel, or external, via extraction to OCaml or Haskell.

I will present the Coqonut project, which aims to provide a better treatment of the specific performance trade-offs and the delicate issues of trust behind these evaluation mechanisms. The project, led jointly with Xavier Leroy, is still in a preliminary phase, but I will present its objectives, some design choices we made, and its current status.


Wednesday, January 7, 2015, 3:30pm, in room 3052
Ali Assaf
Tarski and Coq

The Coq system uses a cumulative universe hierarchy where each universe is contained in a larger one. The universes are formalized in the Russell style, where cumulativity is implicit in the terms. In Martin-Löf's intuitionistic type theory, cumulativity can be made explicit by using the Tarski style of universes. In this talk, I will present the two styles and the conditions under which they are equivalent. I will then consider the implications of using the Tarski style for the calculus of inductive constructions and everyone's favorite chicken.
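
Schematically, the two styles differ as follows (a textbook sketch, not Coq's exact rules). In the Russell style, a universe contains types directly, and cumulativity is an invisible coercion; in the Tarski style, a universe contains codes, decoded by a family $\mathsf{T}_i$, and cumulativity becomes an explicit lifting $\mathsf{t}_i$:

    \[
      \text{Russell:}\quad \frac{A : \mathcal{U}_i}{A \ \mathsf{type}} \qquad \frac{A : \mathcal{U}_i}{A : \mathcal{U}_{i+1}}
      \qquad\qquad
      \text{Tarski:}\quad \frac{a : \mathcal{U}_i}{\mathsf{T}_i(a) \ \mathsf{type}} \qquad \frac{a : \mathcal{U}_i}{\mathsf{t}_i(a) : \mathcal{U}_{i+1}},
      \quad \mathsf{T}_{i+1}(\mathsf{t}_i(a)) = \mathsf{T}_i(a).
    \]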


Wednesday, January 21, 2015, 2pm
Danko Ilik
Eliminating control operators from classical realizability

The traditional method to extract programs from proofs of classical Analysis (Peano Arithmetic + Axiom of Choice) is to use an extension of Gödel's System T with bar recursion. An alternative method is to use an approach based on computational side-effects (control operators, quote/clock instructions) like in the works of Krivine or Herbelin.

By classic results of Kreisel and Schwichtenberg, for the fragment of Analysis that makes sense computationally, the Π₂-fragment, we know that bar recursion is essentially primitive recursive — leaving open the question of how to actually avoid using it. In this talk, I will present some recent work (arxiv.org/abs/1301.5089) showing how realizers of System T can be extracted directly from proofs of the Σ₂-fragment of classical Analysis. Control operators are essential, but only at the meta-theoretical level: control operators can be compiled away from System T, at any simple type.


Wednesday, November 12, 2014, 2pm
Jean-Baptiste Jeannin (CMU)
Differential Temporal Dynamic Logic for Hybrid Systems and Airplane Collision Avoidance

Differential Dynamic Logic can express important properties about Cyber-Physical Systems, by combining discrete assignments and control structures with differential equations. However it can only express properties about the end state of a system. In this talk, we first introduce the differential temporal dynamic logic dTL2, a logic to specify properties about the intermediate states reached by a hybrid system. It combines differential dynamic logic with temporal logic, and supports some linear time temporal properties of LTL. It extends differential temporal dynamic logic dTL with nested temporalities. We provide a semantics and a proof system for the logic dTL2, and show its usefulness for nontrivial temporal properties of hybrid systems. We take particular care to handle the case of alternating universal dynamic and existential temporal modalities and its dual, solving an open problem formulated in previous work.

We then show an application to airplane collision avoidance, with a verification of the next-generation Airborne Collision Avoidance System (ACAS X). ACAS X is an industrial system intended to be installed on all large aircraft to give advice to pilots and prevent mid-air collisions with other aircraft. It is currently being developed by the Federal Aviation Administration (FAA). We determine the geometric configurations under which the advice given by ACAS X is safe under a precise set of assumptions and formally verify these configurations. We conduct an initial examination of the current version of the real ACAS X system and discuss some cases where our safety theorem conflicts with the actual advisory given by that version, demonstrating how formal, hybrid approaches are helping ensure the safety of ACAS X. Our approach is general and could also be used to identify unsafe advice issued by other collision avoidance systems or confirm their safety.


Wednesday, November 12, 2014, 3:15pm
Urs Schreiber
Linear Homotopy Types and Integration

Abstract: There is an evident candidate semantics for what one would want to call "linear homotopy type theory", given by parameterized spectrum objects in infinity-toposes. In the talk I indicate some general principles and present a standard model by parameterized E-module spectra, for E a multiplicative cohomology theory. Then I show how in linear homotopy type theory one may axiomatize a kind of linear integration which, in the models, translates to pushforward in twisted generalized cohomology theory. This is based on http://arxiv.org/abs/1402.7041.


Wednesday, October 29, 2014, 2pm
Matthias Puech
A Contextual Account of Staged Computations

Programming languages that allow us to manipulate code as data pose a notorious challenge to type system designers. We propose contextual type theory with first-class environments as a foundational typing discipline for multi-stage functional programming: it subsumes and refines previous proposals while being based on a Curry-Howard correspondence with modal logic.

We will review the existing type systems, corresponding to linear-time temporal logic and the modal logic of necessity, and show how contextual types generalize them. In particular, we showcase a novel embedding of Taha and Nielsen's environment classifiers into contextual types, giving a new logical foundation to this system.

This ongoing work will hopefully shed new light on the relationship between modal and temporal logics, and has strong connections with normalization algorithms.


Wednesday, October 22, 2014
Sylvain Schmitz (ENS Cachan)
Implicational Relevance Logic is 2-EXPTIME-Complete

We show that provability in the implicational fragment of relevance logic is complete for doubly exponential time. The arguments are purely syntactic, and exploit the close ties between substructural logics (here a focusing calculus for implicational relevance logic) and counter systems (here branching vector addition systems with states).


Wednesday, October 8, 2014, 2pm
Eric Finster
Higher Dimensional Syntax

The connection between category theory, logic and functional programming by now has a long and storied tradition. Moreover, many authors have pointed out how these ideas can be deepened and refined by viewing them from the perspective of higher category theory: capturing rewriting of terms with directed higher cells, for example, or modelling invariance under substitution and intensional equality with higher categorical coherences. The recent advances in type theory brought about by Voevodsky's Univalent Foundations Program provide renewed motivation to pursue these directions of inquiry.

We argue in this talk that a major obstacle to progress in this direction is lack of a consistent and effective notation, or syntax, for higher categorical equations, and we propose that opetopes, a collection of higher dimensional polytopes first described by Baez and Dolan, provide a possible solution to this problem. Indeed, the very definition of these shapes is perhaps most easily read from the perspective of functional programming, as we exhibit here by describing an implementation in terms of traditional functional techniques (monads, applicative functors and zippers) intended to provide the syntactic core of a higher categorical proof assistant.



Past talks 2013–2014

Monday, April 14, 2014, 2pm, in room 3052
Joint session with the Algorithms and Complexity group of LIAFA
Jaap Boender (Middlesex University, School of Science & Technology)
Verification of Quantum Protocols using Coq

Quantum computing and quantum communication are exciting areas of research at the intersection of physics and computer science, which have great potential for influencing the future development of information processing systems. Quantum cryptography, in particular, is well developed. Commercial Quantum Key Distribution systems are easily available and several QKD networks have been built in various parts of the world. However, the security of the protocols relies on information-theoretic proofs, which may or may not reflect actual system behaviour, and on testing of the implementations. We present a novel framework for modelling and proving quantum protocols using the proof assistant Coq. We provide a Coq library for quantum bits (qubits), quantum gates, and quantum measurement. We implement and prove a simple quantum coin flipping protocol. As a step towards verifying practical quantum communication and security protocols, such as Quantum Key Distribution, we support multiple qubits, communication and entanglement. We further illustrate these concepts by modelling the Quantum Teleportation Protocol, which communicates the state of an unknown quantum bit using only a classical channel.


Wednesday, April 2, 2014, 2pm
Timothy Bourke (Inria and ENS)
Mechanization of a mesh network routing protocol and the proof of its loop freedom in Isabelle/HOL

This talk describes recently completed work to transfer a formal but pencil-and-paper model and analysis of a wireless mesh network protocol (AODV) into the proof assistant Isabelle/HOL. The nodes of such networks are reactive systems that cooperate to provide a global service (the sending of messages from node to node) satisfying certain correctness properties (messages are never sent in circles).

I will not talk much about mesh networks or the specifics of the AODV protocol. Rather, I will present the model and, along the way, briefly explain some of the basics of modelling and proof in Isabelle/HOL. While the process algebra models and the fine details of the original proof are quite formal, the structure of the proof is much less so. This necessitated the development of new framework elements and techniques in Isabelle. In particular, we adapted the theory of Manna and Pnueli to show invariants over individual processes and introduced machinery for assume/guarantee reasoning to lift these invariants to networks of communicating processes. While the original proof reasoned informally over traces, the mechanized proof is purely based on invariant reasoning, i.e., on reasoning over pairs of reachable states. I will explain the main features of the new proof relating them to existing theory and contrasting them with the original pencil-and-paper reasoning.

In collaboration with Robert J. van Glabbeek (NICTA and UNSW, Sydney) and Peter Höfner (NICTA and UNSW, Sydney).


Wednesday, March 19, 2014, 2pm
Cătălin Hriţcu (Inria)
QuickChick: Speeding up Formal Proofs with Property-Based Testing

Carrying out formal proofs while designing even a relatively simple system can be an exercise in frustration, with a great deal of time spent attempting to prove things about broken definitions. We believe that property-based testing (PBT) can dramatically decrease the number of failed proof attempts and reduce the overall cost of producing formally verified systems. PBT can find bugs in definitions and conjectured properties early in the design process, postponing proof attempts until one is reasonably confident that the design is correct. Moreover, PBT could be very helpful during the proving process for quickly validating potential lemmas, inductive invariants, or simply the current proof goal.

We have recently started a research program aimed at improving the state of the art in PBT and making it an invaluable part of the interactive theorem proving process. As a first step in this direction we built a Coq clone of Haskell's QuickCheck. We used this initial prototype to test noninterference for increasingly sophisticated dynamic information flow control (IFC) mechanisms for low-level code. This case study showed that random testing can quickly find a variety of bugs, and incrementally guide the design of a correct version of the IFC mechanisms. More recently, we used "polarized" mutation testing to systematically introduce all missing-taint and missing-check bugs and incrementally improve our testing strategy until all the bugs were found. We are now working on generalizing these findings and developing a reusable framework for polarized mutation testing.

Our future plans include developing a domain-specific language for writing custom test-data generators and integrating PBT more deeply into the normal Coq proving process. We would like to use PBT at any point during a proof, and freely switch between declarative and efficiently executable definitions. We think we can achieve this by integrating PBT into small-scale reflection proofs. We believe we can support efficient testing in SSReflect by exploiting a recent refinement framework by Dénès et al., which allows maintaining a correspondence and switching between proof-oriented and computation-oriented views of objects and properties.


Wednesday, February 19, 2014, 2pm
Pierre-Evariste Dagand (Inria)
Ornaments: a calculus of data-structures

With Conor McBride, I have studied inductive families in (dependent) type theory and the structural ties between them. Indeed, in a simply-typed language such as ML, a datatype is nothing but a "data-structure". The choice of a particular datatype (say, binary trees) versus another (say, lists) is mostly governed by performance considerations.

In type theory, inductive families add a new dimension to datatypes: an inductive family is the combination of a data-structure (e.g., the binarily-branching structure of a tree) and a data-logic (e.g., to be of a certain height). Thanks to inductive families, we can define the datatype of vectors (lists of a given length), or balanced binary trees, etc. However, the choice of a particular data-logic is motivated by logical considerations: it lets the programmers enforce their invariants. This leads to an uncontrolled multiplication of similar yet incompatible datatypes: the same data-structure ends up being duplicated to account for its many logic-specific uses.

In this talk, I will introduce the notion of ornament, which allows us to relate structurally isomorphic datatypes. I will then introduce a few constructions on ornaments, building up an actual "calculus of data-structures". I shall conclude with an application of ornaments to code reuse, whereby a function – such as the addition of numbers – can be transformed into another – such as the concatenation of lists.
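
The abstract's closing example, rendered in OCaml (ornaments themselves need dependent types, so this only shows the structural kinship being exploited): a list is a nat whose successor nodes are decorated with an element, and append is addition transported along that decoration.

    type nat = Z | S of nat
    type 'a seq = Nil | Cons of 'a * 'a seq  (* "seq" avoids clashing with the built-in list *)

    (* Addition on the bare data-structure... *)
    let rec add m n =
      match m with Z -> n | S m' -> S (add m' n)

    (* ...and the same recursion on the ornamented structure, carrying one extra payload. *)
    let rec append xs ys =
      match xs with Nil -> ys | Cons (x, xs') -> Cons (x, append xs' ys)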


Wednesday, February 19, 2014, 3pm
Carsten Schürmann (ITU Copenhagen)
A Type-Theoretic View on Voting

Abstract: Security, voting, communicating, and cryptographic protocols share some of the same properties: They are mathematical in nature, they are complex and difficult to implement, and society at large is depending on them. Specifying and proving protocols correct is an increasingly important socio-technical challenge. Linear logic provides a common language that can bridge the two cultures.

In my talk, I will show how we use linear logic to express and study voting schemes and what we discovered in the process.

This work is part of the DemTech research project that was sponsored by Det Strategiske Forskningsrådet grant CCR-0325808.

Bio: Carsten Schürmann is Associate Professor at the IT University of Copenhagen. He is the principal investigator of the DemTech research project. Before joining the faculty at ITU he was a member of the faculty in the computer science department at Yale University. Carsten Schürmann is a recipient of the NSF CAREER Award.


Monday, February 17, 2014, 2pm – Room 3052, 3rd floor, Sophie Germain building
Conor McBride (Strathclyde University)
Keeping Your Neighbours In Order

Working in Agda, I present a systematic way to enforce ordering invariants on data structures such as lists and trees: I construct a universe in which all types admit ordering and develop generic operations, such as merging and flattening. A further refinement of the universe allows indexed types such as ordered vectors and balanced trees. I will show the implementation of insertion for 2-3 trees (and deletion, too, if time permits).

There is nothing especially Agda-specific about this development. Laptop-bearing audience members are encouraged to follow the same path in Coq. It is interesting to observe the way precision in types allows automation in programming.


Wednesday, February 12, 2014, 2pm, room 1007
Ludovic Patey
Introduction to reverse mathematics

Everybody has encountered the following kind of statement at some point: "Use theorem A to prove theorem B" or "Theorems A and B are equivalent". What does this mean? How can we formally express a relation between theorems? How can we capture the strength of a theorem?

Reverse mathematics is a successful approach to answering these questions, using subsystems of second-order arithmetic. Considering the computational content of a theorem as its strength, it exhibits deep relations between theorems of many different fields.

During this talk, we will introduce the framework of reverse mathematics and illustrate the two main observations of the domain: most classical mathematical theorems are "computationally weak", and many theorems are computationally equivalent to each other.

[1] S. Simpson, Subsystems of second order arithmetic, 2nd ed., CUP, 2009.
[2] D. Hirschfeldt, Slicing the truth, to appear, 2013.


Wednesday, February 5, 2014, 2pm, room 1007
Nicolas Pouillard (IT University of Copenhagen, Denmark)
Counting on Type Isomorphisms (joint work with Daniel Gustafsson)

Big operators, such as Σ_{x∈A} f(x) and Π_{x∈A} f(x), are the iterated versions of the binary operators + and *. They are common in mathematics, and we propose tools to reason about them within a type theory such as Agda. Using a polymorphic encoding of binary trees, one can lift any binary operator to the corresponding big operator. Thanks to the parametricity of this encoding, one can easily lift properties of the binary operator to the corresponding big operator. In particular, big operators such as sums and products can be put in correspondence with the cardinality of Σ-types and Π-types, which enforces a correct implementation. Moreover, these correspondences enable the use of type isomorphisms as a powerful reasoning tool. For instance, using a standard isomorphism on Σ-types yields a constructive proof that adequate summation functions are invariant under permutations.

Keywords: Agda, Type Theory, Homotopy Type Theory, Formal reasoning, Combinatorics, Isomorphisms, Parametricity, Probabilistic Reasoning
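
The tree-based lifting is easy to sketch in OCaml, minus the parametricity and cardinality correspondences, which need dependent types (my sketch, not the authors' Agda code):

    type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree

    (* Lift a binary operator to the corresponding big operator over the leaves. *)
    let rec big op = function
      | Leaf x -> x
      | Node (l, r) -> op (big op l) (big op r)

    let sum  t = big ( + ) t   (* Sigma over the leaves *)
    let prod t = big ( * ) t   (* Pi over the leaves *)

Properties of the binary operator then lift to the big operator by induction on trees; parametricity delivers this uniformly in the Agda development.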


Wednesday, January 8, 2014, 2pm, room 1007
Alois Brunel (LIPN)
Monitoring algebras

In this talk, I will present the monitoring abstract machine, which is the result of a novel forcing program transformation. In this machine, a state is able to monitor and alter the execution of programs. This mechanism is similar to what happens in Miquel's KFAM (Krivine Forcing Abstract Machine). Based on this machine, I define the notion of Monitoring Algebra (MA), which internalizes the composition of linear forcing with a more usual biorthogonality-based realizability. As we will see, by considering different MAs, we obtain realizability frameworks for various programming languages (for example featuring higher-order references, recursive types, bounded-time/space execution, streams, etc.), all sharing a common soundness result. I will also show how the well-known technique of iterated forcing corresponds in our framework to a certain construction on MAs, allowing us to combine different programming features in a modular way. The ultimate goal of this work is to obtain a general, modular, algebraic framework in which we can design and combine realizability models for all kinds of programming languages.


Wednesday, December 18, 2013, 2pm, room 1007
Alexander Kreuzer (ENS Lyon)
Program extraction and higher order reverse mathematics

Reverse mathematics is a program which tries to determine which set-existence axioms are needed to prove theorems in mathematics. For instance, it was shown that the Bolzano-Weierstrass principle requires arithmetical comprehension (and is in fact equivalent to it); see e.g. [1]. Usually, these investigations are carried out in fragments of so-called second order arithmetic, which is actually a two-sorted first-order theory with a type for natural numbers and a type for sets of natural numbers. Kohlenbach introduced a finite-type extension of this system; see [2].

In this talk we will introduce reverse mathematics and this higher-order extension. We will present program extraction techniques based on Gödel's functional interpretation. Using these, we will analyze higher-order statements like the existence of a non-principal ultrafilter or the existence of the Lebesgue measure on all subsets of the unit interval.

[1] S. Simpson, Subsystems of second order arithmetic, 2nd ed., CUP, 2009.
[2] U. Kohlenbach, Higher order reverse mathematics, in Reverse mathematics 2001, Lect. Notes Log., vol. 21, pp. 281-295, 2005.
[3] A. Kreuzer, Non-principal ultrafilters, program extraction and higher order reverse mathematics, J. of Math. Logic, vol. 12 (2012), no. 1.


Wednesday, November 27, 2013, 2pm, room 1007
Thomas Braibant
Circuits in Coq: a tale of encodings

In this talk, I will use hardware verification as a pretext to discuss various encodings of programming languages and data-structures in Coq.

First, I will briefly muse over the diagrammatic approach to circuits (presented as a traced monoidal category). Then, I will argue that while this variable-free embedding is appealing, it does not scale to more involved verification problems: variables are just too important to be ignored.

This will lead us to a second embedding of circuits, which uses Parametric Higher-Order Abstract Syntax (PHOAS) to deal with variables. More precisely, I will give a primer on this encoding of binders, and briefly describe how we used it (with Adam Chlipala) to encode and verify "sizable" circuits. The latter involves a lightly optimizing hardware compiler, which relies on a BDD library.
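
For readers unfamiliar with PHOAS, the core trick in OCaml (a generic sketch, not the circuit types from the talk): object-language binders are host-language functions over an abstract type of variables.

    type 'v term =
      | Var of 'v
      | Lam of ('v -> 'v term)      (* a binder is a host-language function *)
      | App of 'v term * 'v term

    (* Keeping 'v abstract rules out exotic terms; instantiating it lets us compute.
       Example: size of a term, with variables instantiated to unit. *)
    let rec size : unit term -> int = function
      | Var () -> 1
      | Lam f -> 1 + size (f ())
      | App (t, u) -> 1 + size t + size u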

If time permits, I will use this BDD library as a shameless plug to present some recent work (with J.H. Jourdan and D. Monniaux) about how hash-consed data-structures can be encoded and verified in Coq.


Wednesday, November 20, 2013, 2pm, room 1007
Dominic Hughes (Stanford University)
Is linear logic two-thirds wrong (for classical logic)?

Linear logic decomposes classical logic into multiplicatives, additives and exponentials. Multiplicative proof nets are remarkably elegant; additive and exponential nets have struggled in comparison. Could this struggle mean that additives and exponentials are simply the wrong way to decompose classical logic?

I present a decomposition of classical first-order logic which retains multiplicatives at its core and replaces additives and exponentials with a naive notion: a function f as an idealized parallel representation of contraction and weakening, where contraction occurs when f(x)=f(y) and weakening occurs outside the image of f.

The approach clarifies the units: the positive multiplicative unit is retained and the three problematic units are rejected. As a standalone result, quantifiers for linear logic are simplified by replacing existential witnesses with a unification criterion. This yields local cut elimination and the identification of more proofs.

(Note: additives and exponentials are interesting and elegant in their own right; here I reject them only from the perspective of decomposing classical first-order logic.)


Wednesday, October 23, 2013, 2pm, room 1007
Matthias Puech
Constructing normalization by evaluation with GADTs: a tutorial

I propose a live-coding session, intended as an informal exposition of normalization by evaluation, using the OCaml type system's new feature: GADTs. I will show how it can be used to gradually turn the usual implementation of NbE "back" into an efficient normalization theorem for the simply-typed lambda-calculus, "proved in OCaml". I will then show how this method is modular: knowing only the structure of types, we can normalize languages with, e.g., control operators, thanks to CPS evaluation; orthogonally, we can abstract from the representation of values (work in progress).

Its content is ongoing work together with Olivier Danvy and Chantal Keller.
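
A condensed version of the kind of code involved (my reconstruction of the standard GADT-based NbE, not the session's actual code): the GADT indexes object-language types by the OCaml types of their semantics, and reify/reflect convert between semantic values and normal forms.

    type term = Var of string | Lam of string * term | App of term * term

    (* Object-language types, indexed by the OCaml type interpreting them;
       the base type is interpreted by (neutral) terms. *)
    type _ tp =
      | Base : term tp
      | Arr  : 'a tp * 'b tp -> ('a -> 'b) tp

    let gensym = let r = ref 0 in fun () -> incr r; Printf.sprintf "x%d" !r

    (* reify: read a semantic value back as a beta-normal, eta-long term. *)
    let rec reify : type a. a tp -> a -> term = fun t v ->
      match t with
      | Base -> v
      | Arr (a, b) ->
          let x = gensym () in
          Lam (x, reify b (v (reflect a (Var x))))

    (* reflect: embed a neutral term as a semantic value. *)
    and reflect : type a. a tp -> term -> a = fun t e ->
      match t with
      | Base -> e
      | Arr (a, b) -> fun v -> reflect b (App (e, reify a v))

    let nbe t v = reify t v
    (* nbe (Arr (Base, Base)) (fun x -> x) yields Lam ("x1", Var "x1"). *)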


Wednesday, October 9, 2013, 2pm, room 1007
Chantal Keller
A Modular Integration of SAT/SMT Solvers to Coq through Proof Witnesses

In this talk, I will present a way to enjoy the power of SAT and SMT provers in Coq without compromising soundness. This requires these provers to return not only a yes/no answer, but also a proof witness that can be independently rechecked. We present such a checker, written and fully certified in Coq. It is conceived in a modular way, in order to tame the proofs' complexity and to be extendable. It can currently check witnesses from the SAT solver zChaff and from the SMT solver veriT. Experiments highlight the efficiency of this checker. On top of it, new reflexive Coq tactics have been built that can decide a subset of Coq's logic by calling external provers and carefully checking their answers.



Past talks 2012–2013

Wednesday, June 12, 2013, 2pm, room 1007
Fernando Ferreira
Atomic Polymorphism

It has been known for six years that the restriction of Girard's polymorphic system F to atomic universal instantiations interprets the full fragment of the intuitionistic propositional calculus. We first observe that Tait's method of "convertibility" applies quite naturally to the proof of strong normalization of the restricted Girard system. We then show that each beta reduction step of the full intuitionistic propositional calculus translates into one or more beta-eta reduction steps in the restricted Girard system. As a consequence, we obtain a novel and perspicuous proof of the strong normalization property for the full intuitionistic propositional calculus. Notably, this proof bestows a crucial role on eta conversions.
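
Recall the Russell-Prawitz definitions at play; in the atomic system, universal instantiation is restricted to atoms (a standard rendering):

    \[
      \bot \;:=\; \forall X.\,X, \qquad
      A \vee B \;:=\; \forall X.\,(A \to X) \to (B \to X) \to X,
    \]

with instantiation allowed only in the form $\forall X.\,C \;\Rightarrow\; C[p/X]$ for atomic $p$; the point of the embedding is that this restriction still suffices to interpret the full intuitionistic propositional calculus.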


Wednesday, May 22, 2013, 2pm
Lionel Rieg
Classical realizability and Herbrand theorem

In 2010, Krivine presented a methodology to combine Cohen's forcing with the theory of classical realizability, and showed that the forcing condition can be seen as a reference that is not subject to backtracks. The underlying classical program transformation was then analyzed by Miquel (2011) in a fully typed setting in classical higher-order arithmetic (PAω). As a case study of this methodology, I present a method to extract a Herbrand tree from a classical realizer of an existential formula, following the idea of the proof of Herbrand's theorem. Unlike the traditional proof based on König's lemma (using a fixed enumeration of atomic formulas), our method is based on the introduction of a particular Cohen real. It is formalized as a proof in PAω, making explicit the construction of generic sets in this framework in the particular case where the set of forcing conditions is arithmetical. We then analyze the algorithmic content of this proof.


Wednesday, April 17, 2013, 2pm, room 1007
Barbara Petit (Equipe Sardes, INRIA Grenoble)
LiDeAl: Certifying complexity with Linear Dependent Types

Dependent Linear PCF (dlPCF) was introduced by Dal Lago and Gaboardi as a type system characterising the complexity of PCF programs. It is parametrised by an equational program, which is used to express complexity annotations for types. This enables a modular complexity analysis, and the main strength of the system is that it is relatively complete: any terminating PCF program is typable in dlPCF (and its complexity is thus determined), assuming that the equational program is expressive enough.

We have designed a type inference algorithm for this system: given a PCF program, it computes its type in dlPCF (and thus a complexity bound for the program) together with a set of proof obligations that ensures the correctness of the type, following the same scenario as the computation of weakest preconditions in Hoare logic. Checking formally the proof obligations generated by the type checker then amounts to a formal proof that the complexity bound is correct.

In this talk I will explain how the type system dlPCF can be turned into a tool for formal certification of the complexity of functional programs.


Wednesday, March 27, 2013, 2pm, room 1007
Eduardo Bonelli (Universidad Nacional de Quilmes)
Some reflections on Curry-Howard for Justification Logic

We discuss possible variations in formulating a Curry-Howard correspondence for Justification Logic (JL), a modal logic that has recently been proposed for explaining well-known paradoxes arising in the formalization of Epistemic Logic. In JL, assertions of knowledge and belief are accompanied by justifications: the formula [t]A states that t is a "reason" for knowing/believing A.
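
For context, the characteristic axioms of justification logic (Artemov's Logic of Proofs), which any Curry-Howard reading must account for:

    \[
    \begin{aligned}
      &[t](A \to B) \to ([s]A \to [t \cdot s]B) && \text{(application)}\\
      &[t]A \to [t+s]A, \qquad [s]A \to [t+s]A && \text{(sum)}\\
      &[t]A \to [\,!t\,][t]A && \text{(proof checker)}
    \end{aligned}
    \]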


Wednesday, March 20, 2013, 2pm, room 1007
Cyril Cohen
Formalization of algebraic numbers in Coq

In this talk I show a formalization in Coq of real and complex algebraic numbers, together with their structures of real closed field and algebraically closed field. These numbers are useful in the recently achieved formalization of the proof of the odd order theorem (aka the Feit-Thompson theorem), where they substitute for real and complex numbers. In this proof, it is mandatory to know the partial order structure on complex numbers. This leads to building complex algebraic numbers on top of real algebraic numbers. The construction of real algebraic numbers is based on a construction of Cauchy reals, in particular to define arithmetic operations. I will show the methodology we used to do epsilon-delta proofs in a comfortable way. The final step of this construction involves a quotient, which we show how to perform. Finally, the construction of complex algebraic numbers and their order structure is done by extending real algebraic numbers with the imaginary element i, and the proof of algebraic closure is a consequence of the fundamental theorem of algebra.


Wednesday, March 13, 2013, 2pm, room 1007
Matthieu Sozeau
Univalence for free

We present an internalization of the 2-groupoid interpretation of the calculus of constructions that makes it possible to realize the univalence axiom, proof irrelevance, and reasoning modulo. As an example, we show that in our setting, the type of Church integers is equal to the inductive type of natural numbers.


Wednesday, March 6, 2013, 2pm, room 1007
Keiko Nakata
Are streamless sets Noetherian?

In previous work, we studied a notion of finiteness from the viewpoint of constructive mathematics. Constructively, there are at least four positive definitions of a set being finite, two of which are of interest here: Noetherian sets and streamless sets. A set is Noetherian if the root of the tree of duplicate-free lists over the set is inductively accessible. A set is streamless if every stream over the set has a duplicate. In this talk, I will present our ongoing work aiming to answer our conjecture positively: that it is constructively unprovable that a streamless set is Noetherian.

Joint work with Marc Bezem and Thierry Coquand.


Wednesday, February 20, 2013, 2pm, room 1007
Hugo Herbelin
Some reflections on equality in type theory

Relying on the approaches of Altenkirch et al. and of Licata and Harper to the computational content of equality, I will present informal ideas towards a type theory where both strict and univalent notions of equality coexist and compute.


Wednesday, February 13, 2013, 2pm and 3:45pm, room 1007 (double session)
Philippe Malbos
Algebraic confluences

This talk is a panoramic view of homological and homotopical results in rewriting. Squier showed that deciding the word problem of a monoid by rewriting is subject to homotopical and homological finiteness conditions intrinsic to the monoid. After recalling Squier's constructions, I will present a generalization of the homological condition based on the notion of normalization strategy. Finally, I will explain why certain homotopical convergence results in word rewriting do not generalize to higher dimensions of rewriting.

Hugo Herbelin
An introduction to homotopy type theory

I will present some key ideas developed in the context of the homotopic approach to type theory, including: the type-as-space correspondence, h-levels and n-truncation, the univalence axiom and consequences of it, higher inductive types.


Wednesday, February 6, 2013, 2pm, room 1007
Marc Lasson
A tactic for logical relations in Coq

Reynolds' abstraction theorem is now a well-established result for a large class of type systems. We propose here a definition of relational parametricity and a proof of the abstraction theorem in the Calculus of Inductive Constructions (CIC), the underlying formal language of Coq, in which the codomain of parametricity relations is the impredicative sort of propositions. During this talk, we will discuss the behavior of this parametricity translation in different versions of CIC (with an impredicative Set, a predicative hierarchy of Sets, or a monomorphic Set). Among many applications, this natural encoding of parametricity inside CIC serves both theoretical purposes (proving the independence of propositions with respect to the logical system) as well as practical aspirations (proving properties of finite algebraic structures; we give an example in finite group theory). This is joint work with Chantal Keller.
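
The translation follows the usual inductive pattern; in its unary form it reads, schematically (glossing over the universe subtleties the talk addresses):

    \[
      [\![\forall x{:}A.\,B]\!](f) \;=\; \forall x{:}A.\ \forall x_R{:}[\![A]\!](x).\ [\![B]\!](f\,x),
      \qquad
      [\![s]\!](A) \;=\; A \to \mathsf{Prop} \ \ \text{for a sort } s,
    \]

so every well-typed term is, by construction, related to itself by the predicate its type denotes: this is the abstraction theorem.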


Thursday, January 17, 2013, 2pm, room 352
Gregory Malecha
Compiling Coq in Coq

I discuss the implementation of a compiler from Gallina, the dependently typed programming language of Coq, down to LLVM, developed as part of a graduate course on functional language compilation. The implementation follows the discipline of compiling with continuations and includes several optimizations, including common subexpression elimination and destructive updates for memory re-use. Our implementation demonstrates some of the difficulties in compiling functional languages to LLVM while making effective use of the stack (a work in progress). Also of interest is the use of Coq in the development process, including the Haskell-like library which supports it, with first-class abstractions based on type classes, and a proving methodology based on type classes to write generic automation. Students found the interactive nature of Coq development appealing, and the programming kernel was useful to shed light on concepts like type classes, though error messages remain cryptic to most.


Wednesday, January 9, 2013, 2pm, room 352
Maribel Fernández

Nominal Completion for Rewrite Systems with Binders

Nominal rewriting generalises first-order rewriting by providing support for the specification of binding operators, using the nominal approach. In this talk, I will present a new ordering that can be used to prove termination of nominal rewriting rules, generalising the recursive path ordering to take alpha equivalence into account. Using this ordering, we have designed a Knuth-Bendix style completion procedure (to our knowledge, the first completion procedure available for rewriting systems with binders). An implementation is available.


Wednesday, December 19, 2012, 2pm, room 352
Paul Downen
Building up delimited control with multiple prompts

We will examine an approach to deriving a reduction theory for delimited control with multiple prompts. Using continuation-passing style (CPS) translation as a guide, we will start from Parigot's lambda-mu calculus and consider a series of incremental extensions for increasingly expressive control effects. Each language inherits the semantics of its parents, giving us a systematic way to describe each effect.

We will also consider some questions regarding the nature of control delimiters: their connection with dynamic binding; the difference between "opaque" and "transparent" delimiters; and missing delimiters and their interaction with the top-level of the program.


Wednesday, November 28, 2012, 2pm, Sophie Germain building, room 352
François Bobot
Separation Predicates: a Taste of Separation Logic in First-Order Logic

This presentation introduces separation predicates, a technique to reuse some ideas from separation logic in the framework of program verification using a traditional first-order logic. The purpose is to benefit from existing specification languages, verification condition generators, and automated theorem provers. Separation predicates are automatically derived from user-defined inductive predicates. We illustrate this idea on a non-trivial case study, namely the composite pattern, which is specified in C/ACSL and verified in a fully automatic way using SMT solvers Alt-Ergo, CVC3, and Z3.


Wednesday, November 14, 2012, 1:45pm, internet room (3rd floor)
Pierre-Marie Pédrot
« Ça dépend, ça dépasse », or a dependent monad for delimited control

Delimited continuations have many advantages over ordinary continuations, one of the most notable being an operation for applying effects locally. This gain in expressiveness rests on a refinement of the types of the usual CPS translation.

Through a delimited CPS translation of a calculus with inductive types and dependent elimination, we will show that this monad itself admits an interesting generalization. Some well-known control operators take on unexpectedly enriched types in it.

Finally, guided by polarized linear logic, we will present a natural extension to infinitary types, which makes it possible to express certain depolarization behaviours peculiar to linear logic, most notably the a priori unordered commutation between tensor and double negation.


Wednesday, October 31, 2012, 2pm, green room 2
Pierre Boutillier

Ensuring the termination of fixpoints in Coq

The Coq proof assistant lets users write fixpoints the way they would write recursive functions in ML. Yet if Coq accepted a program that does not terminate, it would accept a proof of any mathematical assertion (including a statement and its contrary).

Coq cannot guess that a program terminates; this is undecidable. It therefore accepts only a small subset of the terminating programs: those definitions satisfying a syntactic validity criterion called the guard condition. During my talk, we will look, through examples, at the path separating the naive version of this condition from the version usable in practice. We will then turn to the question of its justification.


Wednesday, October 17, 2012, 2pm, orange room 1
Jaime Gaspar
Gödel functional interpretation

The Gödel functional interpretation is a celebrated tool in mathematical logic with a wide range of applications.

In this talk, we first introduce from scratch the bare bones of the Gödel functional interpretation. We then present two applications of the Gödel functional interpretation to analysis.
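
The heart of the functional interpretation is its clause for implication. Writing $A^D = \exists x\,\forall y\;A_D(x,y)$ and $B^D = \exists u\,\forall v\;B_D(u,v)$, one sets (standard formulation):

    \[
      (A \to B)^D \;=\; \exists U, Y\ \forall x, v\ \big( A_D(x,\,Y x v) \to B_D(U x,\,v) \big),
    \]

so a witness for an implication is a pair of functionals: one maps witnesses of $A$ to witnesses of $B$, the other maps counterexamples for $B$ back to counterexamples for $A$.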



Past talks 2011–2012

Wednesday, July 4, 2pm, pink room
Federico Aschieri
Eliminating Skolem functions with Interactive Realizability

We present a new syntactical proof that first-order Peano Arithmetic with Skolem axioms is conservative over Peano Arithmetic alone for arithmetical formulas. This result – which shows that the Excluded Middle principle can be used to eliminate Skolem functions – has been previously proved by other techniques, among them the epsilon substitution method and forcing. In our proof, we employ Interactive Realizability, a computational semantics for Peano Arithmetic which extends Kreisel's modified realizability to the classical case.

Note: Joint work with Margherita Zorzi.

Friday, July 6, 10:30am, pink room
Pierre Boutillier
Giving, for nearly free, a bit more power to inductives

By eliminating a term of an inductive type in Coq, you can do a lot of things at once: you refine the case at hand, but you also eliminate impossible cases and cast types of the environment at the same time. As a consequence, this operation is heavy to manipulate at the logical level. Now, if you want to give users facilities to write lighter terms, you must carefully study all the subtleties required to make the intuitive behavior of dependent pattern-matching explicit.

I propose to exhibit some of these constructions on key examples, in order to sketch in the background how Coq compiles Gallina (its user-level language) into CIC (its logical language).

Wednesday, June 20, 2pm, green room 1
Matthias Puech
Certificates for incremental type checking

A lightweight way to design a trusted type-checker is to let it return a certificate of well-typing, and check the certificate a posteriori (for instance, Agda and GHC adopt this architecture internally). Most of the time, these type-checkers are confronted sequentially with very similar pieces of input: the same program, each time slightly modified. Yet they rebuild the entire certificates, even for unchanged parts of the code. Instead of discarding certificates between runs, we can take advantage of them to improve the reactivity of type-checking, by feeding the parts of the certificates covering unchanged parts of the original program back into the new input, thus building an incremental type-checker. We present a language-independent framework making it possible to turn a type system and its certifying checking algorithm into a safe, incremental one. It consists of a model of incrementality by sharing, a generic data structure to represent reusable derivations based on Contextual LF, and a general-purpose language to write certifying programs.

Wednesday, May 11, 10:30am
Jonathan Protzenko
HaMLet: a programming language with permissions

HaMLet is a programming language in the tradition of ML. The core concept of HaMLet is the notion of permission; permissions allow one to describe in an accurate manner how objects are laid out in memory, and more specifically, they describe the shape of the heap. Permissions also enable the programmer to control ownership of objects, which turns out to be paramount in a concurrent setting.

The talk will start with an informal introduction to our language. We will then see how HaMLet relieves the programmer of verbose annotations, while allowing them to write programs that could not be type-checked before. We will also mention the various challenges we are currently facing in our implementation of the language.

Wednesday, May 2, 2pm, orange room 2
Hugo Herbelin
Classical forcing vs classical-like intuitionistic forcing

We will recall how to interpret Kripke-style "semantics" in direct style using a monotonically updatable memory, and, as an example, we will show how to prove Gödel's completeness theorem in such a way that the resulting derivation mimics the structure of the proof of validity. We will show that adding control leads to two incompatible extensions. Either we add polymorphically-typed delimited control and typically get a system in which completeness with respect to Kripke semantics for disjunction can be proven, with the resulting derivation mimicking the structure of the proof of validity; or we add bottom-typed delimited control and get a system in which Cohen-style forcing can be expressed.

Wednesday, April 25, 2pm, pink room
Conor McBride
Totality versus Turing Completeness

It's a well known fact that a total programming language can't encode its own interpreter. Some advocates and most detractors of total programming note that it comes at the cost of Turing Completeness, another well known fact. These well known facts are about as true as the fact that "Haskell can't handle I/O." I shall talk about ways in which total languages can model the risk of nontermination, and show you a method I have found useful, inspired by notions of algebraic effect.
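
One standard way for a total language to model the risk of nontermination is Capretta's delay (partiality) monad; here is an OCaml rendering of the idea (OCaml is not total, so this is only an illustration, and McBride's own construction differs, being built on a general-recursion effect):

    (* A computation that has either finished or will tell you more later. *)
    type 'a delay = Now of 'a | Later of 'a delay Lazy.t

    (* A total "interpreter": run a possibly-diverging computation with fuel. *)
    let rec run fuel d =
      match d with
      | Now v -> Some v
      | Later d' -> if fuel = 0 then None else run (fuel - 1) (Lazy.force d')

    (* General recursion packaged as a delayed value, e.g. Collatz iteration. *)
    let rec collatz n : int delay =
      if n = 1 then Now 1
      else Later (lazy (collatz (if n mod 2 = 0 then n / 2 else 3 * n + 1)))

For example, run 100 (collatz 7) evaluates to Some 1, while run 0 (collatz 7) returns None: the risk of divergence becomes visible in the type instead of being silent.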

Wednesday, April 18, 2pm, green room 1
Thorsten Altenkirch
Weak ω-Groupoids in Type Theory

Based on joint work with Ondrej Rypacek.

Inspired by Voevodsky's recent work on univalent Type Theory, we want to model Type Theory without the principle of uniqueness of identity proofs. As a first step in this direction, we study the formalisation of weak ω-groupoids in Type Theory. We develop a syntax of weak ω-groupoids using an inductive-recursive definition. A weak ω-groupoid is given by the interpretation of the syntax in a globular set, which is defined coinductively. The goal is to give an interpretation of Type Theory using ω-groupoids which eliminates the univalence axiom. This is one of the central open problems in the development of univalent Type Theory.

Wednesday, April 11, 3pm, Algorithme room (note the unusual time)
Arnaud Spiwack
μ: Linear! Dependent?

Herbelin & Curien's μ language gives a syntax to classical sequent calculus while leaving contraction and weakening implicit (more precisely dealt with by variables), like one would expect of a programming language. I will present an extension of H&C's μ for linear sequent calculus which still relies on variables for contraction and weakening. I will then proceed and show how to give it a hint of dependent types, though I must confess there are some troubles there.

Wednesday, April 4, 2pm, green room 1
Herman Geuvers
Combining control with data-types: lambda-mu-T

Joint work with Robbert Krebbers and James McKinna

Calculi with control operators have been studied as extensions of simple type theory. Real programming languages contain datatypes, so to really understand control operators, one should also include these in the calculus. As a first step in that direction, we introduce lambda-mu-T, a combination of Parigot's lambda-mu and Gödel's T, to extend a calculus with control operators with a datatype of natural numbers with a primitive recursor.

We consider the problem of confluence on raw terms, and that of strong normalization for the well-typed terms. We provide proofs of confluence (by complete developments) and strong normalization (by reducibility and a postponement argument) for our system.

Wednesday, March 21, 2pm and 3:15pm, orange room 1 (double session)
Chuck Liang
An Intuitionistic Logic for Sequential Control

We introduce the propositional logic ICL for "Intuitionistic Control Logic", which adds to intuitionistic logic elements of classical reasoning without collapsing it into classical logic. As in linear logic, ICL contains two constants for false. However, the semantics of this logic is based on those of intuitionistic logic. Kripke models are defined for this logic, which are then translated into algebraic and categorical representations. In each case the semantics fit inside intuitionistic frameworks (Heyting algebras and cartesian closed categories). We define a sequent calculus and prove cut-elimination. We then formulate a natural deduction proof system with a variation on the lambda-mu calculus that gives a direct, computational interpretation of contraction. This system satisfies the expected properties of confluence and strong normalization. It shows that ICL is fully capable of typing programming language control constructs such as call/cc while maintaining intuitionistic implication as a genuine connective. We then propose to give a more computationally meaningful interpretation of disjunction in this system.

Pierre-Louis Curien
System L syntax for sequent calculi

We recall two related syntaxes for focalised logic (linear and classical), derived from Curien-Herbelin's duality-of-computation work, which were proposed by Munch-Maccagnoni in 2009 and (for the classical case) by Curien and Munch-Maccagnoni in 2010. We explain how the latter (with explicit "shifts", i.e. change-of-polarity operators) is an "indirect style" version of the former, and we explain their relation with tensor logic and LLP.

We then discuss bilateral systems, in which the positive/negative duality is made distinct from the programme/context duality. We recover (a sequent calculus version of) Levy's Call-By-Push-Value as a fragment, and we discuss the conditions under which the shifts are or are not forced to define the monad of continuations. This last part is developed in collaboration with Marcelo Fiore.

Mercredi 14 mars, 13h, salle orange 1 (attention horaire inhabituel)
Patrick Baillot (ENS Lyon)
Higher-order Interpretations and Program Complexity

Polynomial interpretations and their generalizations like quasi-interpretations have been used in the setting of first-order functional languages to design criteria ensuring statically some complexity bounds on programs. This fits in the area of implicit computational complexity, which aims at giving machine-free characterizations of complexity classes. Here we extend this approach to the higher-order setting. For that we consider the notion of simply typed term rewriting systems, we define higher-order polynomial interpretations (HOPI) for them and give a criterion based on HOPIs to ensure that a program can be executed in polynomial time. In order to obtain a criterion which is flexible enough to validate some interesting programs using higher-order primitives, we introduce a notion of polynomial quasi-interpretations, coupled with a simple termination criterion based on linear types and path-like orders.
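
As a point of reference for the classical first-order technique that the talk generalises (a standard textbook example, not taken from the talk): one interprets each function symbol as a monotone polynomial over the positive integers so that every rewrite step strictly decreases the interpretation, which both proves termination and bounds derivation lengths.

  -- Rules: add(0,y) -> y  and  add(s(x),y) -> s(add(x,y)).
  -- Interpretations (monotone polynomials over positive integers):
  zeroI :: Integer
  zeroI = 1

  sI :: Integer -> Integer
  sI x = x + 1

  addI :: Integer -> Integer -> Integer
  addI x y = 2 * x + y + 1

  -- Each rule strictly decreases the interpretation for all x, y >= 1:
  -- addI zeroI y = y + 3 > y, and addI (sI x) y = 2x+y+3 > 2x+y+2.
  strictlyDecreasing :: Integer -> Integer -> Bool
  strictlyDecreasing x y =
    addI zeroI y > y && addI (sI x) y > sI (addI x y)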

This is joint work with Ugo Dal Lago.

Mercredi 22 février, 14h, salle rose
Brigitte Pientka (McGill University)
Programming with Binders and Indexed Data-Types

We show how to combine a general-purpose type system for an existing language with support for programming with binders and contexts, by refining the type system of ML with a restricted form of dependent types where index objects are drawn from contextual LF. This allows the user to specify formal systems within the logical framework LF and to index ML types with contextual LF objects. Our language design keeps the index language generic, requiring only decidability of equality of the index language, which makes for a modular design. To illustrate the elegance and effectiveness of our language, we give programs for closure conversion and normalization by evaluation. We have proven type safety and have mechanized our proof in Coq. This is joint work with Andrew Cave.

Vendredi 16 décembre, 14h, salle rose
Gyesik Lee
GMeta: A Generic Formal Metatheory Framework for First-Order Representations

GMeta is a generic framework for first-order representations of variable binding that provides, once and for all, many of the so-called infrastructure lemmas and definitions required in mechanizations of formal metatheory. The framework employs datatype-generic programming and modular programming techniques to provide a universe representing a family of datatypes. Using this universe, several libraries providing generic infrastructure lemmas and definitions are implemented. These libraries are then used in case studies based on the POPLmark challenge, showing that dealing with challenging binding constructs, like the ones found in System F with subtyping (F<:), is possible with GMeta. All of GMeta's generic infrastructure is implemented in Coq, ensuring the soundness of the generic infrastructure.

Mercredi 14 décembre, 14h, salle rose
Jamie Gabbay
Stone duality for first-order logic (a nominal approach)

I will present a brand new paper in which we prove a Stone duality between a nominal algebra axiomatisation of first-order logic (FOL-algebras), and a notion of topological space (forall-Stone spaces).

We are familiar with Boolean algebras as sets with conjunction and negation operations satisfying certain axioms. We are also familiar with the fact that powersets naturally carry a Boolean algebra structure, given by interpreting conjunction as set intersection and negation as set complement.

Using nominal techniques we can axiomatise substitution and first-order logic, so we can try to extend the Stone duality theorem from Boolean algebras to FOL-algebras, and to some class of nominal topological spaces.

If we can answer this question, then we obtain a nominal representation of first-order logic without Tarski-style valuations: variables populate the denotation directly, as nominal atoms, and substitution acts on them directly in that denotation. This is a very different view of variables in logical meaning from the one the reader is most likely accustomed to.

The proofs contain a wealth of interesting structure and they give a sense in which variables really can directly inhabit denotations in logic and topology. The paper is online at my papers page. See also a precursor paper on Stone duality for the NEW quantifier.

Mercredi 30 novembre, 14h, salle rose
Yves Guiraud
Polygraphs and monoid actions on categories

I will present joint work with Stéphane Gaussent and Philippe Malbos on the use of higher-dimensional rewriting methods for monoid actions on categories.

The objective is to compute, from a presentation of a monoid by generators and relations, a "homotopy basis" generating all the relations between the relations: this is exactly what a presentation lacks in order to obtain a practical definition of an action of the monoid on categories.

We show how rewriting methods (Squier's theorem, Knuth-Bendix completion), adapted to the setting of Burroni's polygraphs, make it possible to compute this homotopy basis. In particular, we thus obtain a new, algebraic and constructive proof of a result of Deligne on actions of braid groups.

Mercredi 23 novembre, 14h, salle rose
Pierre-Marie Pédrot
Double-glueing and linear logic: models, polarization, games

Double-glueing is a categorical construction formalized by Hyland and Schalk, which is pervasive in models of linear logic. In a nutshell, it consists in refining a raw model through a realizability process, heavily relying on a notion of orthogonality between proofs and counter-proofs, akin to ludics.

I'll present this construction and underline the fact that most models of LL arise as particular instances of double-glueing. Furthermore, double-glueing also provides a natural framework for understanding polarization, by relaxing the double-orthogonal closure requirement of linear logic, and a better insight into the relationship between exponentials and polarization.

Finally, I'll try to convince the audience that such a realizability construction seems to be a legitimate way of integrating dependent types (and subtyping) into linear logic.

Mercredi 19 octobre, 14h, salle verte 2
Federico Aschieri
A new way of using Friedman's translation: Interactive Realizability

We present a classical realizability interpretation based on interactive learning. A realizer is an intelligent, self-correcting program, representing a proof/construction. This construction depends on some state, which is an approximation of an oracle for the Halting problem. The realizer interacts with the environment, which may provide a counter-proof: a counterexample invalidating the current construction of the realizer. But the realizer is always able to turn such a negative outcome into positive information, namely some new piece of knowledge learned about an oracle for the Halting problem.

We then characterize this "interactive realizability" in terms of translations of classical into intuitionistic Arithmetic. It is known, for example, that Krivine's classical realizability is equivalent to the Krivine-Streicher-Reus negative translation composed with Friedman's translation. Interactive realizability, on the other hand, is equivalent to just a special, restricted Friedman translation. That is, one can see interactive realizability as a new use of Friedman's translation, one that validates classical principles without passing through negative translation, and yet is suitable for program extraction from classical proofs.

Mercredi 12 octobre, 14h, salle orange 1
Tim Griffin (Cambridge, on sabbatical at PPS)
A language of combinators for algebraic structures implemented in Coq

I'll describe ongoing work on a domain-specific language for implementing various algebraic structures (semirings, ordered semigroups, etc). Expressions in the language are made up of constants (representing simple algebras such as min-plus) and combinators that construct new structures from components (such as direct and lexicographic products).

The key feature of the language is that it has been designed so that a fixed class of important properties (such as distributivity, idempotence, commutativity, and so on) are automatically proved or refuted by the "type system" of the language. This allows a user to specify complex algebraic structures and obtain these proofs (or refutations) automatically. It is hoped that the language and its implementation will facilitate the rapid exploration of the algebraic design space.
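
A toy sketch of the idea (all names hypothetical, with plain booleans standing in for the proofs and refutations produced by the real system): each constant carries its decided properties, and each combinator computes the properties of the result from those of its components.

  -- A property record standing in for proved/refuted facts.
  data Props = Props { commutative :: Bool, idempotent :: Bool }

  -- An algebra with one carrier and two operations.
  data Alg a = Alg { addOp :: a -> a -> a, mulOp :: a -> a -> a, props :: Props }

  -- A constant: the min-plus (tropical) semiring on Int; the Props
  -- here describe the additive operation, min.
  minPlus :: Alg Int
  minPlus = Alg min (+) (Props True True)

  -- A combinator: the direct product, whose properties are computed
  -- from the properties of the components.
  direct :: Alg a -> Alg b -> Alg (a, b)
  direct x y = Alg
    (\(a1, b1) (a2, b2) -> (addOp x a1 a2, addOp y b1 b2))
    (\(a1, b1) (a2, b2) -> (mulOp x a1 a2, mulOp y b1 b2))
    (Props (commutative (props x) && commutative (props y))
           (idempotent (props x) && idempotent (props y)))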

We have implemented a prototype using the Coq theorem prover. All of the essential theorem proving (for the typing rules) is performed at "language design time." Users writing algebraic expressions at "specification time" do not run Coq directly. Rather, they run code that has been extracted automatically from the (constructive) proofs of our library of Coq theorems.

Much of the current implementation was done by my former PhD student, Vilius Naudziunas (now at Google), but I plan to continue this work during my sabbatical year at PPS.

Mercredi 14 septembre, 14h
Paul Downen
Building Up Multiple Control: From pure lambda-calculus to shift-upto

From type-directed partial evaluation to call-by-need application, multiple-prompt control has many uses. However, there is still a need for a foundational account of control with multiple prompts.

In this talk, we will begin with the pure, call-by-value lambda-calculus and systematically derive languages with increased control. We will cover delimited control, where only a portion of the context is captured, and dynamic binding, where variables refer to the most recent binding rather than the nearest one. We will also review continuation- and environment-passing transforms and their use as a definition of semantics.
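
As background, the single-prompt case can already be sketched with an ordinary continuation monad; the following Haskell definitions of shift and reset (a standard construction, not the talk's development) show the pattern of capturing only the portion of the context up to a delimiter.

  -- A bare-bones continuation monad.
  newtype Cont r a = Cont { runCont :: (a -> r) -> r }

  instance Functor (Cont r) where
    fmap f (Cont m) = Cont (\k -> m (k . f))

  instance Applicative (Cont r) where
    pure a = Cont (\k -> k a)
    mf <*> ma = mf >>= \f -> fmap f ma

  instance Monad (Cont r) where
    Cont m >>= f = Cont (\k -> m (\a -> runCont (f a) k))

  -- reset delimits the context that shift may capture.
  reset :: Cont a a -> a
  reset (Cont m) = m id

  -- shift captures the continuation up to the nearest enclosing reset.
  shift :: ((a -> r) -> Cont r r) -> Cont r a
  shift f = Cont (\k -> runCont (f k) id)

  -- The captured context is (1 +); applying it twice to 10 yields 12.
  example :: Int
  example = reset (do x <- shift (\k -> pure (k (k 10)))
                      pure (1 + x))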




Exposés passés 2010-2011

Mardi 28 juin, 15h (attention, jour et heure inhabituels)
Andreas Abel
Higher-Order Dynamic Pattern Unification for Dependent Types and Records

Higher-order unification is a central ingredient to type reconstruction in dependently typed languages. While undecidable in general, there exist decidable fragments such as Miller's pattern unification. In this talk, I recapitulate constraint-based unification modulo beta-eta and discuss several extensions as implemented in the dependently typed language Agda. Finally, I'll explain how to scale unification to strong Sigma-types, i.e., record types with eta laws.

Du mercredi 15 au vendredi 17 juin : workshop structural.

Mercredi 18 mai, 14h salle rose
Marco Gaboardi
Linear Dependent Types for Certified Resource Consumption

I present the ideas underlying the design of an interactive programming framework for ensuring the reliability and correctness of programs with respect to their resource consumption. The key ingredient of this framework is dlPCF, a type system mixing ideas from linear logic and dependent types; dlPCF is joint work with Ugo Dal Lago and will be presented at LICS 2011. In this talk I introduce dlPCF and show that it is sound and relatively complete. Completeness holds in a strong sense: dlPCF is able to precisely capture not only the functional behaviour of programs (i.e. how the output relates to the input) but also some of their intensional properties, namely the complexity of their evaluation. Additionally, dlPCF is parametrized by the underlying language of index terms, which can be tuned so as to sacrifice completeness for tractability. Finally, I will show a possible use of dlPCF combined with Krivine's realizability in the setting of cost-preserving certified compilation.

Mercredi 11 mai, 14h, salle rose

Hugo Herbelin
An introduction to control calculi for classical logic

A posteriori, we can observe that more than 50 years were needed to extract from Gentzen's cut-elimination procedure for the sequent calculus LK an intelligible constructive meaning for classical logic. I will try to survey the main steps in the history of constructive classical logic, a domain that sped up significantly in the last 25 years, after Felleisen et al. designed the lambda-C calculus and Griffin revealed that it provides a direct-style extension of the proofs-as-programs correspondence from intuitionistic to classical logic. I will also try to give some basic criteria for comparing the different syntactic frameworks supporting classical logic.

Mercredi 4 mai, 14h, salle rose
Arnaud Spiwack
Adjunctions for the type-theoretist

Here is a common pattern at Computing Science conferences: the speaker drops, somewhere near his second slide, the word "adjunction". And then, suddenly, the room feels like 1940s London: almost no one is around anymore. If you are usually among that part of the audience, returning to sleep or to the internet, this talk is meant for you. I will try to show how the notions of category and adjunction can arise from a logical perspective, and I aim at giving the type theorist tools to develop an intuition about adjunctions. This supposes few prerequisites: familiarity with first-order logic, a touch of Curry-Howard, and vague memories of algebra from your university classes.

Mardi 12 avril, 14h, salle rose (attention jour inhabituel)
Keiko Nakata
On trees that are finitely red (Proof Pearl with FAN and Bar Induction & finiteness)

Finiteness is a concept that seems as intuitive as it is fundamental in all of mathematics. At the same time finiteness is notoriously difficult to capture axiomatically.

In this talk, I look at several definitions of "finitely red" for binary red-and-black trees with infinite paths. Starting from all-black trees, we gradually move to more reddish trees. Each definition comes with a tree-based view and a path-based view, which are not necessarily equivalent. Deriving tree-based views from path-based views is an instance of FAN and Bar Induction. Our expedition arrives at a diagram of finite redness which reads differently in constructive and classical settings: one edge uses LPO, and therefore breaks constructively; one edge uses weak continuity, and therefore breaks classically.

In the latter part of the talk, I examine a definition of binary sequences that are almost always zero. The definition came out of the expedition and is new to us.

Joint work with Marc Bezem and Tarmo Uustalu.

Mercredi 23 février, salle rose à 14h

Georges Gonthier

Type Design Patterns for Computer Mathematics

Proof assistants such as the Coq system have traditionally been a laboratory for exotic type features such as dependent and computable types, and first-class type classes. Although these features have been introduced independently, it turns out they can be combined in novel and nontrivial ways to solve some of the more challenging problems posed by the formalization of advanced mathematics. Many of these patterns could also be useful for general programming.

Journées thématiques sur les modèles du calcul des constructions

Lundi 14 février, salle rose
11h: Alexandre Miquel, Classical realisability for the Calculus of Constructions (slides)
13h30: Andreas Abel, Normalisation-by-Evaluation for the Calculus of Constructions (slides)
15h: Bruno Barras, Formalizing a set-theoretical model of CIC in Coq/IZF (part I) (slides part I and II)

Mardi 15 février, salle rose
10h: Marc Lasson, Realizability and Parametricity in Pure Type systems (slides)
11h: Hugo Herbelin, What needs to be changed in CIC to make it compliant with Voevodsky's univalent model
11h30: Jeff Sarnat, Logical relations for sequent calculus with guarded (co)recursion
14h: Bruno Barras, Formalizing a set-theoretical model of CIC in Coq/IZF (part II)


Mercredi 26 janvier, salle orange 2
Guillaume Munch-Maccagnoni
From CPS to polarisation: a proof-theoretic decomposition of (delimited) CPS translations

The understanding of continuation-passing style (CPS) translations, a historical source of denotational semantics for programming languages, benefits from notions brought by linear logic, such as focalisation, polarities and the involutive negation. Here we aim to show how linear logic helps as well when continuations are delimited, i.e. when they return and can be composed, in the sense of Danvy and Filinski.

First we provide a polarised calculus with delimited control (first-class delimited continuations) which is, at the level of computation, a variant of Girard's polarised classical logic LC. It contains variants of the delimited control calculi which arose as answers to the question "what order of evaluation can we consider with delimited control?"; our polarised calculus is thus one answer which is unifying to some degree.

Subsequently we decompose the calculus through polarised linear logic. The only difference with non-delimited continuations is the use of specific exponentials, which account for the specific semantics of the target of delimited CPS translations, as well as for the annotations of type-and-effect systems.

As a by-product, we obtain an explanation of CPS translations through a factoring, each step of which accounts for a distinct phenomenon of CPS translations. Although the factoring also holds for non-delimited CPS translations, it had not appeared in its entirety before.

Mercredi 19 janvier, salle bleue 2
Bruno Woltzenlogel Paleo
Proof Compression: CIRes and Resolution Hypergraphs

The topic of proof compression involves challenging and under-investigated proof-theoretical problems as well as practical applications such as the efficient integration of theorem provers (Sat/SMT-solvers, automated provers, proof assistants...) via proof exchange.

In this talk, I will briefly show that defining the problem of proof compression in a precise, general and abstract way is not as straightforward as it might seem at first. Then I will proceed to describe two techniques that I have been developing recently.

The first technique, CIRes, consists of compressing sequent calculus proofs by introducing (atomic) cuts. CIRes works by extracting an unsatisfiable clause set from a cut-free proof P. By seeing resolution inferences as atomic cuts, any refutation of this clause set can then serve as a skeleton for a sequent calculus proof with atomic cuts. I will show that, in some cases, CIRes is capable of achieving an exponential compression in the size and length of proofs.

The second technique targets propositional resolution proofs, such as those output by Sat- and SMT-solvers. In order to more easily spot eliminable redundancies, I have developed a graphical way of looking at a resolution proof: its resolution hypergraph. Redundancies that appear far apart in the proof tend to appear close together in its hypergraph. Moreover, since resolution hypergraphs are invariant w.r.t. permutations of resolution inferences in their corresponding proofs, having the same hypergraph might be a good criterion for judging whether two resolution proofs are "essentially the same".

Mercredi 12 janvier, salle bleue 2
Olivier Danvy
A Synthetic Operational Account of Lazy Evaluation

Seen in Item 101B of HAKMEM (AI Memo, 1972), about continued fractions:

We present the first operational account of lazy evaluation that connects theory and practice.

Practice: the store-based implementation technique of using updateable thunks to account both for demand-driven computation and for memoization of intermediate results, initiated by Landin and Wadsworth and prevalent today for implementing lazy programming languages such as Haskell.

Theory: the storeless operational semantics providing completely syntactic support for demand-driven computation and memoization of intermediate results, initiated by Ariola, Felleisen, Maraist, Odersky and Wadler and prevalent today for equational reasoning about lazy programs, on a par with Barendregt et al.'s term graphs.

Our operational account takes the form of two new calculi for lazy evaluation of lambda-terms, and our synthesis takes the form of three bisimulations. The first calculus is a context-sensitive variant of Ariola et al.'s call-by-need lambda-calculus and makes "neededness" syntactically explicit. The second calculus uses updateable thunks and an algebraic store, where the storage structure induced by lazy evaluation is made explicit. Through the refocusing program-transformation technique, the first calculus gives rise to a storeless abstract machine and the second to a traditional store-based abstract machine implementing lazy evaluation. The machines are intensionally compatible with extensional reasoning about lazy programs, and they are lock-step equivalent. Our result reveals a genuine computational unity of theory and practice.
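
The practice side is easy to make concrete: here is a sketch of updateable thunks in Haskell itself, using IORef for the store (illustrative only, not the calculi of the paper).

  import Data.IORef

  -- An updateable thunk: a delayed computation, overwritten by its
  -- value the first time it is demanded (memoization).
  data Thunk a = Delayed (IO a) | Forced a

  delay :: IO a -> IO (IORef (Thunk a))
  delay m = newIORef (Delayed m)

  force :: IORef (Thunk a) -> IO a
  force r = do
    t <- readIORef r
    case t of
      Forced v  -> return v                    -- already computed: reuse
      Delayed m -> do v <- m                   -- demand-driven computation
                      writeIORef r (Forced v)  -- update the store
                      return v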



Mercredi 5 janvier, salle rose
Matthias Puech
A logical framework for incremental type-checking

In this talk, I will present ongoing work on a variant of the LF metalanguage suitable as the core of an incremental type-checker. Such a type-checker would eventually make it possible to provide, in turn, different "versions" of a typing derivation, of which only the differing sub-derivations need to be rechecked. This requires finely tracking the dependencies between objects, usually hidden in LF's "flat" term syntax. The language features a rich notion of first-class reduction and typing environments, and is stripped of all its positional aspects. I will discuss some practical applications of such a type-checking discipline, as well as the currently open questions about its metatheory.

Mercredi 1er décembre, salle verte 2
Martin Hofmann
Relational semantics of effect-based program transformations

This is a status report on my ongoing collaboration with Nick Benton and Andrew Kennedy from Microsoft Research. Basically, our idea is to give a rigorous justification of effect-based program transformations such as

e1; e2 = e2; e1, provided the reads and writes of e1 and e2 do not overlap

lambda x. let y = e1 in e2 = let y = e1 in lambda x. e2, provided e1 is pure

and so on.
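
For instance, here is a toy Haskell instance of the first equation (names hypothetical): e1 writes only r1 and e2 writes only r2, so the two sequencings are observationally equal.

  import Data.IORef

  -- e1 touches only r1; e2 touches only r2.
  e1, e2 :: IORef Int -> IO ()
  e1 r1 = modifyIORef r1 (+ 1)
  e2 r2 = modifyIORef r2 (* 2)

  -- "e1; e2" and "e2; e1" cannot be distinguished by any client.
  prog1, prog2 :: IORef Int -> IORef Int -> IO ()
  prog1 r1 r2 = e1 r1 >> e2 r2
  prog2 r1 r2 = e2 r2 >> e1 r1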

While these equivalences are reasonably obvious for straight-line programs with a global store, they become more intriguing when the participating program fragments contain free procedure variables that are merely specified by some contract such as "does not read" or "is pure", for then we must ascribe a precise semantic meaning to such a contract. Furthermore, effect descriptions like "is pure" or "reads and writes do not overlap" might hold only cum grano salis, in the sense that a program fragment deemed "pure" might very well modify parts of the store, but only in some way that "doesn't matter": it might, in an extreme case, modify some internal data structures of the operating system, or update some cached field for the sake of efficiency.

Our semantical justification should allow one in principle to make such arguments completely rigorous.

On the technical level, we will encounter admissible logical relations, regions, invariants, and also a still-open problem in basic domain theory.

Vendredi 26 novembre, salle orange 1, à 9h30
Randy Pollack
A Canonical Locally Named Representation of Binding
Joint work with Masahiko Sato

Binding is fundamental in mathematics and computer science. We must be able to formally represent and reason about binding to carry out a program of mechanising metatheory in computer science. One broad approach uses distinct syntactic classes of "parameters" (used for free or globally bound variables) vs "local variables" (used for bound variables). This general idea, which we call local representation, goes back (informally) to Frege.

In 1992 McKinna and Pollack formalised a local representation using distinct classes of names for parameters and variables. This representation is not canonical (terms have more than one representation) but much metatheory of Pure Type Systems was formalised without the need to reason about (or even define) alpha-equivalence.

In the present work we give a refinement of the McKinna-Pollack representation that is canonical. The central idea that makes this possible is due to Sato. The work is formalised in Nominal Isabelle.

References: Journal of Symbolic Computation 45 (2010), pp. 598-616; Journal of Automated Reasoning (to appear).

Mercredi 24 novembre, salle rose
Olivier Danvy
CPS for ever

What are continuations for? What are they made of? How are they represented? And how are they used? The goal of this working talk is to answer these classic questions, with the key results of the field, simple examples, and motivating scenarios, first for undelimited continuations and then for delimited ones.

Mercredi 3 novembre
Stefan Hetzl
Some remarks on the (non-)confluence of classical logic

A Curry-Howard correspondence for an intuitionistic natural deduction system can often be obtained in a canonical way as a confluent and terminating term calculus. In sequent calculus formulations of classical logic, the general set of reduction rules (i.e. without fixing a particular reduction strategy) is not confluent. Fixing a suitable strategy, one obtains a confluent and strongly normalising reduction corresponding, for example, to call-by-name or call-by-value. The general reduction without a strategy, however, is hardly investigated.

This talk will be about the question of which normal forms can be obtained from the general reduction. The work is carried out in first-order logic, using Herbrand disjunctions as a convenient mechanism for comparing cut-free proofs. A first result shows that the number of different normal forms is extremely large: it can grow as fast as the size of the normal forms (non-elementarily). A second result shows that, despite this, all of these normal forms share certain structural properties, which can be characterised by a regular tree grammar. Finally, I will speak about partial results and work in progress towards stronger results in the spirit of the second one.

Jeudi 21 octobre, 14h, salle verte
Alexandre Miquel
A computational analysis of the forcing proof transformation

Forcing is a technique invented by Cohen to prove the consistency and/or independence of certain axioms with respect to set theory. From the point of view of proof theory, forcing rests on a translation of formulas and proofs parameterised by an arbitrary ordered set: the set of "forcing conditions". Recently, Krivine proposed a method for combining forcing with the theory of classical realizability, and showed that the forcing translation of proofs corresponds (at the level of Curry-style proof terms) to a surprisingly understandable program transformation.

Following Krivine's work, I will show how to reformulate forcing in higher-order arithmetic, working in a suitable extension of (classical) System F-omega with Curry-style proof terms. I will present the translation of formulas and the underlying proof-to-program transformation. Finally, I will analyse the computational behaviour of the translated programs, and derive from this analysis an extension of the Krivine machine with explicit environments - the KFAM - designed to process forcing proofs directly, without going through the program transformation. I will conclude with the connections this study reveals between the forcing method and certain principles underlying operating systems, and with perspectives in logic.

English abstract

Mercredi 22 septembre, 14h, salle rose
Andrej Bauer
Programming with algebraic effects
(contents of the talk)

Eugenio Moggi's observation that computational effects are monads has been very influential and is known to every Haskell programmer. Gordon Plotkin and John Power promoted the idea that computational effects are not "just" monads, but also algebraic theories that refine monads with operations and equations. Gordon Plotkin and Matija Pretnar recently explained the nature of handlers for computational effects in terms of algebra homomorphisms.

Such a well-rounded theoretical development deserves to be taken to practice. Thus, in an effort to make algebraic theories and algebra homomorphisms known to every programmer (of a certain kind), we have developed a simple programming language Eff based on the idea that computational effects are described by operations and handlers. Eff allows the programmer to define new computational effects and handle existing ones with great flexibility. Importantly, in contrast to Haskell monads, effects in Eff can be mixed seamlessly. It turns out that in Eff we also get delimited continuations for free, which is remarkable since continuations are the leading example of a computational effect that cannot be described by an algebraic theory.
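
In Haskell terms, the operations-and-handlers idea can be approximated with a free monad (a rough sketch only; Eff's actual effect system and typing are richer):

  -- Computations over a signature f of operations.
  data Free f a = Return a | Op (f (Free f a))

  instance Functor f => Functor (Free f) where
    fmap g (Return a) = Return (g a)
    fmap g (Op op)    = Op (fmap (fmap g) op)

  instance Functor f => Applicative (Free f) where
    pure = Return
    mf <*> ma = mf >>= \g -> fmap g ma

  instance Functor f => Monad (Free f) where
    Return a >>= k = k a
    Op op    >>= k = Op (fmap (>>= k) op)

  -- One algebraic operation: binary nondeterministic choice.
  data Choice k = Choose k k
  instance Functor Choice where
    fmap g (Choose l r) = Choose (g l) (g r)

  choose :: Free Choice Bool
  choose = Op (Choose (Return True) (Return False))

  -- A handler folds Op nodes with an algebra: here, collect all outcomes.
  allResults :: Free Choice a -> [a]
  allResults (Return a)        = [a]
  allResults (Op (Choose l r)) = allResults l ++ allResults r

  -- allResults (do x <- choose; y <- choose; pure (x, y))
  --   == [(True,True),(True,False),(False,True),(False,False)]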

This is joint work with Matija Pretnar from University of Ljubljana.

Jeudi 9 septembre, salle 6A92, 175 rue du Chevaleret
Carsten Schürmann
Tutorial on Twelf
10h-11h15: presentation of Twelf
11h30-12h45: introductory hands-on session (bring your laptop)
14h-17h: hands-on session (break included)

Mercredi 8 septembre, 14h, salle orange
Carsten Schürmann
Linear Contextual Modal Type Theory

When one develops, implements, and studies type theories based on substructural logics, for example in the context of theorem proving, logic programming, and formal reasoning, one is immediately confronted with the question of how to deal with logic variables.

In this talk, I define a linear contextual modal type theory that gives a mathematical account of the nature of logic variables. Our type theory is a conservative extension of the intuitionistic contextual modal type theory proposed by Nanevski, Pfenning, and Pientka. It is sound, a result that was mechanically verified in the Twelf proof assistant, and it is extremely useful: it is the key to understanding linear explicit substitution calculi and linear pattern unification.


Exposés de l'année universitaire 2009-2010

Mercredi 14 juin, 14h, salle rose
Thomas Seiller
Interaction graphs

I will present a localised semantics of the multiplicative fragment of linear logic (MLL), in which proofs are represented by graphs. This semantics, inspired by J.-Y. Girard's most recent work, turns out to be a combinatorial version of the geometry of interaction in the hyperfinite factor. In particular, it makes it possible to relate these recent developments to proof nets, game semantics and ludics.

Mercredi 19 mai, 14h, salle orange
Alois Brunel
Church => Scott = PTIME: an application of resource sensitive realizability

We introduce DIAL_lin, a dual variant of intuitionistic linear logic with second-order quantifiers and type fixpoints, both restricted to purely linear formulas. Recasting an older result of Leivant and Marion, we give a characterization of the complexity class PTIME as the functions representable by a lambda-term of type 'Church' => 'Scott', where 'Church' is the type of usual Church binary words and 'Scott' is the type of Scott words (a purely linear representation of words).

We will focus on the technique used to prove the soundness part of this result. It is a variant of Dal Lago and Hofmann's realizability framework, which both gives meaning to the logic and allows us to bound the runtime of any typable lambda-term. We will discuss possible extensions and simplifications of this construction.
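
For concreteness, here are the two representations in Haskell (a sketch; the talk's setting is the logic DIAL_lin, not Haskell): Church words carry their own iterator, while Scott words support only a single case analysis, so any recursion over them must come from elsewhere.

  {-# LANGUAGE RankNTypes #-}

  -- Church binary words: iteration comes for free.
  newtype Church = Church (forall a. (a -> a) -> (a -> a) -> a -> a)

  -- Scott binary words: only one-step case analysis (cons0, cons1, nil).
  newtype Scott = Scott (forall a. (Scott -> a) -> (Scott -> a) -> a -> a)

  nilS :: Scott
  nilS = Scott (\_ _ e -> e)

  consS :: Bool -> Scott -> Scott
  consS b t = Scott (\f0 f1 _ -> (if b then f1 else f0) t)

  -- A Church word rebuilds itself as a Scott word by iteration;
  -- the PTIME characterization concerns lambda-terms of this type.
  toScott :: Church -> Scott
  toScott (Church w) = w (consS False) (consS True) nilS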

Mercredi 21 avril 2010, 14h, salle rose
Jeffrey Sarnat
Syntactically Finitary Account of Logical Relations (or: Do You Really Trust Your Proofs?)

Logical relations are a class of mathematical structures that are widely used to prove a variety of theorems about both programming languages and logics. Examples include normalization/cut-elimination (e.g. Girard's reducibility candidates, Luo's saturated sets), parametricity and contextual equivalence. Proofs by logical relations are popular both because they scale well to new theories, and because they provide a somewhat uniform method for proving what might otherwise be incredibly difficult theorems. In particular, they are most popular for proving foundational properties that are closely related to the notion of logical consistency, and in some cases (e.g. the normalization for System F) some variation on a proof by logical relations is nearly the only game in town; we will see why this is not entirely a coincidence.

Gödel's second incompleteness theorem states, more or less, that any non-trivial logic is incapable of proving its own consistency; in other words, in order to prove some consistency result R, you must, at some level, make proof-theoretic assumptions R' which are at least as strong as assuming R outright. This dealt a harsh blow to Hilbert's program, whose aim was to demonstrate the consistency of all mathematical reasoning via minimalistic, "finitary" reasoning, an aim which turned out to be impossible. However, if we view consistency proofs as finitary reductions between proof-theoretic assumptions (analogous to how Karp reductions are polynomial-time reductions from one decision problem to another), then such proofs become interesting for both philosophical and pragmatic reasons. Here, we give such an account of proofs by logical relations and, time permitting, show how they can be tied into the branch of proof theory known as "ordinal analysis".

Mercredi 14 avril 2010, 14h, salle rose
Jeremy Avigad (professor of Philosophy and Mathematical Sciences, Carnegie Mellon University; currently visiting the INRIA - MSR Joint Research Centre in Orsay)
Forcing and Hilbert's program

Although much of Hilbert's rhetoric in the 1920's was focused on the problem of establishing the consistency of infinitary classical methods, the program can be more broadly viewed as an attempt to reconcile modern methods with more explicitly computational approaches that dominated mathematics through the nineteenth century.

After Gödel's discovery of the incompleteness theorems, proof theory turned to the general reductive program of justifying classical theories relative to constructive ones. Forcing methods have proved to be a remarkably effective tool in this regard, providing syntactic ways of adding "ideal" objects in a conservative way. In this talk, I will discuss a number of examples, and explain how forcing methods can be used to provide constructive interpretations of weak König's lemma (a compactness principle), Ramsey's theorem for pairs, Skolem functions, model-theoretic arguments, and more.

Mercredi 7 avril 2010
Après-midi thématique sur les systèmes de types purs

de 14h à 15h15
Denis Cousineau
Sound and complete candidates for λΠ modulo

Proof normalization is a fundamental property that ensures witness and disjunction properties for constructive proofs, completeness of certain methods of automated deduction, etc. In 1971, following the work of Tait, Girard developed a method, called reducibility candidates, for building proofs of strong normalization; by the universality of the method, it gives a sound criterion for proof normalization in various logical frameworks. But this criterion is not known to be complete: there could exist normalizing theories for which one cannot build such a set of reducibility candidates.

In this talk, we present how to refine this notion of reducibility candidates in order to provide a sound and complete criterion for proof normalization. This criterion is defined for the λΠ-calculus modulo, a logical framework with dependent types in which theories such as functional Pure Type Systems can be expressed via a set of rewrite rules.

de 16h à 17h15
Vincent Siles
A unified view of PTS's conversion

Pure Type Systems (or PTS) are a generic framework which allows one to deal with several systems (Simply Typed Lambda Calculus, System F, the Calculus of Constructions, ...) all at once. Those systems mostly rely on an untyped notion of equality called conversion, which is based on program evaluation. To prove some theoretical properties of those systems, like their consistency in the presence of external axioms, we may need to type the process of evaluation, leading to the notion of PTS with "judgmental equality". However, the theory behind this typed equality is strangely difficult to establish: the proofs of Confluence, Subject Reduction and Injectivity of Products are all linked together in a circular dependency.

A few years ago, Robin Adams managed to prove that for the particular subclass of functional PTS, both visions of conversion are equivalent. His proof uses an intermediate system which mixes ideas from confluence and typing, called Typed Parallel One Step Reduction (TPOSR). But the question whether this equivalence holds for all PTS remains open.

After a review of the known results about PTS, I'll present a proof that the equivalence holds for an alternative class of PTS called "semi-full", based on Adams's TPOSR and Jutting's study of types in general PTS. Then I'll give some hints on proving such an equivalence for systems involving subtyping.

Mercredi 24 mars 2010, 14h, salle rose
Stéphane Glondu
Certifying Coq's extraction: the internal extraction approach

Extraction in Coq consists of removing those parts of Coq terms that are irrelevant for computation and turning the remaining parts into programs in a practical programming language such as (purely functional) OCaml. Its principle and several aspects of its current implementation have been described and studied by Pierre Letouzey in his PhD thesis. There is still a gap between the paper proofs and the implementation, though.

In this talk, I will recall the principle of Coq's extraction, present a definition of correctness based on operational semantics, and show how it can be stated and proved inside Coq. I will present the "internal extraction" approach, where the extraction function itself is not proved correct but instead generates correctness proofs. In this approach, the target language is formalized in Coq, but the source language need not be, and its semantics is used implicitly.

Mercredi 3 mars 2010, 14h, salle rose
Danko Ilik
Constructive Completeness Theorems and Delimited Control

Motivated by facilitating reasoning about meta-theory inside the Coq proof assistant, we revisit constructive proofs of completeness theorems.

In the first part of the talk, we discuss Krivine's constructive proof of completeness of classical logic and its computational content. Then we present a class of Kripke-style models for which completeness can be shown via a normalisation-by-evaluation proof, and we compare its computational behaviour to that of Krivine's proof.

In the second part, we revisit constructive completeness of intuitionistic logic (with disjunction and the existential) w.r.t. Kripke models. We discuss Danvy's OCaml program for normalisation-by-evaluation of the simply-typed lambda calculus with sums, and the delimited control operators used therein. From his program, we extract a notion of model (of the same shape as the notion presented in the first part of the talk) that is sound and complete for full intuitionistic predicate logic. Finally, we present Herbelin's cube and discuss some potential gains of adding delimited control operators to an intuitionistic or classical proof system.

Mercredi 24 février 2010, 14h, salle rose
Arnaud Spiwack
Mathematics without Unique Choice

I will present in this talk the principles of choice and of unique choice, and how they usually figure in constructive mathematics. From this starting point, I will describe why unique choice, which is traditionally valid, is a hindrance to the project of writing computationally efficient mathematics. Fortunately, the principle of unique choice is not provable in Coq, due to the (weak) irrelevance of the sort of propositions. The separation of the sort Prop from the rest of the logic was motivated by program extraction, which is, of course, related to the problem of efficient mathematics. In a sense, we will seek to give some mathematical substance to the pragmatics behind Prop. I will discuss the mathematical consequences of dropping unique choice (a lot of things break, some seriously; that's the cost of efficiency).

Mercredi 10 février 2010
Après-midi thématique sur la sémantique des langages de bas niveau

de 14h à 15h30


Derek Dreyer (Max Planck Institute for Software Systems, MPI-SWS)
The Art of the State in Logical Relations
(joint work with Amal Ahmed, Andreas Rossberg, Georg Neis, and Lars Birkedal)

Logical relations are a powerful method for reasoning about observational equivalence of higher-order programs. Although they were originally developed for pure languages like System F, they have in the last decade been generalized successfully to handle languages with semantically tricky features, such as general recursive types and higher-order mutable state.

In this talk, I will present an overview of some of our recent work on developing "step-indexed" Kripke logical relations for ML-like languages. One of the central ideas of our work is that, when reasoning about local state, establishing *invariants* on the local state is not enough; rather, one must be able to establish properties about local state that *change* (in a controlled way) over time. Thus, the possible worlds that we use to index our Kripke relations are essentially state transition systems (STS's), with each state corresponding potentially to a different relation on heaps. In the first part of the talk, I will motivate and sketch the details of our original logical relations model (POPL'09), using a representative example of a "generative ADT" (i.e. an ADT that uses local state to dynamically generate fresh inhabitants of the abstract type).

In the second part of the talk, I will present some brand new work in which, inspired by game semantics, we show how to generalize our POPL'09 model in several orthogonal directions. First, we can account for the concept of "well-bracketed" state change (familiar from game semantics) by enhancing our STS's with (1) "private" transitions and (2) "inconsistent" (or "reject") states. Second, we can account for the presence of control operators (call/cc) using biorthogonality (TT-closure). Third, we can account for the difference between first- and higher-order state by observing that the former enables one to sometimes make *illegal* state transitions while the latter does not. These extensions can be combined in quite interesting ways – for instance, we can prove the subtle "callback with lock" equivalence of Banerjee and Naumann in a language with first-order state and call/cc, as well as in a language with higher-order state but no call/cc. (In the presence of both higher-order state and call/cc, the equivalence doesn't hold.) Furthermore, due to our use of biorthogonality, it is straightforward to show that our logical relations are not only sound but also complete w.r.t. observational equivalence.

de 15h30 à 16h


Pause café

de 16h à 17h30
Sandrine Blazy (IRISA)
Mechanized semantics for the Clight language

In this talk, I will present the formal semantics of a large subset of the C language called Clight. Clight is the source language of the CompCert compiler. This compiler is formally verified using the Coq proof assistant. It guarantees that each compiled program behaves as prescribed by the semantics of its source program. The formal semantics of Clight is a small-step, continuation-based semantics that enables nontrivial reasoning about program transformations. This semantics is also equipped with traces of input/output events that observe both terminating and diverging executions, thus leading to stronger properties of observational equivalences. In addition, I will describe the integration of the Clight formal semantics in the CompCert compiler and several ways by which the semantics was validated.


Mercredi 3 février 2010, 14h, salle rose
Tristan Crolard (LACL – Université Paris-Est) : Dependent Type Systems for delimited control operators
(joint work with Emmanuel Polonowski) (pdf)

Relying on the formulae-as-types paradigm for classical logic, we define a dependent type system for an imperative language with higher-order procedural variables and non-local jumps. As a by-product, we obtain a non-dependent type system which is more permissive than what is usually found in statically typed imperative languages. We also show that this framework is expressive enough to encode imperative versions of the delimited continuation operators shift and reset, and we derive a program logic for these operators.

Mercredi 20 janvier 2010, 14h, salle rose
Pierre Letouzey : Recent evolutions of the modular libraries of Coq

FSets, FMaps, Numbers, OrderedType, etc.: these modular structures have been part of Coq's standard library for quite some time now. In this talk, I intend to show that this side of the standard library is very far from being frozen. Generally speaking, we now benefit fully from Elie Soubiran's recent work, both on the theory of modules and on their concrete implementation (e.g. the addition of an "Include"). For my part, I will present practical improvements to these libraries, obtained in particular thanks to a few small additional modifications of the module system, made in collaboration with Elie: application of a functor to the local context, chaining of "Include", and so on. These modifications now provide a level of flexibility and modularity never reached before in Coq.

Mercredi 13 janvier 2010, 14h, salle rose
Matthieu Sozeau : Equations: a dependent pattern-matching compiler

We present a compiler for definitions made by pattern matching on inductive families in the Coq system. It allows one to write structured, recursive, dependently-typed functions, automatically find their realization in the core type theory, and generate proofs to ease reasoning about them.

The basic functionality is the ability to define a function by a set of equations with patterns on the left-hand side and programs on the right-hand side, in a similar fashion to Haskell function definitions (see the sketch below). The system also supports with-clauses (as in Epigram or Agda) that can be used to add patterns on the left-hand side for further refinement. Both "syntactic" structural recursion and "semantic" well-founded recursion schemes are available in definitions, the latter being generalized enough to cope with general inductive families efficiently.
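
The style of definition in question is shown below in its Haskell archetype (the abstract's own comparison point; the real system compiles such equations into Coq's core type theory): equations with patterns on the left, programs on the right, and case analysis on an intermediate result standing in for a with-clause.

  -- Equations with patterns on the left, programs on the right.
  interleave :: [a] -> [a] -> [a]
  interleave []     ys = ys
  interleave (x:xs) ys = x : interleave ys xs

  -- Matching on an intermediate result, in the manner of an
  -- Epigram/Agda-style "with p x" clause.
  filter' :: (a -> Bool) -> [a] -> [a]
  filter' _ []     = []
  filter' p (x:xs) = case p x of
    True  -> x : filter' p xs
    False -> filter' p xs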

The system provides proofs of the equations, which can be used as rewrite rules to reason about calls to the function. It also automatically generates the inductive graph of the function and a proof that the function respects it, giving a useful elimination principle. Together this gives a complete package for defining and reasoning about functions in the proof assistant, substantially reducing the boilerplate code and proofs one usually has to write, and hiding the intricacies related to the use of dependent types.

Mercredi 6 janvier 2010, 14h, salle rose
Noam Zeilberger : Classical polarity and delimited continuations

Fact: classical polarized logic is the logic of continuation-passing style, in the sense that different logical fragments capture different fragments of CPS. This fact naturally suggests two questions:

1. What does classical polarity have to do with the "polarity-like" phenomena of various intuitionistic systems? In particular, type theories such as Levy's Call-By-Push-Value and Watkins et al.'s Concurrent Logical Framework seem to maintain polarity distinctions, but in an oddly restricted way (e.g., with no "par" connective). And they have no special connection to CPS.

2. What does classical polarity have to do with *delimited* continuation-passing? This is an important question, because empirical and theoretical evidence suggests that delimited continuations are in fact more suitable as a general framework for reasoning about effects and evaluation order.

In the talk I will try to answer these questions. In short, the answer is that "answers are positive".

Mercredi 9 décembre 2009: no working-group meeting (Alexandre Miquel's habilitation defence at 10h; note also Antonio Bucciarelli's habilitation on Thursday 10 December at 16h30)

Mercredi 2 décembre 2009 à partir de 14h en salle orange
Après-midi thématique sur la sémantique des langages de bas niveau

de 14h à 15h30
Chung-Kil Hur (PPS)
Observational equivalence on low-level programs and compositional compiler correctness
(Joint work with Nick Benton)

Giving a good notion of observational equivalence on low-level programs is particularly difficult due to the presence of non-functional operations, such as those examining the instructions of function closures. In the talk, I will (1) discuss this difficulty by giving instructive examples in the untyped lambda calculus extended with an operation that tests alpha-equality of two terms; (2) give a notion of equivalence on those lambda terms by means of a logical relation equipped with step-indexing and biorthogonality; (3) explain how to use this relation to give a compositional notion of correctness for compilers and hand-optimizations; and (4) finally discuss limitations of step-indexing and future work. These results have been fully formalized using the Coq proof assistant.

de 15h30 à 16h
Pause café

de 16h à 17h30
François Pottier (Gallium, INRIA Rocquencourt)
Hidden state and the anti-frame rule.

Modular reasoning about heap-manipulating programs relies on two important features.

One is the ability to make part of the store temporarily invisible – more precisely, invisible within a lexical scope – so as to be able to invoke a piece of code that does not know about this part of the store. This mechanism, known as local reasoning, is formalized, in separation logic and in type-and-capability systems, in terms of frame rules.

The other is the ability to make part of the store invisible forever – more precisely, invisible outside of a lexical scope – so that a piece of effectful code, packaged together with its state, is able to masquerade as effect-free code, and can be invoked by a client that does not know about this part of the store. This mechanism is known as hidden state.

The idea of hidden state goes back to Hoare (1972). However, its formulation in the setting of a program logic for a modern programming language, with higher-order functions and dynamic memory allocation, is challenging. There have been attempts to explain hidden state in terms of local reasoning, that is, in terms of (higher-order) frame rules. In this talk, I argue that this approach to hidden state imposes a continuation-passing style, which is impractical. Instead, I offer a direct explanation of hidden state, in the form of a so-called (higher-order) anti-frame rule.

I present this rule in the setting of a type-and-capability system for an ML-like programming language. I will discuss several applications of the rule, with example code.

Mercredi 25 novembre 2009, 14h, salle rose
Alexis Saurin: A hierarchy of calculi for call-by-name delimited control

I will present a hierarchy of calculi naturally extending the Λμ-calculus: the Λi-calculi, which constitute a call-by-name version of Danvy and Filinski's CPS hierarchy. This hierarchy is motivated by two characteristic features of the Λμ-calculus compared with its big brother, Parigot's λμ-calculus.

We thus obtain a family of calculi enjoying very good properties, for which the correspondence with the CPS hierarchy can be established precisely.

Mercredi 18 novembre 2009, 14h, salle rose
Hugo Herbelin: On the logical content of control delimiters

Danvy and Filinski's reset and Felleisen's prompt are operations that delimit the scope of the continuation captured by a control operator. In continuation-passing-style semantics, their role is to reset the current continuation to the identity continuation, and it turns out that this is similar to what happens in Friedman's A-translation.

The A-translation is a tool for proving the intuitionistic admissibility of Markov's principle. We will then show that extending intuitionistic logic with delimiters allows one to reason with the A-translation in direct style and to prove Markov's principle (i.e. ¬¬∃x.A(x) → ∃x.A(x) for A decidable).

More generally, we will show that adding delimiters to classical logic provides a way to extract, internally, the intuitionistic content of classical proofs (if any).

Jeudi 12 novembre 2009, 13h30, salle rose
Andreas Abel: Remarks on Typed Equality for the Calculus of Constructions

Martin-Löf Type Theory is usually presented with a typed equality judgement,

Γ ⊢ t = t' : A

expressing that in context Γ, the terms t and t' are of type A and βη-equal. Typed equality has some advantages over untyped equality: the soundness theorem for models is easier to establish, and equality can easily be extended with axioms that need typing, like βη for the unit type or proof irrelevance. On the downside, implementations of equality checking use untyped reduction or evaluation, meaning that a subject reduction theorem has to be established for untyped reduction. Subject reduction in turn depends on the infamous "injectivity of Π" property, which is trivial for confluent untyped equality, but notoriously hard for typed equality. The implementation of type checking depends crucially on the injectivity of Π, even if one assumes subject reduction.

In this talk, I will first present the state of the art of typed equality for Martin-Löf Type Theory: how one constructs a PER model which yields a typed normalization algorithm (normalization by evaluation) and injectivity of Π, and how one obtains a sound and complete type checker from these results. Then I will sketch how to transfer these results to the Calculus of Constructions.

Mercredi 4 novembre 2009, 14h, salle orange 2

Pierre-Louis Curien: The duality of computation through the lens of focalisation

We adapt the Curien-Herbelin syntactic kit so as to give a precise syntactic account of focalised (sequent calculus) proofs in linear logic. The talk analyses various syntactic choices, grouped generically (following Munch) under the name of "system L".

The talk comprises five parts ....

Mercredi 21 octobre 2009, 11h, salle orange 1
Noam Zeilberger: Towards a Proof-Theoretic Semantics of Programming

In this talk, I will present some ideas and results from my thesis [1] aimed at narrowing the gap between the theory and practice of propositions-as-types. In particular, I will examine the proof-theoretic notion of polarity, explain its connection to the old idea (of Gentzen, Prawitz, Dummett, and others) of a proof-theoretic "justification" of the logical connectives, and describe how to extend polarized logic into a *polarized type theory* capable of analyzing the critical concepts of evaluation order and pattern-matching in modern functional languages. In addition, I will explain the role of "focusing" (à la Andreoli) in polarized type theory, and a connection between focusing and infinitary proof theory that enables one to give a uniform account of both untyped and intrinsically-typed computation (with cut-elimination and eta-expansion defined in a connective-generic way). And finally I will explain how this framework can be used to study *extrinsic* types (such as intersection and union types), and discuss some interesting partial results relating proof-theoretic and model-theoretic notions of subtyping.
[1]"The Logical Basis of Evaluation Order and Pattern-Matching", available at http://reports-archive.adm.cs.cmu.edu/anon/2009/abstracts/09-122.html