
Solving the master equation of chemical reaction networks is notoriously difficult. To circumvent this problem, rule-based modeling provides techniques that approximate the time evolution of the means and moments of molecule counts. However, rigorous methods for approximating these quantities to arbitrary precision have not yet been presented for rule-based modeling.

The talk presents recent results on how to solve the general problem of computing the time evolution of means and moments of "occurrence counts" in rule-based models; computability is in the sense of Weihrauch. Roughly, established techniques, namely adaptive uniformization and finite state projection, can be reused, provided that occurrence counts (and their powers) are bounded by a polynomial in the "size" of the rule-based system and that this size does not explode. The most interesting results can be obtained for context-free systems, which can be thought of as branching processes equipped with structure; for these systems we obtain PTIME computability.

All results will be exemplified for the case of string rewriting, aiming at the widest possible audience of computer scientists and assuming only basic knowledge of continuous-time Markov chains with countable state space. Computing the time evolution of mean word counts in stochastic string rewriting is already highly non-trivial, even though it is the most basic example. The gem of the talk is the computation of mean word counts for context-free grammars!
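By way of illustration (a sketch of my own, not the algorithm of the talk): for a context-free rewrite system, the means of occurrence counts obey a linear ODE, which can be integrated naively. The single-rule grammar and its rate below are made up for the example, and rules are assumed to fire at a rate proportional to the occurrence count of their left-hand side.

```python
# Means of occurrence counts in a stochastic context-free rewrite system
# obey a linear ODE: for each symbol X,
#   d m_X / dt = sum over rules (lhs -> rhs at rate r):
#                r * m_lhs * (count of X in rhs - [X == lhs])
# Below we integrate it with a plain Euler scheme.

def mean_counts(rules, m0, t_end, dt=1e-4):
    m = dict(m0)
    for _ in range(int(t_end / dt)):
        dm = {x: 0.0 for x in m}
        for lhs, rate, rhs in rules:
            flux = rate * m[lhs]
            dm[lhs] -= flux          # one lhs occurrence is consumed
            for x, k in rhs.items():
                dm[x] += flux * k    # k copies of x are produced
        for x in m:
            m[x] += dt * dm[x]
    return m

# Hypothetical grammar with the single rule A -> A A at rate 1.0,
# starting from one A: the mean count at time t is e^t.
m = mean_counts([("A", 1.0, {"A": 2})], {"A": 1.0}, t_end=1.0)
print(m["A"])  # ~2.718
```

The mean equations are linear, so doubling the initial count doubles the mean; higher moments would need additional (also closed) equations in the context-free case.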

A standard contextual equivalence for process algebras is strong barbed congruence. Configuration structures are a denotational semantics for processes in which one can define equivalences that are more discriminating, i.e. that distinguish the denotation of terms equated by barbed congruence. Hereditary history preserving bisimulation (hhpb) is such a relation. We define a strong back and forth barbed congruence using a reversible process algebra and show that the relation induced by the back and forth congruence is equivalent to hhpb. Hence we give a characterization of hhpb as a contextual equivalence in a reversible process algebra.

In the Internet, packets are routed with the next-hop routing paradigm. The routing decision is classically made considering only the destination address of the packets. Source-specific routing is an extension of this paradigm in which the routing decision also depends on the source address of the packets. It finds its main application in multihomed edge networks, where it solves reachability problems by preventing packets from being filtered, and allows hosts to improve their performance. Indeed, a host can choose among multiple source-specific routes to the same destination by selecting the source address of its outgoing packets: multipath at the transport or application layer becomes possible.

I will begin my talk with an overview of classical next-hop routing, multihoming, and the benefits of source-specific routing in multihoming. I will then go more deeply into the applicability of source-specific routing, from the extension of existing routing protocols to the issues encountered when interacting with the kernel forwarding tables. I will also share our experience and results in using the multiple paths induced by source-specific routing, both at the transport layer, using Multipath TCP, and at the application layer, with our modified version of Mosh.
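To make the lookup concrete, here is a minimal sketch of a source-specific routing table; the prefixes, next-hop names, and the destination-first tie-breaking order are assumptions for illustration only.

```python
import ipaddress

# A hypothetical source-specific routing table: each entry matches a
# (destination prefix, source prefix) pair and names a next hop.
# Lookup: among entries matching both addresses, prefer the longest
# destination prefix, breaking ties on the longest source prefix --
# one common ordering for source-specific tables.
TABLE = [
    (ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("192.0.2.0/24"), "isp1"),
    (ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("198.51.100.0/24"), "isp2"),
    (ipaddress.ip_network("203.0.113.0/24"), ipaddress.ip_network("0.0.0.0/0"), "lan"),
]

def next_hop(dst, src):
    dst, src = ipaddress.ip_address(dst), ipaddress.ip_address(src)
    candidates = [(d.prefixlen, s.prefixlen, hop)
                  for d, s, hop in TABLE if dst in d and src in s]
    if not candidates:
        return None
    return max(candidates)[2]

# The same destination reaches a different next hop depending on the
# chosen source address -- the mechanism behind host-driven multipath.
print(next_hop("8.8.8.8", "192.0.2.10"))     # isp1
print(next_hop("8.8.8.8", "198.51.100.10"))  # isp2
```

This is exactly why a multipath transport such as Multipath TCP can steer subflows over different providers simply by binding to different local addresses.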

Boolean networks represent the interaction of components that can take two values, 0 and 1, and are used to model various biological networks, in particular gene regulatory networks. I will survey recent work on the relations between the structure and the dynamics of these networks. In particular, we will see that this study leads to introducing the notions of hereditarily bijective maps and of even or odd self-dual networks. We will show that both notions admit simple geometric characterizations (as an orthogonality between certain affine subspaces), which yields a construction of the class of hereditarily bijective maps and a stability property of this class.
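As a minimal illustration of the structure/dynamics question (not the hereditary bijectivity results of the talk), the following sketch enumerates the fixed points and attractors of a hypothetical 3-component Boolean network under synchronous update.

```python
from itertools import product

# Hypothetical 3-component Boolean network f : {0,1}^3 -> {0,1}^3 with a
# negative feedback loop: x0 copies x2, x1 copies x0, x2 negates x1.
def f(x):
    x0, x1, x2 = x
    return (x2, x0, 1 - x1)

# Fixed points of the synchronous dynamics: states with f(x) == x.
fixed_points = [x for x in product((0, 1), repeat=3) if f(x) == x]

# Attractors: iterate each state until the trajectory loops, then
# rotate the cycle to a canonical starting point so duplicates collapse.
def attractor(x):
    seen = []
    while x not in seen:
        seen.append(x)
        x = f(x)
    cycle = seen[seen.index(x):]
    i = cycle.index(min(cycle))
    return tuple(cycle[i:] + cycle[:i])

attractors = {attractor(x) for x in product((0, 1), repeat=3)}
print(len(fixed_points), len(attractors))  # 0 2
```

This toy network has an odd number of negative interactions on its loop, so it has no fixed point and only cyclic attractors, a typical instance of the structure-to-dynamics results the talk is about.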

We study observational semantics for networks of chemical reactions as used in systems biology. Reaction networks without kinetic information, as considered here, can be identified with Petri nets. We present a new observational semantics for reaction networks that we call the attractor equivalence. The main idea of the attractor equivalence is to observe reachable attractors and the reachability of attractor divergence in all possible contexts. The attractor equivalence can support powerful simplifications of reaction networks, as we illustrate with the example of the Tet-On system. Alternative semantics based on bisimulations or traces, in contrast, do not support all the needed simplifications.

In the standard testing theory of DeNicola-Hennessy one process is considered to be a refinement of another if every test guaranteed by the former is also guaranteed by the latter. In the domain of web services this has been recast, with processes viewed as servers and tests as clients. In this way the standard refinement preorder between servers is determined by their ability to satisfy clients.

But in this setting there is also a natural refinement preorder between clients, determined by their ability to be satisfied by servers. In more general settings where there is no distinction between clients and servers, but all processes are peers, there is a further refinement preorder based on the mutual satisfaction of peers.

In this talk we are concerned with behavioural characterisations of the server and client preorders. Acceptance sets are enough to capture the server preorder, but not the client one. To characterise the client preorder we introduce the notion of "usability" and combine it with that of acceptance sets. These notions also suffice to characterise the peer preorder, and usability is crucial for applying existing proof techniques to clients and peers.

Causal reasoning is a powerful tool in analysing security protocols, as seen in the popularity of the strand space model. However, for protocols that branch, more subtle models are called for to capture the ways in which they can interact with a possibly malign environment. Toward the goal of achieving a general, all-encompassing model for such protocols, I'll describe a semantics based on concurrent games for a simple security protocol language. I shall then show how it supports causal reasoning about a protocol for secure digital signature exchange. Along the way, I'll point to new areas where research into concurrent games can be applied, for example applications of symmetry and probability.

We present a new model checking procedure for concurrent systems against safety properties such as data races or atomicity violations. Our analysis sidesteps the state space explosion problem by inferring causal dependencies for concurrent traces instead of searching over a space of reachable states, and can be understood as an interplay between local trace inference and termination analysis based on causal loops. Local trace inference introduces new actions anywhere in the trace if they causally follow from the context. Our procedure terminates if we either find a complete error trace or the whole space of potential error traces is covered by causal loops. The causality-based verification of multi-threaded programs can be dramatically faster than the standard state space traversal. In particular, we show that the complexity of verifying multi-threaded programs with locks reduces from exponential to polynomial.

The fusion calculi are a simplification of the pi-calculus in which input and output are symmetric and restriction is the only binder. We highlight a major difference between these calculi and the pi-calculus from the point of view of types, proving some impossibility results for subtyping in fusion calculi. We propose a modification of fusion calculi in which the name equivalences produced by fusions are replaced by name preorders, and with a distinction between positive and negative occurrences of names. The resulting calculus allows us to import subtype systems, and related results, from the pi-calculus. We examine the consequences of the modification on behavioural equivalence (e.g., context-free characterisations of barbed congruence) and we investigate an axiomatisation of this equivalence.

Abstract: Consistency properties of concurrent computations, e.g., sequential consistency, linearizability, or eventual consistency are essential for devising correct concurrent algorithms. In this talk I am going to present a logical formalization of such consistency properties that is based on epistemic logic – a formalism that has gained prominence in the context of distributed systems. Our formalization provides a declarative perspective on what is imposed by consistency requirements and provides some interesting unifying insight on these properties that is not apparent in their standard formulation. This is joint work with Andrey Rybalchenko.

I will discuss the role which epistemic reasoning can play in concurrency theory. The idea of using epistemic modalities as programming constructs has been largely unexplored. Logic programming emerged under the slogan "Logic as a programming language." In this talk I will explore the role of modal logic as a programming language, incorporating epistemic and closely related spatial modalities as part of the programming language and not just as part of the meta-language for reasoning about protocols. I will present spatial and epistemic process calculi for reasoning about spatial information and knowledge distributed among the agents of a system, and I will discuss domain-theoretical structures for representing spatial and epistemic information.

In this talk I will present the focused proof system SELLFU, which extends intuitionistic linear logic with subexponentials with the ability to quantify over them, hence allowing for the use of an arbitrary number of modalities. We show that the view of subexponentials as specific modalities is general enough to give a modular encoding of different flavors of Concurrent Constraint Programming (CCP), a simple and powerful model of concurrency. More precisely, we encode CCP calculi capturing timed, spatial and epistemic behaviors into SELLFU, thus providing a proof-theoretic foundation for those calculi and, at the same time, establishing SELLFU as a general framework for specifying such systems. I will also show how a suitable fragment of SELLFU gives rise to a constraint system able to express soft constraints (e.g., preferences, costs, probabilities) in the CCP model. (Joint work with Elaine Pimentel and Vivek Nigam.)

Bisimilarity of two processes is formally established by producing a bisimulation relation, i.e. a set of pairs of process expressions that contains those two processes and obeys certain closure properties. In many situations, particularly when the underlying labeled transition system is unbounded, these bisimulation relations can be large and even infinite. The bisimulation-up-to technique has been developed to reduce the size of the relations to exhibit while retaining soundness, that is, the guarantee of the existence of a bisimulation. Such techniques make defining candidate relations easier and significantly reduce the amount of work in proofs. Furthermore, they are increasingly becoming a crucial ingredient in the automated checking of bisimilarity.

In this talk, we present our experience in using the Abella theorem prover for the formalization of the meta-theory of several major bisimulation-up-to techniques for the process calculi CCS and the pi-calculus. Our formalization is based on recent work on the proof theory of least and greatest fixpoints, accommodating the heavy use of coinductively defined relations and coinductive proofs about such relations. An important feature of our formalization is that our definitions and proofs are, in most cases, straightforward translations of published informal definitions, and our proofs clarify several technical details of the informal descriptions.

The presence of binders, as in the pi-calculus, is notoriously a delicate aspect to treat in formal proofs, but we show that the underlying proof-theoretic foundations of Abella make it possible to smoothly lift the results from CCS to the pi-calculus. This is thanks to the use of lambda-tree syntax and generic reasoning through the nabla quantifier.

Joint Work with Kaustuv Chaudhuri and Dale Miller
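To fix the basic notion that the up-to techniques refine, here is a minimal partition-refinement check of strong bisimilarity on a finite LTS; the states and labels are made up, and this plain check is of course not the Abella formalization discussed in the talk.

```python
# A small finite labeled transition system: state -> list of (label, target).
LTS = {
    "p0": [("a", "p1")], "p1": [("b", "p0")],
    "q0": [("a", "q1")], "q1": [("b", "q2")], "q2": [("a", "q1")],
}

def bisimilar(s, t):
    states = list(LTS)
    blocks = {x: 0 for x in states}          # start with a single block
    while True:
        # signature of a state: which blocks it reaches under each label
        sig = {x: frozenset((a, blocks[y]) for a, y in LTS[x]) for x in states}
        keys = {}
        new = {x: keys.setdefault((blocks[x], sig[x]), len(keys)) for x in states}
        if new == blocks:                    # partition is stable
            return blocks[s] == blocks[t]
        blocks = new

print(bisimilar("p0", "q0"))  # True: both alternate a's and b's forever
```

The candidate relation exhibited by this refinement contains every reachable pair; up-to techniques let a proof get away with a much smaller relation, which matters precisely when, unlike here, the transition system is unbounded.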

The partial order semantics of concurrent processes leads one to consider the trajectories of a concurrent transition system as partial orders of events. In turn, time instants turn out to have a lattice structure, which obviously differs from the total order we are used to for "classical" time instants.

When a probabilistic layer is added, this particular structure raises the following questions, which will be considered in the talk:

- what is a Markov model with lattice-ordered time instants?
- can we adapt classical tools from probability theory, such as stopping times, to this framework, and obtain results on concurrent Markov models analogous to those for finite Markov chains?

I will present a synthesis of two works.

The first one focuses on the Vertex Cover problem (a classical NP-hard problem) in a specific model, based on the assumption that the memory of the processing unit is very small compared to the graph. In particular, we propose three algorithms that fit this model. We give analytical tools to evaluate and compare them (for both quality and complexity), and we summarize and discuss experiments made on very large graphs (more than 90 billion vertices and edges for the biggest one).
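For illustration, here is the classical one-pass maximal-matching heuristic, a 2-approximation that fits the limited-memory setting; it is a textbook baseline, not one of the three algorithms of the talk.

```python
# One-pass maximal-matching heuristic for Vertex Cover: scan the edge
# stream once and, whenever an edge is not yet covered, take both of its
# endpoints.  Memory is proportional to the cover, not to the graph,
# and the result is at most twice the optimum.
def streaming_vertex_cover(edges):
    cover = set()
    for u, v in edges:                       # edges arrive as a stream
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# On the path 1-2-3-4 the optimum is {2, 3}; the one-pass heuristic
# returns all four vertices, matching the factor-2 bound.
print(sorted(streaming_vertex_cover([(1, 2), (2, 3), (3, 4)])))  # [1, 2, 3, 4]
```

The endpoints added per uncovered edge form a matching, and any cover must pick at least one endpoint of each matching edge, which is where the factor 2 comes from.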

The second part deals with community detection in complex networks. I will present recent (not yet published) work on a generalization of the well-known Louvain method, currently the best algorithm (for both quality and complexity) for finding communities in large graphs.
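For reference, here is a minimal sketch of modularity, the quantity the Louvain method greedily optimizes; the example graph and partition are made up.

```python
# Modularity of a partition of an undirected graph:
#   Q = sum over communities c of  e_c / m - (d_c / 2m)^2
# where e_c is the number of edges inside c, d_c the total degree of c,
# and m the total number of edges.
def modularity(edges, community):
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    inside = sum(1 for u, v in edges if community[u] == community[v])
    tot = {}
    for n, d in deg.items():
        tot[community[n]] = tot.get(community[n], 0) + d
    return inside / m - sum(d * d for d in tot.values()) / (4 * m * m)

# Two hypothetical triangles joined by a bridge edge: grouping each
# triangle into its own community scores well.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(round(modularity(edges, partition), 3))  # 0.357
```

The Louvain method repeatedly moves single nodes to the neighbouring community that increases this score the most, then contracts communities into super-nodes and starts over.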

- The first work is joint work with Eric Angel (Lab. IBISC, Univ. Évry) and Christian Laforest (LIMOS, ISIMA, Clermont-Ferrand).
- The second work is joint work with Patricia Conde Céspedes (LSTA, UPMC) and Jean-Loup Guillaume (LIP6, UPMC).

The Actor model and process calculi are two mathematical models of concurrent computation that share some similarities but diverge in many ways. Both were proposed in the 1970s and evolved "concurrently", with mutual influences. In this talk we will discuss the assets produced by the distinctive key features of the two models, providing insights and trying to devise further cross-fertilizations.

The literature on causality has grown exponentially in the last two decades. Philosophers have developed numerous accounts, and scientists increasingly express a need for conceptual tools appropriate to their research questions and methods. The time is ripe for a reassessment of this vast body of literature, in the light of scientific tasks such as inference, explanation, or prediction. In this talk I will give a gentle introduction to the main causal concepts, methods, and approaches that are available. I will plead for the idea that there isn't one single causal theory that fits the bill. Causality is very much like a mosaic, where each tile has its relevance, use, and importance.