Research

Conference and Journal Publications

2024

[JACM '24] - Arthur Oliveira Vale, Zhong Shao, and Yixuan Chen. 2024. A Compositional Theory of Linearizability. J. ACM Just Accepted (January 2024). DOI: https://doi.org/10.1145/3571231


Abstract

Compositionality is at the core of programming languages research and has become an important goal toward scalable verification of large systems. Despite that, there is no compositional account of linearizability, the gold standard of correctness for concurrent objects.

In this paper, we develop a compositional semantics for linearizable concurrent objects. We start by showcasing a common issue, which is independent of linearizability, in the construction of compositional models of concurrent computation: interaction with the neutral element for composition can lead to emergent behaviors, a hindrance to compositionality. Category theory provides a solution for the issue in the form of the Karoubi envelope. Surprisingly, and this is the main discovery of our work, this abstract construction is deeply related to linearizability and leads to a novel formulation of it. Notably, this new formulation neither relies on atomicity nor directly upon happens-before ordering and is only possible because of compositionality, revealing that linearizability and compositionality are intrinsically related to each other.
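
For reference, the Karoubi envelope invoked above is the standard idempotent-completion construction from category theory. The sketch below is the textbook definition, not notation taken from the paper itself:

    % Karoubi envelope (idempotent completion) of a category C -- standard definition.
    \[
    \begin{aligned}
      \text{objects of } \mathrm{Kar}(\mathcal{C})\colon\quad
        & \text{pairs } (A, e) \text{ with } A \in \mathcal{C},\ e : A \to A,\ e \circ e = e, \\
      \text{morphisms } (A, e) \to (B, e')\colon\quad
        & \text{maps } f : A \to B \text{ in } \mathcal{C} \text{ such that } e' \circ f \circ e = f, \\
      \text{identity on } (A, e)\colon\quad
        & \text{the idempotent } e \text{ itself, with composition inherited from } \mathcal{C}.
    \end{aligned}
    \]

Every idempotent splits in Kar(C), which is the sense in which the construction completes C; the abstract's claim is that this completion, rather than atomicity or happens-before ordering, is what underlies linearizability.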

We use this new, compositional understanding of linearizability to revisit much of the theory of linearizability, providing novel, simple, algebraic proofs of the locality property and of an analogue of the equivalence with observational refinement. We show that our techniques can be used in practice by connecting our semantics with a simple program logic that is nonetheless sound with respect to this generalized linearizability.

PDF
2023

[POPL '23] - Arthur Oliveira Vale, Zhong Shao, and Yixuan Chen. 2023. A Compositional Theory of Linearizability. Proc. ACM Program. Lang. 7, POPL, Article 38 (January 2023), 32 pages. DOI: https://doi.org/10.1145/3571231


Abstract

Compositionality is at the core of programming languages research and has become an important goal toward scalable verification of large systems. Despite that, there is no compositional account of linearizability, the gold standard of correctness for concurrent objects.

In this paper, we develop a compositional semantics for linearizable concurrent objects. We start by showcasing a common issue, which is independent of linearizability, in the construction of compositional models of concurrent computation: interaction with the neutral element for composition can lead to emergent behaviors, a hindrance to compositionality. Category theory provides a solution for the issue in the form of the Karoubi envelope. Surprisingly, and this is the main discovery of our work, this abstract construction is deeply related to linearizability and leads to a novel formulation of it. Notably, this new formulation neither relies on atomicity nor directly upon happens-before ordering and is only possible because of compositionality, revealing that linearizability and compositionality are intrinsically related to each other.

We use this new, compositional understanding of linearizability to revisit much of the theory of linearizability, in particular providing novel, simple, algebraic proofs of the locality property and of an analogue of the equivalence with observational refinement. We show that our techniques can be used in practice by connecting our semantics with a simple program logic that is nonetheless sound with respect to this generalized linearizability.
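
For context on the locality property mentioned above, the classical statement due to Herlihy and Wing is, roughly, the following; the paper proves an algebraic analogue in its compositional setting, whereas the formulation below is the standard history-based one:

    % Classical locality (compositionality) of linearizability, after Herlihy and Wing.
    % H | x denotes the subhistory of H consisting of the events on object x.
    \[
      H \text{ is linearizable}
      \iff
      H\,|\,x \text{ is linearizable for every object } x \text{ occurring in } H.
    \]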

PDF | Technical Report
2022

[POPL '22] - Arthur Oliveira Vale, Paul-André Melliès, Zhong Shao, Jérémie Koenig, and Léo Stefanesco. 2022. Layered and object-based game semantics. Proc. ACM Program. Lang. 6, POPL, Article 42 (January 2022), 32 pages. DOI: https://doi.org/10.1145/3498703


Abstract

Large-scale software verification relies critically on the use of compositional languages, semantic models, specifications, and verification techniques. Recent work on certified abstraction layers synthesizes game semantics, the refinement calculus, and algebraic effects to enable the composition of heterogeneous components into larger certified systems. However, in existing models of certified abstraction layers, compositionality is restricted by the lack of encapsulation of state.

In this paper, we present a novel game model for certified abstraction layers where the semantics of layer interfaces and implementations are defined solely based on their observable behaviors. Our key idea is to leverage Reddy's pioneering work on modeling the semantics of imperative languages not as functions on global states but as objects with their observable behaviors. We show that a layer interface can be modeled as an object type (i.e., a layer signature) plus an object strategy. A layer implementation is then essentially a regular map, in the sense of Reddy, from an object with the underlay signature to that with the overlay signature. A layer implementation is certified when its composition with the underlay object strategy implements the overlay object strategy. We also describe an extension that allows for non-determinism in layer interfaces.
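
To give an informal flavor of the underlay/overlay pattern described above in ordinary module-level code rather than in the paper's game-semantic model, one can picture a layer implementation as a functor that only sees the layer below through its signature. The signatures and names here are illustrative assumptions, not taken from the paper:

    (* Illustrative sketch only: an "underlay" signature, an "overlay" signature,
       and a layer implementation as an OCaml functor. The paper instead works
       with object strategies and regular maps. *)
    module type UNDERLAY = sig
      val get : unit -> int          (* read the underlying cell *)
      val set : int -> unit          (* overwrite the underlying cell *)
    end

    module type OVERLAY = sig
      val incr : unit -> unit        (* bump an abstract counter *)
      val read : unit -> int         (* observe the abstract counter *)
    end

    (* The implementation accesses the underlay only through its signature,
       which is the encapsulation-of-state idea in module form. *)
    module Counter (U : UNDERLAY) : OVERLAY = struct
      let incr () = U.set (U.get () + 1)
      let read () = U.get ()
    end

In this analogy, certification corresponds to showing that Counter applied to a correct underlay realizes the overlay specification; in the paper this is phrased as the composition of the regular map with the underlay object strategy implementing the overlay object strategy.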

After formulating layer implementations as regular maps between object spaces, we move to concurrency and design a notion of concurrent object space, where sequential traces may be identified modulo permutation of independent operations. We show how to express protected shared object concurrency, and a ticket lock implementation, in a simple model based on regular maps between concurrent object spaces.
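
For readers unfamiliar with the ticket lock mentioned above, the shared-memory version of the algorithm is standard; the minimal sketch below uses OCaml 5 atomics and is unrelated to the paper's formalization of the lock as a strategy on concurrent object spaces:

    (* A minimal ticket lock: each acquirer takes the next ticket and spins
       until the "now serving" counter reaches that ticket. Requires OCaml 5. *)
    type ticket_lock = { next : int Atomic.t; serving : int Atomic.t }

    let create () = { next = Atomic.make 0; serving = Atomic.make 0 }

    let acquire l =
      let my_ticket = Atomic.fetch_and_add l.next 1 in
      while Atomic.get l.serving <> my_ticket do
        Domain.cpu_relax ()          (* spin until our ticket is served *)
      done

    let release l = Atomic.incr l.serving

Because tickets are handed out in order, acquisitions succeed in FIFO order, a standard fairness property of this lock.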

PDF | Technical Report

Manuscripts

Review Papers

2018

Efficient Proof Net Verification and Sequentialization


Abstract

When Girard introduced Linear Logic [Gir87], he also developed a novel syntax for writing proofs, proof nets, which enjoys many desirable properties for representing proofs. Two important technical issues in handling proof nets are correctness (whether a syntactically well-formed proof structure is logically sound) and sequentialization (constructing a sequent calculus proof of the same conclusion as the proof net). This article surveys the literature on efficient proof net correctness verification and sequentialization for the multiplicative fragment of Linear Logic without units. First, we briefly present Girard's original correctness criterion, which yields correctness verification in time exponential in the size of the proof structure. Then, we show that the most famous correctness criterion, due to Danos and Regnier [DR89], is equivalent to that of [Gir87], yielding correctness verification in time exponential in the number of par nodes in the proof structure. Next, we show that a development of the ideas in [DR89] yields correctness verification and sequentialization in time quadratic in the number of links in the proof structure. Finally, we introduce the main idea of Guerrini [Gue11], which leads to simultaneous correctness verification and sequentialization in time linear in the size of the proof structure.
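
As a quick reminder of the criterion attributed above to Danos and Regnier, its standard formulation for unit-free multiplicative proof structures is the following (stated here in textbook form, not as it appears in the survey):

    % Danos-Regnier correctness criterion for unit-free MLL proof structures.
    % A switching chooses, for each par link, exactly one of its two premises.
    \[
      S \text{ is a proof net}
      \iff
      \text{for every switching } \sigma, \text{ the switching graph } \sigma(S)
      \text{ is acyclic and connected.}
    \]

Checking every switching directly is what gives the exponential behavior in the number of par nodes; the quadratic- and linear-time algorithms mentioned in the abstract avoid this enumeration.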

PDF

Written in fulfillment of the University of Chicago's Mathematics REU. A review of algorithms for efficient verification and sequentialization of proof nets for the multiplicative unit-free fragment of linear logic.

2016

Barriers in Complexity Theory


Abstract

Several proof methods have been successfully employed over the years to establish important results in Complexity Theory, but none has settled the question P =? NP. This article surveys three of the main proof methods used in Complexity Theory and then shows that they cannot be used to settle P =? NP, thereby exposing technical barriers to the advance of Complexity Theory. The first is diagonalization, which is used to prove that the Halting Problem is not computable; the concept of relativization is then used to prove that diagonalization alone cannot settle P =? NP. Circuits are then introduced as a possible route to settling P =? NP, after which the concept of a natural proof is introduced and shown to be unable to settle P =? NP. The third method is algebrization, which is closely related to results, proven after the introduction of natural proofs, that fail to relativize and naturalize. One such result, IP = PSPACE, which does not relativize and is used together with non-naturalizing techniques to prove lower bounds beyond the relativization and naturalization barriers, is presented; algebrization is then introduced as a framework for understanding why the methods behind IP = PSPACE cannot settle P =? NP.
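
The relativization barrier discussed above is usually summarized by the Baker-Gill-Solovay theorem, recalled here for orientation (this is the standard statement, independent of the survey's own exposition):

    % Baker-Gill-Solovay (1975): there exist oracles relative to which
    % P vs NP resolves in opposite ways.
    \[
      \exists\, A \ \ \mathrm{P}^{A} = \mathrm{NP}^{A}
      \qquad \text{and} \qquad
      \exists\, B \ \ \mathrm{P}^{B} \neq \mathrm{NP}^{B}.
    \]

Since diagonalization arguments relativize, a proof built from them alone would have to give the same answer relative to every oracle, which these two oracles rule out.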

PDF

Written in fulfillment of the University of Chicago's Mathematics REU. A discussion of a sequence of meta-theorems in complexity theory proving barriers towards proving P ≠ NP. The main results discussed are relativization, natural proofs and algebrization.