Modularity in Programming

Cone’s module system has been on my mind of late. The best design for modules is neither easy nor obvious, as evidenced by how much modules vary from one language to the next. To guide my approach for Cone, I went back to basics: What is the role (and benefit) that modularity plays in programming (languages)? What role do modules play within this larger picture? This post synthesizes my findings.

Data Flow Analysis

Note: This is a heavily revised version of an earlier post. The Cone compiler performs a data flow analysis pass after name resolution and type checking. Given that this sort of analysis is rarely covered in the compiler literature, I thought it might be useful to jot down some thoughts about its purpose and intriguing mechanics. Trigger Warning: This blog post is highly technical and brief. It reads more like an organizing outline for a design spec than a typical essay-oriented post.

Region Modules: The Rest of the Story

The previous Lifecycle of a Reference post illuminates the working relationship between references and regions. It explains how a reference’s region annotations are programmer-definable struct-like types, which can specify an allocated object’s region state using fields, and region-based operations on an object (e.g., allocate and free) using methods. This, however, leaves out an important part of the story about region definitions: their global state and global runtime logic. Holistically, a Cone region is fully defined by an importable module that bundles the region’s struct-like type together with this global state and runtime logic.
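
The excerpt doesn’t show what such a region definition looks like. As a loose, Rust-flavored sketch of the struct-plus-methods shape only (this is not Cone syntax, and the RcRegion/alloc/free names are hypothetical), a reference-counting region might pair a count field with allocate/free methods:

```rust
// Hypothetical sketch, not Cone's actual region API: a region as a
// struct-like type whose field holds per-object region state and whose
// methods implement the region's allocate/free logic.
struct RcRegion {
    counter: usize, // per-object region state: the reference count
}

impl RcRegion {
    // Runs when an object is allocated into this region.
    fn alloc() -> RcRegion {
        RcRegion { counter: 1 }
    }

    // Runs when a reference goes away; returning true signals that
    // the object's memory may now be freed.
    fn free(&mut self) -> bool {
        self.counter -= 1;
        self.counter == 0
    }
}

fn main() {
    let mut hdr = RcRegion::alloc(); // object allocated: count = 1
    hdr.counter += 1;                // a second reference (alias) created
    assert!(!hdr.free());            // dropping one alias: not yet freeable
    assert!(hdr.free());             // dropping the last alias: freeable
}
```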

The Lifecycle of a Reference

Over two and a half years ago, I described something I called Gradual Memory Management. Inspired by Rust and Pony, my paper proposed that it is feasible and desirable for a systems programming language to allow programs to exploit multiple memory management and permission strategies. Without putting memory and data race safety at risk, doing so would facilitate significant improvements to throughput and latency, even in multi-threaded architectures. It is easy to propose wild ideas in words.

Weakening Cycles So That Turing Can Halt

It is notable how often paradoxes arose in the historical journey that led to the Rise of Type Theory. Resolving Russell’s paradox led to his theory of types. Gödel’s incompleteness theorems rested their proofs on a variation of the Liar Paradox. Church and Turing recapitulated this result using their distinctive formalisms. Intuitionistic type theory was deliberately designed to avoid these paradoxes. And so it goes. As an undergraduate, I had the pleasure of studying Gödel’s ingenious proof as part of a course on Predicate Logic.

The Fascinating Influence of Cyclone

In 2001, Trevor Jim (AT&T Research) and Greg Morrisett (Cornell) launched a joint project to develop a safe dialect of the C programming language, an outgrowth of earlier work on Typed Assembly Language. After five years of hard work and some published papers, the team (including Dan Grossman, Michael Hicks, Nik Swamy, and others) released Cyclone 1.0. And then the developers moved on to other things. Few have heard of Cyclone and almost no one has used it.

The Power of Lifetimes

The execution of a program unfolds over some interval of time. The lifetime of every temporary resource (e.g., a variable or object) is the time span between that resource’s “creation” and “destruction”. This lifetime is wholly contained within the typically-longer lifetime of the program. The goal of this post is to explore how versatile lifetime analysis has become at managing memory safely and efficiently. By the end of this post, we will explore exciting new ways to apply lifetime analysis, beyond its current support in Rust.
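
To illustrate the concept concretely (in Rust, which the posts use as a reference point), the Noisy type below is invented for demonstration. Each value’s lifetime ends when it is destroyed, nested inside enclosing scopes and, ultimately, the program:

```rust
// Illustrative only: a type that announces the end of its lifetime.
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("destroyed: {}", self.0); // destruction ends the lifetime
    }
}

fn main() {
    let outer = Noisy("outer");        // created first, destroyed last
    {
        let inner = Noisy("inner");    // lives only inside this block
        println!("using {}", inner.0);
    }                                   // prints "destroyed: inner" here
    println!("still using {}", outer.0);
}                                       // "destroyed: outer" prints here,
                                        // within the program's own lifetime
```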

Can a compiler guarantee multi-owner memory safety?

Is it possible to improve on Rust’s single-owner strategy to support more complex data structures? Before digging into this challenge, let’s summarize the story so far…

The Promise and Limitations of Single-Owner

Rust’s single-owner memory management is a form of automatic memory management: a garbage collection strategy distinct from tracing and reference counting. Fundamentally, it is an improvement on RAII, which automatically finalizes a resource at the end of its lexical scope.
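
As a minimal Rust sketch of that idea (the Resource type and consume function are made up for illustration): RAII finalizes a resource at the end of a lexical scope, and single ownership lets that responsibility move with the value, so it is still finalized exactly once:

```rust
// Illustrative RAII under single ownership: the value is finalized when
// its final owner's scope ends, exactly once.
struct Resource(String);

impl Drop for Resource {
    fn drop(&mut self) {
        println!("finalizing {}", self.0);
    }
}

fn consume(r: Resource) {
    println!("now owned by consume: {}", r.0);
} // r's scope ends here, so Drop runs here, in the new owner's scope

fn main() {
    let r = Resource("file handle".into());
    consume(r); // ownership moves into consume; main no longer frees r
    println!("back in main, after finalization");
}
```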

Move Mechanics

Most programming languages support only copy semantics. A value passed to a function or stored in a variable is a copy of the original value. We know it is a copy, because any change we make to the copy has no impact on the original value. A few languages, like C++ and Rust, also support move semantics. Unlike a copy, a move transfers the original value to its new home; that value is no longer accessible at its previous home.
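
A minimal Rust sketch of the contrast, with invented values: a copy leaves the original intact and independent, while a move makes the value inaccessible at its previous home:

```rust
fn main() {
    // Copy semantics: y gets a copy of x, so changing y leaves x alone.
    let x = 1;
    let mut y = x;
    y += 1;
    assert_eq!(x, 1); // the original value is unaffected

    // Move semantics: the String moves from s to its new home, t.
    let s = String::from("hello");
    let t = s;
    println!("{}", t);
    // println!("{}", s); // compile error: value was moved out of `s`
}
```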

The Challenge of Counting References

Note: The latter part of this post is outdated in terms of the mechanisms that the Cone compiler uses to de-alias ref-counted references. See this post for an updated description. In the world of automatic memory management, reference counting is considered to be one of the easiest to implement. The rules seem simple:

- When a reference is created to an allocated memory area, set its counter to 1
- When the reference is copied (aliased), increment the counter
- When an alias is destroyed (de-aliased), decrement the counter
- When the counter reaches zero, free the reference’s memory area

The simplicity of these rules does not always translate to a simple implementation.
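
These rules can be watched in action with Rust’s std::rc::Rc, whose strong count follows exactly this aliasing discipline; a minimal sketch:

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(42);                 // create a reference: count is 1
    assert_eq!(Rc::strong_count(&a), 1);

    let b = Rc::clone(&a);               // alias the reference: count is 2
    assert_eq!(Rc::strong_count(&a), 2);

    drop(b);                             // de-alias: count drops back to 1
    assert_eq!(Rc::strong_count(&a), 1);

    drop(a);                             // count reaches 0: memory is freed
}
```

(Rc covers only the single-threaded case, which hints at why the simple rules don’t always yield a simple implementation.)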