When Modules Are Not Just Namespaces

It is time for Cone to get a proper module system. This design space is complex and rife with historical missteps. To distill the topography of the landscape and clarify the key requirements, I felt it necessary to begin the journey with cross-language research and contemplation. I wound up pursuing three rabbit-chasing adventures in module wonderland. The first: what do programmers want from modularity? You can read about this adventure in my earlier post on modularity, which captures what we want from it and summarizes how the three modularity capabilities surface across different layers of programming language features (and program decomposition).

Modularity in Programming

Cone’s module system has been on my mind of late. The best design for modules is neither easy nor obvious, as evidenced by how much modules vary from one language to the next. To guide my approach for Cone, I went back to basics: What is the role (and benefit) that modularity plays in programming (languages)? What role do modules play within this larger picture? This post synthesizes my findings.

Dispatch Magic and Concurrency

I have long enjoyed the convenience of the “dot” dispatch syntax: instance.log("Ethereal cognistrands do not support quantum entanglement"). The magic of Universal Function Call Syntax makes this sugar for a function call: log(instance, "Ethereal cognistrands do not support quantum entanglement"). The broad popularity of method dispatch is driven by benefits such as ad hoc polymorphism: when used in conjunction with method overloading, we can reuse the same easier-to-remember semantic names across multiple types and parameter configurations, with minimal-to-no ambiguity about which implementation to use.
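For a concrete picture of that desugaring, here is a minimal sketch in Rust rather than Cone (the Strand type and its log method are invented purely for illustration): the dot form and the explicit function-call form invoke the same function, with the receiver passed as the first argument.

    // Minimal Rust sketch (not Cone syntax): dot dispatch as sugar for a plain call.
    struct Strand;

    impl Strand {
        // An ordinary function whose first parameter is the receiver.
        fn log(&self, msg: &str) {
            println!("{}", msg);
        }
    }

    fn main() {
        let instance = Strand;
        // "Dot" dispatch syntax...
        instance.log("Ethereal cognistrands do not support quantum entanglement");
        // ...resolves to the same function, with the receiver as the first argument.
        Strand::log(&instance, "Ethereal cognistrands do not support quantum entanglement");
    }

Both calls print the same message; the second form simply spells out what the compiler does behind the dot.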

Data Flow Analysis

Note: This is a heavily revised version of an earlier post. The Cone compiler performs a data flow analysis pass after name resolution and type checking. Given that this sort of analysis is rarely covered by compiler literature, I thought it might be useful to jot down some thoughts about its purpose and intriguing mechanics. Trigger Warning: This blog post is highly technical and brief. It reads more like an organizing outline for a design spec than a typical essay-oriented post.

The Siren Song of Declarative Programming

Let’s continue our journey of discovery through the wild jungle of programming paradigms by clearing away more obstructive vines. This post tackles the taxonomic distinction that many attempt to make between declarative and imperative programming. What becomes clear, after some study, is that “declarative” and “imperative” do not have singularly simple and clear meanings. Wikipedia offers three incompatible definitions! The many strands of these confusing concepts entangle themselves into a Gordian knot that resists all attempts to neatly untie it.

What is a Programming Paradigm?

Have our conversations about programming paradigms grown stale? It seems so. Paradigms like object-oriented programming and functional programming, the two most talked-about, are decades old. Nearly always, conversations about them devolve into reciting the same desiccated stereotypes. Is this because the notion of a programming paradigm has outlived its usefulness? Dr. Harper appears to think so. Even among those who find some value in one paradigm or another, there is a notable wariness about engaging in fruitful discussions across diverse communities.

Region Modules: The Rest of the Story

The previous Lifecycle of a Reference post illuminates the working relationship between references and regions. It explains how a reference’s region annotations are programmer-definable struct-like types, which can specify an allocated object’s region state using fields, and region-based operations on an object (e.g., allocate and free) using methods. This, however, leaves out an important part of the story about region definitions: their global state and global runtime logic. Holistically, a Cone region is fully defined by an importable module, within which lies:

The Lifecycle of a Reference

Over two and a half years ago, I described something I called Gradual Memory Management. Inspired by Rust and Pony, my paper proposed that it is feasible and desirable for a systems programming language to allow programs to exploit multiple memory management and permission strategies. Without putting memory and data race safety at risk, doing so would facilitate significant improvements to throughput and latency, even in multi-threaded architectures. It is easy to propose wild ideas in words.

Weakening Cycles So That Turing Can Halt

It is notable how often paradoxes arose in the historical journey that led to the Rise of Type Theory. Resolving Russell’s paradox led to his theory of types. Gödel’s Incompleteness theorems rested their proof on a variation of the Liar Paradox. Church and Turing recapitulated this result using their distinctive formalisms. Intuitionistic type theory was deliberately designed to avoid these paradoxes. And so it goes. As an undergraduate, I had the pleasure of studying Gödel’s ingenious proof as part of a course on Predicate Logic.

The Rise of Type Theory

After three dreary posts on syntax, let’s change the pace and pursue an entirely different, deeper, and more fun topic! In our community’s #theory channel on Discord, someone asked: “Is there a basis for some of the mathematics related to type theory and its relationship to its usage in programming languages?” This triggered a spirited dialogue about the historical antecedents of type theory, how type theory evolved from them, and the interplay between programming languages and type theory.