Modularity in Programming


Cone’s module system has been on my mind of late. The best design for modules is neither easy nor obvious, as evidenced by how much modules vary from one language to the next.

To guide my approach for Cone, I went back to basics:

  • What role (and benefit) does modularity play in programming (languages)?
  • What role do modules play within this larger picture?

This post synthesizes my findings. I hope you also find value in it.

Modularity Strategies and Benefits

For programmers, modularity is essential to good design, particularly when working with large, complex systems.

Indeed, the benefits of modularity matter well beyond programming; it is a design principle promoted by many science and technology disciplines, as well as many forms of industrial design. Although these domains differ in their detailed perspectives on modularity, they share the idea that modularity represents “the degree to which a system’s components may be separated and recombined, often with the benefit of flexibility and variety in use” (Wikipedia).

Programming-based modularity typically clusters around three synergistic mechanisms (and their corresponding benefits):

  • Complexity isolation. Best practice involves breaking a system into mostly-independent “black box” components. The interior implementation logic of each component is hidden from other components, which interact with it only through its public interface. Done well, component isolation reduces cognitive complexity for the human programmer, who can focus limited short-term memory on one subset of the implementation, confident that most changes will have only a local effect.

    Benefit: Component isolation reduces complexity for the programmer by reducing the surface area for changes, thereby improving the stability of an ever-evolving system.

  • Interface-based substitution. Once we restrict the use of some component to its public interface, we can leverage this to build other components that comply with that public interface, but differ in the underlying implementation. We can abstract this public interface as a standard, allowing us to flexibly plug together components in a variety of useful configurations. This is the same principle that allows us to screw multiple different lightbulbs into some standard-based light socket.

    Benefit: Interface-based substitution facilitates plug-and-play design versatility.

  • Multi-use generation. What if we notice that multiple components have essentially the same underlying logic, but vary in details (e.g., the types of the data)? This modularity strategy allows the programmer to build a single general-purpose (abstracted) component whose part(s) can be re-used to generate many specialized components. A broad range of strategies facilitates reuse at compile-time or runtime, such as monomorphization (generics/templates), inheritance, parametric flexibility, metaprogramming and more. Extensibility is yet another reuse strategy that is increasingly being explored.

    Benefit: Multi-use generation accelerates development productivity.
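To make these strategies concrete, here is a minimal Rust sketch (all of the names are illustrative, not taken from any real codebase): a `Light` trait plays the role of the standardized socket interface, and a generic `largest` function shows one abstracted component generating many specialized ones.

```rust
// Interface-based substitution: any type implementing `Light` can be
// "screwed into" the same socket.
trait Light {
    fn lumens(&self) -> u32;
}

struct Led;
struct Incandescent;

impl Light for Led {
    fn lumens(&self) -> u32 { 800 }
}
impl Light for Incandescent {
    fn lumens(&self) -> u32 { 600 }
}

// The "socket" depends only on the public interface, never the implementation.
fn socket(bulb: &dyn Light) -> u32 {
    bulb.lumens()
}

// Multi-use generation: one generic component, monomorphized per element type.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &x in &items[1..] {
        if x > max { max = x; }
    }
    max
}

fn main() {
    println!("{}", socket(&Led));          // 800
    println!("{}", socket(&Incandescent)); // 600
    println!("{}", largest(&[3, 9, 4]));   // 9
}
```

Isolation shows up here too: callers of `socket` never see how a particular bulb computes its lumens.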

To achieve modularity that is stable over change, it matters where we draw the boundaries between one component and another. What boundary-drawing criteria optimize stability?

  • We can represent all cross-reference dependencies between the individual parts of every component as a network diagram. Using this diagram, we can count the dependencies within each component (cohesion) and contrast that with the dependencies between components (coupling). Ideally, we want a modular design with high cohesion and low coupling (i.e., a large ratio of cohesion to coupling).

  • More commonly, we use conceptual guidelines like separation of concerns or, more narrowly, the single-responsibility principle to improve coherence, reducing a component’s inherent complexity. This thought experiment helps identify latent complexity that future changes will encounter. If a component tries to weave together too many largely-independent concerns, that complexity will likely multiply rapidly as future enrichments are added.

    However, one must be careful when applying these subjective coherence judgments. If modularity is too aggressively fragmented in anticipation of future intra-component complexity, one can overshoot the goal of reducing overall complexity, because coupling costs rise faster than the intra-component complexity saved.
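As a rough sketch of how such a measurement might work (the graph, component assignment, and function names are all invented for illustration), we can count intra-component edges versus cross-component edges:

```rust
// Classify each dependency edge as cohesive (within a component) or
// coupling (across components). `component_of[p]` gives part p's component.
fn cohesion_and_coupling(
    edges: &[(usize, usize)],
    component_of: &[usize],
) -> (usize, usize) {
    let mut cohesion = 0;
    let mut coupling = 0;
    for &(a, b) in edges {
        if component_of[a] == component_of[b] {
            cohesion += 1;
        } else {
            coupling += 1;
        }
    }
    (cohesion, coupling)
}

fn main() {
    // Parts 0-2 belong to component 0; parts 3-4 to component 1.
    let component_of = [0, 0, 0, 1, 1];
    let edges = [(0, 1), (1, 2), (3, 4), (2, 3)];
    let (cohesion, coupling) = cohesion_and_coupling(&edges, &component_of);
    println!("cohesion={cohesion}, coupling={coupling}"); // cohesion=3, coupling=1
}
```

A high cohesion-to-coupling ratio (here 3:1) suggests the boundary is drawn well.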

It is worth noting that whereas the isolation aspect of modularity aims to decrease complexity and design fragility, the substitution and generation aspects of modularity will often act to increase coupling costs, and therefore complexity and fragility.

How do modularity capabilities vary by system layer?

Modularizing a system into components doesn’t happen just once. Components get subdivided into subcomponents. Subcomponents get subdivided into smaller parts. And so it goes.

In effect, software systems have layers of modularity. The nature of these layers is shaped by the modular building blocks offered by the programming language(s) used to build the system. Here is a common list of such modular building blocks, ordered from smallest to largest:

  • Control blocks
  • Functions
  • Types
  • Threads
  • Modules and/or Packages
  • Programs, libraries or services

Each of these building blocks differs in the way it surfaces the three modularity strategies listed earlier. In the small (blocks and functions), languages largely agree on how to surface modularity capability. As we move down the list, we see less and less agreement between languages on how to facilitate modularity.

Let’s take a closer look.

Control Block Modularity

It was Dijkstra and Structured Programming that introduced modularity to control flow in the form of control flow blocks. They were introduced to combat “Go to”-based spaghetti code, the antithesis of modularity, which suffered from high mental complexity and fragility to change.

  • Isolation. Blocks introduce modularity from a control flow perspective, in that all access to the block’s logic enters at the same place (the top) and ultimately terminates at the same place (the bottom).

    In many languages, blocks take modularity one step further by introducing the notion of a private state (local variables) or context that is guaranteed to be resolved and gone (e.g., RAII or “defer”) when the block’s logic completes.

    In some languages (e.g., Jai), blocks offer almost function-like isolation capability.

  • Substitution and Generativity. Given that blocks are owned and used by a single function, it makes little sense to imbue them with substitution and generativity capability. Should you want these capabilities for a block, it makes more sense to promote the block into a stand-alone function.
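A minimal Rust sketch of block isolation (the `Guard` type and event log are illustrative, not from any real library): a local variable is invisible outside its block, and the guard's RAII cleanup is guaranteed to run when the block ends.

```rust
// An RAII guard whose cleanup runs automatically at end of scope.
struct Guard<'a>(&'a mut Vec<&'static str>);

impl<'a> Drop for Guard<'a> {
    fn drop(&mut self) {
        self.0.push("resource released");
    }
}

fn block_demo() -> Vec<&'static str> {
    let mut log = Vec::new();
    {
        // `g` is private to this block...
        let mut g = Guard(&mut log);
        g.0.push("inside block");
    } // ...and its cleanup is guaranteed to have run by this point.
    log
}

fn main() {
    println!("{:?}", block_demo()); // ["inside block", "resource released"]
}
```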

Function Modularity

Functions have long been the standard bearer for comprehensive, low-level modularity.

  • Isolation. A function (or method) is a standalone component whose public interface is its signature. The signature describes what types of values it accepts and what types of values it returns. Functions isolate both their local state and their execution state as a distinct frame on a LIFO stack. To the caller, the function is a black box; the caller cannot see the function’s implementation or local state.

    Local variables and parameters are isolated to the function they are defined and used in. Two functions can use the same name for a variable. There is no name collision between functions, nor any confusion about which value the name refers to.

  • Substitution. In most languages, it is possible to create multiple functions that have the same function signature. When a language supports “first-class” functions or closures, it is possible to pass a (pointer to a) signature-compliant function to some other function, thereby enabling customizable versatility in the calling function.

  • Generativity. The most common forms of function generativity are generic functions, macros, and method inheritance. Generic functions and macros generate multiple versions of the same logic, parameterized by (typically) substituted types. Inheritance allows function implementations to be reused or extended across derived types.
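A brief Rust sketch of signature-based substitution (the function names are invented for the example): any function, or capture-free closure, matching `fn(i32) -> i32` can be plugged into the same caller.

```rust
fn double(x: i32) -> i32 { x * 2 }
fn square(x: i32) -> i32 { x * x }

// The caller is customized by whichever signature-compliant function it is handed.
fn apply_twice(f: fn(i32) -> i32, x: i32) -> i32 {
    f(f(x))
}

fn main() {
    println!("{}", apply_twice(double, 3)); // 12
    println!("{}", apply_twice(square, 3)); // 81
    // A closure with a matching signature works too.
    println!("{}", apply_twice(|x| x + 1, 3)); // 5
}
```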

Type Modularity

At minimum, a type describes its state’s internal structure, often by composing together fields of different types. Types usually go beyond this, defining the operations (methods) used on values of this type. Types can also include related static functions, constants, subtypes, and more.

Type-based modularity mechanisms are among the richest and most diverse from one language to the next.

  • Isolation. Encapsulation is a data hiding approach for distinguishing a type’s public interface from its private implementation. Methods (and fields) marked as public are accessible from the outside, private ones are not, and protected ones are only accessible by related types. As with functions, this allows a system to treat a type as a black box, isolating what we want to do (the interface) from how we do it (the implementation). This is particularly valuable when we want to consistently enforce specific invariants on the type’s state or operations.

    An additional form of type isolation offered by most languages, is giving every type its own namespace. Each referenceable, defined part of the type (e.g., field or method) gets its own unique name in the type’s namespace. This means that, outside the type’s logic, there is no syntactic confusion over names, as all reference to a type’s names are qualified by either the type name (e.g., Point::origin) or some value of that type (point.x).

  • Substitution. Type substitution is called subtype polymorphism. It is typically facilitated using existential types (variously called interfaces, traits, protocols, type classes, signatures and abstract classes).

    Used together with dependency injection, type substitution can be a powerful contributor to plug-and-play versatility, enabling some logic to be specialized by the interface-complying type(s) you hand it. Programming languages vary in how they support type substitution: nominal subtyping, structural subtyping and row polymorphism. And they vary in when and how substitution takes place: static dispatch, virtual dispatch (using vtables), message-passing, and message-queuing.

  • Generativity. The most common forms of type generativity are generic types (templates) and inheritance. Generics generate multiple versions of the same logic, parameterized by (typically) substituted types. Inheritance offers a different capability: it allows new types to be created that reuse method implementations and state from some other type.
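Here is a small Rust sketch of encapsulation enforcing an invariant (the `temperature` module and its names are invented): the private field can only be set through the public constructor, which rejects impossible values.

```rust
mod temperature {
    pub struct Temp {
        celsius: f64, // private: invariant `celsius >= -273.15` always holds
    }

    impl Temp {
        // The public interface is the only way in, so the invariant
        // is enforced at the component boundary.
        pub fn new(celsius: f64) -> Option<Temp> {
            if celsius >= -273.15 { Some(Temp { celsius }) } else { None }
        }

        pub fn celsius(&self) -> f64 {
            self.celsius
        }
    }
}

fn main() {
    // References are qualified by the type's (and module's) namespace.
    let t = temperature::Temp::new(20.0).unwrap();
    println!("{}", t.celsius()); // 20
    assert!(temperature::Temp::new(-300.0).is_none());
    // t.celsius = 5.0; // would not compile: the field is private
}
```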

It is worth pointing out here that traditional implementation inheritance (e.g., Java) encourages the creation of modularity-violating code. When subclasses are used to specialize base class behavior, especially across multiple levels of inheritance, we get OOP-style spaghetti code with high coupling across these classes, and corresponding reductions in coherence. Bugs are hard to diagnose. Maintenance and refactoring become complicated and error-prone.

Modularity can be preserved for polymorphism and inheritance by switching from traditional inheritance mechanisms to explicit composition, structural interfaces, type extension, and delegated inheritance. The result is code versatility and reuse driven by strict compliance with properly-isolated public interfaces.
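As a hedged sketch of that composition-over-inheritance style in Rust (the `Shape` trait and types are invented): `Labeled` reuses `Circle`'s behavior by explicitly delegating through an isolated interface, rather than by subclassing.

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

// `Labeled` composes any Shape and delegates to it; there is no base class.
struct Labeled<S: Shape> {
    label: String,
    inner: S,
}

impl<S: Shape> Shape for Labeled<S> {
    fn area(&self) -> f64 {
        self.inner.area() // explicit delegation, visible at a glance
    }
}

impl<S: Shape> Labeled<S> {
    fn describe(&self) -> String {
        format!("{}: {:.2}", self.label, self.area())
    }
}

fn main() {
    let disc = Labeled { label: "disc".to_string(), inner: Circle { r: 1.0 } };
    println!("{}", disc.describe()); // disc: 3.14
}
```

Because the coupling runs through the `Shape` interface alone, swapping `Circle` for another compliant shape cannot break `Labeled`.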

Module Modularity

Skipping past threads for a moment, let’s focus on modules, as their modularity mechanisms are intriguingly similar to (and sometimes conflated with) those for types.

The module is typically the largest unit of modularity a programming language supports. When we have a large, complex program, we usually subdivide it first into modules. More importantly, when we reach for library packages, each of these is a module.

There are (roughly) two flavors of language-supported modules:

  • Modules as Namespaces. Similar to how a type is made up of fields, methods and more, a module is made up of types, functions, global variables, macros, possibly sub-modules, and more. The module is a namespace for composing all these uniquely-named parts. Namespace management is as far as most languages go for modules.

  • SML-inspired Modules. In the ML languages, the module system is much richer, comprising signatures, structures, and functors, in effect extending modules to handle subtype and parametric polymorphism. For SML, these features wrap around a type to provide the sort of type modularity capabilities described in the previous section. However, MixML and 1ML (for example) extend these capabilities further, including the ability to treat modules as first-class values.

Another divergence is the correspondence of modules to compile units. Some languages allow a module to span multiple source files, while others (e.g., Rust) restrict any module’s scope to a single compile unit (source file). The latter approach may sound too restrictive, but such languages get around it with features that stitch a parent module’s namespace together with the modules it imports, making the parent module effectively a composite of multiple source files.

As already mentioned, the modularity mechanisms for modules are similar to those for types:

  • Isolation. Modules likewise use encapsulation (or ascription) to distinguish a module’s public interface from its private implementation. A module can only refer to public parts of other modules. Since each module has its own namespace, references to public parts of other modules are qualified by the module’s name (module::type).

  • Substitution. Very few languages (e.g., OCaml) support module substitution (subtype polymorphism). The need for it is not as strong as for type substitution, and type substitution can often be used as a work-around. That said, providing module substitution capability in a language would improve the versatility of modules, at some cost to complexity. This would make it easier to configure and plug in the appropriate module(s) to customize how a program works, without needing to create singleton types.

  • Generativity. Similarly, few languages (e.g., Delphi) support module generativity via generics. Here too, there is an opportunity to offer a more powerful module system by supporting generics and inheritance for modules.
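To illustrate the singleton-type work-around mentioned above, here is a Rust sketch (all names invented) in which each "module" is a unit struct implementing a shared interface, letting the program be configured by whichever one is plugged in:

```rust
trait Storage {
    fn save(&self, key: &str, value: &str) -> String;
}

// Each unit struct stands in for a swappable "module".
struct MemStorage;
struct FileStorage;

impl Storage for MemStorage {
    fn save(&self, key: &str, value: &str) -> String {
        format!("mem: {key}={value}")
    }
}

impl Storage for FileStorage {
    fn save(&self, key: &str, value: &str) -> String {
        format!("file: {key}={value}")
    }
}

// Program logic sees only the interface, not which "module" it received.
fn run(storage: &dyn Storage) -> String {
    storage.save("lang", "cone")
}

fn main() {
    println!("{}", run(&MemStorage));  // mem: lang=cone
    println!("{}", run(&FileStorage)); // file: lang=cone
}
```

Language-level module substitution would let actual modules play the role these singleton types play here.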

Improving the modularity of modules beyond namespaces/isolation carries intriguing potential but also brings to bear a number of complex issues, as highlighted by Graydon Hoare.

Thread Modularity

Although threads have been around for decades, they are receiving renewed focus and innovation lately, as Moore’s law has slowed to a crawl and devices increasingly support multicore processors. It is no longer enough for a language to just offer OS-based threads and synchronization capabilities. Newer languages are demonstrating the added value of static permissions for data race safety and green threads. Cooperatively-yielding green threads (backed by a work-stealing scheduler) improve scaling and latency, particularly around non-blocking i/o.

So far, we see the emergence of three distinct green-thread concurrency models:

  • Async/await (e.g., C#, JS, Rust). Built around single-shot promises and syntactic sugar for i/o callback logic. Communication between threads is typically accomplished by synchronized (locked) data structures. The downsides of this approach are well-covered in “What Color is Your Function”.

  • Gothreads (e.g., Go). There is no such thing as an asynchronous function. A new (go)thread can be explicitly spawned on any function call. Each thread allows blocking i/o, which effectively yields the thread under the covers. Communication between threads is facilitated with channels.

  • Actors (e.g., Pony). Each spawned actor is a green thread, and is inherently asynchronous. One actor communicates with another by enqueuing a message on its queue (effectively an actor-owned channel). The scheduler activates the next actor by dispatching the next message on that actor’s queue to the appropriate behavior (a function/method of the actor that never blocks and has no return value).
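A minimal Rust sketch in the spirit of the gothread/actor models (using std::sync::mpsc; the workload is invented): each spawned thread shares nothing and communicates only by sending messages over a channel, whose message type is the interface.

```rust
use std::sync::mpsc;
use std::thread;

fn sum_of_squares(inputs: Vec<i64>) -> i64 {
    let (tx, rx) = mpsc::channel();
    for x in inputs {
        let tx = tx.clone();
        // Each thread is an isolated unit; it only sends messages.
        thread::spawn(move || {
            tx.send(x * x).expect("receiver still alive");
        });
    }
    drop(tx); // close the channel so the receiver's iterator terminates
    rx.iter().sum()
}

fn main() {
    println!("{}", sum_of_squares(vec![1, 2, 3])); // 14
}
```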

Needless to say, there is no consensus yet around the best way to modularize threads. There is, however, an emerging awareness that unstructured concurrency is similar to GOTO, in that unstructured threads produce spaghetti-like complexity, making problems difficult to diagnose and capabilities difficult to enrich. What is needed are structured-concurrency control flow blocks akin to those Dijkstra proposed for structured programming. The clearest articulation is found in the “Notes on Structured Concurrency” post.

Despite the lack of consensus, are we learning ways to achieve modularity for threads?

  • Isolation. There are several emerging mechanisms that look promising:

    • Channels. Channels make inter-thread synchronizing communications easy, particularly when baked into actors. Each thread is effectively an isolated modular unit, with its channel’s message format being the defined interface for coupling. Paired with effective logging, this makes debugging and refactoring a lot easier.

    • Structured Concurrency. Similar to how Dijkstra structured control blocks tamed GOTO, control blocks can be fashioned to manage concurrency control flow. In particular, a block could establish a scope such that all threads launched within the block (along with any timeout) are joined (or cancelled) before the block finishes.

    • Memory Isolation. Isolation is a key principle behind separation logic, which we can use to prove that compiler-enforced static permissions (e.g., Pony’s reference capabilities) result in data race safety.

      More broadly, isolating memory use by thread can greatly improve performance by reducing cache invalidation and GC inefficiencies. On the latter point, if we give each thread (e.g., actor) responsibility for its own memory management, we eliminate costly stop-the-world, multi-thread collection pauses.

  • Substitution. Few languages offer any built-in support for thread substitution. Rust does offer traits whose methods can be async (though these can be challenging to work with), which enables runtime dispatch when launching a new green thread.

    More intriguing, to me, is the possibility of defining substitution interfaces that work with actors, so that messages can be dynamically routed to the appropriate, compliant actor.

  • Generativity. For a language that supports async/await and generic functions (e.g., Rust and C#), supporting generic, async functions is relatively straightforward.

    Again, it intrigues me to imagine applying inheritance or generics to actors. However, applying generics to an actor’s behaviors would not be straightforward, for the same reason that we do not apply generics to an interface’s methods: the compiler can no longer statically anticipate and generate the virtual interface’s type (e.g., vtable or message type).
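Structured concurrency's scoped-block idea can be sketched with Rust's `std::thread::scope` (stable since Rust 1.63); the split-and-sum workload here is invented. Every thread spawned inside the scope is guaranteed to be joined before the block returns:

```rust
use std::thread;

fn parallel_sum(left: &[i64], right: &[i64]) -> i64 {
    thread::scope(|s| {
        let l = s.spawn(|| left.iter().sum::<i64>());
        let r = s.spawn(|| right.iter().sum::<i64>());
        // Neither child thread can outlive this block.
        l.join().unwrap() + r.join().unwrap()
    })
}

fn main() {
    println!("{}", parallel_sum(&[1, 2], &[3, 4])); // 10
}
```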

Program/Service Modularity

This is typically the highest level of modularity we grapple with as programmers. With few exceptions, modularity support here is rarely baked into our programming languages. Perhaps that will change in the future.

In the absence of any broad consensus, or even a nascent vision, on the best collection of modularity features at this level, I am going to break my pattern and not break down features by isolation, substitution, and generativity. Instead, I will offer a few interesting examples of modularity designs at the program/service level:

  • Unix/Bash. One of the reasons Bash is so popular is because of how richly, yet simply, modular it is. It relies on how nicely several substitution interfaces fit together: Unix’s notion of a stream of characters (irrespective of how that stream is implemented), the argc/argv interface for running a program and passing its parametric arguments as strings, and Bash’s ability to pipe the output stream of one program’s execution as the input stream to the next program’s execution. There are so many utility programs we can easily chain together using Bash, to accomplish so many tasks. Given the simplicity of the interfaces, the power of this modularity still delights.

  • SQL, etc. Here again, we see a few interfaces that allow us to manipulate table-structured data in powerful ways. SQL, and so many of the big data alternatives it spawned, remain powerful because of the versatility of these interfaces.

  • Windows UI. Standards baked into Windows make it easier to create ad hoc connections between arbitrary programs. The most notable feature is the ability to copy data from one program and then paste it into another.

  • Http. After several failed attempts (e.g., Corba and other variants of RPC), communication between services has largely coalesced around use of HTTP as the interface standard. The payload format is still in flux however: first HTML, then XML/XHTML, then JSON, and lately Protobuf (or its variants). The challenge is to find a payload format that is fast, simple and versatile. That’s not an easy challenge.

  • Source -> Deployment. Here too we have seen a fascinating evolution of interlocking standards that began with the emergence of git (distributed version control), then GitHub (a VCS hosting service), then continuous integration/testing/build, and finally continuous deployment via Docker containers and Kubernetes. None of these are simple interfaces, but the way they work together dramatically improves the agility and quality of constantly-evolving software.

Some of these trends may end up surfacing as important modularity features that a programming language (or its ecosystem) should support. For example:

  • Baked-in data serialization and deserialization capabilities, similar to what Protobuf and others enable externally.

  • Build configuration programs (e.g., CMake or Cargo’s TOML manifests) that automate complex builds, including package retrieval, semver constraints, incremental, dependency-ordered compilation, etc.


Doing the research and organizing my thoughts for writing this post really deepened my understanding of what we are trying to accomplish with modularity, and how we design programming features to facilitate those goals. There is an underlying symmetry to these principles, allowing us to carry forward good design patterns from the past (structured programming) into desired modular patterns for inheritance, modules, threads, and (maybe someday) services.

Seeing how modularity recapitulates across PL features makes it easier for me to look for ways to make the modularity features in Cone more modular. For example, I can more clearly see now how to make modularity for types, modules and threads look the same, including how to support name-folding (delegated inheritance) the same way for types and modules.

The final takeaway for me was to see how central modularity is to good programming language design. Enabling modularity is right up at the top, along with the more talked-about topics of expressiveness (how easily we can express all the power we need) and powerful type systems (where constraint enforcement catches bugs earlier and improves performance).

A language that encourages modularity will undoubtedly be more complicated (e.g., Go’s unfolding agony over generics), but in the end it does programmers a favor. Modularity improves reuse and versatility, and it makes programs easier to reason over and debug, ultimately improving code quality and programmer productivity.

Ultimately, we want

About Jonathan Goodwin
3D web evangelist. Author of the Cone & Acorn programming languages.