Command Sourcing
Command Sourcing (CS) is an unconventional approach to application development. It rejects the idea of storing state in a database, gaining both developer productivity and run-time performance. If your application data fits in memory, CS might be for you.
We often use patterns where moving data to and from a database makes up a significant part of the development effort, and managing database schema versions takes a toll on operations.
It doesn’t have to be that way, and indeed there are many solutions intended to automate these tasks, but do so while adding quite a bit of complexity.
CS stands out as an architecture that actually reduces complexity. It requires, however, that all application state fits in memory. With today’s memory prices, that is very often feasible. It didn’t use to be, which might be one reason this style of application programming is often overlooked.
For the sake of simplicity I’ll begin with a slightly opinionated guide to how a CS application functions, biased towards dynamic programming languages and functional style. After that I’ll go through the up- and downsides with a bit more nuance.
The pattern
Think of an application as a service. A service typically
- Receives commands, i.e. requests to change its state or take some other action.
- Receives queries, i.e. requests to provide information.
- Takes responsibility for managing its own state.
In a vanilla application, a conventional database is used to persist the application state. When a command arrives, the application operates on the database, changing it to a new persisted state. When the change has occurred, both the command that caused the change and the previous application state are lost by default. Let’s visualize it like this:
By contrast, an application using CS persists all commands as they arrive. The application logic operates on the state that is kept in memory, changing it to a new state without persisting it. While the states are not persisted, every state can be recreated by replaying the persisted commands up to that point. Let’s visualize it like this:
This is the simple idea at the foundation of CS.
Similar architectures have been around for a while. Here’s Mr Fowler talking about one such architecture back in 2011:
The key element to a memory image is using event sourcing, which essentially means that every change to the application’s state is captured in an event which is logged into a persistent store. Furthermore it means that you can rebuild the full application state by replaying these events.
Event Sourcing is similar to, but distinct from Command Sourcing. With CS you persist the commands, the actual input to the application. With Event Sourcing you persist the events, which are closer to the output, describing a change to be applied to the state. Think of them as cause vs effect.
Structure and example
Let’s get right into it. We’ll show a minimal example CS application in Node.js JavaScript, one part at a time.
Preamble
Let’s use Express for this example.
The projectors and reactors variables will be used later to bind the different parts together.
const fs = require("fs");
const express = require("express");
const app = express();
let projectors = [];
let reactors = [];
Persistence interface
The persistence interface is used to persist commands and read them back. This typically involves (de)serialization and talking to a database or maybe a file system.
Here, we’ll use a simple text file storing one command per line. Reading is done synchronously, so that reprocessing at startup completes before the server starts accepting requests.
function persist(command) {
  fs.appendFileSync("./command-store.txt", JSON.stringify(command) + "\n");
}
function map_persisted_commands(fun) {
  // Tolerate a missing command store on the very first start.
  if (!fs.existsSync("./command-store.txt")) return;
  for (const line of fs.readFileSync("./command-store.txt", "utf8").split("\n")) {
    if (line.trim() !== "") fun(JSON.parse(line));
  }
}
Projectors
A projector keeps track of a model, i.e. an in-memory representation of some aspect of the current application state. When a command arrives, the projector uses it and the current model to produce the next version of the model.
Here, we’ll make just one projector, which keeps track of a dictionary that can be altered by replacing or appending entries.
let model1 = {};
function projector1(command) {
  switch (command.type) {
    case "set":
      model1[command.key] = command.value;
      break;
    case "add":
      model1[command.key] += command.value;
      break;
  }
}
projectors.push(projector1);
Reactors
A reactor communicates with external services or has other side effects. This may in turn produce commands.
Here, we’ll just print to the terminal.
function reactor1(command) {
  console.log("Reacting to command", command);
}
reactors.push(reactor1);
Command interface
The command interface receives incoming commands, persists them using the persistence interface, and sends them to the projectors and reactors. Importantly, the command is persisted before the effects take place. This is called Write-ahead logging (WAL).
app.get("/command", function (req, res) {
  let command = req.query;
  persist(command);
  for (let receiver of [...projectors, ...reactors]) {
    receiver(command);
  }
  res.send("ok\n");
});
Query interface
The query interface receives incoming queries, runs them against the current in-memory models and delivers the result to the client.
app.get("/query", function (req, res) {
  let query = req.query;
  let key = query.key;
  res.send(model1[key] + "\n");
});
Reprocessor
The reprocessor is activated when the application starts. It uses the persistence interface to read all persisted commands and send them to the projectors, but not to the reactors.
function reprocess() {
  map_persisted_commands(function (command) {
    for (let receiver of projectors) {
      receiver(command);
    }
  });
}
Application main
At startup, the application runs the reprocessor. When the reprocessor is finished, the command and query interfaces become available.
function main() {
  reprocess();
  let server = app.listen(8080, function () {
    let port = server.address().port;
    console.log("Command Sourcing example listening at port %s", port);
  });
}
main();
And that’s it.
Running the example
Let’s try it out using two terminals. In terminal 1, install dependencies and then start the server:
$ npm i express line-reader
$ node mip-example.js
Command Sourcing example listening at port 8080
In terminal 2, let’s interact with it:
$ curl 'http://localhost:8080/command?type=set&key=abc&value=def'
ok
$ curl 'http://localhost:8080/command?type=add&key=abc&value=ghi'
ok
$ curl 'http://localhost:8080/query?key=abc'
defghi
We see the correct reply to the query in terminal 2, so we know the commands were processed correctly. In terminal 1, we see the side effects of the reactor:
Reacting to command { type: 'set', key: 'abc', value: 'def' }
Reacting to command { type: 'add', key: 'abc', value: 'ghi' }
Now, let’s terminate and restart the server in terminal 1:
^C
$ node mip-example.js
Command Sourcing example listening at port 8080
We see no side effects of the reactors. Yet, we can determine that the state was recreated by trying the same query in terminal 2:
$ curl 'http://localhost:8080/query?key=abc'
defghi
In command-store.txt you’ll find the command log, the persisted source of truth:
$ cat command-store.txt
{"type":"set","key":"abc","value":"def"}
{"type":"add","key":"abc","value":"ghi"}
So, there you have it: A minimal Command Sourcing example.
TL;DR
So, what are the up- and downsides of Command Sourcing, and should you use it in your upcoming project? That’s what the rest of this article is trying to answer. It got kind of long, so here’s a TL;DR:
- CS improves developer productivity by
- decoupling persistence logic from application logic
- decoupling persisted data/schema from application state/structure
- decoupling the past from the present
- CS improves run-time performance by reducing the need for disk access
- CS introduces the following requirements/caveats/downsides:
- Commands need to be reified, i.e. made first-class citizens.
- Commands need to be deterministic.
- Horizontal scaling could be limited at some point, in some aspect.
- You need enough memory to hold all application state.
- You might lose the power of SQL queries.
- Storing a complete command log can affect storage needs.
- Purging data becomes more complicated.
- CS works well with functional programming. Imperative programs with CS might run faster, but they also introduce a couple of downsides.
How does Command Sourcing improve developer productivity?
Many concepts in programming can be thought of as a particular way to decouple or separate concerns in order to help humans deal with complexity.
Since CS is all about boosting productivity by reducing complexity, I’ll describe the advantages of CS as a number of such decouplings.
Decouple how to persist from how to process
CS decouples persistence logic from application logic. In other words, it allows you to keep the code that stores data on disk separate and untangled from the application code that processes data.
In vanilla applications, it is common to describe the structure of application data at least twice: in the classes or other data types for the application logic, and in the database schema. Then, care must be taken to correctly map between these two separate representations of the same data. Much has been written about the related problems. (See for example c2 and codinghorror.) This is a significant burden on developers, and CS solves it.
CS allows a separation of concerns that otherwise would be difficult to attain. Persistence happens before the application logic rather than during it, and it can be completely separated from application-specific concerns. It can be performed by a library, maybe even without a schema.

To drive this point home: Even if in practice you might want to validate commands before they are persisted, this architecture holds up even without it. An “invalid” command technically doesn’t have to be anything more than a command that causes the application logic to make no state change. The point is that persistence can be performed with no regard to application logic.

With CS, you can add an operation to an application simply by specifying how that operation changes the state of the application model. That’s it. No database schema changes, no code to translate the operation to database manipulation, and nothing else related to a database. Persistence is no longer a cross-cutting concern.
Decouple what to persist from what to process
There is no reason why a state change must capture all information in the command that caused it, and it often doesn’t. In a vanilla application, what command information is kept and what is lost depends on how the current particular version of the application handles that particular command type. Also, future commands can change the state so that information coming from past commands is lost. In short, the command log persisted in a CS application will likely include a lot of information not deducible from the application state. With CS, you have access to historical data and meta-data by default. History is already part of the persisted data model – indeed, it constitutes the persisted data model. You may use this history to create new ways of looking at your data.
If for example there is a command type where you do not add a record to the database about who made the change or when, then a vanilla application would lose that information. Extending the application to keep track of this cannot recover such data for the past. With CS, history is kept by default even if all of it isn’t used. You can change the application logic to produce a state that includes new kinds of detailed or aggregated data. When you reprocess the command log using the new application version, you end up with an application state that looks like you collected that kind of data from the start – which of course you did. In order to benefit from this effect, capture commands early in order to make sure that they describe as exactly as possible the user’s (or external system’s) intention, not any implementation details of what the effect should be.
For example, the elements in a list [A, B] could be reversed in a GUI either by moving A after B or moving B before A. Distinguish between those two cases, even though the effect might be the same – for now. Maybe a future version of the application will present the list [A, X, B] in the same or another context, in which case moving A after B and moving B before A are no longer equivalent operations.
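To make this concrete, here is a sketch of a projector added in a hypothetical later version of the minimal example. It counts writes per key – data no earlier version collected – yet after reprocessing the existing command log, the counts cover all of history. The command types are taken from the minimal example; everything else is made up for illustration.

```javascript
// Hypothetical projector added in a later application version: it counts
// how many times each key has been written.
let writeCounts = {};
function countingProjector(command) {
  if (command.type === "set" || command.type === "add") {
    writeCounts[command.key] = (writeCounts[command.key] || 0) + 1;
  }
}

// Reprocessing the command log from the running example yields counts
// that look like they were collected from the start:
const log = [
  { type: "set", key: "abc", value: "def" },
  { type: "add", key: "abc", value: "ghi" },
];
log.forEach(countingProjector);
console.log(writeCounts); // { abc: 2 }
```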
Of course, there is nothing keeping you from making a vanilla application keep data that isn’t used, but it doesn’t tend to happen. As a developer, your thought process will often depend on the architecture. If the architecture is such that you ask yourself “Alright, what data do I need in order to do X?”, then the question of what data to keep is tightly coupled to the question of what data to use. CS means the command log is the one source of truth and the only thing to be persisted, so when you add functionality, it becomes a lot easier to be in the habit of first asking, “What kinds of commands do I need to allow in order to do X?”, then ask the separate question “How can I describe the intention of these commands in an exact manner?”. This is how the decision of what data to persist becomes at least somewhat decoupled from the decision of what data to process.
Decouple structure of persisted data from structure of application data
So you have commands coming in that are persisted by code that doesn’t really care much what they look like. The application code produces the current application state, with a structure that can look wildly different from the incoming commands.
Now, what happens when new needs come along and you want to change the structure of the application state? Since persistence is independent from the structure of application data, and since you have taken care to let the commands describe nothing more or less than their intention, chances are that you only have to change the application logic.
After restarting and reprocessing the stored commands, your new application version should function just as well as the previous, no schema migration necessary. You can even have different versions of the application running against the same command log, with incompatible application state schemas.
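As a sketch of what such a change might look like in the minimal example, here is a hypothetical “version 2” projector that keeps each key’s history as an array instead of a concatenated string. Replaying the same command log yields the new structure, with no migration involved.

```javascript
// Hypothetical revised projector: same commands, new state structure.
let model2 = {};
function projector2(command) {
  switch (command.type) {
    case "set":
      model2[command.key] = [command.value];
      break;
    case "add":
      model2[command.key].push(command.value);
      break;
  }
}

// Reprocessing the same two commands from the running example:
[
  { type: "set", key: "abc", value: "def" },
  { type: "add", key: "abc", value: "ghi" },
].forEach(projector2);
console.log(model2); // { abc: [ 'def', 'ghi' ] }
```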
Decouple the past from the present
Persisting the commands (the input for the application) and making sure the application is deterministic is a simple recipe for preserving a lot of information that is traditionally lost. This can be used in a number of ways:
- You get an audit trail for free. Make sure every command includes metadata such as who did it and when. Implemented at one place only, this will preserve audit trail information for all operations across the application. Whenever the need arises for an audit trail for some particular aspect of the application state, you can implement it and enjoy data already populated from the beginning of time.
- You gain the ability to recreate past states. Protecting against data loss due to user action is a responsibility your application can take over from your backup system – and perform with perfect granularity.
- You get some test cases for free since all your data can be reused as test cases.
- If you do a major refactoring you can then prove that the application would have acted the same in production up until now, simply by reprocessing production data with both application versions and comparing every state.
- If the application state is supposed to look different between two application versions, you might still be able to use production data as test cases by comparing them using a method other than equality.
- You can more easily automate testing. Perform the test manually through the user interface once, then use the saved commands together with a serialized application state for automated testing.
- You will have more information about how the application ended up in a certain state, which can be handy during debugging. You can query the application state for any point in time. If you need a buzzword, it’s time travel queries.
- If time travel queries are about the past then what-if queries are about possible futures. The application and persistence mechanism are so loosely coupled that you can completely separate them at will. You can run the application without persisting the commands, investigate the application state and then reset it. This is useful for testing; mocking the database becomes a non-issue.
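A time travel query can be as simple as replaying a prefix of the log into a fresh model. A sketch, reusing the projector logic from the minimal example (the helper name is made up):

```javascript
// Rebuild the state as it was after the first n commands.
function stateAfter(commands, n) {
  let model = {};
  for (const command of commands.slice(0, n)) {
    switch (command.type) {
      case "set":
        model[command.key] = command.value;
        break;
      case "add":
        model[command.key] += command.value;
        break;
    }
  }
  return model;
}

const log = [
  { type: "set", key: "abc", value: "def" },
  { type: "add", key: "abc", value: "ghi" },
];
console.log(stateAfter(log, 1)); // { abc: 'def' }
console.log(stateAfter(log, 2)); // { abc: 'defghi' }
```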
Also, since both commands and application states are immutable, caching becomes both simpler and more efficient.
How does Command Sourcing improve run-time performance?
Memory and performance are complex topics, and benchmarking and such is out of scope for this article. Still, there are a few things to be said about CS and performance.
Latency
Memory is several orders of magnitude faster than disk, so keeping your data in memory makes your application a lot faster. Already with traditional databases, it’s not unusual to run database servers with enough memory to keep the entire database in memory.
During normal operation after startup, the disk is used only for writing commands. This has the potential to be very performant because:
- No disk read operation is inherently necessary.
- With only a stream of commands to persist, data can be written consecutively which minimizes seeking.
- Most other write operations, such as index updates and transaction logging, can be eliminated entirely.
Since command persisting and command processing are independent operations, it’s also possible to run them in parallel. This might be worth doing if the processing is heavy. When a command arrives, begin both persisting and processing, and when both operations are done, send a confirmation to the client. This way, the total time from receiving a command to sending a confirmation can approach the time to write the input to disk, or the time to process the command in-memory, whichever is greatest.
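A sketch of this idea, with an in-memory array standing in for the asynchronous disk write (persistAsync, processCommand and handleCommand are all hypothetical names):

```javascript
let diskLog = [];
let model = {};

async function persistAsync(command) {
  diskLog.push(JSON.stringify(command)); // stand-in for an async disk append
}

function processCommand(command) {
  if (command.type === "set") model[command.key] = command.value;
}

async function handleCommand(command) {
  // Start persisting and processing at the same time; confirm to the
  // client only when both are done.
  await Promise.all([
    persistAsync(command),
    Promise.resolve(processCommand(command)),
  ]);
  return "ok";
}
```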
So, the odds are pretty good that your application will be quite a bit faster with CS than otherwise, in terms of latency.
One thing to consider is the input rate. CS processes one command at a time, thereby limiting parallelization. If commands arrive at a high enough rate to queue up, latency will suffer.
Throughput
On the upside, lower processing time means higher throughput on a single thread. On the downside, you will likely end up limiting command processing to one thread, which in turn limits throughput. However, if your input rate is high enough to cause this kind of concern, then CS might not be a good fit anyways due to other problems such as startup time and command log storage, discussed later.
Summary
Command Sourcing will probably improve performance if your application doesn’t have a very high input rate and throughput isn’t your constraint of greatest concern. Even if throughput is important, CS might be worth trying. I make no claim, though, that CS improves hardware utilization.
What are the caveats and subtleties?
Now that we understand the basic architecture and value proposition of Command Sourcing, let’s take a more nuanced look into some things to look out for.
Architecture buy-in
CS has some implications/requirements for your application code.
Since commands need to be persisted as such, they need to become first-class citizens in your application – a reification of service method invocations if you will. In statically typed languages you could end up with an entire class hierarchy to do this, but in a dynamic language it can be a one-liner like in our minimal example:
let command = req.query;
The idea of a replayable command log sounds nice, but in order to make it work for a service that communicates with other services, you need to distinguish between projectors and reactors. Projectors calculate the next state given the current state and a command, and they can have no side effects beyond that. Reactors only have side effects, such as calling out to other services. This distinction is necessary since the application will want to achieve the side effects during execution but not during reprocessing.
Operations involving calling out to external services become less trivial. Since they cannot be considered deterministic, they can’t be implemented as projectors. Rather, they will have to become reactors, which may or may not produce more commands that are fed back to the application.
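A sketch of such a reactor. The command types, the submitCommand callback and the constant price are all made up for illustration; imagine a real HTTP call where the constant is:

```javascript
// A reactor that performs a (non-deterministic) external lookup and feeds
// the result back into the application as a new command. During
// reprocessing the reactor is skipped; only the resulting "priceFetched"
// command is replayed, which keeps replay deterministic.
function priceReactor(command, submitCommand) {
  if (command.type === "fetchPrice") {
    const price = 42; // stand-in for a call to an external pricing service
    submitCommand({ type: "priceFetched", item: command.item, price: price });
  }
}
```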
Changing the command schema in backwards incompatible ways could become slightly more complicated since commands are persisted and considered immutable. You could version commands similarly to an API, or commit to complete backwards compatibility, or bite the bullet if you do end up having to change the schema.
Having said this, it’s probably easier to transition away from the CS architecture than into it, simply because the application state can be computed from the commands, but not the other way around.
Limited horizontal scaling
You need enough memory to hold all application state. Since sometime in the last decade, this should no longer be a problem for the majority of applications. As of writing, I can for example find dedicated servers with 1.95 TB of DDR4 ECC memory for 1735 USD/month. Ask yourself:
- Will your application ever outgrow that?
- If so, will it happen before a larger server is available?
- If so, think of the consistency boundaries (sometimes called aggregate boundaries) in your application and whether you could partition the application to run on several servers, each with their separate command log and state, while retaining consistency. For example, if your application serves SaaS customers in such a way that communication between customers through the application, if any, does not need strong consistency guarantees, then each customer can become a separate partition or tenant and run separately. Will you ever have a partition/tenant/customer that outgrows the largest server you can buy?
For the majority of applications, the answer is no to at least one of the questions above. For the remaining ones, CS is not recommended.
Price trade-off
CS is a trade-off. The increased need for memory comes with a cost. On the other hand, developer productivity and run-time performance improves. If your application handles large amounts of data, this trade-off might not be worth it. Apply CS selectively, to those services where it pays off. And note that it is an application/service architecture – not a system architecture.
Querying
SQL is good at querying data. You will probably write your application in a language that has long been used in tandem with SQL, and which may therefore have evolved weaker querying capabilities of its own. While the reason might be historical rather than technical, the issue is real.
If you need strong querying capabilities, you could for example:
- Add a query interface to your application models, such as LINQ.
- Use an in-memory database as a model.
- Structure your models to make them easier to query.
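As a sketch of the last option: a hypothetical extra projector that maintains an inverted index from values to keys, so the question “which keys were set to v?” becomes a direct lookup rather than a scan.

```javascript
// Hypothetical index model, kept up to date by its own projector.
let byValue = {};
function indexProjector(command) {
  if (command.type === "set") {
    (byValue[command.value] = byValue[command.value] || []).push(command.key);
  }
}

indexProjector({ type: "set", key: "abc", value: "def" });
console.log(byValue["def"]); // [ 'abc' ]
```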
Concurrency
With CS you linearize operations, i.e. you run them one after another. This makes the application much easier to reason about than if operations are running in parallel. No complicated locking or tracking frameworks are necessary. One downside to this approach is that it limits throughput, although modern hardware pushes this limitation ever higher.
Traditional RDBMSs have been developed, battle tested and refined over several decades, to address this trade-off. These systems employ various forms of automated reasoning in order to give the illusion of linearization, while actually running operations in parallel (MVCC). In practice, this illusion is not always as clean-cut as it may seem.
There are situations where linearization still isn’t ok, and in those situations you probably shouldn’t use CS. The point here is that it very commonly is ok, especially when you allow for application partitioning as discussed above.
Greg Young talks about this issue (albeit in the context of Event Sourcing):
Linearization is great… you can assume a global ordering of messages, which makes your life much, much simpler… 90% of systems you can probably linearize… There are [3] reasons that you would want to not linearize:
- Occasional connectivity…
- You want to favour availability… over consistency…
- Very, very high throughput

– Greg Young
Startup time
With just a command log persisted, the current application state is not immediately available at application startup. Instead, all commands have to be reprocessed to recreate the in-memory application state. This turns application startup from an operation that runs in constant time to one that depends on the size of the command log. This might actually work for longer than you think, but what do you do if this becomes an issue?
You use snapshots. Every once in a while, serialize the current application state and save it as a snapshot. Startup is then performed by loading the latest snapshot and reprocessing the commands after that point in time. Your startup time now depends on the size of the application state rather than the command log. Just note that when you change the application logic you might have to recompute snapshots.
Purging
The command log gives you a complete audit log by default. In other words, your application will remember everything forever by default. At the same time, you may have legal requirements to forget.
In a vanilla application, once the offending data is identified you can execute a transaction, manually or otherwise, to remove it and be done. While the method of purging data will necessarily depend on the application logic, it doesn’t have to become a major problem.
This is an area where CS becomes a bit of a burden. Operations that remove information from the application state no longer actually delete that data. You must change the command log, which really is working against the application architecture. The very idea of a command log is that it won’t change, and maybe you have caching layers or other infrastructure that build on that assumption and therefore need manual intervention.
If you are subject to legal purging requirements, try to identify what kind of purges you will be required to perform, and develop ways to deal with them. You could for example:
- Adjust consistency boundaries so that most kinds of purges are likely to happen outside of the CS application.
- Purge by changing the contents of a command without removing it, in a way that guarantees that reprocessing will still succeed.
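A sketch of the second option, using the command shape from the minimal example (keysToPurge and the placeholder value are made up):

```javascript
// Rewrite a stored command, keeping its structure so that reprocessing
// still succeeds, but with the sensitive value replaced.
function redactCommand(command, keysToPurge) {
  if (keysToPurge.includes(command.key)) {
    return { ...command, value: "[purged]" };
  }
  return command;
}

// A purge job would map this over the whole command log and rewrite it.
```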
Command log storage and system prevalence
You need storage space to persist the commands. Over time, this will only grow even if the application state doesn’t. If this turns out to be too much to store, it’s possible to transition the system into a sister architecture called “System Prevalence” (see explanation by Klaus Wuestefeld and on Wikipedia).
Essentially, System Prevalence is like CS except that every now and then you save a snapshot of the application state and truncate the commands before it. This is what it looks like:
This changes the trade-offs like so:
- The command log will take up less storage space.
- Startup time can improve, unless you already cache application state as discussed in “Startup time” above.
- You lose the benefits explained in “Decouple what to persist from what to process” above.
- You lose the benefits explained in “Decouple structure of persisted data from structure of application data” above.
- You lose some of the benefits explained in “Decouple the past from the present” above.
What does Command Sourcing mean for functional vs. imperative programming?
Take a look at figure 2 again:
CS is fundamentally functional. A state is a function of the last command and the previous state. It is also a function of all commands up to that point. So CS fits well with writing your application logic in functional style, i.e. with immutable data structures. I personally see this as a big win as I believe in the Functional Core - Imperative Shell pattern, and this is the view I’ve defaulted to when describing the up- and downsides. If you agree, you can skip the rest of this section.
Maybe you disagree and want to write imperative application logic. That’s fine. CS will still work, but the trade-offs change:
- Imperative programming can be slightly more performant than functional. That is something to keep in mind if the performance gains from keeping everything in memory aren’t enough for you.
- Consider how you would query your application state. When the application state is built with immutable data structures, it is likely to end up more relational in nature, with more collections and IDs and without cycles. On the other hand, when the state is built with mutable data structures that are operated on imperatively, it is likely to end up with more direct pointers and cycles.
- Traditional databases have extensive support for rolling back failed transactions. In CS, you need to choose a strategy for rollback. With immutable state, rollback is trivial, since the application state as it was before a transaction began can be kept as-is until the transaction’s success is confirmed. If you use mutable data structures, you might need STM or some other kind of transaction control.
- Since immutable data structures typically have no cycles, saving and restoring snapshots is a simple (de)serialization. Immutability also means that one application state can be snapshotted independently of the process executing transactions. With mutable data structures, snapshotting could get a bit trickier.
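To illustrate the rollback point above: with an immutable state, the projector returns a fresh object, so the previous state survives untouched until the transaction is confirmed. A sketch, using the “set” command type from the minimal example:

```javascript
// An immutable-style projector: returns a new state, never mutates.
function project(state, command) {
  if (command.type === "set") {
    return { ...state, [command.key]: command.value }; // old object untouched
  }
  return state;
}

let state = { a: 1 };
const candidate = project(state, { type: "set", key: "b", value: 2 });
// On persistence failure: keep the old `state`; nothing was mutated.
// On success: commit by swapping the reference.
state = candidate;
console.log(state); // { a: 1, b: 2 }
```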
In summary, CS and functional style application logic is a good match, but it’s still possible to use CS imperatively.
Summary
- Command Sourcing (CS) means you keep state in memory while persisting a log of commands. There is no need to persist state in a database since state can be recreated using the log.
- CS likely increases both developer productivity and run-time performance.
- CS has some implications for the structure of application logic and hardware requirements – it’s not suitable for all applications.
- Functional style programming fits better with CS, but isn’t necessary.
Discuss on Lobsters and Hacker News
Thanks to Robert Friberg for proof-reading and suggestions. A while ago I had the pleasure to attend one of Robert’s conference presentations on CS, which is what inspired me to write this article. He speaks from his experience with developing OrigoDB and then Memstate. (Disclosure: No affiliation apart from help with this article, and no implied endorsement of OrigoDB or Memstate.)
Thanks to Panagiotis Peikidis for proof-reading and suggestions.
Edit 2024-01-13: Originally published as “The Memory Image Pattern”. Thanks to Lobste.rs user hankstenberg for suggesting the better term “Command Sourcing”. Turns out this term already has seen some use. Search the web for “command sourcing vs event sourcing” for more resources.