Episode Summary
We tend to think about infrastructure in physical terms: wires, pylons, transformers, steel and copper. But modern systems—especially energy systems—are held together by something less visible and just as critical: the messaging layer.
In this episode, we trace the hidden history of how machines learned to talk to machines. From Wall Street trading floors to oil pipelines in the desert, from telecom switches in Sweden to rage-coded weekends in Silicon Valley, this is the story of frustration-driven innovation.
We explore:
* Why synchronous “telephone-style” software broke at scale
* How publish/subscribe became the software equivalent of a system bus
* Why RabbitMQ, Kafka, NATS, and MQTT exist—and what specific pain each one was born to solve
* The architectural tradeoffs between smart brokers and dumb pipes
* Why replayability, liveness, and reliability are fundamentally different goals
* How modern systems increasingly combine all of these tools
* And why the next architectural leap will come from today’s friction points
This episode isn’t about choosing the “best” messaging system. It’s about understanding why each one exists, and what happens when you use the wrong tool for the problem at hand.
Key Concepts
* Messaging as the nervous system of physical infrastructure
* Subject-based addressing and decoupling
* Smart broker vs. dumb broker architectures
* Append-only logs and replayability
* Control planes vs. data planes
* Edge constraints and low-power networks
* Friction as a signal for architectural evolution
Mentioned Systems & Ideas
* TIBCO and the original information bus
* AMQP and the open-standard rebellion
* RabbitMQ and Erlang’s “let it crash” philosophy
* Kafka and the log as the source of truth
* NATS and the “dial tone” model
* MQTT and constraint-driven protocol design
* ZeroMQ, Pulsar, Redis Streams (briefly)
The Invisible Grid
How Messaging Systems Became the Nervous System of Modern Infrastructure
Close your eyes for a moment. (Not if you’re driving—but mentally.)
When we talk about infrastructure, we picture the physical grid: copper wires, transformers humming in empty fields, pylons cutting across landscapes. It’s tangible. You can touch it. You can see it rust. You can watch a tree fall on it.
If that grid fails, everything stops.
But there is another grid—one we almost never visualize.
An invisible grid, running inside software. A nervous system made of messages.
And just like the physical grid, when this system clogs, desynchronizes, or collapses, the lights go out anyway—no matter how much copper is in the ground.
Modern energy systems, financial markets, cloud platforms, and industrial control loops don’t merely use software. They depend on it at the level of physics. Signals must arrive on time. Control decisions must propagate. State must remain coherent across thousands or millions of moving parts.
This post is about how we got here.
Not as a clean, planned evolution—but as a genealogy of frustration.
The Original Sin: The Telephone Call
Early software systems communicated the same way humans did: by calling each other directly.
Application A opens a connection to Application B, waits for it to respond, sends data, and blocks until it hears back. This is synchronous coupling—the software equivalent of a phone call.
It works fine for two systems.
It collapses at scale.
On a trading floor—or an energy grid—one event must fan out to many consumers: risk engines, dashboards, control systems, settlement layers. In the telephone model, the sender must call each one, sequentially.
If any receiver is slow or unavailable, everything backs up.
Latency accumulates. Failure cascades. In finance, you go bankrupt. In energy, you destabilize the grid.
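The cost of sequential fan-out is easy to make concrete: with synchronous calls, the sender's total latency is the sum of every receiver's latency, so one stalled consumer delays everyone queued behind it. A minimal sketch (numbers and names are illustrative, not from any real system):

```python
# Sketch: synchronous fan-out, where the sender "calls" each consumer in turn.
# Total latency is the SUM of per-consumer latencies, so one slow receiver
# delays every receiver queued behind it.

def synchronous_fanout(latencies_ms):
    """Return (total sender latency, per-consumer delivery times)."""
    clock = 0
    delivery_times = []
    for latency in latencies_ms:
        clock += latency          # sender blocks until this consumer responds
        delivery_times.append(clock)
    return clock, delivery_times

# Four healthy consumers and one slow one (say, a stalled risk engine).
total, deliveries = synchronous_fanout([5, 5, 500, 5, 5])
print(total)       # 520 ms for the sender to finish one event
print(deliveries)  # [5, 10, 510, 515, 520]: everyone after the slow one waits
```

One slow receiver turns a 25 ms broadcast into a 520 ms ordeal, and the sender can do nothing else in the meantime.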
This brittleness created the first great insight.
The Software Bus: Publish, Don’t Call
In the mid-1980s, an engineer looked at a computer motherboard and asked an uncomfortable question:
Why is software dumber than hardware?
A CPU doesn’t “call” the graphics card. It broadcasts onto a system bus. Whoever is listening picks up the signal. The sender doesn’t care who receives it—or if anyone does at all.
That idea became publish/subscribe.
Instead of sending data to addresses, you publish it to subjects. Instead of knowing who consumes it, you just agree on what it means.
This decoupling was revolutionary. It gave us the first real software nervous system.
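The bus idea fits in a few lines: publishers write to a subject, and whoever subscribed receives the message. The publisher never knows who is listening, or whether anyone is. A toy in-process sketch (not any particular product's API; subject names are invented):

```python
# Toy in-process publish/subscribe bus: publishers address *subjects*,
# not receivers, and neither know nor care who is subscribed.

from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)  # subject -> list of callbacks

    def subscribe(self, subject, callback):
        self.subscribers[subject].append(callback)

    def publish(self, subject, message):
        # Broadcast to everyone listening; zero listeners is fine too.
        for callback in self.subscribers[subject]:
            callback(message)

bus = Bus()
seen = []
bus.subscribe("grid.frequency", lambda msg: seen.append(("dashboard", msg)))
bus.subscribe("grid.frequency", lambda msg: seen.append(("control", msg)))
bus.publish("grid.frequency", 49.98)
bus.publish("grid.voltage", 230.1)   # no subscribers: silently dropped
print(seen)  # both frequency subscribers got the signal
```

Adding a tenth consumer costs the publisher nothing: it keeps publishing to the same subject, exactly as a CPU keeps writing to the same bus.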
And it worked—so well that it created the next problem.
When Middleware Ate the Budget
By the early 2000s, large enterprises had dozens of incompatible messaging systems. Each vendor had its own protocol, its own servers, its own licensing model.
Banks were spending absurd portions of their IT budgets not on business logic—but on plumbing.
The rebellion that followed wasn’t technical at first. It was economic.
Why don’t we have a TCP/IP for messaging?
That question led to open standards. And open standards led to open source.
RabbitMQ and the Power of “Let It Crash”
RabbitMQ emerged from a near-perfect alignment between problem and tool.
The problem: routing messages reliably, flexibly, transactionally. The tool: Erlang—a language built for telecom switches that cannot go down.
Erlang’s philosophy is radical: don’t prevent failure—contain it.
Instead of one giant program sharing memory (where one bug burns the house down), Erlang runs millions of tiny isolated processes. If one crashes, a supervisor instantly replaces it.
Failure becomes routine. Boring. Managed.
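The supervision pattern itself is language-agnostic. Erlang does it with millions of lightweight processes; here is a deliberately simplified Python sketch of the same idea, where a supervisor replaces a crashed worker with a fresh one instead of trying to patch its state (all names are hypothetical):

```python
# Sketch of the "let it crash" pattern: don't defend against failure inside
# the worker; let it die and have a supervisor start a clean replacement.

def supervise(make_worker, task, max_restarts=3):
    """Run task on a fresh worker; on a crash, replace the worker and retry."""
    restarts = 0
    while True:
        worker = make_worker()       # each attempt gets clean, isolated state
        try:
            return worker(task)
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise                # escalate to the next supervision level

attempts = []
def flaky_worker_factory():
    def worker(task):
        attempts.append(task)
        if len(attempts) < 3:        # crash on the first two tries
            raise RuntimeError("worker crashed")
        return f"done: {task}"
    return worker

print(supervise(flaky_worker_factory, "route message"))  # done: route message
print(len(attempts))                                     # 3 attempts in total
```

The crash handling lives entirely in the supervisor; the worker stays simple. That separation is what makes failure "routine, boring, managed."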
RabbitMQ embodies this mindset. It is a smart broker: it routes, retries, buffers, tracks acknowledgements, and guarantees delivery.
It is a post office.
And like all post offices, it has limits.
Kafka and the Log That Changed Everything
When LinkedIn tried to track everything, the post office model broke.
Too much sorting. Too much state. Too much overhead.
The breakthrough was deceptively simple: stop routing messages. Start recording history.
Kafka treats data as an append-only log—an immutable sequence of events. Producers write to the end. Consumers read at their own pace. The broker doesn’t track who’s done what.
This aligns perfectly with disk physics. Sequential writes are fast. Replays are free. History becomes an asset.
In this model:
* The log is the source of truth
* Databases are just materialized views
* You can replay the past with new intelligence
Kafka isn’t a post office. It’s a newsstand.
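The newsstand model can be sketched in a handful of lines: the broker only appends and serves reads, and each consumer keeps its own offset, which is what makes replay free. A toy sketch in the Kafka spirit (not Kafka's actual API; event strings are invented):

```python
# Toy append-only log: the broker appends and serves reads, nothing more.
# Consumers track their own offsets and can replay history from any point.

class Log:
    def __init__(self):
        self.entries = []               # immutable history, in write order

    def append(self, event):
        self.entries.append(event)
        return len(self.entries) - 1    # the event's offset

    def read(self, offset):
        return self.entries[offset:]    # consumers read at their own pace

log = Log()
for event in ["meter=1.2kW", "meter=1.4kW", "breaker=open"]:
    log.append(event)

# A new analytics job can start at offset 0 and replay the full history
# "with new intelligence"; a caught-up dashboard reads only the latest event.
print(log.read(0))   # all three events, in order
print(log.read(2))   # just the newest one
```

Note what the broker does *not* do: no per-consumer queues, no acknowledgement tracking, no routing tables. That absence of state is exactly what lets sequential disk writes carry the load.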
NATS and the Dial Tone
Then came cloud platforms, microservices, and another frustration.
Messaging systems had become pets—delicate, stateful, needy. But cloud infrastructure demands cattle—replaceable, disposable, boring.
NATS was born from that tension.
Its original design was ruthless:
* No persistence
* No buffering for slow consumers
* No guarantees beyond “best effort right now”
If you’re too slow, you’re dropped. If no one’s listening, the message vanishes.
This sounds dangerous—until you realize what it’s for.
Control planes. Heartbeats. Service discovery. Real-time signals where the latest state matters more than history.
NATS is not a database. It’s a dial tone.
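The dial-tone tradeoff is easiest to see in miniature: instead of buffering on behalf of a slow consumer, the server cuts that consumer off so it can never back up the rest of the bus. A toy sketch of that policy (not the NATS implementation; names and limits are invented):

```python
# Toy "dial tone" delivery: no persistence, only a small per-subscriber
# window. Slow consumers get disconnected rather than allowed to back up
# everyone else; messages with no listeners simply vanish.

from collections import deque

class Subscriber:
    def __init__(self, name, max_pending=8):
        self.name = name
        self.pending = deque()
        self.max_pending = max_pending
        self.connected = True

    def deliver(self, msg):
        if not self.connected:
            return                       # already cut off; no catch-up later
        if len(self.pending) >= self.max_pending:
            self.connected = False       # slow consumer: dropped, not buffered
        else:
            self.pending.append(msg)

subs = [Subscriber("control"), Subscriber("laggard", max_pending=1)]
for heartbeat in ["hb-1", "hb-2", "hb-3"]:
    for sub in subs:
        sub.deliver(heartbeat)

print([(s.name, s.connected, list(s.pending)) for s in subs])
# control stays connected with all three heartbeats; laggard is dropped
```

For heartbeats and service discovery this is the right call: a heartbeat from thirty seconds ago is worthless, so protecting liveness beats preserving history.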
MQTT: Innovation Under Constraint
The most elegant designs often come from the harshest constraints.
MQTT was built for oil pipelines in the desert, running over satellite links so slow and expensive that saving two bytes mattered.
The result was a protocol stripped to its bones:
* Tiny headers
* Persistent low-power connections
* Explicit handling of unreliable networks
* A “last will and testament” for dead devices
Years later, the same properties made MQTT perfect for smartphones.
From oil rigs to billions of pockets.
Today, MQTT is the language of the edge.
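"Tiny headers" is not an exaggeration. In MQTT 3.1.1, a QoS 0 PUBLISH carries just one byte of packet type and flags, a variable-length remaining-length field, and a two-byte topic-length prefix. A sketch that encodes such a packet and counts the overhead (topic and payload values are invented):

```python
# Sketch of a minimal MQTT 3.1.1 PUBLISH packet at QoS 0. Beyond the topic
# string and payload themselves, the wire overhead is a handful of bytes:
# 1 byte type/flags + 1..4 bytes remaining-length + 2 bytes topic length.

def encode_publish(topic: str, payload: bytes) -> bytes:
    topic_bytes = topic.encode("utf-8")
    variable = len(topic_bytes).to_bytes(2, "big") + topic_bytes + payload

    # Remaining length is a variable-length integer: 7 bits per byte,
    # high bit set when another byte follows.
    remaining, length = b"", len(variable)
    while True:
        byte, length = length % 128, length // 128
        remaining += bytes([byte | (0x80 if length else 0)])
        if not length:
            break

    return bytes([0x30]) + remaining + variable   # 0x30 = PUBLISH, QoS 0

packet = encode_publish("pipeline/p7/pressure", b"82.4")
overhead = len(packet) - len("pipeline/p7/pressure") - len(b"82.4")
print(len(packet), overhead)   # 28-byte packet, only 4 bytes of overhead
```

Four bytes of framing per reading is what made the protocol viable over metered satellite links, and later gentle on phone batteries and radios.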
Synthesis: No Winners, Only Tradeoffs
There is no perfect messaging system.
Each of these tools exists because an engineer hit a wall:
* Too slow
* Too heavy
* Too expensive
* Too fragile
They encode those frustrations into architecture.
That’s the real lesson.
Modern systems don’t pick one. They compose:
* MQTT at the edge
* Kafka for history and analytics
* NATS for control and coordination
* RabbitMQ for transactional work
Different pipes for different fluids.
The Real Question
Every major evolution in messaging came from irritation.
A system that made engineers sigh. A component everyone dreaded touching. A piece of infrastructure that fought back.
So here’s the closing thought:
Where is that friction in your system today?
That’s not technical debt. That’s a signal.
The next nervous system will be built there.