I recorded a podcast episode this week that I’ve been turning over in my head for months. It’s about where Sourceful is right now, what we’ve built, and why I think the next few months matter more than anything we’ve done so far. If you have the time, go listen. But if you want the written version, here it is.
The clean problem and the dirty one
Most energy tech companies pick a clean problem. They build monitoring dashboards. Analytics platforms. Pretty graphs sitting on top of someone else’s data. The physical world — the actual devices, the actual protocols, the actual hardware that lies about its own state — that’s always conveniently left to someone else.
We went the other way. Sourceful has always been about the physical layer. Talking to actual devices. Sending actual commands. Getting actual confirmations that hardware did what it was told to do.
This is an ugly problem. Every manufacturer implements protocols differently. Firmware updates silently change behavior. Documentation doesn’t match reality. The same device model bought two years apart can have different register maps. Nothing is clean. Nothing is standardized. Everything is edge cases stacked on top of edge cases.
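To make that concrete, here is a sketch of what "same model, different register map" looks like in driver code. The device name, registers, and firmware cutoff below are invented for illustration, and the protocol client is a duck-typed stand-in:

```python
# Hypothetical example: one inverter model, two firmware generations, two
# different Modbus register layouts for the same physical quantity.
REGISTER_MAPS = {
    # Units shipped before a silent firmware change
    ("ACME-5000", "old"): {"active_power": 40069, "scale": 1},
    # Units shipped after it: same model number, shifted register, new scaling
    ("ACME-5000", "new"): {"active_power": 40083, "scale": 10},
}

def read_active_power_w(client, model: str, firmware: tuple[int, int]) -> int:
    """Select the register map by model AND firmware, never by model alone."""
    generation = "old" if firmware <= (2, 1) else "new"
    regmap = REGISTER_MAPS[(model, generation)]
    raw = client.read_holding_register(regmap["active_power"])
    return raw * regmap["scale"]
```

Multiply that by every manufacturer, every protocol, every firmware release, and you have the actual shape of the problem.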
We chose this on purpose.
What the grid actually needs
There’s a fundamental misunderstanding in energy tech about what “smart grid” means. Most people hear it and think monitoring. Data collection. Visibility.
That’s table stakes. Visibility is not a product. The product is control.
A utility needs to curtail 40 MW of solar generation in under a second. An aggregator needs to simultaneously discharge two thousand home batteries to meet a frequency regulation bid. A grid operator needs to shift ten thousand heat pumps off-peak — right now — and know with certainty that it happened.
Deterministic, real-time control of physical hardware. That’s what the flexibility markets pay for. That’s what grid operators need. That’s the product.
And that product requires solving the integration problem at the device level. There’s no shortcut. There’s no abstraction layer that makes the physical world go away. Someone has to write the driver that talks to each specific device, handles its specific quirks, and confirms that commands were executed correctly.
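Here is roughly what that driver contract looks like. The interface and names are illustrative, not our actual API, but the key move is real: a command is not done until the device has been read back and confirmed.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class CommandResult:
    requested_w: int   # what we asked the device to do
    confirmed_w: int   # what the device reports it is actually doing
    verified: bool     # read-back matched the request within tolerance

class DeviceDriver(ABC):
    """One driver per device family: speaks its protocol, owns its quirks."""

    @abstractmethod
    def set_active_power(self, watts: int) -> None: ...

    @abstractmethod
    def read_active_power(self) -> int: ...

    def curtail(self, watts: int, tolerance_w: int = 50) -> CommandResult:
        """Issue a setpoint, then verify it by reading the device back.
        A write without a read-back is a hope, not a confirmation."""
        self.set_active_power(watts)
        actual = self.read_active_power()
        return CommandResult(watts, actual, abs(actual - watts) <= tolerance_w)
```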
Why cloud coordination is a dead end
I keep seeing well-funded startups building cloud-first coordination platforms. Some of them are smart. Most of them will fail. Not because the teams are bad — because the architecture is wrong.
Cloud API round-trips take two to five seconds. Grid frequency has to be balanced every second. This isn't a performance gap you can close with better infrastructure. It's a physics problem. If your control logic lives in someone's data center, you are structurally unable to participate in the energy markets that pay the most: frequency containment reserves, fast demand response, sub-second optimization.
These markets require local execution. The coordination logic has to run on the same network as the device. At the edge. In the building. Next to the hardware.
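To see why, look at the shape of the control loop itself. This sketch assumes hypothetical meter and battery driver interfaces, but the timing argument is the point: a single cloud round-trip already blows the entire budget.

```python
import time

CONTROL_PERIOD_S = 1.0   # fast frequency response has to act within ~1 second
CLOUD_RTT_S = 3.0        # typical cloud API round-trip: two to five seconds

def local_control_loop(meter, battery, nominal_hz=50.0, droop_w_per_hz=20_000):
    """Runs on the gateway, on the same network as the hardware.
    Measure, decide, actuate: all inside one control period, no cloud hop.
    `meter` and `battery` are duck-typed stand-ins for real device drivers."""
    while True:
        started = time.monotonic()
        deviation_hz = meter.read_frequency_hz() - nominal_hz
        # Proportional (droop) response. Convention: positive watts = inject
        # to the grid, so under-frequency (negative deviation) -> discharge.
        battery.set_active_power(int(-deviation_hz * droop_w_per_hz))
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, CONTROL_PERIOD_S - elapsed))

# Route the same loop through a cloud API and the budget is gone before you
# start: one round-trip (CLOUD_RTT_S) already exceeds CONTROL_PERIOD_S.
```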
You cannot retrofit local execution onto a cloud architecture. The entire system has to be designed for it from the ground up. Most teams don’t realize this until they’ve already built the wrong thing and hit the ceiling.
We started local. That decision — made before it was fashionable, before “edge computing” was a buzzword in energy — is now an architectural advantage that can’t be replicated without starting over.
What we’ve been doing for the last six months
I want to be honest about the journey, because the honest version is more instructive than the polished one.
We didn’t start from first principles. We started the way startups start — shipping fast, accumulating debt, building what worked rather than what was architecturally right. That’s fine for finding product-market fit. It’s not fine for building infrastructure the grid depends on.
The last six months have been about stripping back to first principles. Rearchitecting the entire backend. Killing legacy code paths. Rebuilding the foundation from the physics up. We shipped NovaCore — not a feature, a new foundation — in the last two months. The identity system. The control pipeline. The telemetry infrastructure. All redesigned around how the grid actually works.
That work is done. The foundation is solid. And it earned us the right to tackle the hard part properly.
AI-first integration: the 10x and 10x
Here’s the thing that changes everything for us.
We’ve built an AI-first integration engine — we call it Hugin — that can point at an unknown energy device, scan it, figure out how it communicates, cross-reference our existing driver library, and produce a working, tested driver. In minutes.
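I'm not going to publish Hugin's internals here, but the shape of the pipeline is roughly this. Every name below is illustrative, and the real thing is considerably messier:

```python
from dataclasses import dataclass

@dataclass
class TestReport:
    passed: bool
    notes: str

def integrate_unknown_device(device, driver_library, knowledge_base):
    """One AI-first integration pass over an unknown device.
    All three arguments are duck-typed stand-ins for the real components."""
    # 1. Scan: which ports answer, which protocols respond (Modbus? SunSpec?
    #    a vendor HTTP API?), what the device claims to be.
    fingerprint = device.probe()

    # 2. Cross-reference: have we integrated this manufacturer or protocol
    #    family before? Prior drivers seed the generation step.
    candidates = driver_library.find_similar(fingerprint)

    # 3. Generate: draft a driver from the fingerprint plus everything the
    #    knowledge base has learned from every previous integration.
    draft = knowledge_base.generate_driver(fingerprint, candidates)

    # 4. Verify against live hardware: read back every mapped value and
    #    round-trip a harmless setpoint before anything ships.
    report: TestReport = device.run_hardware_tests(draft)

    # 5. Feed the result back: pass or fail, the attempt is training data.
    knowledge_base.record(fingerprint, draft, report)
    return draft if report.passed else None
```

Step 5 is the one that matters most. It's why the process gets faster with every device instead of staying flat.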
The industry standard is weeks per device brand. One engineer, one brand, reverse engineering and manual testing. We’ve lived this. We have about thirteen OEM integrations today. The market has hundreds of brands. At the old pace, nobody wins.
With AI-driven integration, the first device is roughly 10x faster than manual development. By the fifteenth integration, the AI has seen enough devices from the same manufacturers to recognize patterns — it already understands 80% of a new device before it starts. By the fiftieth integration, it’s 10x faster again. That’s 100x the pace of anyone still hand-coding drivers.
And the curve doesn’t flatten. It steepens. Every driver teaches the AI about the next one. Every edge case gets encoded into the knowledge base. The messy physical world that everyone else avoids is literally the training data that makes our system smarter.
A new partner needs fifteen device brands supported? Today that’s months. With this platform at scale, it’s a day.
The people with skin in the game
Here’s what most platforms get wrong about scaling integration. They treat it as an internal engineering problem. Hire more developers. Grind through the backlog. Even with AI, that’s still one company trying to cover an entire industry.
What scales is when the people who need an integration the most are the ones building it.
Who cares whether a specific inverter model works with Sourceful? The person who owns that inverter. The homeowner with the device in their garage. The electrician who installs that brand every week. These people have the hardware physically in front of them. They can plug in and run a probe.
With AI-first tooling, they don’t need to be protocol engineers. They point the tool at their device, supervise the process, review the output, and submit a driver. That driver gets reviewed, signed, and distributed. Every Sourceful gateway in the world that encounters the same device model now has a working integration.
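The trust chain here has to be boring and correct. As a sketch of what a gateway checks before a community driver ever executes, here's the idea using Ed25519 signatures via Python's `cryptography` library. The scheme shown is illustrative, not necessarily what we ship:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In production the private key lives with the review pipeline and only the
# public half ships baked into the gateway image. For this sketch, make both.
review_key = Ed25519PrivateKey.generate()
GATEWAY_TRUSTED_KEY = review_key.public_key()

def sign_reviewed_driver(driver_bytes: bytes) -> bytes:
    """Runs after human review: a detached signature for distribution."""
    return review_key.sign(driver_bytes)

def gateway_should_load(driver_bytes: bytes, signature: bytes) -> bool:
    """Every gateway verifies the signature before a driver executes."""
    try:
        GATEWAY_TRUSTED_KEY.verify(signature, driver_bytes)
        return True
    except InvalidSignature:
        return False
```

Unsigned code never runs. A tampered driver fails verification on every gateway it touches.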
One person solved their own problem. The entire network got smarter.
The driver library grows faster than any internal team could build it. Coverage expands into regional brands and legacy models that no company would prioritize on a product roadmap. And we have the infrastructure to reward contributors when the time is right — we know who contributed what, how widely it’s used, and what revenue flows through it. We’re not building the incentive model today. But the pipes are ready.
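What "the pipes are ready" means in practice is unglamorous bookkeeping: roughly this kind of record, with illustrative fields, kept from day one.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DriverContribution:
    """One record per accepted community driver, kept from day one so an
    incentive model can be switched on later without rewriting history."""
    driver_id: str            # e.g. content hash of the signed driver bundle
    contributor_id: str       # who submitted it
    device_model: str         # what it integrates
    accepted_at: datetime     # when it passed review and was signed
    active_gateways: int = 0  # how widely it's deployed, from telemetry
    dispatch_count: int = 0   # how often it's used in real control events
```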
Four moats that compound each other
I think most people in energy tech define moats wrong. A moat isn’t a feature or a patent or being first. A moat is something that gets stronger the more you use it, and that competitors can’t replicate without going through the same painful process.
We have four, and they reinforce each other.
The driver library. Every validated, production-tested driver represents real-world knowledge about how a device actually behaves — not how the documentation says it behaves. You can’t generate this from a spec sheet. You get it by connecting to real hardware in real installations. Every driver is a brick in a wall that competitors have to build from scratch.
The AI knowledge base. Every integration teaches the system about the next one. The compounding curve is structural, not aspirational. A competitor starting today begins at integration one. We’ll be at fifty by the time they’ve set up their development environment. Their integration one will be a hundred times slower than our integration fifty.
Local execution. Building local-first coordination is architecturally harder than building cloud. Most teams default to cloud because it’s easier. Then they hit the physics ceiling and discover there’s no shortcut past it. We started local. That decision is now an advantage that requires a full rebuild to replicate.
The network. Every deployed gateway is a node. Every node makes the platform more valuable to every other node. Utilities want one platform covering the most devices in the most locations. Aggregators want the largest pool of controllable assets behind a single interface. More gateways make us more attractive. More attractiveness means more gateways. Network effect, rooted in physical hardware on real grid connections.
These four moats compound each other. More drivers feed a smarter AI. A smarter AI drives faster integration. Faster integration deploys more gateways. More gateways attract more contributors. More contributors produce more drivers. The flywheel accelerates with every turn.
Why execution is the only thing that matters
The ideas in this post are derivable. Smart people will arrive at similar conclusions about local execution and AI-driven integration. Some probably already have.
What they can’t derive is the driver library we’re building right now. The knowledge base we’re training right now. The gateways we’re deploying right now. These are assets that can only be created by shipping. Not by planning. Not by raising capital. By shipping.
A head start in a compounding system doesn’t stay the same size. It grows. Every week of execution adds drivers, edge cases, knowledge, and deployed nodes. A competitor starting today isn’t six months behind us — they’re six months behind plus every driver, every edge case, every deployment, and every community contribution we’ve shipped in those six months.
The gap widens. It doesn’t close.
Where we are right now
We spent six months earning the right to do this properly. The backend is rearchitected. The legacy is dead. The foundation is rebuilt from first principles.
Now we’re building the integration layer on top of it. The gateway software. The AI engine. The driver distribution system. Every existing driver across all our hardware is being rebuilt on the new unified architecture. No hybrid approaches. No legacy paths alongside new ones. Clean break. One platform.
When this ships, we’ll have something no one in the energy industry has ever had. The ability to point an AI at any energy device, produce a working integration in minutes, deploy it instantly, and have that integration compound the intelligence of every future integration across the entire network.
The coordination layer the grid doesn’t have yet.
We’re building it.
If you’re working in energy, building hardware, or just have devices at home that you think should be doing more — I want to hear from you. Reply to this post or find me on LinkedIn.
And if you haven’t already, listen to the full episode: The Mess Is the Moat — Coordinated with Fredrik, Ep. 74
