Radio Infrastructure
A single tower or single carrier is a single point of failure. Distributed radio networks remain the most reliable way to keep a regional signal alive when something inevitably breaks.
The single-point-of-failure problem is older than radio. Telegraph operators in the 1860s already understood that one cable across one ocean was not a network; it was a single wire pretending to be one. The packet operators who built Southern Ontario’s 220 MHz backbone in the 1980s rediscovered the same lesson in their own context. The community broadcasters working today are quietly rediscovering it again, this time in the form of overdependence on a single cloud provider, a single carrier, or a single hilltop transmitter. The argument for distributed networks is not nostalgia. It is the same engineering reality every generation has to learn.
This piece makes the case in concrete terms: what a distributed radio network is, why centralised alternatives keep failing, and what the practical minimum looks like for a small Canadian broadcaster in 2026.
The word "distributed" gets used loosely. A distributed radio network, in the sense that matters here, has three properties. First, the function of the network is spread across multiple physical locations rather than concentrated in one. Second, the failure of any single location degrades the network gracefully rather than killing it. Third, the operational control is also distributed — meaning that more than one human being knows how to keep it running.
A modern community FM station with one transmitter site, one studio, one STL path and one staff engineer is not a distributed network. It is a centralised system with a long thin signal chain. If any one of those four elements fails, the station is off the air, sometimes for days. By contrast, the SOPRA backbone in its working years had a dozen nodes, three or four operators in active rotation, multiple ingress points and a reasonable expectation that any single failure would cause inconvenience but not silence. The cost difference between those two architectures is smaller than people assume. The reliability difference is enormous.
If distributed is so obviously better, why does almost every new broadcast deployment trend toward centralisation? The honest answer is that centralisation is easier to procure. A vendor can sell you one big box. A consultant can write one big proposal. A board can approve one big capital line. Distributed networks require coordinated work across multiple sites and multiple relationships, which is harder to put on a single purchase order.
The same dynamic plays out in online radio. It is much easier to host your stream on one cloud provider in one region than to set up an origin with two geographically separate failover endpoints behind a CDN. So most small stations don’t. And then a regional outage takes them off the air, and they discover what the engineering textbooks have been saying for fifty years. We sketch the basic alternative in local streaming infrastructure for small stations.
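The failover decision itself is simple, which is part of the argument: a health-checked DNS record in front of two origins is doing nothing more than the following. This is an illustrative sketch, not any provider's API; the origin names and URLs are invented, and the probe is injected as a callable so the logic stands on its own.

```python
# Sketch of the decision a health-checked DNS record makes on a
# station's behalf: serve the primary origin while it is healthy,
# fall back to the secondary when it is not.

def pick_origin(origins, probe):
    """Return the name of the first origin whose probe succeeds.

    `origins` is an ordered list of (name, url) pairs, primary first.
    `probe` is a callable url -> bool, injected so the selection
    logic can run without a live network.
    """
    for name, url in origins:
        if probe(url):
            return name
    return None  # total outage: nothing left to serve

# Two geographically separate origins (names are hypothetical).
ORIGINS = [
    ("origin-central", "https://central.example.net/stream"),
    ("origin-east", "https://east.example.net/stream"),
]
```

With the primary unreachable, `pick_origin` returns `"origin-east"` and listeners never notice; with both down, it returns `None` and the station at least knows it is a total outage rather than a mystery.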
None of these failures are exotic. All of them happen, on a long enough timeline, to every operation. The question is whether the operation has thought about them in advance.
The amateur packet networks of the late 1980s and 1990s did not have the budget to build redundant anything. What they had instead was an honest assessment of what would happen if a node failed, and a routing layer that would notice and adapt. NET/ROM nodes maintained tables of neighbours and known destinations. When a link dropped, the table updated. When a new node appeared, the table updated. The operator did not have to manually reconfigure anything. The network healed itself within the limits of its physical topology.
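The core of that self-healing behaviour fits in a few lines. The following is a minimal sketch of the NET/ROM idea, not the actual protocol implementation: a neighbour table that ages out nodes it has not heard from, so the view of the network adapts without an operator touching anything. The callsigns and timeout are illustrative.

```python
# Minimal neighbour table in the NET/ROM spirit: entries refresh when
# a node is heard, and stale entries age out on their own.

class NeighbourTable:
    def __init__(self, max_age):
        self.max_age = max_age   # seconds before an entry goes stale
        self.last_heard = {}     # callsign -> timestamp of last frame

    def heard(self, callsign, now):
        """A frame arrived from `callsign`; refresh its entry."""
        self.last_heard[callsign] = now

    def alive(self, now):
        """Drop stale neighbours; return the set still reachable."""
        self.last_heard = {c: t for c, t in self.last_heard.items()
                           if now - t <= self.max_age}
        return set(self.last_heard)
```

A node that goes silent simply disappears from the table after `max_age` seconds, and traffic routes around it; when it comes back on the air, its first frame restores the entry. No reconfiguration, no human in the loop.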
That self-healing behaviour is now standard in IP networks but still rare in audio distribution. Most STL chains are statically configured. Most stream origins do not have automatic failover. Most multi-site broadcasters cannot tell you, off the top of their head, what would happen if a specific path went down at 3 a.m. on a Saturday. The packet model would suggest building those answers into the network itself rather than relying on a human noticing.
For a small community station the minimum looks something like this. Two physical sites within the licensed coverage area, with the secondary running at lower power as a fill or as a hot standby. Two STL paths between studio and primary site, on different carriers, with automatic failover between them. A streaming origin in two cloud regions, with health-checked DNS in front. A site book at every location. A second engineer who knows the system well enough to log in and diagnose.
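The "what happens at 3 a.m." question from above can itself be answered in code rather than in someone's head. The sketch below assumes the minimum configuration just described; component names mirror that list but are otherwise invented, and the health inputs are passed in so the logic is checkable on its own.

```python
# Hedged sketch of a status check over the redundant minimum: given
# the up/down state of each component, report whether the station is
# still on air and whether any spare is currently carrying the load.

def station_status(health):
    """`health` maps a component name to True (up) or False (down)."""
    rf_ok = health["site-primary"] or health["site-secondary"]
    stl_ok = health["stl-path-a"] or health["stl-path-b"]
    stream_ok = health["origin-region-1"] or health["origin-region-2"]
    return {
        "on_air": rf_ok and stl_ok,       # RF signal still reaching listeners
        "streaming": stream_ok,           # online stream still served
        "degraded": not all(health.values()),  # a spare is in use somewhere
    }
```

Run on a schedule, a check like this turns "a path went down at 3 a.m." from a Monday-morning surprise into an alert that says the station is still on air but running without a spare.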
The capital cost of building this from scratch, in 2026 dollars, is real but not prohibitive. Most of the components are commodity. The expensive part is the time and relationships required to find the second site, and that cost is usually paid in volunteer hours rather than cash. The amateur tradition has always understood that volunteer hours are the limiting resource and has organised itself accordingly. The community broadcast world is slowly catching up.
Canada has specific reasons to care about this. The country is large. The population is concentrated in a few corridors but the broadcast service area for any meaningful national or regional system is sprawling. Single-site solutions fit Toronto and Montreal and almost nowhere else. Distributed networks — whether amateur, community broadcast, or emergency communications — are the only architecture that scales to the geography. The CRTC’s ongoing review of small-market broadcasting keeps surfacing the same point: the economics only work if the infrastructure is shared.
The corollary is that distributed thinking is a competitive advantage for any small Canadian operator who adopts it before their peers do. A station that can keep its signal up through an outage that takes the local commercial broadcaster off the air for two days will earn a generation of listener loyalty in one weekend. The investment to make that possible is not large. The investment to make it impossible — by going all-in on a single vendor or a single path — is exactly the same investment that most stations are already making by default.
Beyond the two-site minimum, the more interesting architectural conversation is about regional mesh. A handful of small stations in adjacent communities each run their own primary site, share a coordinated frequency plan, exchange programming over IP, and act as a backup distribution path for one another when something local goes wrong. None of the participants is large enough on its own to fund full redundancy; together they have it almost for free. The packet operators called this kind of arrangement a network of networks. The broadcast equivalent does not yet have a settled name but is starting to appear in pockets. The amateur EmComm community has been quietly running a version of this model under the Radio Amateurs of Canada umbrella for years.
Distributed networks matter because the alternative keeps failing in predictable ways. The packet operators learned this on shoestring budgets in the 1990s. The cloud operators learned it the hard way in the 2010s. The community broadcasters learning it now have the advantage of being able to read both histories. We continue this argument in the bridge from packet to digital networks and in the SOPRA history piece, which together give a fuller picture of what a working distributed network actually looked like in practice.