Radio Infrastructure
How AX.25 packet networks anticipated the design of today's digital radio infrastructure: store-and-forward logic, distributed nodes, IP backhaul, and software-defined transmitters.
If you stood in a Southern Ontario amateur radio shack in 1992 and watched a packet station push a message from Toronto out to Acton, then on toward Hamilton via the VE3INF node, you were looking at a working version of an idea that the broadcast world has spent the last fifteen years quietly rebuilding. The idea is straightforward: do not put your signal at the mercy of one tower, one carrier, or one path. Distribute it. Let it hop. Let the network heal around a failed node. Treat each site as both a destination and a relay.
The protocols are different now. The frequencies are different. The audio quality, the bit rates and the regulatory backdrop have all shifted. But the underlying engineering posture is recognisable. A modern community FM transmitter being fed by a unicast STL over a microwave link that fails over to an IP tunnel is doing the same job a NET/ROM node was doing thirty years ago: keeping a message moving when the obvious path is unavailable.
Packet radio in its working years — roughly 1985 through to the late 1990s in Southern Ontario — proved several things that the wider radio industry took a while to absorb. First, that you could carry data reliably across a noisy RF channel by sending it in addressed, error-checked frames rather than as a continuous stream. Second, that a chain of automated stations would route those frames without human intervention if you gave them a sensible addressing scheme. Third, that the cheapest way to build coverage across a large region was not to put up one enormous transmitter but to coordinate dozens of modest ones.
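The first of those points — addressed, error-checked frames — can be sketched in a few lines. This is a toy illustration, not wire-accurate AX.25: real AX.25 shifts and encodes callsigns differently and uses the CRC-16/X.25 variant for its frame-check sequence, but the structure (fixed-width addresses, payload, trailing checksum, accept-or-discard on receipt) is the same idea.

```python
import struct

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT, the checksum family AX.25 uses for its frame-check sequence."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def make_frame(dest: str, src: str, payload: bytes) -> bytes:
    """Pack an addressed frame: fixed-width callsigns, payload, trailing checksum."""
    header = dest.ljust(7).encode() + src.ljust(7).encode()
    body = header + payload
    return body + struct.pack(">H", crc16_ccitt(body))

def check_frame(frame: bytes) -> bool:
    """A receiving node accepts a frame only if the checksum still matches."""
    body, fcs = frame[:-2], struct.unpack(">H", frame[-2:])[0]
    return crc16_ccitt(body) == fcs

frame = make_frame("VE3INF", "VE3XYZ", b"hello Hamilton")
assert check_frame(frame)                                   # clean copy passes
corrupted = frame[:-3] + bytes([frame[-3] ^ 0x01]) + frame[-2:]
assert not check_frame(corrupted)                           # one flipped bit fails
```

A node that receives a failed frame simply drops it and waits for a retransmission, which is what made noisy RF channels tolerable for data in the first place.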
The third point is the one that matters most for what came next. SOPRA’s 220 MHz backbone was never a single high-power site. It was a chain of unattended nodes, each running on a few watts, each line-of-sight to one or two neighbours. The sum of the chain was a regional network. The cost of any single node failing was modest because traffic could re-route. We covered the operational reality of that architecture in our explainer on AX.25, NET/ROM and AXIP, and the modern parallels are everywhere once you start looking.
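The "traffic can re-route" claim is just shortest-path search over a sparse graph, re-run with the failed node excluded. A minimal sketch, using illustrative node names rather than the historical SOPRA map:

```python
from collections import deque

# A toy link map for a backbone of modest nodes, each hearing one or two
# neighbours. Names are illustrative, not the real network topology.
links = {
    "toronto":  {"acton", "milton"},
    "acton":    {"toronto", "guelph"},
    "milton":   {"toronto", "hamilton"},
    "guelph":   {"acton", "hamilton"},
    "hamilton": {"milton", "guelph"},
}

def route(links, src, dst, down=frozenset()):
    """Breadth-first search for a path, skipping any node marked as failed."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # partitioned: no path survives

print(route(links, "toronto", "hamilton"))
# → ['toronto', 'milton', 'hamilton']
print(route(links, "toronto", "hamilton", down={"milton"}))
# → ['toronto', 'acton', 'guelph', 'hamilton']
```

Losing the Milton node costs one extra hop, not the network — which is exactly why the failure of any single modest site was a modest problem.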
A small community broadcaster in 2026 is dealing with exactly the kind of constraint a packet network coordinator dealt with in 1992: limited budget, volunteer labour, multiple sites that must stay synchronised, and a regulatory environment that rewards keeping the signal up. The temptation is always to chase a single, expensive, integrated solution — one big STL link, one big transmitter, one big managed service. Packet networks rejected that approach for cost reasons and were forced to discover that the distributed alternative was also more resilient. The same lesson is being relearned now in the broadcast IP world.
Resilience comes from having more than one path. That sounds obvious until you cost it out. The financial logic only works if your nodes are inexpensive enough to put two or three of them where you would otherwise put one premium site. Packet operators got there by building their own equipment and donating their labour. Modern broadcasters get there by using commodity SDR hardware, open audio codecs, and IP transport that runs over whatever physical layer is cheapest in a given location — fibre where it exists, fixed wireless where it doesn’t, LTE backup where neither is reliable.
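The arithmetic behind that trade is worth making explicit. Assuming independent failures and illustrative uptime figures (these are placeholders, not measured availabilities), two modest paths beat one premium one:

```python
# Back-of-envelope availability math behind the "two cheap paths" argument.
# Uptime figures are illustrative assumptions, not measurements.
premium = 0.99    # one expensive site or link
cheap = 0.97      # each of two modest, independently-failing paths

# Both cheap paths must be down at once for the signal to drop.
redundant = 1 - (1 - cheap) ** 2

print(f"premium single path: {premium:.2%}")    # 99.00%
print(f"two cheap paths:     {redundant:.2%}")  # 99.91%
```

The caveat is the word "independent": two paths that share a power feed or a tower fail together, and the exponent in that formula quietly disappears.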
Software-defined radio is the single biggest reason this convergence is happening at all. A packet operator in 1990 was dealing with a hardware stack that did one thing: a TNC handled the modem and framing, a transceiver handled the RF, and the two were physically wired together. Changing modulation meant changing hardware. SDR collapses that stack into a generic RF front end and a software pipeline. The same chassis can be a digital broadcast exciter on Monday, a DMR repeater controller on Tuesday and an experimental data link on Wednesday, with no soldering involved.
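The "collapsed stack" can be sketched as nothing more than interchangeable software stages feeding one generic front end. The modulators below are deliberately crude (a 1200-baud AFSK tone generator and a narrowband FM phase accumulator, with placeholder parameters), but the point is the last three lines: changing the box's role is a lookup, not a soldering iron.

```python
import math

SAMPLE_RATE = 48_000  # one generic front end; everything below is software

def afsk_modulate(bits, baud=1200, mark=1200, space=2200):
    """1200-baud AFSK: the tone-pair modem a TNC used to implement in hardware."""
    samples, phase = [], 0.0
    for bit in bits:
        tone = mark if bit else space
        for _ in range(SAMPLE_RATE // baud):
            phase += 2 * math.pi * tone / SAMPLE_RATE
            samples.append(math.sin(phase))
    return samples

def nbfm_modulate(audio, deviation=2_500):
    """Narrowband FM by phase accumulation: the repeater/exciter role."""
    samples, phase = [], 0.0
    for a in audio:
        phase += 2 * math.pi * deviation * a / SAMPLE_RATE
        samples.append(math.cos(phase))
    return samples

# Same chassis, different role, depending on which stage you route into it.
roles = {"packet": afsk_modulate, "voice": nbfm_modulate}
baseband = roles["packet"]([1, 0, 1, 1])  # Monday: data link
tone = [math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(480)]
rf = roles["voice"](tone)                 # Tuesday: FM audio
```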
For community broadcasters this matters because it lowers the cost of experimenting. The expensive part of digital radio used to be the radio. It is now the antenna system, the site lease and the feed network. The signal-processing piece is increasingly a commodity. TAPR, the Tucson Amateur Packet Radio organisation that drove much of the original packet hardware development, has continued to publish open SDR work that any small broadcaster can read and learn from.
If you redrew SOPRA’s old backbone map for 2026, it would not be a chain of 220 MHz links between hilltops. It would be a graph of audio-over-IP endpoints connected by a mix of dark fibre, point-to-point microwave, business-grade internet circuits and LTE failover. Each transmitter site would be both a consumer of audio and, potentially, a relay point for another site downstream. Each path would be metered, monitored and capable of failing over within seconds.
What has not changed is the discipline. You still need a frequency plan. You still need a clean reference clock at every site. You still need someone willing to drive to the hilltop at two in the morning when the LTE backup decides it is also unhappy. Distributed networks reduce single points of failure but they do not eliminate the operational work; they move it from one big problem to many small ones. We expand on that point in why distributed radio networks still matter.
Anyone building modern radio infrastructure in Canada is operating under two parallel regulatory bodies: the spectrum branch of Innovation, Science and Economic Development Canada (ISED) for the RF side, and the CRTC for the broadcast content side. Packet radio operators only had to deal with the first. Modern broadcasters — community FM, low-power, online stations interacting with broadcast partners — have to read both. The good news is that the technical documentation on the ISED side has improved enormously since the packet years, and is now available online without a trip to a regional office.
What has not improved is the gap between what the regulations say and what a small operator actually needs to know on a Saturday afternoon. The packet community handled this gap by writing club-level handbooks that translated the official rules into practical advice for someone setting up a new node. The community broadcast world would benefit from the same translation layer. A licence document tells you what is permitted; it does not tell you how to coordinate a frequency change with the operator on the next channel down, or how to rebuild a duplexer that has drifted off tune. That kind of working knowledge has to be carried forward by the community itself.
The most useful thing the packet era left behind is not a protocol or a piece of equipment. It is a posture toward infrastructure. Build it small, build it redundant, build it so the next operator can understand it, and write down what you did so the network outlives you. Every modern community broadcaster building IP-fed transmitter sites is, whether they know it or not, applying the same posture. The hardware is unrecognisable. The thinking is the same.
For a deeper look at what packet actually was, see our primer on the original system. For where this thinking goes next, the rest of the infrastructure section picks up the thread.