Online Radio
A working overview of the streaming stack for independent Canadian community radio: Icecast, Liquidsoap, AzuraCast, bandwidth, codecs, redundancy, and what each layer actually costs to run.
The phrase "online radio" hides a lot of moving parts. From the listener's side it is one URL and a play button. From the operator's side it is at least four distinct pieces of software, two physical or virtual machines, a couple of accounts at outside providers, and a small but real ongoing bandwidth bill. None of these pieces are exotic in 2026, but they fit together in particular ways, and the choices made early tend to constrain what the station can do for years afterward.
This piece walks through the standard infrastructure for an independent local Canadian station — the kind of operation a town of three thousand to thirty thousand might run on volunteer time and a modest annual budget. It assumes you have already read our companion piece on how a small community can launch online radio and want to understand what is actually happening underneath.
A useful way to think about a streaming station is as three layers stacked on top of each other. At the bottom is the source layer: the microphones, the mixing board, the audio interface, and the host's voice. In the middle is the encoder and automation layer: the software that turns a continuous audio signal into a compressed stream and decides what plays when no human is at the mic. At the top is the distribution layer: the streaming server that listeners connect to and the network path between it and them.
Most operational problems live at the boundaries between layers. Audio that sounds fine in the studio but distorted in the stream is usually a level-staging issue between source and encoder. A stream that drops out for some listeners but not others is almost always a distribution-layer routing issue rather than anything to do with the audio. Knowing which layer to look at first saves a lot of time.
The source layer is the part of the stack that most resembles traditional radio. A microphone goes into a mixer. The mixer's main output goes into an audio interface connected to the studio computer. In a one-host setup this can be as simple as a USB microphone going straight into a laptop. In a multi-mic setup with phone-in capability, a small broadcast mixer with a mix-minus bus is worth the extra cost and learning curve.
Whatever the front end looks like, the goal at this stage is a clean, level-controlled stereo signal at a known reference level. If the source is too hot, the encoder will clip; if it is too quiet, listeners will reach for their volume knobs and then be surprised when an ad or pre-recorded segment plays at full level. A simple software compressor or limiter on the way out of the studio prevents most of this.
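"Known reference level" in practice means metering. As a rough illustration (a sketch, not a substitute for a proper loudness meter), here is how a block of float samples maps to an RMS level in dBFS:

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples (full scale = 1.0), in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A steady 0.5-amplitude square wave sits at about -6 dBFS;
# a full-scale signal reads 0 dBFS, and anything hotter clips.
```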
The middle layer is where most of the interesting design decisions happen. The dominant open-source tool here is Liquidsoap, a small audio scripting language built specifically for radio automation. A Liquidsoap script defines the sources that can feed the stream — live studio input, scheduled playlists, jingles, station IDs, an emergency fallback file — and the rules for switching between them. It then hands the resulting audio to an encoder that compresses it for transmission.
# Minimal Liquidsoap stanza: live source with playlist fallback
live = input.harbor("live", port=8000, password="redacted")
music = playlist("/srv/radio/rotation.m3u")
fallback_safe = single("/srv/radio/silence-replacement.mp3")
radio = fallback(track_sensitive=false,
                 [live, music, fallback_safe])
output.icecast(%mp3(bitrate=128),
               host="stream.example.ca", port=8000,
               password="redacted", mount="/live",
               radio)
That fragment is doing a lot of work. It is accepting a live source on a known port, falling back to a music rotation if the live source disappears, falling back further to a single safe file if the rotation breaks, and pushing the result to an Icecast server as a 128 kbps MP3. A real station's script will be longer, but the structure is the same: a list of possible sources, a fallback chain, and an output.
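The selection rule at the heart of that fallback chain is simple: the first available source in the list wins. A toy Python model of that rule (an illustration of the semantics, not Liquidsoap's actual implementation) makes the behaviour concrete:

```python
def pick_source(sources):
    """Return the name of the first available source in priority order,
    mirroring the selection rule of Liquidsoap's fallback()."""
    for name, available in sources:
        if available:
            return name
    return None  # nothing available: the stream would go silent

# Live host connected: live wins. Live gone but rotation intact:
# music plays. Everything broken except the safe file: it plays.
```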
The streaming server is the piece listeners actually connect to. Each stream is published at a mount point on the server (the /live in the script above), and one server can host many mount points. Icecast has been the conventional open-source choice for two decades and remains the default for good reasons: it is small, well-understood, and supports most of the things a small station needs without configuration acrobatics. A single Icecast process on a modest VPS can comfortably serve a few hundred simultaneous listeners; for the audience size most small-town stations actually have, you will run out of patience before you run out of capacity.
Bandwidth is where the monthly bill comes from. The arithmetic is straightforward: each listener consumes the bitrate of the stream for as long as they are connected. A 128 kbps stream with an average of 25 simultaneous listeners over a month consumes roughly 1 TB of outbound traffic. A small VPS plan that includes 2 TB or 5 TB of monthly transfer absorbs that easily. A station that suddenly trends — a viral local story, a hockey playoff run, a regional emergency — can chew through a quota quickly, which is one reason a CDN or a metered overflow plan is worth thinking about before you need it.
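That arithmetic is easy to sanity-check. A quick sketch (decimal units, 30-day month):

```python
def monthly_transfer_tb(bitrate_kbps, avg_listeners, days=30):
    """Outbound transfer in TB: bitrate x listeners x seconds, bits to bytes."""
    bytes_per_second = bitrate_kbps * 1000 / 8 * avg_listeners
    return bytes_per_second * days * 86400 / 1e12

# 128 kbps with 25 average listeners: about 1.04 TB per month.
print(round(monthly_transfer_tb(128, 25), 2))
```

Doubling either the bitrate or the average audience doubles the bill, which is why a jump to a higher-quality codec setting is a budgeting decision and not just an audio one.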
If the prospect of writing Liquidsoap scripts and editing Icecast XML by hand is unappealing, AzuraCast packages all of this into a single self-hosted web application. Underneath it is still Icecast and Liquidsoap; on top is a browser interface that lets a non-technical volunteer schedule playlists, monitor listener stats, manage user accounts and configure most of the standard fallback behaviours without touching a configuration file. For volunteer-run stations where the original technical lead may not still be around in two years, this matters more than it sounds.
The trade-off is the usual one: convenience against transparency. AzuraCast is open-source and the underlying components are all standard, so nothing is locked away, but the abstraction means that when something does go wrong you may need to read about both AzuraCast's wrapper and the underlying Icecast or Liquidsoap behaviour to understand what happened.
A small station does not need carrier-grade redundancy, but it does need to think about a few specific failure modes. The studio internet connection drops. The streaming server reboots in the middle of a software update. The DNS provider has an outage. The volunteer who knows the password is on vacation. None of these are dramatic; all of them have caused stations to be off the air for embarrassing lengths of time.
The cheap, sensible answer is a second small VPS at a different provider, with a copy of the Liquidsoap configuration and the same fallback playlist, kept warm and ready to be promoted. Couple this with a documented failover procedure that any of three or four volunteers could run, and the station's resilience is meaningfully better without the budget growing materially. The general philosophy has a long pedigree in the radio infrastructure world — we treat it at length in why distributed networks still matter and again in the context of emergencies in radio streaming during emergencies.
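The first step of any documented failover procedure is confirming the primary is actually down. A minimal health check in Python (URL and function name are ours, for illustration) that a cron job or a volunteer could run against the public mount:

```python
import urllib.request

def stream_up(url, timeout=5.0):
    """Return True if the stream mount answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# e.g. stream_up("https://stream.example.ca/live") before promoting the standby
```

A real procedure would check from more than one network vantage point before failing over, since a false alarm caused by the checker's own connectivity is a classic way to make an outage worse.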
The last piece of infrastructure that a small station tends to underestimate is logging. Both SOCAN and Re:Sound can require periodic music-use reports from licensed streamers, and the only painless way to produce these is to have your automation system writing a clean play log from day one. Liquidsoap can be configured to log every track change with timestamp, artist, title and duration. AzuraCast does this by default. Either way, set the log retention long enough — a year is a sensible minimum — that you are not scrambling when a reporting request arrives.
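To illustrate the kind of record a reporting request needs (the field layout here is our own sketch, not a format mandated by SOCAN or Re:Sound), a play-log writer can be as small as:

```python
import csv
from datetime import datetime, timezone

def log_play(path, artist, title, duration_s):
    """Append one track-change record: UTC timestamp, artist, title, duration."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), artist, title, duration_s]
        )
```

The important properties are the ones any format shares: every track change is captured, timestamps are unambiguous (UTC avoids daylight-saving gaps), and the file is append-only so a crash never loses history.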
Listener statistics are the other side of the same coin. They are useful for grant applications, board reports and the simple programming question of which shows people actually listen to. They are not, on their own, a meaningful measure of the station's value; a small-town stream serving a faithful audience of two hundred people is doing more cultural work than a glossy national service that briefly trends to thousands. The argument for that view, and the broader case for local audio, is the topic of how regional audio builds identity.