Radio engineering can be a tricky business.  It’s very “all or nothing.”  Either things work as they should, and nobody notices…or there’s a problem, and everybody notices.  Most things in life, you’d be happy with a 90% success rate, right?  But that means 10% downtime for us in the radio biz, and that’s two and a half hours of dead air every single day…which is wildly unacceptable.  Our watchword is “five nines”, meaning “99.999% uptime”…or less than 1 second (0.864 sec) of dead air per day.  Very, very little considering there might easily be 10 or 20 seconds of naturally-occurring silence every hour just as a part of regular programming.
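
The arithmetic behind those numbers is simple enough to check yourself.  Here’s a quick sketch (Python is just my choice here; a calculator works fine too) that turns an uptime percentage into seconds of dead air per day:

```python
# How much dead air does a given uptime percentage allow per day?
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400

for uptime_pct in (90.0, 99.0, 99.9, 99.99, 99.999):
    downtime_sec = SECONDS_PER_DAY * (1 - uptime_pct / 100)
    print(f"{uptime_pct:>7}% uptime = {downtime_sec:,.3f} seconds of dead air per day")

# 90.0%    -> 8,640.000 sec (about two and a half hours)
# 99.999%  ->     0.864 sec ("five nines")
```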

The sublime state of grace that is “five nines” is not always achievable, but it’s what we strive for.  And how do we get there?  In a word: redundancy.

Strictly speaking, “redundancy” just means “backups”.  But it can get a lot more detailed and complicated than that.  So much so that I could write a whole book about it, instead of just this blog post.  So I’m going to focus more on helping you ask the right questions so you can better decide what the right answers are for your facility.

Paying For It All

First off, you’ll want to have some kind of guide for how you’re gonna pay for all this redundancy.  The short answer is: how much does it cost to be off the air?  Lost underwriting/advertising spots (or the cost of make-goods) is the easy per-minute metric.   What if the outage happens during a pledge drive?  How much per hour are you losing?  You might want to handicap that a little, since the highest-value hours in a pledge drive make up such a small fraction of the total hours in a broadcast year.  But don’t handicap it too much!  Even though the odds are that an outage won’t fall during the morning show of the first day of the pledge drive…it still could happen then, and that’s gonna hurt, financially.  More complex, but just as relevant, is the idea of dead air = lost listeners, and how hard it might be to get those listeners to come back.
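
If it helps to make that concrete, here’s a bare-bones sketch of the calculation.  Every rate in it is a made-up placeholder; plug in your own spot rates and pledge-drive numbers.

```python
def outage_cost(minutes_off_air: float,
                spot_revenue_per_minute: float = 5.00,    # placeholder, not a real rate
                pledge_drive: bool = False,
                pledge_dollars_per_hour: float = 1500.00  # placeholder; varies wildly
                ) -> float:
    """Rough estimate of direct revenue lost during an outage."""
    cost = minutes_off_air * spot_revenue_per_minute
    if pledge_drive:
        cost += (minutes_off_air / 60) * pledge_dollars_per_hour
    return cost

print(outage_cost(150))                      # a 2.5-hour outage on an ordinary day
print(outage_cost(150, pledge_drive=True))   # the same outage during a pledge drive
```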

Pick a Disaster, Any Disaster

Now that you have an idea of the budget, you’ll want to start thinking about what kind of disasters you might face.   For the moment, let’s not think about the types of disasters that tend to happen in your region.  Instead, let’s think about what you’re most vulnerable to.  Here’s a list of questions to get you started:

  1. If the utility-provided electricity (e.g. “street power”) goes out, how long can it be out before you’re in trouble?  Minutes?  Hours?  Days?  Weeks?
  2. How long can you go without physical access to your transmitter site?  Assume you still have telecommunications to the site.  Then, separately, assume you don’t.
  3. If your broadcast tower is physically destroyed, how long will it take to get back on the air in any capacity?  Separately, what about in a comparable capacity?
  4. Repeat questions 2 and 3, but this time for your studio/offices.
  5. If there is no telco-provided internet in your area, how does that affect your operations?  Not just the STL, I mean all operations…including staff doing their work?
  6. What happens if your staff can’t work?  Track this for each individual and then for each department.  Given recent history, assume they’re severely ill from a pandemic-related disease and can’t work for two weeks.  Then assume it’s a month.  Then six months.

So those are good questions about your vulnerabilities in a general sense.  Now we’ll start thinking about specific situations that might hit your area, and the challenges you’d face:

Hurricanes / Tornadoes

  • Staff may be evacuated for days or weeks.
  • Street power is likely out, possibly for 7 to 10 days.
  • Internet & phone service are likely out, possibly for 14 to 30 days.
  • Potable water may be inaccessible for 30 to 60 days.
  • Flooding destroys studios/transmitters, employee homes.
  • Fallen trees and/or flooding make roadways impassable for 1 to 7 days.

Riots/Civil Unrest

  • Studios/transmitters may be inaccessible (either due to rioters or police).
  • Studios/transmitters may be looted/destroyed.  Fire is a concern.

Forest Fires

  • Staff may be evacuated for days or weeks.
  • Studios/transmitters may be inaccessible for hours or days.
  • Studios/transmitters (and employee homes) may be partially or completely destroyed by fire or smoke.
  • Street power may be out for days, weeks or months.
  • Internet & phone service may be out for days, possibly weeks.

Blizzards / Snowstorms

  • Street power may be out for 1 to 5 days.
  • Internet & phone service may be out; usually hours, maybe a day or two.
  • Heating systems may not work (electric, street-gas lines).
  • Roads may be impassable for 1 to 3 days.

Tower Climbers

  • Humans have to be close to your transmitting antenna.
  • RF power must be significantly reduced, or eliminated, for worker safety.
  • Downtime of hours or days is possible.

“Backhoe Fade”

  • Construction workers accidentally use a backhoe to cut through a conduit containing all the fiberoptic links for internet, landlines and cellphones for 5 miles around you.  Oops.

There can be other issues that directly impact your ability to broadcast.  Earthquakes/landslides come to mind.  The COVID-19 pandemic is another.   But these are enough to get you started in thinking about what could happen and how long it might last.

Redundancy in Specific Aspects of Broadcasting

So we’re now asking the right questions, and thinking about the right answers.  Let’s start thinking about things specific to broadcast that you can do to achieve redundancy, and their pluses and minuses.

Backup Transmitter

The most basic of all precautions: a second transmitter to keep you on the air while your main is being worked on or tested.   Every radio station should have a backup transmitter of some kind hooked up to a four-port RF switch.  Even a fairly low-revenue site should.  It doesn’t necessarily need to be as big as the main; the difference between a 10,000 watt transmitter and a 1,000 watt transmitter is only -10dB.  Most radios will be hard pressed to notice the difference.  Even a 100 watt transmitter can be better than nothing, and it doesn’t cost very much.
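
To put a number on that, the decibel difference between two power levels is just ten times the log of their ratio.  A quick sketch, Python again:

```python
import math

def db_difference(backup_watts: float, main_watts: float) -> float:
    """Power ratio between a backup and main transmitter, in decibels."""
    return 10 * math.log10(backup_watts / main_watts)

print(db_difference(1_000, 10_000))   # -10.0 dB: a 1 kW backup vs. a 10 kW main
print(db_difference(100, 10_000))     # -20.0 dB: even a 100 W backup beats dead air
```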

You can get by instead with some standby portable transmitters to take to affected sites on an as-needed basis.  But this is not ideal.  Disasters have a nasty habit of making it very difficult to get from Point A, where the portable transmitter is, to Point B, where the off-line transmitter is.

Backup transmitters aren’t just for disasters, either!  Almost any transmitter is going to need maintenance of some kind on a roughly annual basis that’ll require the transmitter to be powered off.   Unless you rilly rilly like doing such work at 2am?  A backup transmitter helps a lot here.

Remember that, ideally, your backup is truly redundant.  Besides having the same (or close to same) power levels, it should also have the same audio processing, the same HD Radio abilities (if any) and the same RDS/metadata (if any) that the main transmitter has.

Backup Antenna

Almost as important as a backup transmitter is a backup antenna.  Lots of stations have their second antenna on the same tower as the main (hopefully at least 50 to 75ft lower) in an attempt to offer as close to the same coverage as possible.   I don’t agree with this approach.  Yes, it’s helpful for when tower climbers need to work on your tower, and you have to turn off the main antenna to avoid barbecuing them with RF energy.   But for disaster recovery, it’s far less useful.  Still, it’s always a good idea to have at least one extra antenna, so you can stay on the air in case tower climbers need to get up there.   More useful is an aux site.

Auxiliary “Aux” Site

The FCC allows you to license your operation from a completely different tower so long as your aux site’s service contour does not exceed the main, licensed site’s contour.

While you probably will take some hit in coverage broadcasting from a different tower, overall it’s impossible to beat the redundancy a full aux site gives you.   You have a backup transmitter, backup audio processor, backup antenna, backup everything because it’s all at a separate site.

The downsides are, of course, that you’ve significantly increased your expenses (power, maintenance, rent, telco, etc) and also your complexity.  In essence, you’re paying for an extra broadcast facility that almost never gets turned on.   Ideally, you’ll try and find a situation for an aux site that minimizes your expenses.   For example, let’s say you’re a university-owned NPR station.  Your main Class B FM signal is on a tower across town, but the university has a dormitory that’s 10 stories tall.  Perhaps you can negotiate with the college to put a 1000 watt transmitter into a two-bay/half-wave-spaced (for reduced downward radiation) antenna on a 20ft tower on the roof of the building.  With luck, the university will give you an outlet on the building’s backup power generator, too, and you can set up a cheap-n-easy 5.8 GHz wireless ethernet path to your studios for the studio/transmitter link.

Redundant Studio/Transmitter Links (STL) 

In the old days, an STL was almost always either some 15 kHz leased program audio service lines from the local phone company, or a licensed 950 MHz microwave link.   Today, the former is hard to find and often being (or has been) phased out by the phone company.  The latter is still around and still useful, but not very flexible.

Nowadays, the internet has completely revolutionized data transport in every way.  Meaning you have many more options for your STL, but things are a lot more complicated, and in some ways more fragile, too.    We’ll talk more about how important internet is to your business in general in a minute, but first we’ll just deal with STL’s.

The biggest change is “bit-splicing” STL devices that take audio and simultaneously stream it across multiple ISP’s, so if there’s an outage on one ISP?  The other seamlessly covers it.  This takes what was a pretty unreliable STL method…the public internet…and makes it almost as good as an enterprise-level data source like a private fiber line.   Comrex, Tieline, Telos, GatesAir, Worldcast and others all sell such devices.
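
To make the concept concrete, here’s a deliberately oversimplified sketch of the core idea: duplicate every audio packet across two ISP connections and let the far end keep whichever copy arrives.  This is NOT how the commercial boxes actually do it (they add forward error correction, jitter buffers, clock recovery, and a lot more), and the addresses below are made up for illustration.

```python
# Toy illustration of dual-path streaming -- not a real STL codec.
import socket, struct

# Assumed local addresses, one per ISP; making traffic actually leave via the
# right interface is an OS/router policy-routing question, glossed over here.
ISP_A_LOCAL_IP = "192.0.2.10"
ISP_B_LOCAL_IP = "198.51.100.10"
TX_SITE = ("203.0.113.5", 9000)      # hypothetical decoder at the transmitter site

socks = []
for local_ip in (ISP_A_LOCAL_IP, ISP_B_LOCAL_IP):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((local_ip, 0))            # source each copy from a different ISP
    socks.append(s)

def send_audio_frame(seq: int, frame: bytes) -> None:
    """Send the same audio frame down both paths, tagged with a sequence number
    so the receiving end can discard the duplicate copy."""
    packet = struct.pack("!I", seq) + frame
    for s in socks:
        s.sendto(packet, TX_SITE)
```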

But it’s not perfect.  Even enterprise-grade telco is vulnerable to major disasters, and consumer-grade ISP’s are even worse.  For example, most cablemodems require “power on the poles” to function, so if there’s a regional power outage?  Doesn’t matter if you have a generator on site; your cablemodem doesn’t work.  Lots of times, different ISP’s using different technologies still have their cables strung on the same telephone poles.  Meaning one ill-placed tree blown over by the wind can easily wipe out both your internet sources!

Cellphone-based data is pretty ubiquitous these days, and even has some redundancy via multiple towers in a region.  But individual towers are often pretty “fragile”: they lack backup power or redundant fiber connectivity.  And they’re easy to overload in a disaster when everyone is trying to use their phones after their home internet stops working.

So, how to get around these problems?  It requires logical thinking and a bit of cleverness!

Redundant ISP’s: first off, you want two (or more) ISP’s that are as different as possible: different technology (cablemodem vs fiber vs WISP), different connectivity paths (lines on telephone poles), and different support technician fleets.  

  • Don’t simply get two cablemodems, even from two different ISP’s.  Spread it out: cablemodem, FiOS/fiber, WISP, metro ethernet, 4G/5G cellphone, or even DSL.
  • One ISP that’s wired and one that’s wireless is a good idea.  Remember that you can use a friendly business a mile or two away to host a second ISP connection for you, and then beam that connectivity back to your tower using cheap 5.8 GHz wireless ethernet radios, like Ubiquiti LiteBeams.
  • Try to learn the details about exactly how your ISP delivers connectivity.  In Rhode Island, the nonprofit fiberoptic ring (O.S.H.E.A.N.) largely piggybacks on Cox Fiber.  So they’re vulnerable to the same weaknesses Cox Fiber is.  If you’re ever lucky enough to get a technician on-site, butter them up with some free swag and talk more about the infrastructure.  They can’t always divulge too much, especially to strangers, due to terrorism concerns.  But a friendly voice who knows the topic can go a long way toward learning all sorts of “inside baseball” info.

Your Own Wireless vs Third Party Telco: Never rely entirely on third-party telco service, like a fiber link.  When you own and control it, you can make sure it works the way you need it to. And you can, in theory, repair it a lot faster if it breaks.  When an ISP’s service goes down, no matter how fancy your Service Level Agreement (SLA) seems, it almost always boils down to “we’ll get you up and running when we feel like it, and here’s some service credit for your trouble.”  Service credits aren’t much help when you’re dead in the water.

  • Licensed wireless gear is always preferable, but unlicensed is still useful…and often inexpensive enough that “get one and try it” is a viable option.
  • In the licensed wireless service (950 MHz, 6 GHz or 11 GHz) arena there’s Moseley, Cambium, SAF Tehnika, Mimosa, Ubiquiti and others.  Unlicensed wireless ethernet (5.8 GHz and 24 GHz) is mostly from Ubiquiti.
  • One of the more desirable options is true “full duplex” on the wireless path.  This can be achieved by cross-polarization, but usually requires two dishes at each end.  Full duplex means data can be transmitted in both directions at the same time, which means much lower latency.   Half duplex means the direction the data is being transmitted flip-flops multiple times a second.  This increases latency.  For general internet, half-duplex is sufficient, but a lot of AoIP protocols (LiveWire, AES67, Dante, Wheatnet, etc) have latency requirements that are too strict, and half-duplex won’t work.

Special Note about Cellphone Internet: First off, if you’re a registered nonprofit, you probably qualify for very, VERY inexpensive 4G or 5G wifi hotspots from a company called Mobile Beacon, based in Johnston, RI.  Check them out on techsoup.org.  The hotspots typically are less than $20 each, and the service is either $10 or $20/mo.   The speeds tend to be pretty throttled, and you’re limited to the T-Mobile network, but the price sure is right!

Second, there’s a company called MaxxKonnect outside Birmingham, AL that offers a unique (as far as I know) 4G/5G internet service: their devices are registered with the wireless carriers (Verizon, AT&T, T-Mobile) to have a much higher QoS Strata than regular cellphones. This means whenever a given cell tower is overloaded with too many phones trying to connect?  Your MaxxKonnect device will get priority and kick off whomever it needs to in order to give you the connectivity you need.  These MK devices do cost more, usually $200-$400/mo, but the service is hard to beat.  I recommend every radio station have at least one of these to make sure you have some level of internet service.  Often after a disaster the carriers will roll in “cell on wheels” (COW) trucks within a day or two, maybe even just hours, to start providing connectivity to the first responders – and your MK device will benefit mightily from this dynamic.

Backup Automation

Almost every radio station has some form of computer automation these days.  This means it’s pretty easy to have some kind of backup in place, ideally at the transmitter.  It really depends on your format and how fancy you want to get.

At its most basic, a backup can be a computer running Winamp, iTunes, or Groove Music that just plays MP3 files sequentially in a loop, and you have a short legal ID that plays every two or three tracks. Some kind of automatic audio switcher keyed to a silence sensor that switches to automation after 2 or 3 minutes of silence from the studio is a good idea.  Like the Broadcast Tools Audio Sentinel and Silence Sentinel line of products.   It won’t sound all that interesting to the listeners, but it’ll keep you on the air.  This is handy if you need to do some work in the studio or on the STL.
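
For the truly barebones version, the playout logic really is only a few lines.  Here’s a minimal sketch in Python, assuming a command-line player like ffplay is installed and the file paths are whatever you set up; the silence-sensor switching itself is best left to purpose-built hardware like the Broadcast Tools units mentioned above.

```python
# Bare-bones backup playout sketch: loop through a folder of MP3s and drop in a
# legal ID every third track. Uses ffplay purely as an example player.
import subprocess, itertools
from pathlib import Path

MUSIC_DIR = Path("/backup/music")       # assumed paths -- use your own
LEGAL_ID = Path("/backup/legal_id.mp3")
TRACKS_BETWEEN_IDS = 3

def play(path: Path) -> None:
    subprocess.run(["ffplay", "-nodisp", "-autoexit", "-loglevel", "quiet", str(path)])

count = 0
for track in itertools.cycle(sorted(MUSIC_DIR.glob("*.mp3"))):
    play(track)
    count += 1
    if count % TRACKS_BETWEEN_IDS == 0:
        play(LEGAL_ID)
```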

But this article is about dealing with disasters.  What good is your station if it’s just playing random music during a disaster?  Not very much.  So let’s think a little bigger.

If you’re not a news radio outlet, perhaps consider setting up a deal with one in town to rebroadcast them during a disaster?  This can be as simple as a radio tuned to the other station’s frequency plugged into an aforementioned audio switcher.  Although I’d suggest extending the silence duration (before it automatically switches) to a much longer period of time…15 or 30 minutes…to avoid a false switch.  Something that can be triggered remotely is also a good idea.   If you’re a non-commercial radio station, then rebroadcasting the local NPR affiliate is probably your best bet, here.

If you are a news radio outlet, you’ll want to get a lot more robust.   First off, your backup automation should be the same make/model as your regular automation.  You want something that seamlessly integrates with your staff’s normal operational workflow as much as possible.  Generally you’re going to want the backup automation at the transmitter site, but there are some exceptions:  if you’ve got a backup studio?  Keep the backup automation there.  If you air a lot of satellite-delivered content (like NPR stations)?  Keep the backup automation where the satellite dish is located.

Regardless of where it is, make sure there’s an independent path to put the backup automation directly on the air, bypassing your studios if needed.

You can opt to “link” your main automation to your backup automation, too.  There are advantages and disadvantages to this.  Depending on how you run your station, linking them might enable your backup automation to sound near-indistinguishable from the main.  Besides covering yourself in a disaster, it’s also useful during more mundane times…like studio maintenance.

The disadvantage is that a linked backup automation system can be more vulnerable, especially to computer viruses.  Keeping them “air gapped” (not networked to each other) means that if your main studio gets taken down by ransomware, you’re not necessarily off the air entirely.

One last thought about a backup automation system: cloud computing.  There are some cloud-based radio automation systems these days, some of which are quite good.  They have the significant advantage of running on enterprise-grade distributed servers that are very hard to “take down”.  And they’re accessible from any computer with an internet connection, from anywhere.  But they also typically require some form of webcast audio feed from the cloud to your transmitter to work.   That could be highly problematic during a regional disaster like a blizzard or hurricane.

Backup Studio

Lots of radio stations simply don’t bother with backup studios anymore, which perplexes me.  It’s never been easier to run a radio station from anywhere, and to require less physical infrastructure to run it!   Unless your studio is already physically co-located with your transmitter, is already hardened against likely disasters, and can be stocked up with emergency supplies?  You should have at least some form of backup studio, although exactly what form that takes may vary considerably.

First off, a small mix board, computer and a microphone or two at the transmitter site is an easy way to keep something going for almost any station.   Transmitter sites can be noisy locations, so try to find a “room next door” you can use, or something to that effect.   If you’re at a shared site with other stations, consider everyone chipping in to build a basic facility that offers backup capacity for everyone.

Speaking of chipping in, what about setting up a reciprocal agreement with another station in town?   Either station can use the other’s studios in an emergency.  Make sure you get the engineers and the lawyers involved here; you’ll want to spell out exactly what each side is obligated to provide, when to provide it, under what conditions, and how exactly these contingency plans will be put into effect.   You don’t want to be figuring this stuff out on the fly while a hurricane is blowing through.

Remember that a disaster bad enough to force you out of your studios could be bad enough to force the other guys out of theirs, too; choose your partner with an eye towards how robust/resilient their facility is.  And bear in mind that you’ll want to install some of your hardware at their site (a wireless STL, if nothing else); you won’t want to rely on the public internet for STL’s in those situations!

Those concepts will cover most stations in a market.  But let’s say you’re a primary source of emergency information in your area, so you need a pretty robust backup studio. What do you need?  Obviously, you need space, first and foremost.  Ideally something that’s:

  • Physically close to (ideally in the same building as) your transmitter and/or satellite downlink.  Or close to infrastructure that’s very robust and links to either or both of those.
  • Already has two or three rooms that are approx 10×10 to 20×20 ft, and have been kept clean and usable for regular office purposes.
  • Is located somewhere a disaster won’t directly impact you: out of the floodplain (flash floods), away from the coastline (hurricanes), with a good basement (tornadoes).

Start with this list, but think hard about your specific risks and how to deal with them.  And bear in mind that in most cases, you could easily be stuck at your location for three to seven days before you’ll realistically be able to restock your supplies.  I suggest aiming for keeping a week’s worth of food, water and fuel.

Electric Power Generator: with enough diesel/propane to go at least five days without refueling.  I don’t recommend a natural gas genset that’s powered by utility-provided gas lines; these are too vulnerable to disasters.   Make sure it’s big enough to run the studio, an emergency transmitter, the associated gear, and some modicum of HVAC as well.  (There’s a rough sizing sketch after this list.)

  • Don’t rely on a little portable generator.  At best they’ll run for 4 to 8 hours before needing refueling, and they’ll need regular (daily at best) oil changes and other maintenance…all during which the generator has to be turned off.  Get a proper whole-house generator with an automatic transfer switch.
  • Don’t skimp on the maintenance!  Hire a firm to perform twice-annual servicing and preventative maintenance.  And run the generator, under load, for 20 minutes at least once a quarter to test it.
  • Solar/wind power is a great idea, and as of this writing (2022) there are often a lot of financial incentives.  But getting enough solar and battery capacity to not only run your facility 24/7 but also recharge the batteries during the day requires a whole lot of panels.  Most solar installs are just designed to reduce your consumption of utility power during the day and lower your electricity bills.  To run on solar 24/7 requires much, much more capacity.  Plan and budget accordingly.
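
Here’s the kind of back-of-the-envelope sizing I mean.  Every number in this sketch is a placeholder; substitute your own nameplate loads and the burn rate from your genset’s spec sheet.

```python
# Rough generator sizing / fuel-tank sketch. All figures below are placeholders.
loads_watts = {
    "backup studio (console, computers, mics)": 1500,
    "emergency transmitter":                    3500,
    "STL + remote control + network gear":       500,
    "minimal HVAC":                             4000,
}

headroom = 1.25                      # leave ~25% margin for startup surges
total_kw = sum(loads_watts.values()) / 1000 * headroom
print(f"Size the generator for at least {total_kw:.1f} kW")

burn_gal_per_hour = 0.9              # placeholder -- from your genset's spec sheet
days_of_runtime = 5
tank_gallons = burn_gal_per_hour * 24 * days_of_runtime
print(f"Fuel on hand for {days_of_runtime} days: about {tank_gallons:.0f} gallons")
```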

Food and Water: per day, per human, you’ll need 1,800-2,000 calories of food and 1 gallon (about 3.8 liters) of potable water.  (A quick supplies calculator follows this list.)

  • Make sure to get nonperishable food; canned goods are a good start – don’t forget the can opener! Military MRE’s (Meal Ready to Eat) are also good and widely available online.
  • In addition to any airtight plastic storage, put the food into latched metal containers as well (to keep rodents out).
  • Get a bunch of plastic silverware, plastic plates/cups, and paper towels. 
  • Food & water don’t last forever!  Remember you have to replace this stuff, even the MRE’s, usually annually.  Even sealed water bottles will “sweat” out the water over time.
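
And the stocking math, using the per-human figures above.  A trivial sketch; the crew size and number of days are whatever your planning calls for.

```python
# Quick supplies calculator.
CALORIES_PER_PERSON_PER_DAY = 2000
GALLONS_WATER_PER_PERSON_PER_DAY = 1.0

def supplies_needed(people: int, days: int) -> dict:
    return {
        "calories": people * days * CALORIES_PER_PERSON_PER_DAY,
        "water_gallons": people * days * GALLONS_WATER_PER_PERSON_PER_DAY,
    }

# Example: a skeleton crew of 4 stuck at the backup studio for 7 days
print(supplies_needed(4, 7))   # {'calories': 56000, 'water_gallons': 28.0}
```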

Microwave: if your generator will support it, a small microfridge (microwave and mini-fridge) is a luxury that’ll help your staff get through the emergency a lot more happily.  A coffee pot falls under this rubric, too.

Sleeping: keep an inflatable mattress and three changes of fitted/top sheets, plus one or two blankets.   If you can, get a mattress that can be blown up by a powered air pump as well as by lung power.   If possible, arrange for a separate room for sleeping.  Or at least get a few extra sheets to hang up for privacy.

Sanitation: keep a stash of moist towelettes, toilet paper, and garbage bags handy.  Invest in a chemical and/or electric toilet.  Remember your male and female employees have different bathroom needs, too!  Besides feminine hygiene products, look towards sailing/marine stores for help in getting gender-appropriate portable/disposable urinals.

HVAC: make sure you’ve got some means of providing chilled and heated air (depending on the season).   Besides protecting your equipment, and keeping your employees happy, there are OSHA rules about how hot/cold a work environment can be.   Keep a small oscillating fan or two as well.

First Aid Kits / Medication: in general employees should be responsible for their own medication, but keep a stash of aspirin/Tylenol/ibuprofen, Benadryl (for anaphylaxis), and some all-purpose contact lens solution & lens cases.  Perhaps some “reading glasses”. All of these are “just in case”. In addition, a well-stocked first aid kit is a good idea.

  • Again: this stuff doesn’t last forever.  Replace it every 1 to 2 years.

Security/Keys: this is especially important if you’re sharing the backup studio with another entity.  Everyone needs their own set of keys and/or keyfobs.  (Remember to put the keyfob system on the generator so it works when street power is down!)  Keep track of who has access and who doesn’t, and audit the system every three to six months.

Last but not least about backup studios: TEST THEM REGULARLY.  It does no good to have a backup if you don’t make sure it’ll work when you need it to.  And the time to find out the generator or HVAC doesn’t work is not in the days leading up to a storm, when every contractor within 200 miles will be booked solid for work.   At least once a year, during good weather, fire up the backup studio…generator and all…and broadcast from there for at least one hour.   It can be done “after hours” to minimize the impact on listenership, but it needs to be done.   Also, it gives you an excellent opportunity to check your written procedures and update them as needed.

Backup Brainpower

Finally, I want to talk about something a bit more nebulous, but just as important: knowledge.

Partly this is about documentation.  All your procedures for executing any backup plan need to be written down and easily accessible, both in the cloud and (in the case of a backup studio) on paper.  In fact, go ahead and have things printed out and laminated, and then stored in a three-ring binder for better durability.  Then hang the binder right by the front door as you come into the facility, and put up a big sign on it that says “How to Activate the Backup Studio”.

Partly this is about training, too.  When I say your backup policies and procedures need to be tested annually, it’s not just the job of the engineer.  At a minimum, the airstaff and some relevant technical/operations/production staff should be involved, too.  It’s going to be chaotic and stressful enough during a disaster for your staff; if they’ve at least seen the backup facilities and seen them in use, it’ll give them a much better idea of how to prepare.

But mostly I’m talking about making sure all this stuff doesn’t live inside the head of a single individual: your engineer.   Whether you’re the engineer, or the manager of the engineer, insist that the station provide that engineer with at least one other willing staff member to mentor/train them to have a reasonably solid, in-depth understanding of all this stuff.  There are a thousand scenarios where your engineer could be stuck 1000 miles away from your facility right when you need them the most.

Conclusions

This might be a ten-page document, but it still merely scratches the surface of what is, admittedly, a very wide-ranging topic.  Done right, having redundancy doesn’t need to cost a fortune and it can keep your station broadcasting when your listeners need you the most. Even doing it a bit half-assed is still better than not doing anything at all.

Take a few days with this document to start drafting up some ideas/policies for your own station, and then take another few days to come up with a budget for it all. Go to your head salesperson and head development/pledge drive person, and talk to them about how to put a dollar value on downtime. Wrap it all up and take it to your boss for adding it to next year’s budget. When the next disaster comes along, give that boss a nudge and a wink and say “aren’t you glad we all prepared for this?”

And of course, if you have ideas, suggestions or corrections, please send them along to me at aread@landrbs.org 

Note: The Engineer’s Corner was an occasional column Aaron penned for
Rhode Island Public Radio before it was discontinued in early 2024.
