Superstorm Sandy Reinforces the Need for Business Continuity Planning
November 6, 2012
- The arrival of yet another devastating storm reinforces the need for business continuity features.
- Business customers need to have a clear understanding of their carrier’s internal architecture for redundancy; they should also take secondary steps for further assurance of continuity.
In my August 28 blog, “UCaaS Can Be a Lifesaver in a Disaster,” I discussed the need for customers to have a clear understanding of their service providers’ business continuity features. When I wrote that article, nobody had any idea that a storm on the scale of Sandy, which ravaged the Mid-Atlantic seaboard, was on the horizon. Of course, the first and most important thing in these situations is the protection of family and loved ones. This was a storm like nothing else in recent history in the region, leaving devastation in its wake.
The size and ferocity of this storm are also a reminder to businesses that they need continuity plans. Data centers are designed and built to withstand most natural disasters, but we saw data centers in New York City struggle amid Sandy's destruction, defaulting to diesel-powered generators when utility power failed. Fuel quickly became scarce in the region, and access to data centers to refuel the generators was difficult due to the extensive damage. There were reports of at least one data center that ran out of diesel fuel.
I happened to speak with a hosted IP telephony/UC specialist last week, just one day after the storm. The call had been scheduled weeks earlier, and it was a coincidence that it came on the heels of Sandy. This provider does not operate its own network infrastructure; it relies on partnerships for transport, with service clusters located in data centers. When the conversation turned to business continuity features, I asked how the company had fared in the storm. Like most service providers doing business in the Northeast, it relies on data centers in New York City. Fortunately for this provider, power was maintained at its data center location, and its services were not disrupted.

However, the executives were quick to point out that they have infrastructure in data centers in other parts of the country, and their service architecture enables them to reroute all traffic to an alternate data center if necessary. If this were to happen, calls in progress would likely drop, but customers would experience no further disruption. As the storm approached, the provider also sent messages to clients, advising them to use the portal to proactively set up a redirect number at a mobile device or an office location outside the region, ensuring that calls would still be handled should power fail at the data center and/or the business location. The provider took these steps to help customers maintain communications as much as possible. Other providers, however, did struggle, leaving even business customers located outside the impacted region with limited, or no, telephone service.
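The reroute capability described above follows a familiar active/standby pattern: new sessions prefer the primary data center, and only when it becomes unreachable are they directed to an alternate site. The sketch below illustrates that logic in Python; the endpoint names and the health-check callable are hypothetical, not details of this provider's actual system.

```python
# Hypothetical endpoints standing in for a primary (in-region) and an
# alternate (out-of-region) data center; these names are illustrative.
PRIMARY = "sip.nyc.example.com"
SECONDARY = "sip.dal.example.com"

def choose_endpoint(is_healthy) -> str:
    """Return the endpoint that new sessions should use.

    `is_healthy` is a callable (e.g. a reachability probe) that reports
    whether a site is currently up. Note that calls already in progress
    on a failed site are dropped, as the provider acknowledged; only new
    sessions land on the alternate site.
    """
    if is_healthy(PRIMARY):
        return PRIMARY
    return SECONDARY

# Normal operation: all traffic stays on the primary site.
print(choose_endpoint(lambda site: True))            # sip.nyc.example.com

# Primary site loses power: new sessions reroute to the alternate site.
print(choose_endpoint(lambda site: site != PRIMARY)) # sip.dal.example.com
```

The customer-side redirect number the provider recommended is the same idea one layer out: if the service itself becomes unreachable, calls forward to a number outside the affected region, so continuity does not depend on any single site surviving the storm.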
Now, this may all sound like common sense, but is it? For those companies with locations along the Mid-Atlantic and Northeastern U.S. corridor, how did your service provider perform during the storm? Were the internal processes you put in place tested, and were they up to the challenge?