What could have been managed as a simple outage quickly became a public relations disaster after the failure of service within IBM's Highbrook Data Centre in Auckland. IBM took a pasting in the online community at New Zealand Computerworld, with much of the commentary, from both the media and anonymous commenters, simply inaccurate.
The reporting was all over the place. The New Zealand Herald called it a hardware fault, Stuff thought it was an access issue, and Computerworld reported it as a virtual environment issue. Even ZDNet got in on the action, posting a story titled “IBM New Zealand Delivers the Big Blues”.
Worse, the New Zealand Herald then leveraged the story to write that “IBM crash highlights cloud risks”, with Internet NZ weighing in as the media's technology expert to say “the outage was an example of what could go wrong with cloud computing.” Another person interviewed in the same article noted “that he was surprised to find out from the incident that IBM did not have a backup data centre.”
The comments raged across a number of Computerworld articles in what can only be called an absolute frenzy of bloodletting, old grudges, trolls, and rampant ignorance, with any voice of sense lost in what has to be the most commented set of stories ever for Computerworld. The primary article is here, but you can find several other related articles on the main site.
The entire Cloud concept was called into question. People had all the “facts” on the outage, one blaming it on a storage failure while others disputed everyone else's facts. IBM had no quality engineers in New Zealand, and that was why it failed. It was the fault of the “dumb and dumber” accounts at IBM international. The resources were cheap, from India and China. It was the customer's fault for not having business continuity and disaster recovery in place. All New Zealand based Cloud providers were rubbish. All internationally based Cloud providers were rubbish. It was a Cloud-washed service, not really Cloud but sold as such; did that make for false advertising?
Literally everything but the virtual kitchen sink was blamed for the outage, and one thing is for sure: IBM got nearly a full week of multiple, high-profile news articles (not always correct) that were almost entirely negative.
The worst thing is, IBM could have prevented it.
The problem was that IBM New Zealand was reportedly not allowed to comment on the outage, which meant that all information came out of IBM international via Australia, and that information was thin, few and far between, and not that helpful.
What IBM should have done to limit the damage was to be immediately up-front and transparent about what had happened, what services were impacted, and what they were doing, and then repeat that information frequently as the incident wore on.
It worked for Telecom recently with their Yahoo! email outages, and when Azure fell over during the same week, Microsoft was quick to go public. In fact, Microsoft allows the world to see ALL the details of its service failures in real time, as do most large providers.
This incident goes to show that an open, transparent, and honest approach to service incidents is the best policy. Failure to take it invites conjecture and simply engages the detective in every reporter, because they can smell a story.
The modern ICT world has embraced this approach not only for service outages but also for privacy breaches and other serious incidents, and guess what: it works. The modern world in general is bringing its customers closer, via social media for the most part, in an effort to build a different relationship in an increasingly competitive environment.
IBM's Cloud didn't really fail; what failed was IBM's approach to dealing with the outage, and that led to a public relations failure that will hang around for some time.