With marginal (and some formerly healthy!) companies being forced out of existence these days, it’s important to assess any technologies you consume under a software as a service (SaaS) model.
The beauty of SaaS-based services is that companies and organizations can access technologies and services at an affordable price point, paying monthly rather than signing massive annual licensing and support agreements. You also avoid needing to run or lease your own hardware for the service. Many companies can use technology that wouldn’t be available to them otherwise.
However (you knew that was coming!), there are some risks that are much greater today than when growth was much easier for everyone. The primary risk is: what if the company providing the service goes under or ceases to offer the service you rely on? The more central that service is to your core operations, the higher this risk becomes.
That’s not to say that SaaS is a bad model. It’s a great one for many organizations and situations. Given our current economic environment, it’s good to assess your risk and do what you can to mitigate it.
Here are a few suggestions for minimizing and controlling that risk:
- Back up your data to your own storage. Your storage should be backed up as well.
- Identify other vendors who can potentially step into the breach. If the worst does happen, you’ll need to get back up and running as quickly as you can. Knowing who is out there in advance will save you valuable time.
- Have a plan. What, precisely, would you do if you had one week’s notice? How about no notice? For core services, this kind of disaster planning is critical.
- Talk to your current SaaS provider frequently. Touch base at least once a month to see how things are going. Firms that are going to fail often give some clues before it happens if you are paying attention.
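The first suggestion above is the easiest to automate. Here is a minimal sketch of the local side of that backup: it assumes you have already fetched an export from your SaaS vendor (the fetch step and any export API are hypothetical and vendor-specific), and it simply keeps timestamped copies on your own storage, pruning the oldest so the archive doesn’t grow without bound.

```python
import os
import time

def save_backup(data: bytes, backup_dir: str, keep: int = 30) -> str:
    """Write a timestamped copy of a SaaS export and prune old copies.

    `data` is the exported payload you pulled from the vendor (by whatever
    mechanism they offer); `keep` is how many historical copies to retain.
    Returns the path of the file just written.
    """
    os.makedirs(backup_dir, exist_ok=True)
    name = time.strftime("export-%Y%m%d-%H%M%S.dat")
    path = os.path.join(backup_dir, name)
    with open(path, "wb") as f:
        f.write(data)
    # Prune: sort by name (timestamps sort chronologically) and drop the oldest.
    copies = sorted(f for f in os.listdir(backup_dir) if f.startswith("export-"))
    for old in copies[:-keep]:
        os.remove(os.path.join(backup_dir, old))
    return path
```

Run something like this nightly from a scheduler, and remember the second half of the advice: the directory it writes to should itself be backed up somewhere else.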
Proper planning and preparation can hedge your SaaS risk to a great extent. The worst will still impact you, but you’ll be able to get back in business much faster than you could otherwise.
On a global consulting forum I belong to, Alan Weiss suggested that one thing to explore on a regular basis with direct reports is whether they are spending their time mostly fixing things or improving them. This is a great question for your Web and IT teams as well.
For teams that involve technology such as the Web and IT, it’s critical that their primary focus be improving the value of what others in the organization can achieve. IT and Web technology are prerequisites for almost any endeavor these days. Teams that focus on improving and creating new value will increase the ROI of your technology investments and create an organization that constantly increases the value you offer to your constituents.
If your teams spend most of their time simply putting out fires, then they are leaving a lot of value on the table.
Don’t get me wrong: being able to troubleshoot, debug, and investigate the weird glitches and problems that technology entails is an important activity and skill set. However, it’s not why your teams exist.
This week I needed to use Internet Explorer in order to access some systems for one of my clients. This is not as much of a challenge as it used to be for someone on a Mac, now that Apple uses Intel chips in its hardware.
I am running Windows XP on VMware Fusion, which allows Windows applications to run as if they were native to the Apple operating system.
The following scenario made me think of Arthur C. Clarke’s famous quote: “Any sufficiently advanced technology is indistinguishable from magic.”
I dragged a file from Apple’s Finder into a folder in Windows Explorer. The directory in Windows Explorer was actually on a remote server I had connected to via WebDAV. So this file went from my native Mac filesystem to a simulated Windows operating system, which then transmitted it to a directory on a Windows server hundreds of miles away from me.
All via a single drag-and-drop of the file.
This may sound like Greek to many of you but, believe me, so many different problems have to be solved for that kind of operation to be feasible that I was awed.
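For the curious, the final leg of that journey is less magical than it looks: WebDAV stores a file on the remote server with an ordinary HTTP PUT request. Here is a minimal sketch of that step using only Python’s standard library; the server address and path in any real use would of course be your own, and real WebDAV servers typically also require authentication, which is omitted here.

```python
import http.client
from urllib.parse import urlparse

def webdav_put(url: str, data: bytes) -> int:
    """Upload `data` to a WebDAV URL via HTTP PUT.

    Returns the HTTP status code; 201 ("Created") means the server
    stored the file at the requested path.
    """
    parts = urlparse(url)
    conn = http.client.HTTPConnection(parts.netloc)
    conn.request(
        "PUT",
        parts.path,
        body=data,
        headers={"Content-Length": str(len(data))},
    )
    status = conn.getresponse().status
    conn.close()
    return status
```

Everything else in the chain, from Finder to the virtual machine to Windows Explorer, exists to make that one request happen when you let go of the mouse button.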
Heathrow’s Terminal 5 opened last week in chaos:
It was 20 years in the planning, cost £4.3bn to build and its staff underwent six months’ training before it opened.
But none of that could prevent Heathrow Terminal 5 from descending into chaos on its opening day yesterday after the baggage system collapsed.
It’s always the baggage system, isn’t it? Denver’s new airport went through the same kind of issues when it opened over 10 years ago. Who could have predicted that Terminal 5 might have baggage handling problems? Apparently not the airport executives at Heathrow.
Look at what the article cited as the major issues:
- Baggage handlers’ IDs were not recognised by computers and they were not able to log on to the handling system, resulting in the three flights taking off without bags
- The handlers also could not get where they were supposed to go because they could not get into the car parks or get security clearance
- Amid confusion over the layout of the new terminal, bag handling teams were unable to make good the delays, which left passengers in arrivals while their bags waited on planes
- Problems were exacerbated by a lack of the baggage storage bins that are loaded on to planes. Carousels loading luggage also broke down
- By the afternoon, the already crammed system became overloaded and a ban on checking-in luggage was issued
- Delays in loading and unloading planes led to delays in departures and arrivals, forcing BA to cancel 34 flights to ensure that its jets start in the right positions to run a normal timetable today
The first four caused the effects described in the last two, none of which had anything to do with the baggage handling technology (that was Denver’s problem). A few dry runs of the facility under close to expected conditions would have easily revealed those issues. While individual staff were given significant training, according to the article, the results show that the system of the terminal as a transportation hub was not fully tested.
A full dry run would include all the staff, equipment, and passengers/baggage they anticipate handling. This is expensive, but don’t you think the people in charge would happily pay for it now if they could go back?
Here are a few things you can do to prevent this same kind of catastrophe from striking your next major launch:
- Map out the steps of your launch and identify spots where significant problems are most likely to occur. Look at factors both within and outside your own control.
- Modify your plan to prevent those problems from occurring, if possible.
- If a problem cannot be eliminated, identify a contingency plan for resolving it if it does occur.
- Test your plan via realistic dry runs to validate your assumptions and surface unanticipated issues.
No one’s perfect, but we can all do a lot to eliminate potential risks with a little forethought and testing.