On the other hand, any production server in the data centre has some level of business value, and its unavailability will have some impact on the business; it is therefore a server worth protecting. This fuels an ongoing debate around the cost and value of disaster recovery, with organisations often asking how much it is actually worth.
Choosing the right solution for your business
With this in mind, organisations must allocate budget appropriately, aligning protection costs with the business importance of data centre components. The significant differences between the two major approaches to server protection complicate matters further. The first approach is infrastructure mirroring: by mirroring the entire server environment, you achieve the greatest degree of protection. The second approach costs significantly less because you back up only the data within the data centre.
Mirroring offers a fully redundant infrastructure, which makes it possible to meet near-zero recovery time objectives (RTO, the total time to recover a service after an outage) and recovery point objectives (RPO, the tolerance for data loss). The problem with this approach is total cost of ownership (TCO). Duplicating server workloads doubles the initial cost of every server, and also adds to the costs of infrastructure components, bandwidth, implementation and maintenance.
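To make the trade-off concrete, the short sketch below compares the two approaches on these metrics. All figures (backup interval, server cost, overhead percentage) are illustrative assumptions for the sketch, not vendor data: a nightly backup implies a worst-case RPO of up to 24 hours, while mirroring drives RPO towards zero at more than double the per-server spend.

```python
# Hypothetical comparison of the two protection approaches.
# Every number here is an illustrative assumption.

def worst_case_rpo_hours(backup_interval_hours):
    """Worst-case data loss equals the time since the last good copy."""
    return backup_interval_hours

def mirrored_tco(server_cost, overhead_factor=1.25):
    """Mirroring duplicates every server, then adds infrastructure,
    bandwidth, implementation and maintenance overhead (assumed 25% here)."""
    return 2 * server_cost * overhead_factor

# Nightly tape backup: up to a full day of data is at risk.
print(worst_case_rpo_hours(24))   # -> 24

# Continuous mirroring: RPO approaches zero...
print(worst_case_rpo_hours(0))    # -> 0

# ...but at well over double the per-server cost.
print(mirrored_tco(10_000))       # -> 25000.0
```

The same arithmetic explains why duplicating every workload is so hard to fund: the multiplier applies to every server, critical or not.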
Although you can sometimes justify the expense of duplicating business-critical server workloads such as customer-facing applications (web servers and online order processing, for example), it is harder to find sufficient funds to similarly protect workloads deemed less critical, such as email servers, internal web servers or batch reporting applications.
In comparison, backup solutions leverage everything from inexpensive tapes to increasingly economical disks. As a whole, these solutions tend to be very cost effective. The downside to this data-focused approach is its recovery performance. The RTO performance of backup solutions tends to be quite poor. It can be time consuming and complex to take backup data from a tape or disk and rebuild it into a usable server workload.
Think of how long it takes to reinstall and update a server operating system, install and update applications and middleware, and reconfigure all the networking connections, before finally restoring the data. Now imagine doing that not for one server, but dozens. The situation is complex and time consuming.
Maintaining peak performance
With only these two approaches, your organisation must choose between expensive redundant infrastructure and cheap but slow data backup, or, more usually, some combination of both. Statistics show that organisations end up spending 80 percent of their budgets on high-performance protection for only 20 percent of their servers: the ones that absolutely need uninterrupted performance. This leaves the remaining 80 percent of server workloads under-protected.
To understand data recovery performance, you should break data recovery into three phases: backup (or replication), failover and failback. In most solutions, organisations concentrate on backing everything up. Traditionally focused on the technologies and processes that keep data current, data backup solutions range from simple daily tape backups to sophisticated storage area network (SAN)-based replication. But a backup copy doesn't help if you can't actually use it. You should place equal, if not greater, importance on failover and failback.
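The three phases above can be sketched as a simple completeness check on a recovery plan. This is a minimal illustration, not any real DR product's interface; the phase names follow the article and the plan structure is an assumption.

```python
# Illustrative check that a recovery plan covers the whole lifecycle,
# not just the copy. Names and structure are assumptions for the sketch.

PHASES = ["backup", "failover", "failback"]

def recovery_plan_gaps(plan):
    """Return the lifecycle phases the plan leaves unaddressed."""
    return [phase for phase in PHASES if phase not in plan]

# A typical data-centric plan stops at the backup copy...
print(recovery_plan_gaps({"backup": "nightly tape"}))
# -> ['failover', 'failback']

# ...while a complete plan covers the round trip.
print(recovery_plan_gaps({"backup": "SAN replication",
                          "failover": "virtual standby",
                          "failback": "re-sync to primary site"}))
# -> []
```

The point of the sketch is simply that two of the three phases, the ones that actually restore service, are the ones most plans leave blank.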
Solutions offering the best RTO and RPO performance tend to take complex and expensive redundancy-based approaches. However, organisations that are already facing budget constraints, and have therefore implemented more cost-effective legacy backup solutions, will face lengthy and error-prone failover processes that miss the mark on performance. The processes they use to convert raw data to a usable server workload state are the problem here. Again, cost versus performance is the central challenge.
Modern failover solutions mean that recovery can be as simple as powering on a virtual standby workload. Upon receiving a failure alert, administrators can recover workloads with a single mouse click. One-click test failover also enables rapid testing of the integrity of workload replication, to ensure it is ready for action when required. After all, there is no point having a spare tyre in the car if you discover it too has a leak when you try to use it.
When planning for disaster recovery, organisations frequently overlook the final phase of the disaster recovery lifecycle: failback. With many solutions, especially the more cost-effective data backup solutions, companies consider only a one-way trip. They have no plan in place to get "back to normal" from the recovery site. Obviously, this can lead to unexpected or unnecessary headaches as firms try to return to "business as usual". It is this area that can let down the implementation of disaster recovery solutions, regardless of cost.
Disaster recovery is a must for every modern business. Instead of asking whether they can afford disaster recovery, companies should ask whether they can afford not to deploy these solutions: whether they can withstand disaster-related downtime, and the accompanying rebuilding costs and loss of revenue. As with all IT measures, cost plays a key role in deciding which solution to choose. What is more important, however, is making sure that any investment is made strategically, minimising the impact that a disaster could have on the business.
Mike Robinson, senior solutions manager, NetIQ