Disaster Recovery: Best Practices and Leveraging the Cloud

Written by Alexander Shapero on Mar 27, 2018

It’s not a matter of if a disaster will strike, but when. Whether it’s a natural disaster like a hurricane, earthquake or flood, a system failure, human error or a cyberattack, organizations must have a solid disaster recovery plan in place to ensure business continuity. Yet even though data loss resulting from a disaster can be extremely damaging, or even fatal, to a business, many organizations still don’t have adequate disaster recovery (DR) plans and processes in place. In fact, in a 2017 survey conducted by Forrester Research and the Disaster Recovery Journal, only 18 percent of respondents said they were very prepared to recover data in the event of a disaster.

It’s not that business leaders don’t understand the importance of disaster recovery and business continuity, either. According to the Forrester and DRJ survey, the top three factors driving efforts to improve DR are the requirement to be always on in order to stay competitive, regulatory or legal requirements, and the sheer cost of downtime. The risks of not having a plan in place are enormous, too. For perspective, Hurricane Harvey alone cost Houston businesses $15 billion in lost productivity, and a Statista survey found that the average hourly cost of enterprise server downtime falls between $300,000 and $400,000.

Disaster recovery preparedness: Where businesses fall short

Studies show that this lack of preparedness is mostly due to insufficient planning, outdated approaches and inadequate testing. While it’s recommended that organizations conduct regular business impact analyses (BIAs) and risk assessments, many fail to do so. In fact, only nine percent of businesses report conducting a BIA continuously, while a mere 11 percent perform a thorough risk analysis on a continuous basis. Testing is a whole other story: while more businesses are testing their DR plans at least once a year, only 19 percent do so two or more times per year, a discouraging 11 percent decrease from 2016.

Best practices for business continuity and disaster recovery

While there is no one-size-fits-all approach, there are some key steps every business should take in developing a disaster recovery plan.

  • Business impact analysis: Identify critical business operations across all departments and the potential impact on the organization if they become unavailable. Map IT dependencies and define recovery point objectives (RPOs) and recovery time objectives (RTOs). In other words, determine which business functions are critical and need to be restored as quickly as possible.
  • Risk assessment: Determine which business processes are most critical, how likely an adverse event is to occur and what threat downtime poses to the organization’s overall operations. Develop a plan to protect critical data and systems through regular backups and snapshots, and establish ways to minimize the severity of any damage an event causes and to recover from it.
  • Test, test, test: Testing is often the most overlooked step, yet it is essential to test your plan before an event occurs. After the first backup or snapshot is executed, verify how accurate and fast the recovery process is. Repeat this regularly, especially after any changes to the backup process.
  • Implement automation: Automating your disaster recovery and backup process not only saves time, but also reduces the potential for human error. After all, studies show that human error accounts for nearly 33 percent of all industry disasters. An automated check, like the sketch after this list, can also confirm that backups are keeping pace with your RPO.
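As an illustration, here is a minimal sketch in Python that checks whether the newest Google Cloud snapshot of a disk still satisfies a defined RPO. The project ID, disk name and four-hour objective are hypothetical placeholders; a real deployment would run this from a scheduler and alert on failure.

```python
# Minimal sketch: flag when the newest snapshot of a disk is older than the RPO.
# PROJECT and DISK are hypothetical placeholders; requires the
# google-api-python-client package and application default credentials.
from datetime import datetime, timedelta, timezone

from googleapiclient import discovery

PROJECT = "my-project"      # placeholder project ID
DISK = "my-server-disk"     # placeholder persistent disk name
RPO = timedelta(hours=4)    # example objective: lose at most 4 hours of data

compute = discovery.build("compute", "v1")
snapshots = compute.snapshots().list(project=PROJECT).execute().get("items", [])

# Keep the creation times of snapshots taken from the disk we care about.
times = [
    datetime.fromisoformat(s["creationTimestamp"])
    for s in snapshots
    if s.get("sourceDisk", "").endswith("/" + DISK)
]

if not times:
    print("No snapshots found for disk: RPO violated")
else:
    age = datetime.now(timezone.utc) - max(times)
    print("Newest snapshot is", age, "old:", "OK" if age <= RPO else "RPO violated")
```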

Snapshots and Disaster Recovery in the Cloud

Traditional DR strategies focused on on-premises issues like server failures, power outages and data loss. In today’s hyper-connected and virtualized world, that approach is outdated, as it doesn’t address what happens when connectivity is lost. That’s why more businesses are leveraging hyperscale clouds, like Google Cloud Platform, in their Windows Server backup and disaster recovery plans.

itopia’s cloud desktop and application migration, automation and management platform, Cloud Automation Stack (CAS), leverages GCP’s automatic snapshot technology for Windows Server backups and data recovery. A snapshot is an exact copy of a persistent disk at a point in time, and snapshots are differential: the first snapshot of a persistent disk is a full snapshot containing all the data on the disk, while each subsequent snapshot contains only the data that is new or has changed since the previous one. Compute Engine encrypts and compresses the data and stores multiple copies of each snapshot redundantly across multiple locations, with automatic checksums to ensure data integrity.
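To make the mechanics concrete, here is a minimal sketch of taking one such snapshot through Compute Engine’s disks.createSnapshot API with the Google API Python client. The project, zone and disk names are placeholders, and this illustrates the underlying API rather than how itopia CAS implements it.

```python
# Minimal sketch: take one differential snapshot of a persistent disk via the
# Compute Engine API. The names below are placeholders for your own resources.
from datetime import datetime, timezone

from googleapiclient import discovery

PROJECT = "my-project"     # placeholder project ID
ZONE = "us-central1-a"     # placeholder zone
DISK = "my-server-disk"    # placeholder persistent disk name

compute = discovery.build("compute", "v1")

# Snapshot names must be unique; a timestamp suffix is a simple convention.
name = DISK + "-" + datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")

operation = compute.disks().createSnapshot(
    project=PROJECT,
    zone=ZONE,
    disk=DISK,
    body={"name": name},
).execute()
print("Snapshot", name, "requested; operation status:", operation["status"])
```

Because every call after the first captures only changed blocks, taking snapshots frequently keeps snapshot storage proportional to how much data actually changes rather than to the full size of the disk.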

itopia CAS enables IT administrators to schedule automated snapshots, not only to save time and money in the event of downtime, but also to ensure that data is always retrievable. In addition to snapshots on GCP, it’s important for companies to have a standard backup solution in place, like Volume Shadow Copies on Windows Server. That way, they can recover specific files without having to revert an entire disk to an older snapshot.
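As a rough sketch of that complementary layer, the example below shells out to the vssadmin tool that ships with Windows Server to create a shadow copy of a volume. The drive letter is a placeholder, and the command must run in an elevated session on the server itself.

```python
# Minimal sketch: create a Volume Shadow Copy on Windows Server so individual
# files can be restored without reverting a whole disk to an older snapshot.
# Must run in an elevated (administrator) session; "C:" is a placeholder.
import subprocess

result = subprocess.run(
    ["vssadmin", "create", "shadow", "/for=C:"],
    capture_output=True,
    text=True,
    check=True,  # raise if vssadmin reports a failure
)
print(result.stdout)
```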

In the end, data is the backbone of any business operating today. When disaster strikes, in whatever shape or form it takes, organizations without a solid disaster recovery plan in place will suffer, to say the very least. Savvy business leaders know this, yet many admit they aren’t well prepared for the inevitable. In today’s hyper-connected and competitive world, being unprepared is no longer an option.

Want to learn more about migrating desktops and applications to Google Cloud? Schedule a demo