There has been a lot written about backup. But, believe it or not, backup isn’t the be-all and end-all. Recovering after something bad happens is. Backup is just the first part of disaster recovery, and how well you or your clients fare when it’s actually show time is going to depend not just on how you’ve backed up, but on how well you’ve planned for disaster recovery.

Disaster can take many forms. It can be something as simple as a hardware failure, such as a crashed drive or a blown power supply, or something bigger: a burst pipe, a hurricane, a flood, a mudslide, or any of the other nasty happenings that we don’t necessarily anticipate. Could your practice recover if a meteorite slammed through your roof and obliterated your server rack? Sure, the likelihood of that happening is pretty low, but so was the likelihood of billions of dollars in securities locked away in bank vaults being destroyed when the financial district of New York City was underwater several years ago.



That doesn’t mean that you should be lying awake in bed at night worrying about the desktop PC in your office. You just need to have a reasonable plan for the worst. And making that plan means knowing how you’ll back up, what you’ll back up, and what you’ll need to do in the event that you have to actually use that backup to recover.

When most people think of backup, they think of backing up data, rather than the entire operating environment. Sometimes that level and amount of backup is sufficient. But sometimes it’s not. And knowing whether it is or isn’t sufficient is critical when you get to the end of the journey — recovery.

For most of the time computers have been available, backup has been conducted in-house. Tape, disk, CD/DVD and Network Attached Storage have been, and still are, major destinations for backups. These days, however, backing up into the cloud is becoming increasingly attractive. It provides a valuable disconnect between the physical location of your and your clients’ hardware and the location where the backup resides.

There are many cloud-based backup services available designed specifically for business use. While Mozy and Carbonite offer cloud storage and backup for personal use, they (and other vendors) also offer professional versions. These have extended features like being able to back up servers and NAS, as well as individual PCs and laptops.

Jim Bourke, a CPA and partner with Top 100 Firm WithumSmith+Brown, swears by cloud-based backup and has his firm’s experiences with Superstorm Sandy to underscore why. “WS+B’s philosophy has been to migrate in-house technologies to the cloud as they become available. We focused on our mission-critical technologies, moving tax to the cloud first, followed soon by document management for file storage,” he said. “During Sandy, nine out of our 13 offices were directly impacted. The fact that we had all of our client and many of our own firm documents in a cloud-based document management system was priceless! Many of our clients and other local firms lost 100 percent of their documents that were stored on site. Clients reached out to us in a panic wondering about the implications of the loss of supporting documents for their previously filed tax returns. We were unequivocally able to assure them that all documents that they provided to us were safe and we could easily replicate those for them after they recovered from the tragedy.”

“Subsequent to Sandy we’ve been moving full steam ahead on migrating other existing technologies to the cloud,” he continued. “In fact, those technologies that are not in the cloud have been transitioned to our own outside data center where we basically support our own ‘private cloud.’”

One of the benefits of using commercial cloud vendors is that their data centers are physically very secure and, in most cases, spread out over a wide geographic area. With a privately maintained cloud, you need to make sure that the hardware your cloud resides on is located somewhere unlikely to be affected by a disaster that takes out your own hardware, and somewhere that power and Internet access will be maintained so that you can reach your backup for a restore.



Buying and installing backup and recovery software is just a piece of the puzzle. Developing and testing an organized backup and recovery plan is the whole picture, and this process should ideally start before you choose a service or piece of software. Scott Wegner, a partner in the technology practice at Top 100 Firm Sikich LLP, offers this advice: “Strategic planning is really important for backups. Many of our clients tend to be myopic, and look strictly at the backup of their data, rather than looking at the big picture. We like to follow a big-picture approach including business continuity planning. In general, we try and follow the standards developed by [the National Institute of Standards and Technology, a division of the U.S. Department of Commerce].”

These standards are spelled out in NIST Special Publication 800-34 Rev. 1, the Contingency Planning Guide for Federal Information Systems, which is available for free online.

Wegner continued, “These standards do get into a lot more depth than just the data. Data is one component, but there’s people, there’s process, and there’s access to that data and security.” Wegner’s approach starts with asking the client, “What does it take to recover the business?” and works from there. When Sikich does consult with its clients on backup and recovery, the firm always recommends developing a formal written plan, though clients don’t always follow this advice.

And that’s a great way to start your disaster plan. Look at the workflows that exist in your practice (or at your clients, if you are helping them with their disaster plan). Where does data come in? Where is it used? Where and how are the source data and the results of using that data stored?

Above all, look for the vulnerabilities. What data and applications are mission-critical to the continuity of the practice? Unfortunately, there have been many instances of accounting firms backing up only their servers, only to find out during the recovery process that important information was stored solely on the workstations, which weren’t included in the backup plan.



In many backup and recovery protocols, you’re creating an image of a desktop or server drive with the idea that restoring that image to a new drive will completely restore all of the data files and applications installed on the original. Most of the time, this approach works just fine. Occasionally, though, the hardware the image is being restored to is different enough from the original hardware and operating system that a full image restore fails or becomes corrupted to some degree.

You can address this by using a backup application that has the capability of performing a restore onto a completely different hardware configuration than the original backup was made from. Some examples of this kind of application are Acronis True Image with Universal Restore, Symantec System Recovery 2013 R2, and backup software and physical and virtual backup/recovery appliances from Datto.

If you have the IT expertise in-house, another backup and recovery option is to virtualize your servers and workstations, store the virtualized files somewhere safe (such as in the cloud), and, if you need to perform a recovery, recover them to a compatible hypervisor running on whatever hardware is available that supports that particular hypervisor application. This does, however, require a fair amount of technical expertise to accomplish reliably. That said, free desktop virtualization software for different versions of Windows is available from Microsoft and other vendors, and Microsoft server operating systems include virtualization capability as a standard feature.

Another vulnerability in backup strategy is backing up just data files, figuring you can re-install the applications if you have to. But re-installing the applications that use that data depends on having access to the original installation discs. If you’ve been keeping up with the application upgrades, and lose access to the application discs, you may be able to get a set of replacements, or in the worst case, repurchase the application. But if you (or the client you are advising on backup/recovery) are using a legacy application, one with data formats not compatible with current application versions, or perhaps one that is no longer even available, recovery may be complex, or even impossible.

Given that there is a fair chance that the original install discs may be damaged in a disaster, consider backing up application discs separately. One workable approach is to gather all of the application discs you have, convert them into ISO image files with an application such as ImgBurn or Active@ISO Burner (both of which are free), and store the resulting ISO images in several places. An ISO image differs from a simple copy in that it is a clone of the disc itself, rather than of the individual files, though those files are contained within the image. Using an ISO image, you can burn a CD or DVD that will be a copy of the original disc. This approach is a lot simpler than copying multiple files from a CD or DVD to a folder, then copying them back when you need a disc to re-install an application.
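On Linux or macOS, the same clone-the-disc idea can be done from the command line. The sketch below is illustrative only: a real optical drive would be a device like /dev/sr0 (names vary by system), so a dummy file stands in for the disc here to keep the commands runnable anywhere.

```shell
# Sketch: clone a disc into an ISO image and record a checksum.
# A real drive would be something like /dev/sr0 (an assumption --
# device names vary); a dummy file stands in for the disc here.
DISC=disc_standin.bin
printf 'sample disc contents' > "$DISC"

# dd copies the disc sector-for-sector; 2048 bytes is the standard
# CD/DVD data sector size.
dd if="$DISC" of=app.iso bs=2048 status=none

# Record a SHA-256 checksum alongside the image so any later copy
# (cloud, flash drive) can be verified against it.
sha256sum app.iso > app.iso.sha256
sha256sum -c --quiet app.iso.sha256 && echo "image verified"
```

The checksum file travels with the image; re-running the `-c` check after copying the ISO to the cloud or a flash drive confirms the copy is intact.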

It’s a good idea to store these ISO files in multiple places, along with a text file that contains the product ID number or install serial number for each application. A Dropbox Pro account costs $10 a month and provides 1 terabyte of storage. With single-layer DVDs holding 4.7 gigabytes and dual-layer discs 8.5 gigabytes (and the ISO file you create can’t be larger than the original disc), you can fit a lot of application ISO images in 1TB of cloud space.

Having these images in multiple places is a good idea, and a 64GB flash drive costs about $20, so consider copying your application disc ISO images to a flash drive as well. You can put this drive in a zippered food storage bag and carry it with you in your pocket in case you are in a recovery situation where you don’t have access to the cloud.

Different strategies for backup and disaster recovery are detailed in the NIST publication mentioned above. Cisco Systems Inc. also has a great white paper, “Disaster Recovery: Best Practices,” available for download on its Web site.

Another excellent resource is the Disaster Recovery Plan Template available from the Info-Tech Research Group. This is primarily a resource for the consultancy’s clients, but they graciously make it available for free download, though you do have to fill out a short form giving them basic information about your or your client’s business. The template is a 43-page document with highlighted and replaceable text, and also serves well as a best practices guide. When you are satisfied that the finished plan is viable and has been tested, it’s a good idea to print physical paper copies for each person in your firm who needs one, laminate every page, and put each copy in a three-ring binder. The laminated physical copy may survive a flood or other disaster that destroys an electronic copy or makes it unavailable.



Having a formal disaster recovery plan that includes backup planning and protocols is an excellent step, but it’s not the final one. Testing the plan at regular intervals to make sure it works, and making any necessary adjustments, is also vitally important. Helmuth von Moltke, a 19th-century Prussian field marshal, is famous for the observation that no plan survives contact with the enemy. Of course, that’s not always the case, but for purposes of this discussion, a disaster of any kind is the “enemy,” and you want to make sure that the first time your plan is needed, it survives the experience.

Mike Inkrott, senior product manager at Symantec, emphasized, “Testing backups on a periodic basis is considered a best practices staple. Only if you know your backup can be recovered can you be assured that you are truly protected. This can start with a simple verify operation to ensure that the backed-up data matches the source data, but to truly test the process, some data should be restored. Randomly selecting a backup to restore will not only help with determining the veracity of your data protection scheme, it will also familiarize you with the process of data restoration — an important skill during critical outages.”

“Testing can be done to alternate hardware which is kept separate from the production network,” Inkrott continued. “You can use a virtual environment such as VMware or Hyper-V for test recovery purposes. Virtualization is a great help here, as virtual machines can be started, tested and torn down easily with no impact on hardware. You can also use Symantec System Recovery SRD to perform a test recovery of a system before a disaster occurs. A test recovery lets you familiarize yourself with the recovery process before it is needed! It’s important that you test your recovery procedure at least once a month.”
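Inkrott’s verify-then-restore routine can be sketched as a small script. Everything here is illustrative: the directory names and sample files are stand-ins, and the `cp` “backup” step stands in for whatever backup software you actually run.

```shell
# Sketch of a periodic restore test: back up a folder, restore one
# randomly chosen file to a scratch area, and confirm it matches the
# source byte-for-byte. All paths and file names are illustrative.
mkdir -p source backup restore_test
printf 'client ledger'     > source/ledger.txt
printf 'engagement letter' > source/letter.txt

# Stand-in for your real backup job.
cp -a source/. backup/

# Restore a randomly selected file; picking a different file each run
# exercises more of the backup set over time.
FILE=$(ls backup | shuf -n 1)
cp "backup/$FILE" restore_test/

# Verify the restored copy against the original.
if cmp -s "source/$FILE" "restore_test/$FILE"; then
    echo "restore of $FILE verified"
else
    echo "restore of $FILE FAILED" >&2
    exit 1
fi
```

Run on a schedule (cron, Task Scheduler), a check like this turns “we think the backup works” into a monthly proof that at least part of it actually restores.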

One final thought: It’s wonderful to have sincere concerns and intentions about developing a comprehensive and workable backup and disaster recovery plan. But concern is worthless if you don’t actually sit down and get it done and tested. Disasters happen and hardware breaks down. If you aren’t completely prepared, you could lose your practice, your clients and your livelihood.
