Can Backups Be Made Obsolete?

Backups have long been a source of pain and frustration for enterprises of all sizes; they are constantly causing problems because the growth and value of data are increasing faster than the network's ability to deal with that data. The problem keeps many IT professionals awake at night, and most surveys indicate low confidence in the ability to recover from a disaster. But how can backups be made obsolete?

George Crump, President, Storage Switzerland

June 1, 2009

A key step, as we have chronicled in our article on "Deduplication Weaknesses," is to make sure you are focusing on only the active data set. One way to do this is through the aggressive archiving scheme that disk-based archiving enables. With a disk-based archive, old data can be returned to users fast enough that they won't even know it's not on primary storage, making a rule that migrates data off primary storage after 30 days of inactivity feasible.
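
As a rough illustration of such a migration rule, the sketch below walks a primary file share and moves anything untouched for 30 days to an archive tier. The paths, the use of last-access time as the inactivity signal, and the move behavior are assumptions made for the example; a real disk-based archiving product does this with its own policy engine and leaves transparent stubs or links behind.

    # Minimal sketch of a 30-day inactivity migration rule (illustrative only;
    # real disk-based archiving products use their own policy engines and metadata).
    import os
    import shutil
    import time

    INACTIVITY_DAYS = 30          # migrate files idle longer than this
    PRIMARY = "/mnt/primary"      # hypothetical primary storage mount
    ARCHIVE = "/mnt/archive"      # hypothetical disk-based archive mount

    def migrate_inactive_files(primary=PRIMARY, archive=ARCHIVE, days=INACTIVITY_DAYS):
        cutoff = time.time() - days * 86400
        for root, _dirs, files in os.walk(primary):
            for name in files:
                src = os.path.join(root, name)
                # Use last-access time as the inactivity signal.
                if os.stat(src).st_atime < cutoff:
                    dst = os.path.join(archive, os.path.relpath(src, primary))
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.move(src, dst)  # an archive product would leave a stub or link here

    if __name__ == "__main__":
        migrate_inactive_files()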

The next step is to change the way backups are done. Most backups require moving all the data from each server across the network every time a full backup is run, and a significant subset of that data every time an incremental or differential is run. This puts tremendous load on the environment, but what if that could change? What if data could be written to backup storage as it changes on primary storage?
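
As a back-of-the-envelope illustration (the figures are hypothetical, not drawn from any particular environment), consider a 2TB server with a 5 percent daily change rate:

    # Back-of-the-envelope comparison of weekly backup traffic (hypothetical figures).
    primary_tb = 2.0           # size of the server's data set, in TB
    daily_change_rate = 0.05   # 5% of the data changes each day

    weekly_full_plus_incrementals = primary_tb + 6 * (primary_tb * daily_change_rate)
    changes_only = 7 * (primary_tb * daily_change_rate)

    print(f"Weekly full + daily incrementals: {weekly_full_plus_incrementals:.1f} TB on the wire")
    print(f"Changed data only:                {changes_only:.1f} TB on the wire")
    # Roughly 2.6 TB vs. 0.7 TB per week -- and the change-only traffic is spread
    # across the week rather than concentrated in a backup window.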

That is exactly what Continuous Data Protection (CDP) attempts to accomplish. Companies like InMage, FalconStor, and Syncsort all have CDP or near-CDP capabilities that write data to a secondary storage device as it changes. Sometimes this is done via a write splitter that actually writes data to two locations simultaneously; often it is a scheduled scan that collects all the writes since the last scan and updates the target.
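
The scheduled-scan approach can be pictured as a loop like the minimal sketch below. The paths and interval are assumptions made for the example, and the sketch works at the file level for readability; the products above typically intercept writes at the block level through a write splitter or journal.

    # Minimal sketch of a near-CDP "scheduled scan": copy anything written since the
    # last pass to a secondary target. Real products typically intercept writes at
    # the block level rather than walking the filesystem.
    import os
    import shutil
    import time

    SOURCE = "/mnt/primary"     # hypothetical primary volume
    TARGET = "/mnt/cdp_target"  # hypothetical secondary storage device
    INTERVAL_SECONDS = 300      # how often the scan runs

    def scan_and_replicate(last_scan_time):
        scan_start = time.time()
        for root, _dirs, files in os.walk(SOURCE):
            for name in files:
                src = os.path.join(root, name)
                if os.stat(src).st_mtime >= last_scan_time:    # written since the last pass
                    dst = os.path.join(TARGET, os.path.relpath(src, SOURCE))
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.copy2(src, dst)                     # update the target copy
        return scan_start

    if __name__ == "__main__":
        last = 0.0
        while True:
            last = scan_and_replicate(last)
            time.sleep(INTERVAL_SECONDS)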

CDP is different from replication or mirroring because the application typically also has the ability to create a snapshot of the secondary storage target before it is updated from the primary data set. A snapshot is important because it gives the user a point-in-time rollback capability. It is also different from a storage system snapshot because it is a copy of data on a different storage device than the primary copy, which means it survives if primary storage fails.
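
To make the rollback idea concrete, here is a minimal sketch, assuming a handful of hypothetical snapshot names and timestamps on the secondary copy: a point-in-time rollback is simply a matter of picking the newest snapshot taken at or before the moment the user wants to return to.

    # Minimal sketch of point-in-time rollback selection. The snapshot names and
    # timestamps are hypothetical; a CDP product keeps this catalog itself.
    from datetime import datetime

    snapshots = {                                    # snapshot name -> time it was taken
        "cdp-snap-0800": datetime(2009, 6, 1, 8, 0),
        "cdp-snap-1200": datetime(2009, 6, 1, 12, 0),
        "cdp-snap-1600": datetime(2009, 6, 1, 16, 0),
    }

    def pick_rollback_snapshot(rollback_to):
        candidates = [(t, name) for name, t in snapshots.items() if t <= rollback_to]
        if not candidates:
            raise ValueError("no snapshot exists at or before the requested point in time")
        return max(candidates)[1]                    # newest snapshot not after rollback_to

    # e.g. roll back to just before a 2:15 PM corruption event
    print(pick_rollback_snapshot(datetime(2009, 6, 1, 14, 15)))   # -> "cdp-snap-1200"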

Another difference between CDP and backup is that the secondary disk target is a usable copy of the data; it is not in a proprietary format. This copy can be snapshotted again and mounted to an alternate server for testing, or even to a backup server so it can be backed up locally rather than across the network.

Most importantly, some CDP applications allow you to mount this data directly from the original server, a recovery-in-place capability. Even if the server was protected by CDP over the IP network instead of the SAN, it can still mount the copy via a quick iSCSI connection. By directly mounting the data, the restore process is eliminated or at least pushed to the background. This is critical because even when restores from traditional backups work, the time required to move a server's data back across the network and write it to a RAID 5 volume is rarely calculated correctly. Directly mounting the data takes that time out of the equation.
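
From the recovering server's side, recovery in place over iSCSI might look roughly like the sketch below. It assumes a Linux host with the standard open-iscsi tools installed, and the portal address, target name, device path, and mount point are all hypothetical; the actual workflow depends entirely on the CDP product.

    # Rough sketch of recovery in place from the recovering server's side: log in to
    # the CDP appliance's iSCSI target and mount the protected copy directly, instead
    # of restoring it back across the network. Assumes a Linux host with open-iscsi;
    # the target IQN, portal, device path, and mount point below are hypothetical.
    import os
    import subprocess

    PORTAL = "192.168.1.50"                               # CDP appliance's iSCSI portal (assumed)
    TARGET_IQN = "iqn.2009-06.example.cdp:server01-copy"  # hypothetical target exported by the appliance
    DEVICE = "/dev/sdb1"                                  # device the target appears as (varies by host)
    MOUNT_POINT = "/mnt/recovered"

    def mount_cdp_copy():
        # Discover targets on the appliance, log in, and mount the recovered volume.
        subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)
        subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"], check=True)
        os.makedirs(MOUNT_POINT, exist_ok=True)
        subprocess.run(["mount", DEVICE, MOUNT_POINT], check=True)
        # The application can now be pointed at MOUNT_POINT while data is copied back
        # to repaired primary storage in the background.

    if __name__ == "__main__":
        mount_cdp_copy()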

All of this can be done as a full complement to the backup process and still leverage support for VTL and deduplication appliances. CDP has been available for years, but maybe we are just now at the point where we really need it, and when it is fully implemented, backups can be made obsolete.
 Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.

About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.
