Failure To Move

George Crump, President, Storage Switzerland

December 7, 2009

4 Min Read

Don MacVittie, in his blog over at F5 (http://devcentral.f5.com/weblogs/dmacvittie/archive/2009/12/06/file-virtualizationhellip-the-short-primer.aspx), recently commented on an article we wrote, "What is File Virtualization?" (http://www.storage-switzerland.com/Articles/Entries/2009/12/3_What_is_File_Virtualization.html), indicating that we missed a key issue: how to handle it when your virtualization box goes down. While my defense could be that the subject is beyond the scope of a primer, it is not beyond the scope of this blog. If you are considering a tiered storage model, what do you do when your data mover fails?

A key consideration when moving data between tiers of storage is what to do when the box responsible for that movement goes down. As I have written in many blog entries, there are plenty of ways to move data between tiers of storage, but the most common seem to be manual copying, automated data movement software, and a file virtualization or global file system. How does each of these let you still get to your data if it has failed?

The manual method requires no change. You were manually copying data and, I would assume, telling users something like "if it's not here, check there." Other than the storage system itself, there is really nothing to break. The problem with the manual method, of course, is that it is manual, and most IT professionals have plenty to do during the day; adding another manual task to the list is not going to be popular. The manual method may not scale well either, since the archive target has to remain basically the same so users know where to look.
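
For illustration only, here is a minimal sketch of what that manual process often amounts to in practice: a script that sweeps files untouched past some age from primary storage to an archive share. The mount points, paths, and age threshold are all hypothetical.

```python
import os
import shutil
import time

# Hypothetical mount points for the primary and archive tiers
PRIMARY = "/mnt/primary/projects"
ARCHIVE = "/mnt/archive/projects"
AGE_DAYS = 180  # move files not touched in roughly six months

cutoff = time.time() - AGE_DAYS * 86400

for root, _dirs, files in os.walk(PRIMARY):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getmtime(src) < cutoff:
            # Mirror the source directory structure on the archive
            # so "if it's not here, check there" actually works.
            rel = os.path.relpath(src, PRIMARY)
            dst = os.path.join(ARCHIVE, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)
```

Nothing here can fail in a way that strands the data; the cost is that someone has to keep running it and keep the archive layout stable.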

The automated migration software approach typically moves data for you based on file policies. To help users find their way back to their files, the software typically leaves behind a stub file that points to the new location. If the application crashes or those stub files get corrupted, how do you get to your files? That depends in part on the application. If it stores migrated files as blobs in a database, getting to that data could be quite challenging. If the data gets migrated to tape, then you are probably going to need the application back up and running before you can get to your data. If the stub left behind leverages shortcuts or symbolic links, then those should still work even if the software has failed, but things tend to get messy with these approaches, and you still have the issue of millions of small (now smaller) files on your primary storage.
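
The symbolic link variant is the one that degrades most gracefully, because the link resolves on its own with no migration software in the access path. A minimal sketch of the idea, with hypothetical file names and mount points:

```python
import os
import shutil

def migrate_with_stub(src: str, primary_root: str, archive_root: str) -> None:
    """Move a file to the archive tier and leave a symlink stub behind."""
    rel = os.path.relpath(src, primary_root)
    dst = os.path.join(archive_root, rel)
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.move(src, dst)
    # The stub: a symlink at the old path pointing to the new location.
    # It keeps resolving even if the migration software is down, as long
    # as the archive file system itself is mounted and reachable.
    os.symlink(dst, src)

# Example usage (hypothetical paths):
migrate_with_stub(
    "/mnt/primary/projects/report.pdf",
    primary_root="/mnt/primary",
    archive_root="/mnt/archive",
)
```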

Even if the automated software approach moves data to another disk and keeps it in native file format, it is often stored in a nonsensical manner. In theory you could manually browse the destination file system and find your data, but that file system sometimes looks nothing like the one you migrated from. Often it's just a bunch of date-stamped directory names with files dumped inside them. Essentially, the application assumes that you will always have the application available to recover data. That may or may not be a good assumption.
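
To make the problem concrete, here is a hypothetical sketch of that kind of opaque layout: the tool writes each file under a date-stamped directory with a generated name and records the original path only in its own catalog. Lose the catalog, or the application that reads it, and the mapping is gone.

```python
import os
import shutil
import sqlite3
import uuid
from datetime import date

ARCHIVE = "/mnt/archive"
os.makedirs(ARCHIVE, exist_ok=True)

# The application's private catalog: the only place the original path
# is recorded. Without it, /mnt/archive/2009-12-07/<uuid> is just an
# anonymous blob.
catalog = sqlite3.connect(os.path.join(ARCHIVE, "catalog.db"))
catalog.execute(
    "CREATE TABLE IF NOT EXISTS files (obj_id TEXT, original_path TEXT)"
)

def migrate_opaque(src: str) -> None:
    """Move a file into a date-stamped directory under a generated name."""
    obj_id = uuid.uuid4().hex
    dst_dir = os.path.join(ARCHIVE, date.today().isoformat())
    os.makedirs(dst_dir, exist_ok=True)
    shutil.move(src, os.path.join(dst_dir, obj_id))
    catalog.execute("INSERT INTO files VALUES (?, ?)", (obj_id, src))
    catalog.commit()
```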

File virtualization differs in that the metadata, the information about where the file actually resides, is stored within the appliance. Typically these appliances are highly available and can be implemented in redundant pairs. The file systems they virtualize remain untouched, so they can be accessed manually if the file virtualization engine fails for some reason. You do need to know where the file virtualization system is placing data, so having a copy of your configuration can come in handy, but you can structure the target devices to mirror the logical directory structure of the source devices.
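
In other words, the appliance is essentially a metadata map from a logical namespace to physical NAS paths, and if the targets mirror the logical directory tree, that map is trivially reconstructible by hand. A rough sketch of the idea, with hypothetical tier names and paths:

```python
# A toy model of a file virtualization namespace: the appliance maps
# each logical path to whichever physical tier currently holds the file.
TIERS = {
    "tier1": "/mnt/nas1",   # primary NAS
    "tier2": "/mnt/nas2",   # archive NAS
}

# The appliance's metadata: logical path -> tier currently holding it.
placement = {
    "/projects/report.pdf": "tier2",
}

def resolve(logical_path: str) -> str:
    """What the appliance does on every access."""
    tier = placement.get(logical_path, "tier1")
    return TIERS[tier] + logical_path

# Because each tier mirrors the logical directory structure, a user can
# still find /projects/report.pdf by hand on /mnt/nas1 or /mnt/nas2
# even if the appliance (and this map) is unavailable.
print(resolve("/projects/report.pdf"))  # /mnt/nas2/projects/report.pdf
```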

Finally, some file virtualization systems can help get around the storage system failure issue as well. They can replicate moved data to a second NAS system and then, in the event of a failure on the primary archive, reroute users to the remaining system. While file virtualization may not be the 'be-all and end-all', it certainly may play a role in making true tiered storage a reality.
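
Conceptually, that reroute is just a health check in the path resolver: copy on migration, fall back to the replica when the primary archive stops responding. A self-contained sketch under the same hypothetical layout as above:

```python
import os
import shutil

ARCHIVE = "/mnt/nas2"   # primary archive NAS
REPLICA = "/mnt/nas3"   # second NAS holding copies of moved data

def replicate(logical_path: str) -> None:
    """Copy an archived file to the replica NAS, same logical layout."""
    dst = REPLICA + logical_path
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.copy2(ARCHIVE + logical_path, dst)

def resolve(logical_path: str) -> str:
    """Reroute to the replica if the primary archive is unreachable."""
    primary = ARCHIVE + logical_path
    if os.path.exists(primary):
        return primary
    return REPLICA + logical_path
```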

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.
