DAS VS. SAN - Capacity And Performance Management

Capacity presents two challenges in the Storage Area Network (SAN) vs. Direct Attached Storage (DAS) debate, and these challenges are a traditional knock against DAS and a reason many data centers move to a SAN. The first is whether you can get enough capacity; the second is whether you can use that capacity efficiently in a performance-sensitive environment. DAS, however, can now address both of these issues.

George Crump, President, Storage Switzerland

May 7, 2009


Typically there are three types of storage needs in primary storage. The first is very high performance at relatively small capacity for specific applications; a database application or messaging system is a good example. The second is high capacity with modest performance for a server with a mixed workload, such as a virtualization host. The third is very high capacity with relatively low performance; it does not need to reach the capacity level of a disk archive or backup tier, but it does need reasonable capacity. A home directory server is a good example.
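
As a rough illustration, the three tiers might be summarized like this (a minimal sketch; the workload labels come from the examples above, and the structure itself is just one way to organize them):

```python
# Illustrative summary of the three primary-storage tiers described above.
# The tier names and descriptions are organizational, not vendor terms.
PRIMARY_STORAGE_TIERS = {
    "high_performance": {
        "example_workloads": ["database application", "messaging system"],
        "capacity": "relatively small",
        "performance": "very high",
    },
    "mixed_workload": {
        "example_workloads": ["virtualization host"],
        "capacity": "high",
        "performance": "modest",
    },
    "high_capacity": {
        "example_workloads": ["home directory server"],
        "capacity": "very high (below the archive/backup tier)",
        "performance": "relatively low",
    },
}
```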

The first type, high performance at low capacity, has been the hardest to configure efficiently. The performance is certainly there: I recently sat in on an LSI Corporation demonstration that showed more than 1 million IOPS with 6Gb/s Serial Attached SCSI (SAS) on an Intel-based server running Windows. This low-cost performance is part of the reason for DAS's resurgence.
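
For context, IOPS figures like that are measured by issuing a stream of small random reads and counting completions per second. Here is a minimal single-threaded sketch of the idea, assuming a Linux block device path such as /dev/sdb (a hypothetical placeholder); a real benchmark would use many parallel workers and direct I/O:

```python
import os
import random
import time

# Minimal random-read IOPS sketch. Single-threaded and page-cached, so it
# will not approach the multi-million-IOPS numbers a tuned benchmark shows.
DEVICE = "/dev/sdb"     # hypothetical: point at a disk or large test file
BLOCK = 4096            # 4 KB, a common benchmark I/O size
DURATION = 10           # seconds to run

fd = os.open(DEVICE, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)
blocks = size // BLOCK

ops = 0
deadline = time.time() + DURATION
while time.time() < deadline:
    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)  # random 4 KB read
    ops += 1
os.close(fd)

print(f"~{ops / DURATION:,.0f} IOPS (single-threaded; cached reads inflate this)")
```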

However, the challenge, as we touched on in our last entry, is getting this performance while at the same time using the capacity efficiently. These systems will certainly be configured with multiple drives, both to increase performance and to protect against drive failure. The problem comes in using that space efficiently. While small drives can still be purchased, their price per GB is not as attractive as that of a larger-capacity drive, and as a result you end up wasting capacity, because in a DAS environment that capacity cannot be shared with other servers.
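
The waste is easy to quantify. Suppose an application needs a modest amount of space, but for performance you build the RAID set from many spindles. A minimal sketch, with all figures hypothetical:

```python
# Hypothetical example: stranded capacity in a performance-configured
# DAS RAID set. Spindle count is chosen for IOPS, not for capacity.
needed_gb = 300           # what the application actually requires
spindles = 12             # drive count chosen for performance
drive_gb = 146            # smallest economical drive size (hypothetical)
raid10_factor = 0.5       # RAID 10 mirrors half the raw capacity

usable_gb = spindles * drive_gb * raid10_factor   # 876 GB usable
stranded_gb = usable_gb - needed_gb               # 576 GB stranded

print(f"usable: {usable_gb:.0f} GB, stranded: {stranded_gb:.0f} GB "
      f"({stranded_gb / usable_gb:.0%} wasted)")
```

On a SAN, that leftover capacity could be presented to other hosts; on DAS, it is locked to the one server.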

DAS's lack of efficiency is further exposed by the fact that shared storage systems from companies like 3PAR, Xiotech, and DataCore are becoming increasingly efficient through the use of thin provisioning or automated provisioning. As we describe in our article Converting from Fat Volumes to Thin Provisioning, these companies are now making the conversion to thin provisioning efficient as well, so old volumes can start out thin when you upgrade to a new storage platform.
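
Conceptually, thin provisioning defers physical allocation until a block is first written, instead of reserving the volume's full size up front. A minimal allocate-on-write sketch (a toy model, not any vendor's implementation):

```python
class ThinVolume:
    """Toy allocate-on-write volume: physical space grows only as blocks
    are written, unlike a 'fat' volume that reserves everything up front."""

    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks   # size the host sees
        self.mapping = {}                      # virtual block -> physical block

    def write(self, vblock, data):
        if vblock not in self.mapping:         # first touch: allocate now
            self.mapping[vblock] = len(self.mapping)
        # ... store data at physical block self.mapping[vblock] ...

    @property
    def allocated_blocks(self):                # actual physical consumption
        return len(self.mapping)

vol = ThinVolume(virtual_blocks=1_000_000)     # host sees ~4 GB at 4 KB blocks
vol.write(0, b"...")
vol.write(42, b"...")
print(vol.allocated_blocks, "of", vol.virtual_blocks, "blocks consumed")
```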

In addition, SAN-based systems are now virtualizing the spindles behind the array so that all of the available drive mechanisms can be used to get maximum performance from the array, as we detailed in a blog last year entitled Wide Striping. Obviously, SSD systems can be used and shared in this environment as well for the ultimate in performance-focused capacity.
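
The core of wide striping is a simple address mapping: logical blocks rotate round-robin across every spindle in the pool, so no single drive becomes the bottleneck. A minimal sketch of that mapping, with a hypothetical stripe geometry:

```python
# Minimal wide-striping address map: stripe units rotate across all
# spindles in the pool, so sequential and random I/O touch every drive.
STRIPE_BLOCKS = 16     # blocks per stripe unit (hypothetical)
SPINDLES = 48          # every drive in the pool, not just one RAID set

def locate(lba):
    """Map a logical block address to (spindle, offset on that spindle)."""
    stripe_unit = lba // STRIPE_BLOCKS
    spindle = stripe_unit % SPINDLES
    offset = (stripe_unit // SPINDLES) * STRIPE_BLOCKS + (lba % STRIPE_BLOCKS)
    return spindle, offset

# A burst of sequential I/O fans out across the whole pool:
print({locate(lba)[0] for lba in range(0, 48 * 16, 16)})  # 48 distinct drives
```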

SSD in the form of PCI-E cards, like those from Texas Memory Systems and Fusion-io, may also be an answer to the performance vs. efficiency problem with DAS, and may be a viable stopgap on the way to a SAN for many IT departments. These cards bring RAID-like protection with SSD performance at a relatively low cost, and in fact may be less expensive than a performance-configured RAID set. Factor in the power efficiencies of SSD, and the case becomes more compelling. If storage I/O performance problems are plaguing just a few of your servers, it may be more cost-effective to investigate PCI-E-based SSD, especially if you don't have a SAN already.
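
A back-of-the-envelope comparison shows why: to hit a given IOPS target with mechanical drives, you buy spindles for performance rather than capacity, while a single PCI-E SSD card delivers the IOPS in one slot. A minimal sketch with hypothetical prices, drive ratings, and power figures:

```python
# Hypothetical comparison: one PCI-E SSD card vs. a RAID set of 15K RPM
# drives sized to meet the same random-read IOPS target. All figures
# are illustrative placeholders, not quoted prices or specs.
target_iops = 100_000

hdd_iops, hdd_price, hdd_watts = 180, 250, 15   # per 15K drive (hypothetical)
ssd_price, ssd_watts = 15_000, 25               # one PCI-E SSD card (hypothetical)

drives = -(-target_iops // hdd_iops)            # ceiling division: 556 drives
print(f"HDD: {drives} drives, ${drives * hdd_price:,}, {drives * hdd_watts:,} W")
print(f"SSD: 1 card, ${ssd_price:,}, {ssd_watts} W")
```

Even if the per-unit numbers shift, the shape of the result holds: the drive count, and with it the power and cooling bill, is dictated by IOPS, not by the capacity you actually need.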

You can reach new levels of performance with DAS today. The decision to be made is whether that performance is enough, and whether you can either use the capacity of that storage efficiently or justify wasting the extra capacity, keeping in mind that the power and cooling of those drives need to be factored into the equation.

In our next entry we will look at how DAS can address the challenges of achieving large capacity storage in a single server.

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.

About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.

