Organizations that have invested in multiple storage area networks (SANs) are still seeing their data – and storage requirements – grow. This is why storage virtualization is appealing: it presents multiple storage devices as one, hiding their physical complexity from the storage administrator. It lets organizations see whether a SAN is nearing capacity or has plenty of room, and so get better storage utilization out of their current infrastructure rather than buying yet another SAN.
But virtualization can be expensive, and organizations have to consider whether they’re better off investing in storage management to curb the growth of their storage, or turning to other options, such as the clustered storage offered by iSCSI vendors.
For Glendon College, part of York University in Toronto, virtualization made sense because it had near-obsolete technology and was looking to do a complete overhaul. “There’s not a lot of money in education to support IT, but these enterprise solutions are becoming more affordable for small non-profit organizations,” said Mario Therrien, director of information technology with Glendon College. Virtualization, he added, allows the college to compete with bigger IT shops.
It built an IT infrastructure using servers, storage and software from Sun Microsystems to improve scalability and reliability, while minimizing overhead costs. This included consolidating 13 servers onto five Sun Fire X4100 servers running VMware virtualization software, integrated with a Sun StorageTek SAN data management system.
“We actually didn’t make a conscious decision to go toward a SAN system,” said Therrien. “It was more a decision to adopt a particular architecture.” The college has one SAN serving multiple servers, but with VMware it was able to put more virtual servers onto the same box. “It was part of the whole architecture that really made us decide to go toward a SAN solution,” he said. “The solution is scalable, so we can just add storage as we go.”
A co-op venture
There’s a big focus on enhancing research profiles at universities, he said, and more faculty members are putting server expenditures in their grant requests. So the college created – using this infrastructure – a co-op-style venture in which faculty members contribute equal amounts of money, which goes toward growing the infrastructure and maximizing storage. “Prior to that you’d have a single server in an office somewhere and they only needed five per cent of that server,” he said. “That hardware storage would be lost.”
The college is looking ahead at how it’s going to grow this infrastructure, anticipating it will have to add more storage. “The business of the university is to create and capture knowledge,” said Therrien. “We really need to look at a knowledge management system. We have lots of knowledge, but trying to find it would be a challenge.” Now that everything is electronic, information is at risk because nobody is able to control it – research, for example, may be stuck on someone’s hard drive.
With a foundation built on virtualization, the college is looking at changing policies and putting systems in place to help people keep better track of information.
“We need to get better at retrieving,” he said. “We’d like to move toward a multi-tier infrastructure.”
The two principal areas where storage virtualization makes sense are SAN and Fibre Channel infrastructure, and network-attached storage (NAS), the more traditional form of integrated storage. “If you’ve got a relatively small environment, virtualization is probably not going to add any significant value,” said Ken Steinhardt, chief technology officer of customer operations with EMC Corp. Those who will benefit most are corporate enterprises with large, dynamically changing environments, where critical applications must stay live and continuously available at full performance.
One of the primary value propositions of server virtualization is lowering costs, typically by providing better utilization of existing hardware resources. The same idea applies to storage virtualization, said Steinhardt.
“If it’s done right I have the flexibility to move information between different devices, while the users continue to see them as available,” he said.
Virtualization also has the potential to simplify management: instead of juggling several incompatible management approaches, administrators get a single virtual file system in a NAS environment that transcends multiple physical storage platforms, even from multiple vendors. Another potential benefit is higher service levels. “It gives you the ability to maintain live applications even while the actual physical infrastructure could be changing right under (users’) feet,” he said.
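To make the idea of a single virtual file system concrete, here is a minimal Python sketch of a virtual namespace that maps one logical tree onto filers from different vendors. Everything in it – the class, the filer names, the mount layout – is hypothetical; real file-virtualization products do this in the data path on the storage network, not in application code.

```python
class VirtualNamespace:
    """Maps virtual paths to (backend filer, physical path) pairs."""

    def __init__(self):
        self._mounts = {}  # virtual prefix -> backend filer name

    def mount(self, virtual_prefix, backend):
        self._mounts[virtual_prefix] = backend

    def resolve(self, virtual_path):
        # Longest-prefix match, so one subtree can live on a
        # different filer than the rest of the namespace.
        for prefix in sorted(self._mounts, key=len, reverse=True):
            if virtual_path.startswith(prefix):
                return self._mounts[prefix], virtual_path[len(prefix):]
        raise FileNotFoundError(virtual_path)

ns = VirtualNamespace()
ns.mount("/research", "filer-vendor-a-01")  # hypothetical vendor A box
ns.mount("/admin", "filer-vendor-b-02")     # hypothetical vendor B box

# Users see one tree; an admin can remount /research onto new
# hardware without changing any client-visible path.
print(ns.resolve("/research/genomics/run42.dat"))
# -> ('filer-vendor-a-01', '/genomics/run42.dat')
```

This is also why service levels can improve: because clients only ever see the virtual path, the physical platform behind a prefix can change while applications stay live.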
Nova Scotia Department of Health’s HITS-NS data centre, which supports IT in 34 hospitals across the province, is another example. HITS-NS has one blade enclosure with a fibre connection to the SAN and another blade enclosure with iSCSI connections that serve as a gateway for storage. “We have flexibility in server virtualization as to what type of storage we need to have attached to the server,” said Gary Stronach, manager of technical services.
Today’s larger enterprises tend to have multiple SANs, and storage virtualization is a way of abstracting storage from the underlying hardware so it can be treated as a single resource, said John Sloan, senior research analyst with Info-Tech Research Group.
When it comes to server virtualization, it helps if the underlying hardware is standardized. “You can have boxes from more than one vendor, but if they’re all using the same class of processor, you can use them as a basis for virtual servers,” he said. “On the storage side it’s not that easy because storage is not nearly as standardized as servers are.” And that, he said, is the main hurdle that storage virtualization has yet to overcome.
Vendors such as EMC, IBM and Hitachi Data Systems have come up with different ways to attempt to overcome that. But the problem remains, said Sloan: whichever of these methods you use, they are expensive and add another layer of management.
Even though vendors talk about storage virtualization as the next big thing, he said, it hasn’t taken the world by storm to date. Another form of virtualization that may be closer to that ideal is the clustered storage provided by iSCSI vendors such as LeftHand Networks, EqualLogic and Intransa.
LeftHand, for example, takes a storage module – basically a storage server – and makes its storage available as an iSCSI target for setting up an IP-based SAN. The software running on that basic configuration will automatically recognize a second storage module and pool the capacity of the two modules into a single virtual storage pool.
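That pooling behaviour can be sketched in a few lines of Python. Everything below – the class names, module names and capacities – is invented for illustration; LeftHand’s actual software handles the discovery and data placement inside the modules themselves.

```python
class StorageModule:
    """A hypothetical storage server exposing capacity over iSCSI."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb

class VirtualPool:
    """Aggregates modules into one logical pool of capacity."""
    def __init__(self):
        self.modules = []

    def add_module(self, module):
        # In a real cluster this is the automatic-discovery step:
        # a new module joins and its capacity folds into the pool.
        self.modules.append(module)

    @property
    def total_gb(self):
        return sum(m.capacity_gb for m in self.modules)

pool = VirtualPool()
pool.add_module(StorageModule("module-1", 2000))
pool.add_module(StorageModule("module-2", 2000))  # scale out: add a box
print(f"one virtual pool: {pool.total_gb} GB")    # -> 4000 GB
```

The appeal is that scaling out is additive: each new module brings capacity (and, in real clusters, controllers and network ports) into the same pool, rather than creating another island of storage to manage.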
Large enterprises with multiple SANs from different vendors should look at what kind of utilization benefits they will get from applying a virtualization layer, said Sloan, versus the cost of investing in that.
Three approaches
If an organization can manage processing and storage as abstracted resources in a layer separate from the actual machines, there’s a lot of potential for getting the most out of their investment. “In practice it comes down to how easy it is to abstract that process,” he said. With servers, x86 infrastructure is more or less standardized, but storage is not quite at that same level of standardization and openness.
Many organizations are turning to storage virtualization to resolve a particular pain point.
“When it comes down to it, it’s mostly about making a lot of storage devices, whether disk drives or disk systems, look like a single manageable pool,” said Richard Villars, vice-president of storage systems with IDC Corp.
But there are different ways to approach virtualization. First, there’s the virtual tape library, which makes disk systems look like a tape library; the value is that you can eliminate backup windows and enable faster recovery of applications.
Second is block-level virtualization, where multiple disk systems are presented as a single block-storage environment. This allows you to set up servers, add capacity or migrate data without disrupting applications or the network (a sketch of the mapping behind this follows below).
“This is the foundation that makes the idea of tiered storage really work,” said Villars. “If all you want to do is have different storage for different apps, you can do that today – you just buy different boxes.” Third is file-level virtualization, which is similar to block-level, but rather than managing volumes, you’re managing files.
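As a rough illustration of block-level virtualization, the hypothetical Python sketch below maps a virtual volume’s logical blocks onto extents on physical arrays; migrating an extent just updates the map, which is why applications keep running through a move between tiers. The class and array names are assumptions, not any vendor’s API. File-level virtualization is the same idea with files instead of blocks, much like the namespace sketch earlier.

```python
class VirtualVolume:
    """Maps logical block ranges to extents on physical arrays."""

    def __init__(self, extent_blocks=1024):
        self.extent_blocks = extent_blocks
        self.extent_map = {}  # extent index -> (array, physical extent)

    def place(self, extent_idx, array, phys_extent):
        self.extent_map[extent_idx] = (array, phys_extent)

    def read(self, logical_block):
        # The host always addresses logical blocks; the map decides
        # which physical array actually serves the I/O.
        idx, offset = divmod(logical_block, self.extent_blocks)
        array, phys = self.extent_map[idx]
        return array, phys * self.extent_blocks + offset

    def migrate_extent(self, extent_idx, new_array, new_phys):
        # In a real system the data is copied in the background and
        # the map flips atomically, so applications see no outage.
        self.place(extent_idx, new_array, new_phys)

vol = VirtualVolume()
vol.place(0, "tier1-array", 7)
print(vol.read(100))                   # -> ('tier1-array', 7268)
vol.migrate_extent(0, "tier2-array", 3)
print(vol.read(100))                   # same logical block, new home
```

The mapping layer is what makes tiered storage more than just buying different boxes: a volume can be moved to a cheaper or faster tier without the application ever noticing.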
The area where storage virtualization makes a lot of sense is disaster recovery. One of the biggest barriers to disaster recovery has been setting up replication sites, which cost a lot of money. Since virtualization allows for tiered storage, an organization can deploy less expensive storage at a replication site. That site can then be used for recovery, in a matter of minutes, should a failure of some kind occur.
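The disaster-recovery pattern can be sketched the same way. The Python below is a toy model – it replicates synchronously where real products replicate asynchronously, and all names are invented – but it shows why failover to a cheaper tier can be a matter of redirecting I/O rather than rebuilding infrastructure.

```python
class Volume:
    """A hypothetical volume living on one tier at one site."""
    def __init__(self, site, tier):
        self.site, self.tier, self.blocks = site, tier, {}

class ReplicatedVolume:
    """Primary writes are mirrored to a cheaper volume at a DR site."""
    def __init__(self, primary, replica):
        self.primary, self.replica = primary, replica

    def write(self, block, data):
        self.primary.blocks[block] = data
        self.replica.blocks[block] = data  # async in real systems

    def failover(self):
        # Promote the inexpensive replica; I/O is redirected and
        # applications resume in minutes rather than days.
        self.primary, self.replica = self.replica, self.primary

vol = ReplicatedVolume(Volume("head-office", "tier-1 FC"),
                       Volume("dr-site", "tier-2 SATA"))
vol.write(0, b"ledger")
vol.failover()
print(vol.primary.site, vol.primary.blocks[0])  # -> dr-site b'ledger'
```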
But virtualization is not a panacea, and shouldn’t necessarily be used across an organization. In some cases, it doesn’t make a lot of sense, such as with performance-intensive applications, said Villars. “But for 80 per cent of applications, if not more, it’s not a problem.”