2010 was the year “cloud computing” got shortened to just “cloud,” and everyone realized “cloud,” “SaaS” and all the other as-a-service offerings (PaaS, IaaS, DaaS) were different implementations of the same idea: a set of computing services, available online, that can expand or contract according to need.
Not all the confusion has been cleared up, of course. But seeing specific services offered by Amazon, Microsoft, Oracle, Citrix, VMware and a host of other companies gave many people in IT a more concrete idea of what “the cloud” actually is.
What were the five things even experienced IT managers learned about cloud computing during 2010 that weren’t completely clear before? Here’s my list.
1. “External” and “Internal” clouds aren’t all that different
At the beginning of 2010, the most common cloud question was whether clouds should be built inside the firewall or rented from an outside provider.
Since the same corporate data and applications are involved — whether they live on servers inside the firewall, live in the cloud or burst out of the firewall into the cloud during periods of peak demand — the company owning the data faces the same risk.
So many more companies are building “hybrid” clouds than purely internal or external ones, according to Gartner virtualization guru Chris Wolf, that hybrid is becoming the norm rather than either of the other two.
“With internal clouds you get a certain amount of benefit from resource sharing and efficiency, but you don’t get the elasticity that’s the real selling point for cloud,” Wolf told CIO.com earlier this year.
2. What are clouds made of? Other clouds.
During 2010, many cloud computing companies downplayed the role of virtualization in cloud computing as a way of blunting VMware’s pitch for an end-to-end cloud-computing vision, in which enterprises build virtual-server infrastructures to support cloud-based resource sharing and management inside the firewall, then expand outside it.
Pure-play cloud providers, by contrast, offer applications, storage, compute power or other capacity on demand over an Internet connection, without requiring a virtual-server infrastructure inside the enterprise.
Both, analysts agree, are virtualized by definition, not only because they satisfy the computer-science definition of virtualization, but because they are almost always built on data centres, hosted infrastructures, virtual server farms or even complete cloud services provided by other companies.
3. “Clouds” don’t free IT from nuts and bolts
Cloud computing is supposed to abstract sophisticated IT services so far from the hardware and software running them that end users may not know who owns or maintains the servers on which their applications run.
That doesn’t mean the people running the servers don’t have to know their business, according to Bob Laliberte, analyst at the Enterprise Strategy Group. If anything, supporting clouds means making the servers, storage, networks and applications faster and more stable, with less jitter and lag than ever before, according to Vince DiMemmo, general manager of cloud and IT services at infrastructure and data-centre services provider Equinix.
Without bulletproof infrastructure, cloud computing is slow, he says, and end users won’t accept slow.
4. Tiny things make big differences
Virtualization lets many applications and operating systems run on the same piece of hardware, each believing it has the server to itself. The problem with that, according to IDC analyst Gary Chen, is that each also thinks it has the network interface and the input/output bus to the processor to itself.
On a server with a lot of guest OSes, the bottleneck to performance is no longer the speed with which data can move back and forth between the server and external storage; it’s the number of bits that can go through the data bus at one time, he says.
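That bottleneck is easy to get a feel for even without a hypervisor. Below is a rough, illustrative sketch in Python (the chunk size, writer counts and temporary files are assumptions chosen purely for demonstration, not a hypervisor benchmark): several concurrent “guests” write through the same disk path, and per-writer throughput typically falls as the writer count grows, because they share one I/O channel the way co-resident guest OSes share a data bus.

```python
# Illustrative only: simulate several "guests" contending for one shared I/O path.
# Chunk size, write counts and thread counts are arbitrary demo values.
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

CHUNK = b"x" * (1 << 20)      # 1 MiB per write
WRITES_PER_GUEST = 64         # ~64 MiB written by each simulated guest

def guest_workload(path: str) -> float:
    """Write a fixed amount of data to its own file and return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(WRITES_PER_GUEST):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())  # push the data through the shared disk path
    return time.perf_counter() - start

def run(n_guests: int) -> None:
    """Run n_guests writers at once and report average per-writer throughput."""
    with tempfile.TemporaryDirectory() as d:
        paths = [os.path.join(d, f"guest{i}.bin") for i in range(n_guests)]
        with ThreadPoolExecutor(max_workers=n_guests) as pool:
            times = list(pool.map(guest_workload, paths))
        avg_seconds = sum(times) / len(times)
        print(f"{n_guests} concurrent writers: "
              f"about {WRITES_PER_GUEST / avg_seconds:.1f} MiB/s each")

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        run(n)
```

On most machines the per-writer number shrinks as the writer count grows, which is the same squeeze Chen describes when many guest OSes funnel traffic through one data bus.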
That’s one reason virtual I/O is becoming a hotter topic, leading to what Forrester analyst John Rymer calls “distributed virtualization,” in which I/O, memory and other components are abstracted from one another as well as from the guest OSes, and the definition of “server” changes to mean whatever resources an application needs right now.
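To make “whatever resources an application needs right now” concrete, here is a small hypothetical sketch (the class and field names are invented for illustration and don’t correspond to any vendor’s API): compute, memory and I/O sit in shared pools, and a “server” is simply the slice of each pool carved out for one application at that moment.

```python
# Hypothetical sketch of the "distributed virtualization" idea described above:
# resources live in shared pools, and a "server" is composed from them on demand.
from dataclasses import dataclass

@dataclass
class ResourcePools:
    cpu_cores: int
    memory_gb: int
    io_gbps: float

@dataclass
class Request:
    app: str
    cpu_cores: int
    memory_gb: int
    io_gbps: float

def compose_server(pools: ResourcePools, req: Request) -> dict:
    """Carve a 'server' out of the pools if enough of each resource is free."""
    if (req.cpu_cores > pools.cpu_cores
            or req.memory_gb > pools.memory_gb
            or req.io_gbps > pools.io_gbps):
        raise RuntimeError(f"not enough pooled capacity for {req.app}")
    pools.cpu_cores -= req.cpu_cores
    pools.memory_gb -= req.memory_gb
    pools.io_gbps -= req.io_gbps
    return {"app": req.app, "cpu_cores": req.cpu_cores,
            "memory_gb": req.memory_gb, "io_gbps": req.io_gbps}

if __name__ == "__main__":
    pools = ResourcePools(cpu_cores=128, memory_gb=1024, io_gbps=40.0)
    print(compose_server(pools, Request("web-frontend", 8, 32, 2.0)))
    print(compose_server(pools, Request("analytics-db", 32, 256, 10.0)))
```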
5. The “Year of the Virtual Desktop” it wasn’t
2010 was supposed to be the Year of the Virtual Desktop, as Microsoft, Citrix and VMware all competed to capture what analysts expected to be a wave of adoption from end-user companies.
Virtual desktops were a hot topic in 2010, but growth wasn’t nearly as big as analysts or vendors expected.
Instead of standardizing on virtual desktops and moving all their users at once to ease migration to Windows 7, most companies adopted one of a growing number of flavours of the technology, but only where it made the most sense.
“We’re seeing a lot of tactical projects, but not a lot of strategic ones,” according to IDC analyst Ian Song.
That’s not to say there wasn’t a lot of growth and adoption, even of DaaS (desktop-as-a-service) versions. But 2010 was no tidal wave, Song says.
The two biggest reasons, he says, were the complexity of desktop virtualization and its relatively low ROI compared with server virtualization.
Another was the increasing focus, even inside the enterprise, on tablets, smartphones and other non-PC devices that have to be virtualized to become secure, reliable clients for enterprise applications.
“We’re expecting to hear a lot about that from Citrix and VMware and a lot of the phone companies after the first of the year,” Song says. “It’s going to be big.”