Do you look back fondly on those halcyon days when IT infrastructures often consisted of one-application, one-server, direct-attached-storage configurations? Ah, those were the days, weren’t they? Life was simple and easy. Put an application on a server, provision it and everything ran smoothly ever after.
Oh wait, that is not how it was. In those days, you needed to over-provision every application and utilization rates for IT resources were abysmal even as infrastructures, like Topsy in Uncle Tom’s Cabin, “just grew.” The whole setup was inflexible and lacked agility. And there were all sorts of hiccups and glitches that kept us all very busy. Oh yes, now you remember.
And then along came virtualization, which was supposed to fix all that. You know the drill, if only because I have been telling you about it for months (see the Virtues of Virtualization for Business Continuity). But as somebody should have said, because it would be a great quote: in IT, even the best solutions create new problems. Identifying the root cause of problems is one of the most vexing issues confronting people who manage virtualized environments.
In the fourth annual State of the Network Global Study conducted by Network Instruments, 35 percent of the 265 network professionals surveyed worldwide indicated that troubleshooting problems increased with virtualization, and 85 percent said that identifying the source of a problem in virtualized environments was the most challenging step. At times it can be like finding a needle in a haystack.
The problem only promises to get worse. While 80 percent of the respondents have virtualized servers, the process of virtualizing desktops (see Is VDI Cheaper or Not?) and storage is still in a much earlier stage. But not to worry—no doubt 10 years from now, these will be seen as the halcyon days when things were easy.