Working for virtualization-focused companies for over a decade, I’ve seen first-hand that adoption rates for VDI continue to rise. When I think about the drivers behind this, I can’t help but think it has A LOT to do with the technology’s ever-increasing ability to meet the performance needs of a wider range of workloads. A few years ago, VDI was considered well suited to only a few simple use cases, but today, numerous organizations use VDI to serve their entire workforce, from the CEO on down. A lot has contributed to the increase in VDI performance over the years; some of it has to do with software manufacturers improving protocols and platforms and adding features. But there is still a lot to be said for the way the underlying components are configured by a VDI service provider (or an IT department). Huge performance gains can be had simply by paying attention to some basic best practices, four of which I’ve detailed in this blog post. If you’re not doing these things, your VDI environments definitely aren’t performing as well as they could be.
Outdated Antivirus Strategies

I have seen countless organizations adopt virtualization technologies but, at the same time, fail to change the way they approach malware protection and antivirus. Physical servers and virtual servers require VERY different approaches to AV. In a physical environment, if you have, say, 30 physical Windows servers, you’d install a unique instance of AV and antimalware software on each server (from here forward, I’ll use AV as a term to encompass both antivirus and antimalware software). In this scenario, each instance of our AV software has access to the full resources of the physical machine on which it’s installed. Sure, AV scans cause some performance degradation, but it’s something we’ve gotten used to as a bit of a necessary evil.
In a virtual environment, however, we’ve seen big performance issues when servers are virtualized but still run AV scans the same way they did when they were physical. When we virtualize servers, they share the resources of a single physical host, and those AV scans become BIG resource burdens, especially when multiple scans happen at the same time on the same host. Performance suffers terribly unless you change the approach to AV.
Virtualization-aware AV software addresses this issue. The AV solutions offered for virtualized environments are mature and widely deployed within IT organizations. Each manufacturer approaches the problem a little differently, from very basic solutions (building intelligence into the AV software so that no two VMs on the same host perform AV scans at the same time) to running AV at the hypervisor level, eliminating the need for AV on the VM itself (or at the very least taking most of the scan’s burden away from the guest OS). Dizzion leverages virtualization-aware AV and malware protection software that runs at the hypervisor level, which means we’re never over-taxing hosts, yet we’re still able to provide VM protection that meets (or exceeds) PCI and HIPAA compliance standards and delivers the protection our customers expect. This strategy alone can significantly improve the performance of any virtualized environment, whether you’re hosting VDI or standard Windows servers and services.
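To make the simplest of those approaches concrete, here’s a minimal sketch of scan staggering: assigning each VM a scan window so that no two VMs on the same physical host scan concurrently. All names and the 30-minute scan window are illustrative, not any vendor’s actual mechanism.

```python
from collections import defaultdict

def schedule_scans(vms, scan_minutes=30):
    """Assign each VM a scan start offset (in minutes) so that no two
    VMs on the same physical host scan at the same time.

    vms: list of (vm_name, host_name) tuples.
    Returns {vm_name: start_offset_minutes}.
    """
    next_slot = defaultdict(int)  # per-host offset of the next free slot
    schedule = {}
    for vm, host in vms:
        schedule[vm] = next_slot[host]
        next_slot[host] += scan_minutes  # serialize scans on each host
    return schedule

vms = [("desk-01", "host-a"), ("desk-02", "host-a"),
       ("desk-03", "host-b"), ("desk-04", "host-a")]
print(schedule_scans(vms))
# desk-01 and desk-03 start immediately (different hosts); desk-02 and
# desk-04 are pushed back so host-a only ever runs one scan at a time.
```

Note that staggering only spreads the load out in time; hypervisor-level AV goes further by removing most of the scan work from the guests entirely.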
Insufficient Storage IOPS

The impact that the underlying storage system has on VDI performance is HUGE. Yeah, I know I said AV was huge, and it is… but this one is huge too. We use the acronym IOPS (Input/Output Operations Per Second) to quantify the performance of a storage system or, specifically, a disk. It’s a measure of how fast reads and writes can be made to that disk. As a general rule, you can expect a hard drive in a PC to be capable of about 75-100 IOPS (for a typical 7200 RPM SATA HDD). Contrast that with the IOPS capability of an SSD, which is measured in the tens of thousands. The time it takes Windows to boot, an application to start, a new program to install, a patch to apply, or an AV scan to complete are all affected by IOPS, and the more we give our desktops, the better they perform. Yes, I know that desktop performance won’t scale linearly with an increase in IOPS; while SSDs can be a thousand times faster than spinning disk, we’re not necessarily going to see our PC run a thousand times faster. What we will see for certain is a significant performance increase, as anyone who has swapped a spinning disk for an SSD will attest.
In a VDI environment, where we’re leveraging shared storage across multiple virtual desktops as well as the supporting VDI component infrastructure, our total available IOPS are split across all the VMs. There are a few things we can do to ensure great performance, the first of which is to create a dedicated set of disks for the virtual desktops. In other words, carve out some number of physical disks in your disk array to dedicate to your virtual desktop VMs, and dedicate other physical disks to the supporting component infrastructure and all other workloads. This ensures that your virtual desktops never have to contend with a connection broker, a security server, or any other component for their share of IOPS. The second thing we can do is obvious: always use SSDs! Guidance from VMware for a usable virtual desktop calls for providing 26+ IOPS per user for users they categorize as “Power Users Plus,” those using five or more compute-intensive applications at a time. Can you imagine using a desktop with only 26 IOPS? That’s about a third the speed of a dedicated SATA drive! Can you imagine your computer running 66% slower? That’s just not a good experience.
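The sizing arithmetic here is simple enough to sketch. The array rating, desktop count, and infrastructure share below are made-up illustrative numbers, not a real configuration:

```python
def per_user_iops(array_iops, num_desktops, infra_share=0.0):
    """Rough per-desktop IOPS once an array's throughput is split
    across desktops. infra_share is the fraction consumed by other
    workloads when the disks are NOT segmented (illustrative model)."""
    usable = array_iops * (1 - infra_share)
    return usable / num_desktops

# An SSD tier rated at ~50,000 IOPS (hypothetical) dedicated to 200 desktops:
print(per_user_iops(50_000, 200))         # 250.0 IOPS per user
# The same tier, unsegmented, with other workloads eating a quarter of it:
print(per_user_iops(50_000, 200, 0.25))   # 187.5 IOPS per user
```

Either figure is comfortably above VMware’s 26 IOPS baseline, which is the point: SSDs plus disk segmentation leave per-user headroom instead of a floor.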
While this may provide adequate performance, Dizzion has found that we can further improve the performance of our virtual desktops by providing hundreds of IOPS per user, even for task workers who don’t engage in intense computing tasks. Just as moving from spinning disk to SSD in your laptop will make you a happier user, moving from 26 IOPS to hundreds will make VDI users ecstatic. We know this because Dizzion regularly provides our customers with desktops that exceed VMware’s baseline recommendations, and the performance is unlike anything most of our customers have experienced with VDI in the past. We’re able to achieve this through our decision to use SSDs exclusively and our use of storage segmentation to ensure that desktop workloads have dedicated storage resources.
Lack of Endpoint Monitoring Tools
I have heard some people refer to VDI as a “black box” when it comes to troubleshooting. The analogy paints VDI as a mystery object into which we have no visibility. If we can’t see inside it, we can’t tell how it works; if it breaks, we don’t know why. For years, that has been a pretty accurate way to describe it. It can be a complicated technology, and some people have simply come to expect a poor, frustrating user experience. Without the proper tools to monitor performance and guide us toward resolution when issues arise, all we can do is shrug our shoulders and say, “Let’s just reboot it and hope that fixes it.”
VDI no longer has to be a black box. Several software manufacturers in the VDI space have come to market in the last few years with very robust monitoring tools for VDI environments. The most significant attribute of many of these tools is the focus on end user experience. Sure, it’s nice to know when a connection broker is having a disk issue, or when a service has failed (hopefully you’ve built redundancy into your VDI solution so this doesn’t cause an outage), but what largely drives the success of a VDI environment is the end user experience. If our users’ virtual desktops don’t perform at the same level as, or better than, a physical PC, we risk poor adoption, frustrated users, an increase in help desk tickets and, perhaps even worse, a compromised reputation.
The Dizzion Control Center is included with all our desktop deployments, eliminating the extra cost and effort traditionally involved in accessing this vital monitoring data. The Control Center constantly evaluates thousands of different metrics, letting us understand the end user’s desktop experience. Our tools provide real-time as well as historic insight into CPU load, memory usage, active applications, network latency and more. I’m not going to pretend that VDI environments never suffer from performance issues; of course they can, just like physical desktops do. But in a virtual environment, Dizzion provides our customers with actionable insights into what is causing issues, which can guide you to a resolution! You can see which apps on the endpoint are causing CPU spikes or which processes are memory hungry, and guide the user on how to improve performance. With the insights provided by the Control Center, you can identify network latency issues, bandwidth constraints, and many other things that could negatively affect performance. And because it’s monitoring the user experience in real time, performance issues are often identified proactively, before an end user ever calls the help desk.
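The core of that proactive pattern is threshold evaluation over endpoint metrics. Here’s a toy sketch of the idea; the metric names, thresholds, and sample values are all hypothetical stand-ins for what a real tool like the Control Center tracks across thousands of metrics:

```python
# Hypothetical alerting thresholds (illustrative values only).
THRESHOLDS = {
    "cpu_percent": 90.0,     # sustained CPU load on the endpoint
    "memory_percent": 85.0,  # memory pressure
    "latency_ms": 150.0,     # network round-trip to the endpoint
}

def evaluate_endpoint(sample):
    """Return the metrics that exceed their thresholds, so an operator
    can act before the user ever opens a help desk ticket."""
    return [metric for metric, limit in THRESHOLDS.items()
            if sample.get(metric, 0.0) > limit]

sample = {"cpu_percent": 97.0, "memory_percent": 62.0, "latency_ms": 210.0}
print(evaluate_endpoint(sample))   # ['cpu_percent', 'latency_ms']
```

A real implementation would of course evaluate streams of samples over time rather than single snapshots, but the proactive-alerting logic reduces to this comparison.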
Ensuring optimal experience for your end users is key to a successful VDI deployment. Take advantage of the advances in VDI monitoring software and ensure that you have insight into endpoint performance.
Lack of Network Segmentation

In much the same way that disk segmentation (dedicating a group of physical disks to virtual desktop workloads) provides performance benefits, dedicating network segments to VDI workloads does too. This is pretty basic stuff if you’ve been in the virtualization space for a while, but it’s worth mentioning again because VDI performance can really suffer from a lack of proper network segmentation.
The network is constantly active in any virtualized environment. It’s the backbone on which VMs are migrated between hosts; it provides connections to management, monitoring and administrative tools; it carries communication to network-attached storage; it secures the connection to the user’s endpoint; and it ensures a fast, reliable connection to the internet. These activities all take up “space” on the network, and we need to ensure that a management task, like a VM migration or log shipping, doesn’t negatively affect user experience. We can do this by ensuring that, at the very least, our management traffic, our storage traffic, and our VM traffic (traffic from the virtual desktop to the internet) are all on separate network segments.
Could you do this with VLANs? Yes, if your switches/routers support rate limiting and you configure it so that one VLAN can’t take another VLAN’s bandwidth. But really, I’m talking about dedicated physical networks. VLANs by themselves (without rate limiting) won’t stop a network-heavy management task from taking bandwidth away from our virtual desktops. But placing our desktops on a different physical network than the one on which our management tasks are happening certainly will.
It’s also a good idea to understand the characteristics of your network prior to implementing segmentation (i.e., how much total bandwidth is available, how much is being used, whether there are spikes due to scheduled tasks like backups, etc.). Knowing these things helps you understand what impact VDI will have on your network and what limits and thresholds to put in place when segmenting networks.
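That characterization step can be as simple as summarizing utilization samples from your switch counters. A minimal sketch, assuming hourly samples in Mbps on a 1 Gbps link (the sample data and the nightly-backup spike are invented for illustration):

```python
def characterize_link(samples_mbps, link_capacity_mbps):
    """Summarize utilization samples (e.g. from SNMP polling) to gauge
    how much headroom remains before adding VDI traffic."""
    ordered = sorted(samples_mbps)
    peak = ordered[-1]
    # 95th percentile: a common capacity-planning metric that ignores
    # brief spikes like the backup window below.
    p95 = ordered[int(0.95 * len(ordered)) - 1]
    return {"peak": peak, "p95": p95,
            "headroom": link_capacity_mbps - peak}

# 24 hourly samples on a 1 Gbps link; the 640/630 spike is a nightly backup.
samples = [120, 110, 105, 100, 95, 130, 180, 240, 260, 250, 255, 245,
           250, 260, 255, 240, 230, 210, 190, 170, 640, 630, 150, 130]
print(characterize_link(samples, 1000))
```

The gap between the 95th percentile and the peak is exactly the kind of scheduled-task spike that segmentation (or rate limiting) should keep away from desktop traffic.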