Many people joke about the fact that it has been "The Year of VDI" for many years now. But is it finally here? Will 2019 finally be The Year of VDI? And what would that mean? Could VDI become a mainstream technology? Have hyper-converged stacks and automated deployments turned the technology into a commodity, and might that be the reason why 2019 finally is The Year of VDI?

Let me first start with a little disclaimer. This blog post contains my opinion and my opinion only. No rights can be derived from this information.

What is VDI?

VDI, or Virtual Desktop Infrastructure, is a technology in which a single desktop, running in a single virtual machine, is provisioned to an end user on a one-to-one basis. This is fundamentally different from Remote Desktop Services (RDS), which provisions multiple sessions from a single (virtual) machine, on a one-to-many basis.

Technologies like VDI and RDS are used for application and desktop remoting; they enable users to run applications that are sensitive to latency and bandwidth from a remote location. The applications or desktops run inside a datacenter, right next to the backend of those apps or desktops. A user sets up a remote connection through a remoting client and is able to work. The only things that are transferred are the rendered video output and mouse/keyboard data.

A solution called Citrix WinFrame was one of the first to enable users to run remote applications and desktops over low-bandwidth, potentially high-latency connections, almost 25 years ago. You’d think a lot has happened since 1995, but I’m sorry to knock you down from the cloud you were floating on. The main reason why VDI is still the way to go in many situations hasn’t changed: traditional applications.

Why do we still need a VDI?

I have been working in the EUC space since 1999. One of my first projects was a mass migration of customers from CTOS mainframes to a Windows NT-based infrastructure containing “modern” applications. I was responsible for installing the new hardware and software, while other team members focused on installing applications and migrating the data from the old system to the new one. Back then, that application was one of the most modern client/server apps around: a 4GL-based app running on a Progress DB with a bit of middleware and a frontend.

Let’s take a step forward to 2018. Imagine this 4GL app still being used in the travel industry, as the core app of a big travel agency with over 2,000 employees. 99% of the users depend heavily on this app, and a 4-hour outage could have a huge negative impact on the business, both financially and in terms of reputation. The business has some other apps as well, which are partly SaaS-based, and some could be replaced by modern apps (such as the Office suite or file-sharing solutions).

If the company is going to invest in a VDI, it’s fairly easy to create a high-level overview of costs:

  • The company would probably require a multi-site VDI because of the risk of an outage
  • 2000 users should be able to work concurrently
  • I’m assuming a maximum density of 80 users per VDI host
  • I’m keeping GPUs out of scope (as it might complicate things)
  • We’ll use HCI in the calculation

2000 / 80 = 25 hosts for a single site

2 sites * 25 hosts = 50 hosts

50 hosts * 20K (price per host) = $1,000,000 just for host hardware.

Take VDI licenses including HCI software, layering technologies, and a UEM into account and add another $1,500,000 to the calculation.

Without any consultancy (it’s not going to design and deploy itself), network hardware (such as load balancers), network infrastructure, Microsoft licenses, and migration costs, you will already have to invest $2,500,000 in a platform that will last about 4 years and will then need to be replaced, even if you haven’t changed a thing in your application landscape. Unfortunately, we have seen what Microsoft’s Windows 10 updates and newer Office releases have done to resource usage on VDI platforms.
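For those who want to play with these numbers, here is a minimal sketch of the same back-of-the-envelope calculation in Python. All inputs (the density, host price, and license estimate) are the assumptions from the list above, not vendor quotes:

```python
# Back-of-the-envelope VDI sizing, using the assumptions from the list above.
# All numbers (density, host price, license estimate) are illustrative.

concurrent_users = 2000
users_per_host = 80            # assumed maximum density, GPUs out of scope
sites = 2                      # multi-site because of the outage risk
host_price = 20_000            # assumed price per HCI host, in dollars
license_estimate = 1_500_000   # VDI + HCI software, layering, UEM (rough)

hosts_per_site = -(-concurrent_users // users_per_host)   # ceiling division
total_hosts = sites * hosts_per_site
hardware_cost = total_hosts * host_price
total_investment = hardware_cost + license_estimate

print(f"Hosts per site:   {hosts_per_site}")          # 25
print(f"Total hosts:      {total_hosts}")              # 50
print(f"Hardware cost:    ${hardware_cost:,}")         # $1,000,000
print(f"Total investment: ${total_investment:,}")      # $2,500,000
```

Change the density or the host price and the total moves quickly, which is exactly why a proper assessment matters before committing to a number.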

What should a CIO do in this case? As a VDI Ambassador, I should always advise the company to design and deploy the VDI. But let’s be honest here: would I invest $2.5 million in a platform that is solely there to offer me remote access to a poorly built app? What is the alternative?

The most common alternative is to keep running as-is (physical desktops). That won’t solve my availability issue, though. A better alternative would be to re-platform the application to a cloud-native one: use a platform like Amazon Web Services or Microsoft’s Azure and rebuild the application from scratch. $2.5 million will get you a long way…

Is this all based on common scenarios? To be honest, not really. Most projects I am working on have a great number of traditional applications, including a lot of shadow IT; 2,500 apps for 3,000 users is quite common. And that’s where the answer to the question “Why do we still need a VDI?” lies. If a Windows OS was required to run the majority of those apps for the past 25 years, what are the chances this will change in the next 10 years? And since the business is demanding agility, performance, availability, mobility and, maybe most important, security, it is a challenge to take the business to a higher level without VDI.

But is VDI that bad?

Six years ago, I would probably have answered “Yes”. Three years ago, I would probably have answered “Maybe”, and today I will answer either “No” or “It depends”. Whether VDI is suitable for you will always depend on your specific situation. The funny thing, though, is that projects that would probably have failed six years ago because of complexity or a bad user experience would be highly suitable to run on the matured VDI technology we have today.

If you look at VMware’s Horizon Suite as an example, we had tons of challenges years ago (which other vendors, like Citrix and Microsoft, also had to deal with).

First, look at the way we used to build virtual desktops: build a base image, take a clone, provision a pool of 2,000 desktops in 12 hours, and repeat this process every time you had an update. During the update, you had to plan downtime, and the pool was only partly available, with limited resources. Nowadays, a desktop is provisioned as soon as a user requests it. Within a couple of seconds, a fresh new desktop is ready for the user to work on. Does a new update need to be rolled out? No worries, this can be done during business hours with the user hardly noticing it. It’s just a matter of signing out and signing in, et voilà!

When looking at user experience, especially with difficult use cases like developers, designers, and other power users, we have seen some major improvements as well. Assigning a user administrative permissions on the desktop may have been the way to go in 2012; nowadays, an App Volumes Writable Volume can offer the same user experience as admin permissions on a full clone, but on non-persistent Instant Clones instead. The same goes for remoting protocols. Modern remoting protocols can deliver a smooth user experience at lower bandwidths and at latencies of up to 300 ms. Even if you work over a mobile connection, you are now still able to work pretty smoothly.

The last improvement I would like to highlight is the way we deliver traditional applications and base images. Years back, we just delivered a ton of apps in the base image and hoped that no conflicts with either the apps or the operating system were created during updates or when new apps were added. The virtual desktop was delivered as a monolithic object, and if one of the components inside that object caused an issue, there was a realistic chance that your user would not be able to work.

Moving away from a monolithic object to a layered architecture (with technologies like App Volumes, FSLogix or Liquidware FlexApp) introduces a lot of agility in the way you manage your base image and distribute apps to end users. The base image should only contain the OS and maybe a couple of apps that aren’t capable of running in a layer or bubble. By taking out application dependencies, you reduce the number of base images you have to maintain, which lowers your OpEx. By putting the apps in one layer and the user profiles in another, you basically create interchangeable objects that are managed and updated individually. If you need to update the base image, just do so; the apps (in general) won’t depend on it. I have seen customers migrate from Windows 10 1703 to 1709 without any downtime, complaining users or failing apps.

Using a User Environment Management solution (like VMware UEM, Liquidware ProfileUnity or Ivanti AppSense), you can create a Zero Profile strategy that even lets you move from a monolithic profile to a modular one. If a specific application breaks the profile, only that part of the profile fails, instead of the user being unable to work because of a corrupt profile.
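To make the layering idea concrete, here is a minimal, vendor-neutral sketch in Python. The `Layer` and `compose_desktop` names are hypothetical illustrations of the concept, not how App Volumes or FlexApp actually work internally:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    name: str
    version: str

def compose_desktop(base: Layer, apps: list[Layer], profile: Layer) -> list[Layer]:
    # The desktop a user gets is just a stack of independently managed
    # objects; updating one layer never touches the others.
    return [base, *apps, profile]

# Swapping the base image replaces exactly one object in the stack;
# the app layers and the (modular) profile are reused as-is.
apps = [Layer("4GL travel app", "12.3"), Layer("Office", "2016")]
profile = Layer("profile/jdoe", "modular")

before = compose_desktop(Layer("Windows 10", "1703"), apps, profile)
after = compose_desktop(Layer("Windows 10", "1709"), apps, profile)

print([f"{l.name} {l.version}" for l in after])
```

The point of the sketch: swapping the base layer leaves the app and profile objects untouched, which is exactly why a 1703-to-1709 migration can happen without breaking apps or profiles.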

Does this make it the Year of VDI?

How would you determine this? And what would it mean? Let’s start by answering some questions:

Is the technology mature enough to be implemented in large enterprises?
I think it is. Deploying virtual desktops to thousands of end users can be done with ease and without an enormous impact on your admin team.

Does the technology offer a feature set that lets you deploy it for most use cases?
Again, I think it does. Together with a couple of guys from the ITQ team and some help from the NVIDIA team, we managed to deploy an application to a virtual desktop that is extremely intolerant of latency and delivers a negative user experience if frames drop. It uses weird USB hardware that isn’t really built for redirection. The application: a Formula 1 simulator 🙂

We managed to get this running pretty smoothly, just to show that even complex use cases can run on a virtual desktop.

Is the financial impact still an issue?
The answer here is: it depends. How you manage your current workplace and what the driver of the VDI project is will determine the impact. If your company loses millions of dollars for every hour your applications are unavailable, the business case for a VDI can be easy. Managing a VDI isn’t as heavy on OpEx as it was 5 years ago. That doesn’t mean it is a cheap solution, but it does make a financially driven business case easier to make solid. The hardware costs totally depend on your apps, users, and expectations; not having to use GPUs and expensive multi-core CPUs with high clock speeds will save you a lot of money. A proper assessment will help you gather these numbers.

What’s the probability that my business will invest in a VDI this year and completely go for a modern application and digital workspace approach at the next lifecycle replacement?
Of course, the answer is: it depends. You should look at your application landscape. If you have 500 traditional apps that need to be migrated to modern apps, how long would that take you? If you could migrate one app a week, it would take you almost 10 years. If you could do one app a month, it would take you over 41 years. In the meantime, there aren’t many alternatives you could use. As long as traditional apps require a Windows OS to run and aren’t capable of working with high latencies and low bandwidth, you need some kind of remoting solution.
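As a quick sanity check on those timelines (a minimal sketch; the 500-app count and the migration rates are the assumptions from the paragraph above):

```python
# Rough migration timelines for a landscape of 500 traditional apps.
apps = 500

years_one_per_week = apps / 52   # ~9.6 years: the "almost 10 years" above
years_one_per_month = apps / 12  # ~41.7 years: the "over 41 years" above

print(f"One app per week:  {years_one_per_week:.1f} years")
print(f"One app per month: {years_one_per_month:.1f} years")
```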

Based on these answers, I would say that VDI is here to stay. It may even be that VDI takes off as the preferred solution for certain use cases. If you look at a blog post that Brian Madden wrote 9 years ago about VDI and Gartner’s Hype Cycle, the technology has surely moved to Phase 5, the Plateau of Productivity: mainstream adoption starts to take off, criteria for assessing provider viability are more clearly defined, and the technology’s broad market applicability and relevance are clearly paying off.

I’m seeing increased adoption in specific industries like healthcare and finance because of the benefits of the technology. On top of that, most of the old limitations have gone away.

Conclusion

Is 2019 finally the year of VDI? YES! I think it is. The technology is over 10 years old and has matured to a level where I think it might become a mainstream technology. Sure, this is if your project involves traditional apps and a desire to take the organization to the next level in terms of security, agility, mobility, etc. Of course, there are always caveats, but they should be taken into account when designing a VDI. The important thing is that you don’t start your project with VDI as the goal; the goal should be to create mobility, agility, etc. for your organization. If you can do so without VDI and invest in re-platforming your traditional apps instead, that might be a better idea.

I would like to take it a step further and declare 2019 to be “The Year of VDI”. In order to do that, I need your help. Please join me in filling in a short (5-minute) survey about your experience as a VDI enthusiast. It will help me gather as much information as possible, which I will use to launch a global initiative to declare 2019 the true “Year of VDI”!

You can find the survey here. Thank you!

 
