After what feels like a lifetime spent trying to make virtualization work for cybersecurity, it is with a heavy heart that I must pronounce virtualization-based cybersecurity dead. Virtualization-based solutions vary, but the issue I have is that they simply do not work at scale in a cost-effective way. That is a huge problem when you consider that we need to protect millions of internet users; we need to protect the many, not just the few.
Back in 2009 it seemed like a brilliant idea to leverage virtualization for cybersecurity. In those early days of remote browser isolation, we developed the Safeweb remote browsing model with Lawrence Livermore National Laboratory. We were learning how to leverage desktop virtualization technology to deliver remote browsers to 5,000 federal government users, a model that eventually became WEBGAP.
Isolating your users' browsing activity from your internal networks by putting a WEBGAP between your users and the internet is a fantastically good idea, primarily because the web browser is the primary attack vector for cyber attacks. If you provide your users with remote browsers, you isolate the associated risks and shut down the most common infiltration points on your networks.
Virtualization is an inefficient vehicle for handling the browser compute load at scale.
In early implementations of the WEBGAP model, we used desktop virtualization technology to deliver remote browsers: each of those 5,000 federal government users got a non-persistent virtual desktop upon which they were free to remotely browse the internet. Their local machines were totally locked down and disconnected from the outside internet. This model worked fantastically well, and it was particularly loved by the users.
We realized back in those early days that you simply cannot shut down the internet and investigate every time a breach is detected; your users need the internet, and they freak out when it's not there. So we gave each user a virtualized remote browser and let them use the internet to their hearts' content, away from the valuable IP on federal government networks.
When we built our first remote browsing platform for LLNL, it dawned on me during the deployment that the physical isolation of browsing activity (what we now call browser isolation) was a completely new model. The only other group I knew of using the same model at the time was Los Alamos National Laboratory, though they called it an 'internet glovebox' (remember, these people are nuclear scientists).
I remember looking around for competitors at that time, and after a year or two they appeared on my radar as different implementations of the same model. I saw a number of different approaches to isolating a user's browsing activity, and I found different flaws in each of them, which I will outline below.
I found them to be hugely inefficient, and this becomes obvious the second you start deploying them at vast scale. They leverage virtualization instead of containerization, and they leverage a centralized SAN-based architecture, neglecting the obvious cost efficiencies that distributed architectures can bring to browser compute isolation. I think virtualization-based isolation technologies are dead because they are unable to cost-effectively protect large numbers of users at once, failing the market test by default.
Cyber attacks are OUR problem. It's a problem that affects millions of normal internet users, and while browser isolation protects the privileged few right now, it is still too expensive to protect the many.
All those years ago at LLNL, Robin Goldstone, the 'mother of Safeweb', said something to me that looking back seems almost prophetic. She told me that unless we could get the price down to single-digit dollars per user per month, browser isolation would never be adopted on a mass scale by the mainstream.
She was right.
When we talk about isolating browsers, we are talking about millions of browsers, and virtualization-based solutions built around centralized architectures offer no cost-effective way of isolating that many browsers; they are simply too expensive. Once upon a time, leveraging virtualization to deliver remote browsers seemed like a really good idea, but that was before we understood the browser compute load.
Now we understand that if you really want to accommodate large numbers of users on your platform at once, if you want to isolate each individual's browser tabs into their own disposable containers, then virtualization is a really inefficient vehicle. I have been isolating browsing activity longer than most, I was present at the birth of the browser isolation cybersecurity space, and I hereby declare virtualization-based platforms to be legacy. May they rest in peace, for they have served us well.
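To make the economics concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (host cost, sessions per host) is a purely hypothetical assumption for illustration, not a measurement from any real deployment; the point is only that per-user cost falls in direct proportion to how many isolated browsing sessions one host can carry, which is where container density beats VM density.

```python
# Back-of-the-envelope cost model for remote browser isolation.
# All figures below are illustrative assumptions, not real benchmarks.

HOST_COST_PER_MONTH = 200.0  # assumed monthly cost of one server

def cost_per_user(sessions_per_host: int) -> float:
    """Monthly infrastructure cost per concurrent browsing session."""
    return HOST_COST_PER_MONTH / sessions_per_host

# Assumption: a full VM per user is heavyweight, so a host fits few
# sessions; a container per user shares the host kernel, so the same
# hardware fits many more.
vm_density = 10          # assumed VMs per host
container_density = 100  # assumed containers per host

print(f"VM per user:        ${cost_per_user(vm_density):.2f}/user/month")
print(f"Container per user: ${cost_per_user(container_density):.2f}/user/month")
```

Under these assumed numbers, only the container column lands in the single-digit-dollars range that the mainstream market demands.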
Like the things we write? Follow @WEBGAP on Twitter for more!