In our previous post about the relationship between complexity and security, we waxed nostalgic about personal computers and how even they can be tricky to set up and configure securely. In this post, I want to explore how complexity makes security difficult in consumer compute scenarios, and to show that in an enterprise environment, the complexity (and the security difficulties!) are orders of magnitude greater than in any consumer scenario. Why? One word. A word that makes every savvy enterprise technology practitioner shudder:

Legacy.

When I built my Windows 98 tower almost two decades ago, it was, just like the computers and laptops of today, an entirely standalone machine: all of its components functioned within a single self-contained system. The storage, memory, compute, input and output devices, and networking modules were all installed from a relatively consistent generation of technological development, and were therefore roughly well paired in overall power and performance. There was no ‘legacy’ hardware or software; it was all modern and installed new.


For example, in 1999 I was running Windows 98 as my operating system, which was only a year old at that point, and all of the different components, software and hardware alike, could not have been much more than a year removed from being cutting edge. This meant that nothing was problematically outdated relative to the rest of the system, and everything could, roughly speaking, work in harmony and be well paired together.


This has great implications for ease of use, since it means that the quantity of available permutations, or potential combinations of different types of technology, is large but not unmanageably so.
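
To make the scale concrete, here is a back-of-the-envelope sketch in Python; the component categories and the option counts per category are illustrative assumptions, not real market figures:

```python
from math import prod

# Illustrative component categories for a single consumer PC build,
# with an assumed number of mainstream options per category.
consumer_options = {
    "cpu": 10,
    "motherboard": 12,
    "memory": 8,
    "storage": 10,
    "gpu": 10,
    "os": 3,
}

# Total configurations is the product of the per-category option counts.
combinations = prod(consumer_options.values())
print(f"Consumer build combinations: {combinations:,}")  # 288,000
```

Even with these modest assumed numbers, a fresh consumer build has hundreds of thousands of possible configurations; yet because every part comes from the same technological generation, most combinations simply work, which keeps the problem manageable.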


Now – imagine a world where you bought a cutting-edge computer in 1980 and were unable to replace more than a small percentage of its components at any one time. Imagine that it had to operate continually, for decades, and that at every moment it was expected to perform to current expectations and standards. Imagine that the performance requirements on the system were highly dynamic: they grew and shrank in size and complexity, and they fundamentally shifted in geographic distribution and underlying form.


Because of this phenomenon, the accumulation of legacy software running on legacy infrastructure has snowballed to enormous proportions within virtually every IT department.


Imagine that the consequences of any system error, downtime, or security breach were immense and measurable in the billions of dollars. Oh, and also: imagine if this computer’s components were spread around in different locations, each with its own particular degree of nuance.


We touched on the permutations of potential component combinations when discussing self-contained consumer compute systems (PCs and laptops); now imagine the vast, unmappable, and incomprehensibly complex array of permutations that exists in this legacy-filled enterprise IT world.


This scenario is analogous to the hurdles faced by IT organizations worldwide. I am always shocked to hear how many mainframe systems more than a decade old are still in use within large companies’ IT environments, and how many layers of successive generations of systems have been built up around them that are also still in use.


These IT environments have been built up over decades, resulting in extremely complex clusters of different types of systems, of different vintages, built by different staff, often for different purposes and in geographically distributed locations. It is unsurprising to find entire software clusters that ‘do what they are supposed to be doing,’ but nobody actually knows how, and nobody dares change any configuration, because the employees who set them up and understood them have long since retired.


How can any company possibly think it is properly secured given this reality? Each and every IT component mentioned above could potentially be used as an entry point or relay point for a cyber-attack, and each and every component has its own eccentricities when it comes to securing it.


This is the true reason why security is so difficult in Enterprise IT: complexity, caused by the accumulation of legacy hardware and software, is the root cause. It is impossible to ignore that, despite the massive increase in attention and budget directed at Enterprise IT security, the quantity and severity of breaches have been on the rise.


With this massive complexity, how are IT departments supposed to keep their companies safe? Up until now, the answer has been to throw staff and money at the problem. More and more attention has been paid to the CISO role as well as the roles of the IT employees whose focus is IT security.


However, most of the engineer-hours spent on this issue have been focused on playing catch-up and trying to fight fires as they emerge. This implicitly acknowledges that the enterprise is not secure.

This is, essentially, a loser’s approach to security – it is akin to admitting defeat and being in permanent ‘damage control’ mode.

We realized early on that many of the integration and maintenance tasks that ate up so much of these employees’ time had become consistent enough, and the technologies to instantiate them powerful enough, that much of the work could be automated. The basic premise of our SkySecure platform is that the core functionality of all of the products necessary to keep an enterprise secure can be modularized, written in software, and baked into a compute platform. This platform allows software to be packaged and run in the same format that has become ubiquitous in today’s virtualized compute environments, and the very act of deploying a workload on the platform incorporates security and manageability functionality by default.
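
As a hedged illustration of what ‘security by default at deploy time’ could look like, here is a minimal Python sketch; the names `SecurityPolicy`, `Workload`, and `deploy` are hypothetical stand-ins for illustration, not SkySecure’s actual API:

```python
from dataclasses import dataclass, field

# Hypothetical default policy -- illustrative only, not a real product API.
@dataclass
class SecurityPolicy:
    encrypt_at_rest: bool = True
    firewall_default_deny: bool = True
    audit_logging: bool = True
    allowed_ports: list = field(default_factory=lambda: [443])

@dataclass
class Workload:
    name: str
    image: str  # packaged in the same format as a standard VM image
    policy: SecurityPolicy = field(default_factory=SecurityPolicy)

def deploy(workload: Workload) -> dict:
    """Deploying a workload applies the platform's security policy by
    default; the operator never opts in, and weakening the policy would
    require an explicit exception."""
    return {
        "name": workload.name,
        "image": workload.image,
        "encrypted": workload.policy.encrypt_at_rest,
        "open_ports": workload.policy.allowed_ports,
    }

record = deploy(Workload(name="payroll", image="payroll-v1.qcow2"))
print(record)
```

The design point the sketch captures is that the secure configuration is the default path, not an extra integration task layered on afterward.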
This evolution in tooling for Enterprise IT will be an interesting thing to witness as it expands into today’s companies. Massive cost savings can be reaped, and IT can finally use a no-compromise compute platform – one that is agile, secure, and cost-effective.