With all of the talk these days about how much security has changed, you’d think something extraordinary had happened.
I’ve worked in the security industry for 16 years. Way back in 2000, we were already past the initial viruses that created the industry for companies like Symantec and McAfee, though plenty of virus writers were still chasing underground fame. That changed as hackers increasingly turned to stealing for profit.
While the bad guys’ intent evolved, what they attacked and how they attacked stayed relatively similar. In fact, recent reports by Verizon, HP and others about exploited vulnerabilities highlight that the bad guys’ approaches don’t need to change because the environments they attack don’t change. There is plenty of innovation going on, but the vast majority of attacks still use the same old stuff.
What has radically changed is who cares about security. When I started, it was the proverbial “guy in the basement,” typically a network engineer or ex-auditor charged with security. Now, almost every organization has a chief information security officer (CISO), and CIOs, CEOs and boards of directors care and ask about security, too.
Cybersecurity Ventures predicts the world will spend $1 trillion cumulatively on cybersecurity products and services over the five years from 2017 to 2021. One recurring question is: “Why am I spending so much more money on security, but the results don’t get any better?”
This question is strikingly simple, but fundamental to how enterprises invest and manage risk. It boils down to two underlying issues. First, the bad guys have an important advantage that we cannot easily overcome, if at all. Second, the technical approach and architecture we use to protect enterprise environments is fundamentally flawed: it is error prone, wastes scarce resources and will not work in our new IT reality.
The Bad Guy Advantage
So what is the bad guy advantage? It’s simple. They are running amid a criminal economy where participants buy and sell services. They are highly motivated to make money. They operate as profit centers. All investments made increase their returns. They invest heavily in tools, automation and skills. As profits pour in, innovation and investment only increases.
Also, they can fail and fail and fail, but if they succeed just once, they win. And failure carries little risk: they work from homes and offices far from danger, often in countries where it’s difficult, if not impossible, to initiate legal proceedings.
In contrast, the teams that defend the organization are cost centers. They face tight budgets, an ever-expanding number of projects and new work areas and must make tough choices: continue to fund existing programs or invest in new technologies. Many projects that need to get done never get done, whether it be for lack of money or lack of talent.
We also make it hard on ourselves. We document the frameworks we follow to protect ourselves, publish them as standards, and then follow and audit ourselves against them. These standards, like ISO 27001 or the NIST Cybersecurity Framework, give the adversary a clear guide to the approach we will take.
We also list our priorities annually through surveys. So, we not only give adversaries the map to what we think we need to do, we also tell them how far we’ve gone and what we still need to accomplish.
Even though the economics favor the adversary, we still have the advantage of knowing our environment better. This should give us the ability to build great defenses. Sadly, we typically do not leverage this advantage.
That is because our security architecture is flawed. Over the last decade, the way in which organizations consume and deliver services has massively changed. Cloud, mobile and social radically changed how our infrastructure operates. Still, our security approach is essentially the same as it was 25 years ago.
Architecturally, we build walls at the perimeter and post guards (agents) inside the applications. The challenge is that many of our systems don’t live within our walls anymore and agents are as vulnerable as the applications; they also consume the same resources that applications need to perform. As a result, security suffers.
The Failing Perimeter
In the past decade, there have been many statements that the perimeter will disappear. Yet firewalls and firewall companies continue to grow. Why? Perimeter-based controls are powerful. They implement the right kind of security that is enforced outside and away from the application that you are trying to protect.
This means that enforcement doesn’t consume the same resources that power the workload. Also, any vulnerability in the operating system or underlying technology hosting the workload doesn’t impact the protection.
Even so, the network perimeter is failing to protect the organization from breaches. It fails because organizations often open the edge to cloud, mobile and third-party vendors, so that strong protection is not always where you need it.
The perimeter is also constantly shifting to accommodate changing internal requirements. This creates complexity and opportunities for administrators to make mistakes. It is no accident that misconfigurations and errors are among the largest contributors to breaches.
The sad truth is that perimeter security is powerful, but organizations cannot deliver it effectively because they are focused on solving for the needs of the whole enterprise.
The Agent Trade-Off
One approach to delivering security for any one workload or system is an agent: a small software application that runs on or in the same environment being protected.
The benefit of this approach is that the protection follows the environment and it can be configured to the needed and appropriate level of protection. This has benefits over traditional perimeter-based approaches. It also has serious flaws.
There are two primary challenges with agent-based approaches.
- First, the agent is as vulnerable as the application. As I mentioned earlier, the agent often runs within the environment it protects. So, if a system is compromised and the attacker achieves the privileges of an administrator, the attacker can simply turn off the protections. Also, if attackers get below the system with hardware-centric attacks or attacks like “rootkits,” which run below the operating system, the system will just “lie” and say everything is good even when it is not.
- The second challenge with agent approaches, and arguably the more significant, is that agents share resources with the system they are designed to protect. While some sophisticated attacks use this resource sharing as a way to compromise systems, that is not the primary challenge. The primary challenge is operational. As agents become more sophisticated to process needed information, they slow down the systems. This typically results in users disabling the protections they need. In some cases, failures or conflicts of the agent can crash the entire system. This creates a lot of resistance internally to agent-based technologies, especially since many of them will be deployed to provide the required level of management and security.
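The first weakness above can be shown in a toy model: any protection that runs at the same privilege level as the workload can be switched off by anyone who reaches that privilege level. This is a hypothetical sketch for illustration, not a real agent.

```python
# Toy model: a security agent that runs inside the workload it protects.
# Any principal with admin rights on the host can simply disable it,
# because the agent has no vantage point outside the compromised system.

class Workload:
    def __init__(self):
        self.agent_enabled = True  # in-host security agent starts on

    def disable_agent(self, is_admin):
        # The agent's only defense is the same privilege check the
        # attacker has already passed once they own an admin account.
        if is_admin:
            self.agent_enabled = False
            return True
        return False

host = Workload()
host.disable_agent(is_admin=False)   # unprivileged attacker: agent survives
assert host.agent_enabled

host.disable_agent(is_admin=True)    # compromised admin: protection is gone
assert not host.agent_enabled
```

The point of the sketch is structural: protection enforced from inside the trust boundary disappears along with the boundary, which is why enforcement outside the workload (the perimeter model) is so attractive when you can place it correctly.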
What Needs to Change?
The adversary is professional, and our architecture is not designed for the way we run and manage our environments today. So, what do we do? There are no cure-all solutions; anyone who tells you otherwise is lying or delusional. But by implementing the following five approaches, you’ll significantly improve your chances.
- Build a security program focused on adversary disruption. To do this, you need to think like your adversary and focus on the process they use to attack and breach an organization. In security circles, we call this process the “Kill Chain.” It enumerates the steps that an attacker follows to steal or damage an asset.
- Protection for services in and from the cloud needs identity-based perimeters. As more organizations consume services or infrastructure from SaaS and cloud providers, the need for a different security model becomes important. The most interesting architecture for solving this challenge is what Gartner calls the cloud access security broker (CASB): essentially a gateway that implements policies on interactions between users and the cloud. The trick these gateways have to solve is that they need all users, not some or most, but all, to route through their policy engines. There is a way to address this challenge, and we talk about it more in our latest ebook.
- Protections for services in and from “on-premises” need individual trust zones. While many workloads are moving to the cloud, others will continue to be managed and owned by the enterprise. For these workloads, the most promising new architectural approach is the creation of an individual security perimeter around each and every workload in the data center. This approach, often referred to as micro-segmentation, separates the network into a trust zone of one for each application or workload.
- Make alerts and analytics simple and easy—or outsource. The adversary is patient, talented and will find a way in. After they do, you need to find them fast. Building and monitoring alert systems will help. There are also interesting “honey-pot” technologies that can help make searching for adversaries easier. The biggest goal is to create a system that doesn’t need constant tuning or management. Don’t waste your time or money on technologies that are only going to tell you to do more work. Invest in basic alerting and work with partners to ensure it continues to operate effectively.
- Encrypt without breaking stuff. Stopping the adversary is goal one. But if they do steal something, wouldn’t it be nice if the information was rendered useless, so it can’t be monetized? Broad use of encryption is needed. The trick is understanding what to encrypt and how to do it, especially since much data lives in cloud applications we don’t own. First, start with the data that really matters and work from there; it is not necessary to encrypt everything at once. Second, find technologies that can encrypt data without breaking applications. Approaches like tokenization and format-preserving encryption can protect data without breaking the existing environment.
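The identity-based perimeter described in the second bullet comes down to a gateway that evaluates every user-to-cloud request against policy before forwarding it, with a default of deny. A minimal sketch; the groups, services and rules below are hypothetical:

```python
# Sketch of an identity-based policy gateway (CASB-style): every
# request is checked against per-identity rules before it reaches
# the cloud service. Identities and services are illustrative only.

POLICIES = {
    # (group, service): set of allowed actions
    ("finance", "payroll-saas"): {"read", "write"},
    ("engineering", "code-hosting"): {"read", "write"},
    ("contractor", "code-hosting"): {"read"},
}

def authorize(group, service, action):
    """Allow a request only if an explicit rule permits it (default deny)."""
    return action in POLICIES.get((group, service), set())

assert authorize("finance", "payroll-saas", "write")
assert not authorize("contractor", "code-hosting", "write")  # read-only
assert not authorize("engineering", "payroll-saas", "read")  # no rule: deny
```

The hard operational problem the article notes is not the policy engine itself but guaranteeing that every user actually routes through it.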
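Micro-segmentation, from the third bullet, amounts to a default-deny flow policy per workload instead of one perimeter around the whole network. A toy flow table, with hypothetical workload names and ports:

```python
# Micro-segmentation sketch: each workload is its own trust zone, and
# only explicitly allowed flows between zones are permitted. Workload
# names and ports are illustrative, not from any real deployment.

ALLOWED_FLOWS = {
    # (source workload, destination workload, destination port)
    ("web-frontend", "app-server", 8443),
    ("app-server", "database", 5432),
}

def flow_permitted(src, dst, port):
    """Default deny: a flow passes only if it is on the allow list."""
    return (src, dst, port) in ALLOWED_FLOWS

assert flow_permitted("app-server", "database", 5432)
# A compromised web tier cannot reach the database directly:
assert not flow_permitted("web-frontend", "database", 5432)
```

The value is containment: compromising one workload no longer grants lateral movement across a flat internal network.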
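The tokenization approach in the last bullet can also be sketched: a sensitive value is swapped for a random token, and the real value lives only in a separate vault, so stolen records are worthless without the vault. This is a toy illustration of the idea, not a production scheme (real systems use hardened vault services or format-preserving encryption):

```python
import secrets

# Tokenization sketch: replace a sensitive value with a random token
# and keep the mapping in a separate "vault". Anyone who steals the
# tokenized data but not the vault learns nothing useful.

_vault = {}  # token -> real value (in practice, a hardened service)

def tokenize(value):
    token = secrets.token_hex(8)  # random, carries no information
    _vault[token] = value
    return token

def detokenize(token):
    return _vault[token]

card = "4111-1111-1111-1111"
tok = tokenize(card)
assert tok != card                 # stored/shared data no longer sensitive
assert detokenize(tok) == card     # authorized systems can recover it
```

Because the token can be generated to match the shape of the original field, downstream applications keep working unchanged, which is the property the bullet emphasizes.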
A Changed World
The way in which organizations deliver and consume services has changed. It is now based on a blend of cloud and on-premises services. This hybrid looks radically different from the organizations of the past, yet we build and defend our environments with much the same approach.
The adversary has adapted to this new environment and now we also need to change. Organizations that do will be more secure. The goal is reducing risk. In some cases, all that is required is being faster and/or better than the bad guys.
For more detail on what needs to change, download our ebook: “The IT Security Spend Dilemma: More Isn’t Always Better.”
-Art Gilliland, CEO, Skyport Systems