Last week we at Skyport hosted Tech Field Day’s group of leading networking bloggers for a two-hour overview and discussion of Skyport’s technology. You can view all of the Skyport presentations here.

The discussion part–both between the bloggers and presenters, and amongst peers–is always the most interesting and valuable part to me, as I typically walk away with at least as much food for thought as our guests do. So I was pleased that this time the group did a series of roundtable discussions to record their thoughts about the presentations while it was all still fresh in their minds.

At the beginning of the security roundtable, Greg Ferro (@etherealmind) posited that there’s no true penalty for poor IT security, even with breaches happening regularly. Stephen Foskett (@sfoskett) added, “You only have as much security as you can tolerate pain.” There was a general consensus among the group that IT security is more often about maintaining appearances than about addressing real risk–“check box security vs real security”, as one member put it.

Terry Slattery pointed out that most of the investment is misdirected: many breaches now come from insider threats–some from genuinely malicious internal actors, but more often from employees manipulated through social engineering. Yet the vast majority of IT security spend goes to firewalls at the network perimeter, guarding against penetration from outside. (Another important consequence of putting all of one’s eggs in the perimeter basket is that it puts all internal assets on equal protection footing. Which means that once the perimeter is breached, they’re all equally accessible. Yet some of your assets are clearly more important to protect than others.)

In a follow-up blog, Phil Gervasi (@network_phil) points out another part of the problem: security is great until it gets in the way of user experience. Then it gets turned off. He concludes that we really just want security theater, not real security.

I don’t think that’s true for most people. (True, there are always the Wallys of the world.) The problem is that year in and year out, the security “best practices” promulgated are always the same–and rarely take into account either shifts in attack vectors (to Terry’s point) or the practical challenges involved in following the “rules”.

Let’s take a look at a copy/paste “best practices” list drawn from a “2016 Security predictions” article that a coworker sent me:

1. “Patch. Everything. On time.” Except that certain apps and OSes were certified long ago, and even though they’re no longer supported, recertifying them on patched versions would be a bigger undertaking than it’s worth. Or an app that has several major corporate processes built around it was written in-house 15 years ago and the people who wrote it are long since gone. What then?

2. “Protect your hosts. Do application whitelisting.” This assumes that the biggest threat is malware being placed on your hosts, vs exfiltration of data by seemingly legitimate actors.

3. “No admin rights for anyone who can access production data.” and conversely…

4. “No one with admin rights can access data.” which both really come down to…

5. “[Do] Role Based Access.” But what happens when there’s been credential theft, which is at the base of almost half of all attacks, according to the 2015 Verizon Data Breach Investigations Report?

6. “Segregate your networks.” Sure.

7. “If you create code, do solid code assurance.” When would anyone say they don’t do thorough QA in this, the best of all possible worlds? (Musing: It’s a shame Voltaire’s not around to comment on modern InfoSec.) Bugs happen even with the best of intentions and discipline. So what can you do, proactively and reactively, to deal with that reality?

8. “Test and Audit.” Also, brush your teeth twice daily. This assumes, of course, that you have ready access to a toothbrush, toothpaste, clean water and a place to spit. But serious auditing (vs some of the practices Phil makes note of) is a notoriously difficult and time-consuming activity, since it involves pulling together metadata from wildly disparate sources. That’s why there’s a whole industry of audit tools and services.
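To make that last point concrete: even the most basic cross-source audit starts with a pure normalization chore, because every system records the same activity with different field names and timestamp formats. Here’s a minimal, hypothetical Python sketch of that chore–the source names, fields, and formats are invented for illustration, not drawn from any real product:

```python
# A sketch of audit-metadata normalization: two hypothetical log sources
# ("firewall" and "app") describe events with different field names and
# timestamp formats, so step one is mapping both into a common schema.
from datetime import datetime, timezone

def normalize(source, record):
    """Map one raw log record into a common (timestamp, user, action) shape."""
    if source == "firewall":
        # e.g. {"ts": "2016-03-01T11:00:00Z", "src_user": "alice", "verb": "DENY"}
        ts = datetime.strptime(record["ts"], "%Y-%m-%dT%H:%M:%SZ")
        ts = ts.replace(tzinfo=timezone.utc)
        return {"timestamp": ts, "user": record["src_user"], "action": record["verb"]}
    if source == "app":
        # e.g. {"epoch": 1456833600, "login": "alice", "event": "export_report"}
        ts = datetime.fromtimestamp(record["epoch"], tz=timezone.utc)
        return {"timestamp": ts, "user": record["login"], "action": record["event"]}
    raise ValueError("unknown source: %s" % source)

def unified_timeline(events):
    """Merge normalized events from all sources into one ordered audit trail."""
    return sorted((normalize(s, r) for s, r in events), key=lambda e: e["timestamp"])

timeline = unified_timeline([
    ("app", {"epoch": 1456833600, "login": "alice", "event": "export_report"}),
    ("firewall", {"ts": "2016-03-01T11:00:00Z", "src_user": "alice", "verb": "DENY"}),
])
```

Two sources and a dozen lines of glue already; now multiply by every firewall, directory, hypervisor, and app in a real estate, each with its own formats and clock skew, and it’s clear why auditing supports a whole industry of tools and services.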

What it really comes down to is that security won’t be done well unless it can be done easily and non-disruptively: “easily” in terms of set-up (simpler architecture, fewer rules to be individually handcrafted, etc), and “non-disruptively” in terms of application performance.

Technology and tools are never the sole answer to problems that stem from depending on ordinary humans to be perfectly motivated and perfectly careful at all times. But instead of holding unrealistic expectations of human behavior, we can expect more from tools. Limiting fat-fingering opportunities, removing options that are likely to get people into trouble, and certainly redoing hardware architectures to address performance hits can all go a long way toward removing the temptation to take the superficially easier (but more danger-fraught) path.