As I've indicated to a few folks on Twitter, we are getting ready for the formal 'launch' of Skyport. This is a busy time: you get to tell your story a lot more often, and you find yourself constantly refining it until it becomes more comfortable and more effective. You do find the occasional reporter or analyst who throws a curveball question you didn't expect; those are actually more fun.
I was asked an interesting question today. I will admit I was only about 50% prepared for it, but over the course of the conversation I learned a lot, and after a spirited exchange (I wouldn't say debate; we were pretty much in agreement) I firmed up a few more opinions and thoughts. The main thrust of the question was: how do you secure a server today? Giving it some thought, a few things came to mind that are table stakes. Beyond those, there are a whole bunch more that are necessary depending on where the server is located, how it is connected, what application and operating system it runs, and so on.
Some of the baseline capabilities we identified were:
- If the device is in a location where it could be physically compromised, like a remote branch with relatively weak physical security, I would look at tools to reduce the physical attack surface of the node. This could mean physically or logically disabling things like console ports, out-of-band Ethernet interfaces, USB ports, etc. If you opt for logically disabling a USB port, also be sure the BIOS cannot be reset in the field, since most of the time these ports are simply being turned off in the BIOS.
- NIST has some good guidelines for things like cryptographically signing virtual machine images, and if the device is running anything approximating a critical system I would recommend signing the BIOS, firmware, boot images, etc. These signatures should also be verified throughout the lifecycle of the server.
- Any on-board hard drives and storage should be encrypted so that physical theft of the media does not cause direct data loss; ideally the key should be stored in some form of hardware security module.
- I like the idea of putting passive monitors/probes on the interfaces of critical systems so that I can filter and capture traffic in and out of the server. This gives a highly verifiable trace of what is happening to the device, which is invaluable in troubleshooting but also serves as an evidentiary log that can capture attacks, lateral expansion, and data exfiltration.
- The system should also probably be on its own interface on an advanced, high-performance firewall. If the system is hosting virtual machines, each one should be on its own VLAN on the virtual switch, bridged directly to the same VLAN ID on the network uplinks. This gives reasonable assurance that one VM cannot 'hop' to another, bypassing the firewall or policy enforcement point. I suppose this can also be accomplished with some number of virtual firewalls, although making those 1:1 with hosted VMs is probably a shade expensive from a CPU standpoint.
- The firewall would want to be set up a shade differently than for end-user computing: it needs a zero-trust posture, meaning it should be as diligent about what gets in as about what it allows out. Locking this down into a whitelist model is a good tactic for business applications that have a lower rate of change.
- If the system is hosting VMs, there are some new micro-segmentation solutions that can further reduce the number of hosts a given VM can reach over the network. Some are vSwitch-based, others are agent-based.
- Logging needs to be addressed: accurate timing and tamper-resistant logging are critical to determining what has happened on any system, and the logs are also a data set that can feed an advanced analytics capability or heuristic model. The log store should be remote and, ideally, physically and logically secured so that no single admin holds a credential that can modify the log.
- This gets to credential management, and again the thesis that no single credential should be able to compromise your network or systems. Smart operational practices such as two-man procedures, two-factor authentication, check-in/check-out procedures for super-admin credentials, and sunsetting older NTLM systems in favor of Kerberos are all part of a strategy to mitigate the likelihood of credential escalation.
- Honeypots and APT detection/trapping systems are always a good idea to have around: they give a soft, juicy target to the bad actors and help you more quickly identify the sloppy or greedy ones.
- If the server speaks HTTP/S, technologies such as Web Application Firewalls are very useful for policing HTTP traffic, as it is one of the most versatile protocols. Data loss/leakage prevention via regex parsing is also quite useful here, to ensure the file being sent to 'the trusted cloud service' is really what you want going off-premises.
- Let's assume there is an entire set of operational hygiene that needs to be in place around operating system versions, patch management, anti-virus/malware scanning, strong passwords, console monitoring, separate IPS and SIEM systems, encrypted backups, etc. This section alone could go on for another interminably long post.
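To make the firmware/boot-image verification point above a bit more concrete, here is a minimal sketch in Python. It only checks SHA-256 digests against a known-good table recorded at provisioning time; real signing (as in the NIST guidance) uses asymmetric signatures anchored in a hardware root of trust. The image name and contents are illustrative, not from any real system.

```python
import hashlib
import hmac

# Hypothetical table of known-good digests, recorded when the
# server was provisioned. Names and values are illustrative.
KNOWN_GOOD = {
    "bootloader.img": hashlib.sha256(b"trusted bootloader bytes").hexdigest(),
}

def verify_image(name: str, blob: bytes) -> bool:
    """Return True only if the image's digest matches the recorded one."""
    expected = KNOWN_GOOD.get(name)
    if expected is None:
        return False  # unknown images fail closed
    actual = hashlib.sha256(blob).hexdigest()
    # compare_digest avoids leaking digest prefixes via timing
    return hmac.compare_digest(actual, expected)
```

Re-running a check like this at boot and on a schedule is what "verified throughout the lifecycle" looks like in practice.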
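On the tamper-resistant logging point: one common building block is a hash chain, where each log entry's hash covers the previous entry's hash, so editing or dropping any entry invalidates everything after it. A toy sketch (in-memory only; a real log store would be remote and append-only, as noted above):

```python
import hashlib

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(chain: list, message: str) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    chain.append({"msg": message, "prev": prev, "hash": digest})

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any modified or removed entry breaks the chain."""
    prev = GENESIS
    for entry in chain:
        if entry["prev"] != prev:
            return False
        expected = hashlib.sha256((entry["prev"] + entry["msg"]).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

An attacker who alters one entry would have to rewrite every subsequent hash, which is exactly what storing the chain tip somewhere no single admin can touch is meant to prevent.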
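The two-man procedures mentioned above can also be enforced cryptographically rather than just operationally. A minimal sketch: split a super-admin secret into two XOR shares so that neither holder alone learns anything about it (this is the simple two-party case; Shamir secret sharing generalizes it to k-of-n).

```python
import secrets

def split_secret(secret: bytes) -> tuple:
    """Split a secret into two shares; both are required to reconstruct it."""
    share1 = secrets.token_bytes(len(secret))  # uniformly random pad
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    """XOR the two shares back together to recover the secret."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

Give one share to each custodian and no single credential can unlock the system on its own.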
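And for the DLP-via-regex idea: at its simplest it is just pattern matching on outbound content. The patterns below (US SSN, rough credit-card shape) are illustrative only; production DLP rules are far more carefully validated to manage false positives.

```python
import re

# Illustrative patterns only, not production-grade DLP rules
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(text: str) -> list:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

A WAF or proxy would run checks like this on each upload and block or flag anything that matches before it leaves the premises.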
I am sure there are more. Are there any glaring ones I missed, especially from a network/infrastructure perspective?
I've come to the conclusion that it is hard, if not impossible, for many organizations to handle the operational complexity of securing even what should just be 'a simple server'.