I was riding the train to work, reading through recent security articles, when I came across one of the best pieces I have read on cyber-defense in a good while, courtesy of Robert Bigman, the former CISO of the US Central Intelligence Agency. Robert got it right:
“The problem is “trustability.” What sophisticated hackers understand – and our policymakers don’t – is that regardless of the amount of security products and services deployed, Internet-connected systems remain vulnerable to exploitation. Success is only a matter of time and possibly the right zero-day payload. The only solution to this dilemma is to raise the “trustability” level of our computer systems high enough to make even sophisticated hacking riskier and more susceptible to easier identification. As they say in the military: Reduce the attack surface.”
Robert goes on to offer a simpler method: secure a DMZ, route all traffic in and out through a small number of policy enforcement points, and couple that with Trusted Computing capabilities like those espoused by the Trusted Computing Group and Intel's Trusted Execution Technology. These combine cryptographically signed hardware, software, firmware, and BIOS with tools that measure each component at boot and throughout runtime, constantly reaffirming that the system has not been compromised by a rootkit or other low-level malware. As Robert says:
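The measurement idea at the heart of this approach can be sketched in a few lines. The following is a conceptual illustration (not a real TPM driver) of how a TPM-style Platform Configuration Register (PCR) is extended: each boot component is hashed into a running chain, and a verifier holding the same golden measurements recomputes the chain and compares. The component names are purely illustrative.

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new_pcr = SHA-256(old_pcr || SHA-256(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# A PCR starts all-zero at platform reset.
pcr = bytes(32)

# Hypothetical boot components, measured in order before they execute.
boot_chain = [b"firmware-image-v1", b"bootloader-v2", b"kernel-5.10"]
for component in boot_chain:
    pcr = extend_pcr(pcr, component)

# A verifier with the same golden measurements recomputes the chain.
expected = bytes(32)
for component in boot_chain:
    expected = extend_pcr(expected, component)

print(pcr == expected)  # prints True: the measured chain matches the golden chain
```

Because the extend operation is one-way and order-sensitive, swapping in a tampered bootloader, or loading components in a different order, produces a different final PCR value, which is what makes the "measure at each boot" check meaningful.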
“To strengthen such an approach, we must establish a new partnership between the U.S. government and industry to establish standards that exploit security features in existing computer architectures. While there will never be a 100 percent secure computer system, we need to reduce dramatically the attack surface by requiring the use of more trusted firmware and software like those espoused by the Trusted Computing Group community.”
We know how difficult it is to build and manage these types of highly secure systems; in a production environment today it is almost impossible. The added complexity is often deemed too hard to manage, and if it threatens uptime or an operator's ability to run the day-to-day environment, the extra security and assurance capabilities are often dismissed as unnecessary overhead. This is why we built the SkySecure system: to deliver a secure-by-default system that is easy to operate, yet does not compromise on the security and assurance capabilities necessary to prevent rootkits, malware, and other sophisticated attacks.
If you are not using signed images, BIOS, and firmware; running systems with a validated supply chain in exposed areas and remote or hostile deployments; proactively monitoring and analyzing all network traffic in and out of each server; and logging all policy changes and root/super-admin logins, we would love to have a conversation with you about some things you can do to improve the security posture of your exposed systems.
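As a minimal illustration of the first item on that list, here is a sketch of validating an image against known-good digests from a trusted supply chain. In practice you would verify a vendor's digital signature over the image rather than a bare hash allowlist, but the allowlist version shows the core check; the filenames and digests here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical golden digests published through a validated supply chain.
GOLDEN_SHA256 = {
    "bios.bin": hashlib.sha256(b"trusted bios contents").hexdigest(),
    "firmware.img": hashlib.sha256(b"trusted firmware contents").hexdigest(),
}

def image_is_trusted(name: str, contents: bytes) -> bool:
    """Return True only if the image's digest matches its golden value."""
    expected = GOLDEN_SHA256.get(name)
    if expected is None:
        return False  # unknown image: reject by default
    actual = hashlib.sha256(contents).hexdigest()
    # Constant-time comparison avoids leaking match position via timing.
    return hmac.compare_digest(actual, expected)
```

The reject-by-default posture for unknown images mirrors the "reduce the attack surface" principle: anything not explicitly measured and trusted never runs.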