For twenty years now I’ve been building and operating IT products: switches, routers, secure enclaves, and so on. The overwhelming majority of vendors I know aim to build great products that meet or exceed customer expectations for performance, reliability, and quality. With the trend toward the consumerization of IT and an increasingly competitive marketplace, I’ve witnessed an ever-greater focus on time-to-market. Many examples have been cited of significant first-mover advantage and winner-takes-all or fast-mover-takes-all outcomes, whether we are discussing car sharing, home sharing, tablets, or the latest enterprise technology. This unrelenting focus on speed may be having some unintended consequences.

When you hurry a product schedule along, you aim for the MVP: the minimum viable product. In short, what is the least we can build that will fulfill the customer’s expectations, sell, and capture some market share, so that we can then add incremental capabilities based on real-world customer feedback to further expand the market and adoption rate? It is a model that has been getting a lot of press because it is more agile and nimble than the take-your-time, ‘nail it and scale it’ model. The costs, though, show up not only in quality and testing; as we have seen more recently, cyber security is often ignored in the overall product design and architecture. There have been too many reports over the past year of companies that should have known better shipping products with gross flaws and then ignoring the feedback from trusted security researchers.

The Regulatory Risks of Insecure Products

My worry is that if we, as an industry, continue to ship bad products to consumers, we could find ourselves in the same situation as other industries: regulated. Regulation is a dirty word to many. It implies process, auditors, lengthy policy documentation, lawyers, and everything else that spells disaster for a small and nimble business. If you’ve ever sold an IT product to the US Government, especially the DoD, and compared the experience with selling to a medium-sized business, you’ll know what I am talking about. With the Internet of Things, a myriad of critical infrastructure systems are now being connected to the Internet and to cloud-based management systems; the risk of vendor missteps goes up, and vendors enter the same realm of product-safety responsibility and liability we have historically seen in industries such as financial services, automotive manufacturing, and pharmaceutical development.

I had the pleasure of educating an attorney the other week about the impact of human error on networks. We talked about how a few bad routes can crash financial trading systems across major markets; how heart monitors and drug-dispensing systems can be disabled and black-holed; how every user, machine, and application at a bank could effectively be cut off. And those were accidental outages. When we think about purposeful attacks that exploit vulnerable and legacy systems, the risk gets much higher: the more connected we and our systems are, the broader the surface we need to protect and the greater the potential impact of any compromise.

Let me be very clear about my opinion here: I do not want to see government regulation of IT product development. Nor would I like to see self-regulation via committee-based standards bodies dominated by big vendors preserving market share. Either would dramatically stifle the technology industry’s ability to bring new products to market, delight customers, and continue to innovate.

If we, as an industry, do not step up and start being better citizens and better partners to our customers, I am fairly confident we will see some form of regulation within the next 3-5 years.

So what can we do to forestall this?

The obvious answer is to build products that are inherently secure, but our actual responsibility is greater than that. We also have an emerging responsibility to train and educate the users of our products in their proper use. As many vendors shift from one-time enterprise site licenses to ongoing recurring revenue in SaaS models, our responsibility to provide not only a secure product but a secure service increases further. When I wrote about the SWIFT breach, I was not casting aspersions on their product or its security model (although, like any product, there is always room to improve); I was instead noting that they did not provide prescriptive, deployable guidance to the least-common denominator of their user community on how to properly secure their product and service.

Vendors have a new, higher level of responsibility:

  1. We have a responsibility to provide our end-users, operators, and administrators prescriptive guidance on how to secure our products and services. This is not point-in-time guidance; it needs to be treated as a living document. It should also include audit and evaluation criteria so the customer can validate their implementation against the reference model. Kudos to Microsoft for doing this with their Active Directory security guidance; it helps everyone.
  2. Guidance should strive to be vendor agnostic, although it can and should include the specific software, firmware, and hardware versions and configurations that have been tested and verified. The more prescriptive and exacting the guidance is, the better. For example, do not just say ‘segment this application’; instead, provide the list of ports and protocols the application uses and explicitly state which ones need to be allowed in and out for it to function properly. Even better is stating where each port/protocol needs to go.
  3. A method for feedback from and engagement with security researchers, with a named party responsible for implementing the feedback in a timely manner. These “bug bounty” type programs should always accompany an internally implemented and documented secure coding process. Organizations should be prepared to share the documentation of their process, and customers should routinely ask to see it.
  4. It would be helpful to have a consistent format in which a vendor documents the ports and protocols its product requires and why each of them is open and necessary. If this were captured in a consistent format and stored in a central repository, it would greatly simplify deploying in-band security protections and monitoring for unauthorized traffic types. (A minimal sketch of what such a format might look like follows this list.)
  5. Cryptographically signed images and secure software distribution mechanisms for updates and patches, especially to critical components such as crypto libraries and for CVE fixes. (A verification sketch also follows this list.)
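
To make points 2 and 4 concrete, here is a minimal sketch, in Python, of what a machine-readable port/protocol manifest might look like, together with a basic consistency check and firewall-style allow rules an operator could review or diff. The schema, the field names, and the example flows are all assumptions invented for illustration; they are not any vendor’s actual format or requirements.

```python
"""Hypothetical port/protocol manifest for a product, plus a simple check.
The schema and the example entries are invented for illustration only."""

from dataclasses import dataclass


@dataclass(frozen=True)
class Flow:
    name: str        # what the flow is for
    protocol: str    # "tcp" or "udp"
    port: int
    direction: str   # "inbound" or "outbound"
    peer: str        # where the traffic comes from / goes to


# Example entries only -- not a real product's requirements.
REQUIRED_FLOWS = [
    Flow("management UI",        "tcp", 443, "inbound",  "admin workstations"),
    Flow("syslog export",        "udp", 514, "outbound", "central log collector"),
    Flow("license/update check", "tcp", 443, "outbound", "vendor update service"),
]


def validate(flows):
    """Reject obviously malformed entries before publishing the manifest."""
    for f in flows:
        assert f.protocol in ("tcp", "udp"), f"{f.name}: unknown protocol"
        assert 0 < f.port < 65536, f"{f.name}: port out of range"
        assert f.direction in ("inbound", "outbound"), f"{f.name}: bad direction"


def allow_rules(flows):
    """Render firewall-style allow rules an operator can review or diff."""
    for f in flows:
        yield f"allow {f.direction:8s} {f.protocol}/{f.port:<5d} peer={f.peer!r}  # {f.name}"


if __name__ == "__main__":
    validate(REQUIRED_FLOWS)
    for rule in allow_rules(REQUIRED_FLOWS):
        print(rule)
```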

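And for point 5, a minimal sketch of verifying a signed update image before installing it. It assumes the vendor publishes a raw Ed25519 public key and ships a detached signature alongside each image; those assumptions are for illustration only, since real distribution schemes vary (GPG, X.509, frameworks such as TUF). The example uses the `cryptography` Python package.

```python
"""Verify a detached Ed25519 signature over an update image before install.
File layout and key format are assumptions made for this sketch."""

import sys

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_image(image_path: str, sig_path: str, pubkey_path: str) -> bool:
    """Return True only if the detached signature matches the image bytes."""
    with open(pubkey_path, "rb") as f:
        public_key = Ed25519PublicKey.from_public_bytes(f.read())  # raw 32-byte key
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image)  # raises on any mismatch
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # usage: verify.py <image> <image.sig> <vendor_pubkey>
    image, sig, key = sys.argv[1:4]
    if not verify_image(image, sig, key):
        sys.exit("refusing to install: signature check failed")
    print("signature OK -- safe to hand off to the installer")
```
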
For each of the above it is equally important to have a test so the organization can verify: “OK, I thought we did X; how do I know for a fact we are doing X properly?” Common examples that jump to mind are managing ACLs in switches and routers but never performing garbage collection, or name chasing, where you accept an FQDN but resolve it only once and cache the result until reboot.
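
As one example of such a test, here is a small sketch that re-resolves each FQDN in a hypothetical cached allow-list and flags entries whose cached address has drifted or whose name no longer resolves at all. The cache contents below are invented for illustration.

```python
"""Audit a resolve-once FQDN cache: flag stale or dead entries."""

import socket

# Hypothetical ACL cache: FQDN -> address that was resolved once at startup.
CACHED_ACL = {
    "logs.example.com":    "192.0.2.10",
    "updates.example.com": "192.0.2.20",
}


def current_addresses(fqdn):
    """Return the set of addresses the name resolves to right now."""
    try:
        infos = socket.getaddrinfo(fqdn, None)
    except socket.gaierror:
        return set()  # name no longer resolves at all
    return {info[4][0] for info in infos}


def audit(acl):
    for fqdn, cached_ip in acl.items():
        live = current_addresses(fqdn)
        if not live:
            print(f"GARBAGE-COLLECT: {fqdn} no longer resolves; rule is dead weight")
        elif cached_ip not in live:
            print(f"STALE: {fqdn} cached as {cached_ip}, now resolves to {sorted(live)}")
        else:
            print(f"OK: {fqdn} still resolves to {cached_ip}")


if __name__ == "__main__":
    audit(CACHED_ACL)
```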

I am sure there is much more that should be included here; I would welcome feedback and discussion on what could evolve into an ‘Enterprise IT Bill of Rights’ that we would hope vendors subscribe to. We have to do better.

Next Steps: What Happens With Continued Weak Security and Vendor Inaction

In conclusion, let’s talk for a second about what happens if we don’t step up our game. In the 1950s automobile manufacturers had incredible freedom and flexibility to do what they wanted; each make and model could vary greatly year by year as manufacturers innovated and differentiated themselves from one another. It was a golden era of mechanical and industrial innovation.

However, there were cars with no seat belts, cars that caught fire when rear-ended, and cars with sharp fixtures that literally impaled people upon impact. In 1966 the US enacted the National Traffic and Motor Vehicle Safety Act, establishing a federal regulatory agency and practices for vehicle design. There are now literally thousands of pages and hundreds of separate regulations that an automotive manufacturer must adhere to, covering safety, fuel economy, and hazardous-material usage, down to consistent operational requirements such as turn signals and rear-view mirrors. There was a subsequent uproar from the manufacturers when airbags were mandated and fuel-economy requirements tightened. There is no doubt that the regulations improved product safety, but I wonder: had the industry established its own standards, self-policed and self-regulated, would we have needed these government regulations and the taxpayer burden they carry?

One example of an industry self-policing is the payment card industry, with its adoption of the PCI standards for systems accepting credit and debit cards. Not only are there fairly clear guidelines on how to deploy and protect these systems, there is also an integrated audit process and guidelines for self-assessing the security posture of the environment. The challenge, as seen in the recent PF Chang’s ruling, is that self-regulation does not always equate to insurance indemnification.

The products we build, as an industry, run power grids, financial trading systems, life support, and air traffic control. When these systems are at stake, our customers deserve more than a ‘minimum viable product.’ Our customers deserve our best efforts in security as well as agility.

dg – @dgourlay