Sunday, December 15, 2013

Insider Threats vs the Customer

Earlier today, on David Farber's Interesting People mailing list, Robert Anderson wrote (in part):
A quote like, “We weren’t able to flip a switch and have all of those changes made instantly,” strikes me as indicating gross incompetence by security professionals at NSA. They have known practical mitigation steps for over a decade, and didn’t take the care to assure that they were implemented in all relevant sites. Almost all writings on the subject have stated that the insider threat is the greatest threat to information security, so it should have been extremely high on anyone’s priority list.
Bob is exactly right. One reason that people on the NSA side of the Snowden disclosures are so eager to pillory Mr. Snowden is that he had the temerity to point out what we in the security community have known for decades: the emperor had no clothes. The internal security model at NSA has long been "you're on the inside or you aren't", partly because actually implementing "need to know" would hamper speed of response, and partly because it would require making much more credible assessments about which documents are sensitive. I'm sorry; a document drawn from an open, public source can't rationally be labeled secret in any responsible approach to security management. Yes, the fact that you are focusing on that document may provide information to an opposing force. The problem is that you end up labeling everything sensitive, with the result that it becomes impossible for your team to treat the notion of sensitivity appropriately. But you can't admit that, which drives the participants toward an insider/outsider bunker mentality and an ever-growing pool of "cleared" people. You eventually end up in a mindset from which it appears justifiable to archive the metadata of your entire country without a warrant, because it has become necessary to destroy the Constitution in order to save it.
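To make the contrast concrete, here is a minimal sketch, in Python, of the difference between the two models. It is purely illustrative: the names (Clearance, Document, the compartment tags) are hypothetical and drawn from no real system.

```python
# Hypothetical sketch: "inside/outside" vs. need-to-know compartments.
# All names and labels here are invented for illustration.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Clearance:
    level: int                                  # e.g. 0 = public .. 3 = top secret
    compartments: frozenset = field(default_factory=frozenset)

@dataclass(frozen=True)
class Document:
    level: int
    compartments: frozenset

def can_read_insider(user: Clearance, doc: Document) -> bool:
    # "You're on the inside or you aren't": one level check, nothing more.
    return user.level >= doc.level

def can_read_need_to_know(user: Clearance, doc: Document) -> bool:
    # Need to know: the level check *plus* the reader must hold every
    # compartment the document is tagged with.
    return user.level >= doc.level and doc.compartments <= user.compartments

analyst = Clearance(level=3, compartments=frozenset({"SIGINT"}))
report = Document(level=3, compartments=frozenset({"SIGINT", "HUMINT"}))

print(can_read_insider(analyst, report))       # True: any insider can read it
print(can_read_need_to_know(analyst, report))  # False: analyst lacks HUMINT
```

The point isn't the code; it's that the second check only works if somebody makes a credible, defensible decision about which compartments each document actually belongs to, which is exactly the assessment the insider/outsider model avoids.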

That being said, it has been my experience that there are two kinds of "good guy" security professionals:
  1. Those who actually care about making things fundamentally (exponentially) harder for attackers. As near as I can tell, these either burn out or convert to "type 2" (below). They burn out because fundamental solutions of this sort don't lend themselves to gradual deployment, so no individual customer, and no reasonably sized set of customers, has any hope of making progress even when a technical solution exists. The result is that nobody pays for security that works, so most people don't believe that workable security is possible. The customers come to see security as an ever-increasing tax with no discernible benefit. The people with foundational technical solutions come to feel marginalized. They either give up in frustration and burn out, or they somehow acclimate themselves to the view that "patch and pray" is monetizable and better than nothing.
  2. Those who promulgate the "patch and pray" model of security. These are the folks who sell antivirus tools, packet inspection tools, firewalls, and the like. It's not that they don't care about fundamental solutions; some do, some don't. It's that they've come to recognize and accept that the customer's human nature largely precludes deploying those solutions. And however much I may hate the fact that the "patch and pray" approach extends the life of fundamentally flawed platforms, it has to be said that the customers are making the right economic decision in the short term. As a customer, I can either buy your patch, which carries low, known risk to my operations and some temporary benefit (however small), or I can buy a deep fix whose technical effectiveness is rarely easy to predict and whose deployment is expensive, highly disruptive, and a significant risk to my business.
The hell of it is, the customers aren't wrong in their assessment. Worse: the kinds of security standards (TCSEC, Common Criteria) that have been promulgated in the past don't offer a particularly useful framework for a solution, so nobody really knows what a "gold standard" should look like. From this perspective, it's pretty easy to see that the NSA has acted just like any other customer might in failing utterly to deal with the insider threat. Which is tragically funny, because the NSA has had the mandate to develop effective secure computing standards for 40 years, and has done almost everything imaginable to ensure that no success was possible.

Meanwhile, for all the other customers, the "one of the good guys" agency that promulgated key elements of our cryptographic infrastructure is now revealed as not such a good guy after all. How does the poor customer decide whom to trust in the future?

The answer, for better or worse, lies in open processes, open source code, and open validation. Solutions in which a customer (or a set of customers) can pay a second party who works for them to validate vendor assertions. Systems in which the validation of those assertions is wholly or in substantial part automated. Systems in which, by construction, the loud brayings of vested interests are unable to drown out the truth in the way they managed to do with cigarette smoke, asbestos, and global warming.
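As one concrete illustration of what substantially automated validation might look like, here is a minimal sketch assuming reproducible builds: builders paid by the customer (not the vendor) independently rebuild the published source, and the vendor's binary is accepted only if it matches their output bit for bit. The function names and artifacts below are hypothetical.

```python
# Hypothetical sketch of automated validation via reproducible builds.
# A vendor binary is accepted only when every independent rebuild of the
# published source yields a bit-identical artifact.

import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def validate_release(vendor_binary: bytes, independent_builds: list[bytes]) -> bool:
    """Return True only if all independent rebuilds match the vendor binary."""
    if not independent_builds:
        return False                      # no independent evidence, no trust
    expected = sha256(vendor_binary)
    return all(sha256(build) == expected for build in independent_builds)

# Toy artifacts standing in for real build outputs:
honest_build = b"\x7fELF...reproducible build output"
tampered = b"\x7fELF...vendor binary with extra payload"

print(validate_release(honest_build, [honest_build, honest_build]))  # True
print(validate_release(tampered, [honest_build, honest_build]))      # False
```

The design point is that no single party's assertion is trusted on its own; agreement among independently produced artifacts stands in for the vendor's say-so, and the comparison itself needs no human judgment at all.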

The really unfortunate part of this is that it isn't enough to create and deploy a defensible technical framework at great expense and development risk. You also have to have a strategy for getting the message heard while you fight a patent system that stands squarely in the way of technical innovation by non-incumbents.

So the NSA does nothing effective about the insider threat and the good guys continue to burn out. Nothing to see here. Move along.

1 comment:

  1. OMG this is depressing. I wish I could find something to disagree with.
