(In)security through secrecy

Software development is arguably the most difficult form of creative expression currently practiced. Unlike work in the arts, or practices like civil or mechanical engineering, software cannot be seen: a worker cannot generally look at their creation and directly observe how it behaves. The best empirical feedback comes from end results, logs, and the occasional core dump.

And yet, it can be argued that software has become as critical to our society and its infrastructure as traditional engineering. Yet for some reason we accept software crashing on a regular basis, when we wouldn't accept the same from our buildings, bridges and vehicles.

So far, the most successful attempt at turning software into an engineering discipline is pair programming, a core practice of Extreme Programming. It involves two programmers working together on the same code, looking over each other's shoulders, pointing out errors and planning algorithmic strategies. While an improvement, it still doesn't produce error-free programs, and few software developers use the methodology.

Another, related method is peer review. One or more experts in a domain either look over the source code, if it's available, or reverse engineer the software if it isn't, to independently confirm that the code is safe and performs correctly. This has proved to be a highly successful way of finding bugs, or errors in the programming.

Unfortunately, the Digital Millennium Copyright Act (DMCA), enacted in 1998, is being used to prevent the reverse engineering of commercial software (among many other things). The most recent example is HP using the act to threaten a group of researchers who revealed a bug in Tru64 Unix (formerly Digital Unix) which could allow a remote attacker to gain control of a Tru64 machine.

This, of course, is not the first time the DMCA has been used to suppress the fact that software sold to consumers with the promise that it does what it claims doesn't, in fact, deliver. I've written before about the case of Dmitry Sklyarov, a Russian security expert who demonstrated that the document protection provided by Adobe's Acrobat eBook encryption was seriously lacking.

Another example is the case of Professor Edward Felten and the Secure Digital Music Initiative's (SDMI) attempt to create a way of protecting music that couldn't be broken. Dr. Felten circumvented the protection, but was prevented from discussing his findings with others in the security profession under threat of the DMCA.

As an analogy, think of the exploding Firestone tyres on Ford Explorers. Imagine that they were covered under the DMCA. Rather than the public learning of the problems with the product, and independent researchers exploring the cause and potential solutions, everyone involved would effectively be under a gag order. Who does this benefit? No one, except the party at fault.

Now, admittedly, software doesn't generally have the same life-and-death role as tyres, but at times it does. General-purpose software sometimes finds itself (often inappropriately) running life-support systems or the controls of navy ships, for example. And even ignoring such extreme cases, any business person who relies on a software system to keep their business running will want assurance that the software performs as promised.

Fortunately, it appears that some sanity is beginning to return to the US. Richard Clarke, President Bush's computer security adviser, recently told the Black Hat conference that third-party "hacker" audits of commercial software are a "Good Thing" and should be allowed. He did say that any problems found should first be reported to the software publishers, and if they don't respond, the researchers should escalate to the government (we're here to help you…).

The users of said software, however, remain out of the loop: they're the last to know they're at risk of remote compromise. The thinking is that if the general user learns of their exposure, the "bad guys" will learn too, and will take advantage of this knowledge.

The unfortunate reality is that exploits for software bugs are available to 31337 (code for "elite") crackers well before they come to the attention of the "good guys", so users need to know they're at risk as soon as possible. With this information, patches can be applied (if available), or vulnerable services disabled, before they can be used against a system.

Personally, I find the idea of trusting commercial software providers, or the government, to keep my computing infrastructure secure untenable; their objectives are simply too different from mine. This is why I use, and recommend, open source software, which brings peer review into the loop at the earliest possible point to ensure the code does what it claims and doesn't contain any "stupid programmer errors."

Until such time as software producers are held to the same level of responsibility and liability as providers of other services and products, users of Information Technology must take responsibility for their own infrastructure. It is clear the current market won't do it for them.

Published in the Victoria Business Examiner.