Neil Hopcroft

A digital misfit

sploit

“CVS is a source code maintenance system used by many open source development projects, raising the prospect that the exploit may be used to spread compromised code to developers and end-users who download files from hacked servers.”

This is an interesting development – what happens if you can’t trust your version control system? The implication of the above statement is that the exploit could be self-propagating, adding itself to repositories to be checked out along with a project’s source code. If this were possible (it is, though it is unlikely to happen both unnoticed and automatically), there is a lot of potential for damage.

For reference I would urge everyone to read Reflections on Trusting Trust by Ken Thompson – “The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code.”


7 comments

  1. “The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code.”

    This applies to everything, though. You can’t fully trust anyone other than yourself in any arena. However, for society to function, you have to. I could never use a computer if I took this statement to its full extreme.

    • It’s worse than that, though, because, theoretically at least, with computer code you can get someone who does know about it (and whom you trust…) to take a look at the code. However, in the case described by Ken Thompson that doesn’t actually help, because the exploit isn’t apparent in the source, only in the binary. Extending this to source control systems, it would be feasible to perform the same kind of hack: watch for certain code patterns (opening a socket, for instance) which could then be expanded to include an exploit. The problem with source control systems is that people expect them to keep code safe. If the control system itself were compromised, then the source code could, potentially, be altered by the source control system itself in such a way as to allow a hacker to open a backdoor into any system compiled from sources extracted from it.
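      The pattern-matching tampering described above can be sketched in a few lines. This is a hypothetical illustration only – the trigger pattern, payload, and function name are all invented, and no real CVS exploit is claimed to have worked this way. The idea is simply that a compromised checkout path could rewrite source as it serves it, keyed on a recognisable pattern such as opening a socket:

```python
# Hypothetical sketch of checkout-time tampering (illustration only).
import re

# Invented trigger: any call that looks like it opens a network socket.
TRIGGER = re.compile(r"\bsocket\s*\(")

# Invented payload: a comment stands in for the injected backdoor code.
PAYLOAD = "# <injected backdoor would go here>\n"

def tamper_on_checkout(source: str) -> str:
    """Return the source with the payload inserted after every line
    matching the trigger pattern, as a compromised server might."""
    out = []
    for line in source.splitlines(keepends=True):
        out.append(line)
        if TRIGGER.search(line):
            out.append(PAYLOAD)
    return "".join(out)

clean = "import socket\ns = socket.socket()\n"
print(tamper_on_checkout(clean))
```

      Because the rewrite happens between the repository and the developer’s working copy, neither the code in the repository nor a review of what was committed would reveal it – the same property that makes Thompson’s compiler attack invisible to source-level scrutiny.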

  2. The moral is obvious. You can’t trust code that you did not totally create yourself.

    I can’t trust the code that I did totally create myself. Or is that just me? :-)

    Seriously, though, if your source code is properly controlled (and I’m not sure that cvs would qualify in this respect), then you can at least trace the dodgy code back to where it entered your repository and then spend many a happy hour ripping it out. And all those efficient people who refresh their copy of the code regularly will have a clean set of code. As for the rest of the user base…

  3. Hmmmm. I’ve read the article that this refers to, and you’re probably right. Although I must say it’s not exactly clear what the actual exploit is. One bit seems to imply a vulnerability in the OS as well. But with anonymous users of the repository, you don’t really need an exploit: you just load in code which contains vulnerabilities and away you go. Which is what my simple brain assumed was the problem in the first place…

    But if it allows you to put untraceable code directly onto another box, then that’s very serious. I think most client/server s/w would potentially be open to this.

  4. Anonymous

    Attack of the clones… and Crypto AG

    In Bruce Schneier’s book there is a very interesting passage where he mentions that an executive from Crypto AG was arrested and jailed by the Iranians because they found out that Crypto AG had sold them pre-hacked encryption devices.

    _After_ being released he admitted that Crypto AG encryption devices are modified according to directives from the USA government.

    Now that is interesting: if there are backdoors even in the products of a private company in a neutral country – a company whose sales depend on its reputation for security – then probably there are government-designed backdoors in just about every piece of software. And note that most software is developed in the USA or by USA companies.

    Also note that a lot of software development is moving to India and China, and a lot of Indians and Chinese are working in the USA software industry, in particular at Microsoft.

    I just wonder how many subtly designed backdoors there are in popular s/w products like MS Windows/Word/… and how many countries have had their agents hired by USA s/w companies in order to create their own backdoors…

    PG

    • Re: Attack of the clones… and Crypto AG

      That story rather shook the security industry at the time, leading to quite a sense of paranoia… I’ve reviewed security code from US companies for this kind of back door; there weren’t any obvious ones, and we replaced the RNG, which is one of the main targets for that kind of nobbling.

      But, let’s face it, if the security services want to ‘find out’ things about people they don’t like, they don’t need this kind of back door; it just makes it easier for them.
