These are the notes and some links for a brief talk I gave a few weeks ago to my classmates in the summer CS project class I’m taking at U of T. We’re working on the Basie and Markus projects. Both are web apps; Basie is a software project management app built on Django, and Markus is a CS-specific marking / grading app built on Rails.
The debate over full disclosure goes back hundreds of years in the locksmithing world. Locksmiths were historically very secretive about weaknesses in their products; interestingly, they still are – here's a note on the subject from a few years ago.
There's nuance and detail to the recent history of disclosure practices, which Wikipedia covers well, but it's fair to say that today there are three broad categories of practice:
- silent patching (no disclosure) – this is a bad idea for fairly obvious reasons, except (some argue) in edge cases like the Linux kernel (the “every kernel bug is a security bug” argument) (one discussion of this, another)
- partial disclosure, where one issues the patch before explaining full details of the vulnerability
- full disclosure, where vulnerability details (and sometimes exploit code) are released at the same time as the patch is issued
Aside from how much is being disclosed, there's the question of responsible disclosure on the part of security researchers: in a nutshell, the idea of giving software vendors a set amount of time to respond to security issues before going public with them.
How to Screw Up Disclosure
- don’t give credit in your vulnerability advisories
- don’t even bother publishing advisories (silent patching)
- be unresponsive
- demand excessive, unreasonable timeframes for patching (this is of course subjective)
- make people sign NDAs (!)
- threaten to sue people
The last two aren’t generally screwups committed by Open Source projects, of course 🙂
How to do it right – best practices
- have a clear security contact on your site, no more than a click away from the homepage, and easily googlable with the string “$projectname security”
- have a gpg key posted, with a good web of trust, for that contact
- have email to that contact go to an email list with a clear process for dealing with it so that you don’t drop the ball, or have it filed into the bugtracker automagically (in a private bug!!11)
- have an announce-only security mailing list for your users, and post issues to it ASAP when they come out! An RSS feed works too. Do both!
- ensure that someone in your project monitors lists such as full-disclosure and bugtraq for issues in your project, your upstream frameworks, and your infrastructure. For monitoring just your project, a Google Alert works well too: "project name + bug or vulnerability or security". People sometimes announce vulns without disclosing to you at all; you want to catch these.
- if the project is eventually abandoned, at the very least post a warning that it's deprecated and unmaintained even for security issues, and consider taking down the code.
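On the "clear security contact" and "gpg key posted" points above: one concrete way to publish both in a machine-findable spot is a `security.txt` file served from `/.well-known/security.txt` (this is the RFC 9116 convention, a later standard than this talk, and the addresses below are placeholders):

```
Contact: mailto:security@example-project.org
Encryption: https://example-project.org/security-pgp-key.txt
Policy: https://example-project.org/security-policy
Expires: 2026-01-01T00:00:00z
```

The `Expires` field is there precisely for the abandoned-project case: a stale date signals that nobody is minding the store anymore.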
Specific Issues for web apps
- you may have a widely deployed base of users. An auto-update system such as WordPress’s is awesome for getting them to $%^$&&* patch!
- the framework you’re building on may have (security) bugs too.
- your code may be customized by users, which makes them lazy about patching – a good plugin architecture can help mitigate this.
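The update-check half of a WordPress-style auto-update system is simple enough to sketch. The endpoint URL and JSON shape below are hypothetical – every project defines its own – but the version-comparison logic is the part people most often get wrong (string comparison says "1.2.10" < "1.2.3"):

```python
# Minimal sketch of an update-availability check, in the spirit of
# WordPress's update nag. The endpoint URL and response format are
# made up for illustration.

import json
import urllib.request

RUNNING_VERSION = "1.2.3"  # version of this deployed install (example)

def parse_version(v):
    """Turn '1.2.3' into (1, 2, 3) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def update_available(running, latest):
    """True if the advertised latest version is newer than what's running."""
    return parse_version(latest) > parse_version(running)

def check_for_update(url="https://updates.example.org/latest.json"):
    """Fetch {'version': ..., 'security': bool} from the project's
    (hypothetical) update endpoint; return (update needed, is security fix)."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        info = json.load(resp)
    return update_available(RUNNING_VERSION, info["version"]), info.get("security", False)

# The comparison logic works without touching the network:
assert update_available("1.2.3", "1.2.10")   # numeric, not lexicographic
assert not update_available("1.2.3", "1.2.3")
```

Flagging whether a release is a security fix lets customized installs (the lazy-patcher problem above) at least see which updates they skip at their peril.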
Since leaving the world of (proprietary) medical devices (labels and user doc, not building), I've been wondering about how the open source movement might make its way into this and other regulated areas (e.g. aerospace). The handling of disclosure in those areas will no doubt be an important consideration.
If you’re interested, Salon ran this piece on open-source software for medical use: http://www.salon.com/tech/feature/1999/08/05/anesthesia/
(also, good meeting you the other night at the Hacklab; thanks for the tips re: incorporating a non-profit group)