Bounty Launch Lessons

Cross-posted from Ryan over on Medium.

@magoo and @hypatiadotca

You’re thinking of launching a security bug bounty program where you pay researchers cold hard cash to report their security bugs directly to you.

Great!

But before you write your shiny announcement blog post and collect the precious retweets, let’s do some thinking and build a launch plan so you don’t drown yourself.

You Get What You Ask For

The root cause of many bounty problems is launching too fast. A bug bounty isn’t the right move for every organization, especially when that effort could be better spent on the fundamentals. While you should always have security in mind when developing software, bounty programs require particular ways of prioritizing work and allocating staff that may not be right for your team right now. Let’s walk through each burden a bug bounty program will bring, so you’ll be ready.

Before the Bounty

Before you even consider a program, your engineering house should be in reasonable order. A bounty program will always give you more engineering work. It is not a development framework, firewall, network appliance, or other magic security product. It is a program of “exercise and vegetables” that will succeed with care and attention.

Here are the attributes we’ve seen make for successful programs:

  • Reality Check: Do you already have extensive security debt — known vulnerabilities and creaky infrastructure?
  • Triage: Are bugs centralized, tagged, prioritized, and do they generally find an ultimate owner? Are security bugs reasonably prioritized versus their owner’s other work?
  • Culture: Is there an appetite from your engineers to fix security bugs? Do you have leadership buy-in to prioritize externally reported issues, and executive air cover for the PR noise that will come out of a bounty program?
  • Backstop: Who will be fixing bugs with no clear owner, or project manage systemic issues requiring substantial engineering effort?

If you aren’t already treating security issues like you would treat scale issues, you risk creating a new problem. Some teams have used bounties as an instigator to fix these issues, but that’s your own call — it can be a risky move. Having the above items in place is your best chance for success.

Starting Slow

Don’t make the mistake of launching a bounty program in one fell swoop. You do not want public commitments and press haunting you while running headlong into signal versus noise issues and internal engineering drag.

Instead, design your launch properly.

Minimize the scope so that you’re only receiving reports for areas you’re confident are robust, where you have lots of room to mitigate issues, or that are under active development and can absorb changes quickly. Grow the scope as you gain confidence until your program has broad coverage.

Start with lower bounty amounts to keep things lower-stakes. Increase the amounts as you become confident, until you’re competing with going rates in your industry.

Invite high-quality researchers to a private program so you’re not suddenly following up with a horde of mixed-quality researchers. As your spikes become lulls, start inviting more researchers until the program is public.

Having a project manager on board to attach milestones to your program rollout can be very helpful, for example “Full scope within 3 months” or “$1k bounties by EOY”. It’s all a matter of your own pace.

Running Smoothly

The bulk of the work of running a bounty program can be broken down into the following parts: triage, engineering fixes, and public communications.

Incoming bugs will need initial triage for signal. Submissions will include product misunderstandings, disagreements on risk trade-offs and best practices, and your standard OWASP Top Ten bugs. Reputation systems can be an important tool for keeping noise under control, and they give the folks running triage a good metric for how much to dig into a particular bug.
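
To make that concrete, here’s a minimal sketch of a reporter signal score (not HackerOne’s or any platform’s actual algorithm, just the shape of the idea): weigh a researcher’s valid reports against their total, and let that ratio discount the claimed severity when you order the triage queue.

```python
from dataclasses import dataclass

@dataclass
class Researcher:
    valid: int = 0    # reports that turned into real, actionable bugs
    noise: int = 0    # misunderstandings, out-of-scope reports, duplicates

    def signal(self) -> float:
        """Fraction of this researcher's reports that were real.

        Smoothed with a +1/+2 prior so a brand-new researcher starts
        at 0.5 instead of 0 or 1. The weighting here is illustrative.
        """
        return (self.valid + 1) / (self.valid + self.noise + 2)

def triage_priority(reporter: Researcher, claimed_severity: int) -> float:
    """Rough queue-ordering hint: claimed severity (1 low .. 5 critical),
    discounted by how noisy this reporter has historically been."""
    return claimed_severity * reporter.signal()

# Example: a noisy reporter's "critical" ranks below a proven reporter's "high".
veteran = Researcher(valid=12, noise=3)
newcomer = Researcher(valid=0, noise=6)
assert triage_priority(veteran, 4) > triage_priority(newcomer, 5)
```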

Having security and non-security engineers collaborate on first level triage can be a great way to promote awareness and empathy towards the security mission. As your security team grows and specializes, you’ll likely move from an all-hands-on-deck rotation to dedicated triage, with higher level triage going to engineers who are closer to mitigation.

Contracting can also make sense for initial triage. Budget 2–5 hours per week for first level triage for a startup with a small attack surface, or work out a per-bug fee with a reputable vendor.

Be wary of burnout when it comes to triage, particularly if one person ends up handling most or all of it. However you handle triage (collaborating with non-security engineers, sharing a rotation, or contracting it out), it becomes easier to involve others if your culture can appreciate a nasty bug. The folks interfacing with researchers will spend a lot less time crafting the perfect “sorry but not a bug” message if they know that their company has their back.

Engineering fixes for bounty bugs will range from changing one line in a config to substantial coordination between multiple teams, outside vendors, or upstream open source projects. Treat this like you would any systemic problem within your engineering organization and have project management capability in place to shepherd the complex bugs.

When it comes to public communication, don’t let your submission threads turn into typical internet debate. Some researchers will permanently disagree on the severity of bugs, submit low-quality bugs, or threaten hostile blog posts. It’s important to expect this and involve level-headed teammates who are well practiced in empathy. Try to be payout-lenient and encourage good research (even when off the mark), but have clear policies for what constitutes a bug and what you’ll pay for when research is way off the mark. If there’s a conflict over the severity of a bug and you’re certain it’s low risk… disclose the details. With the specifics out in public, misrepresentation can’t happen.

Bounties Forever!

Once you’re paying public bounties under a wide scope, you should have a plan to regularly review how your bounty program and your engineering team are interacting.

Here are some basic questions to ask regularly:

  • Budget: How much are we paying weekly / quarterly / annually? (A rough roll-up sketch follows this list.)
  • Risk: What are our top five bugs in severity this quarter?
  • Regression: What issues keep recurring? What tools or practices can we deploy to prevent them in the future?
  • Hiring: Which researchers should we recruit?
  • Atrophy: What bugs took too long to fix?
  • Happiness: Are researchers happy?
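
For the budget question, the roll-up can be as simple as grouping payouts by calendar quarter. A minimal sketch, assuming you can export payouts as date/amount pairs (the sample data below is made up):

```python
from collections import defaultdict
from datetime import date

# Hypothetical export: one (payout date, USD amount) pair per bounty paid.
payouts = [
    (date(2015, 1, 12), 500),
    (date(2015, 2, 3), 1500),
    (date(2015, 4, 20), 250),
]

def totals_by_quarter(records):
    """Sum payouts per calendar quarter, keyed like '2015-Q1'."""
    totals = defaultdict(int)
    for day, amount in records:
        quarter = (day.month - 1) // 3 + 1
        totals[f"{day.year}-Q{quarter}"] += amount
    return dict(totals)

print(totals_by_quarter(payouts))  # {'2015-Q1': 2000, '2015-Q2': 250}
```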

We should all aspire to a million-dollar bounty. It should be so hard to score a bug in a product that we’ll someday be willing to put a million dollars behind it.

Someday, your security program could be described by expensive bugs.

Good luck.


@magoo

I’m a security guy, former Director at Facebook, Coinbase, and currently a HackerOne founder / advisor and consultant for a handful of startups. Incident Response and security team building is generally my thing, but I’m mostly all over the place.

@hypatiadotca

I’m a security engineer at Slack. Prior to Slack, I worked at Salesforce.com, Microsoft, and Symantec. I care a lot about building sustainable, healthy security cultures that help developers ship code with confidence.

Leigh’s Informal Security Salary Survey 2014

The ISC2 is running their annual salary survey, but I want something a bit more personal, and hopefully, localized. So I’m going to run my own survey. Hopefully I don’t end up regretting this 🙂

Send me as much or as little of the following via FB message, email (salaries at hypatia dot ca), Twitter DM, or carrier pigeon, and I will collate, anonymize, and publish the results:

  • Company
  • Title
  • City/Country
  • Base Salary
  • On hire stock and cash
  • Annual bonus (stock and cash)
  • Education level
  • Years at current job
  • Total years experience
  • Gender / ethnicity if you’re comfortable sharing – I will only use these in aggregate because they are so identifying when one is a minority 😦

Anonymization-wise, I will bucketize the titles so they aren’t too specific. I will report all figures in $10k bands and years of experience in ~3-year ranges for obfuscation purposes. For companies with fewer than 5 reports, I won’t mention the company. I’m really good at de-anonymization; I’ll apply that level of expertise to anonymizing your data. And I will delete your data once I’m done with this project.
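
For the curious, here’s roughly how I plan to do the banding (a quick sketch; the exact band edges and labels may shift once I see the data):

```python
def salary_band(salary: int, width: int = 10_000) -> str:
    """Round a salary down into a $10k band, e.g. 87_500 -> '$80k-$90k'."""
    low = (salary // width) * width
    return f"${low // 1000}k-${(low + width) // 1000}k"

def experience_band(years: int, width: int = 3) -> str:
    """Group years of experience into ~3-year ranges, e.g. 7 -> '6-8 years'."""
    low = (years // width) * width
    return f"{low}-{low + width - 1} years"

def company_label(company: str, report_counts: dict) -> str:
    """Suppress company names with fewer than 5 reports."""
    return company if report_counts.get(company, 0) >= 5 else "(undisclosed)"

print(salary_band(87_500), experience_band(7))  # $80k-$90k 6-8 years
```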

Security at the Ubuntu Developer Summit

I’m attending the Ubuntu Developer Summit this week in Budapest, and I wanted to share how to participate in the security track remotely.

You’ll want to look at the schedule of security track sessions, and the icecast streams for the various rooms we’re in.  Each session in the schedule has one or both of:

  • an etherpad for recording discussions
  • a blueprint which is the “working document” for that particular portion of the project

Both have little icons in the schedule.

You may also want to join the IRC channel for the session; there is one per room, with naming scheme #ubuntu-uds-$room_name_without_accents.  There’s also #ubuntu-hardened, for general discussions and continuing to participate in the Ubuntu security community after UDS.  If you’re not a big IRC user or your network blocks it, you may prefer web-IRC, which is available here for freenode.
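
If you’re scripting your way into the right channel, stripping the accents is the only fiddly bit. A rough sketch (the room name in the example is made up, and the lowercasing and space handling are my guess at the convention, so double-check against the schedule):

```python
import unicodedata

def uds_channel(room_name: str) -> str:
    """Build an #ubuntu-uds-<room> channel name by dropping accents,
    e.g. a hypothetical room 'Kőrösi' -> '#ubuntu-uds-korosi'."""
    ascii_name = (
        unicodedata.normalize("NFKD", room_name)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    return "#ubuntu-uds-" + ascii_name.lower().replace(" ", "-")

print(uds_channel("Kőrösi"))  # #ubuntu-uds-korosi
```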

This stuff of course generalizes for any other topic at UDS; check out the schedule for tracks on other topics.

Opting out from naked scanning – Canadian edition

Short version of this post: you have the right to opt for a physical pat-down instead of scanning in Canadian airports.  Here’s the PDF you need to print out in case CATSA gives you a hard time about this.

Last February, on my way to PyCon in Atlanta, I had the displeasure of being bullied into going through one of the new naked scanners at Pearson International Airport.  I’ve filed a complaint with CATSA about this incident, as the screener should have given me correct information when I asked if I had the right to opt out.

Yesterday, I contacted CATSA’s media office in Ottawa and spoke with a fellow named Mathieu Larocque.  I asked where the specific policy was regarding opting out of the millimeter wave scanning.  He said that it was indeed the policy that one could opt for a physical search, and  pointed me at the same PDF flyer that Peter had pointed me at last week on Twitter.  As far as I’ve been able to tell, the language in that flyer about the scanners being an alternative to a physical search appears to be the only policy information on the entire CATSA website indicating that one can opt out.  Mathieu himself seemed surprised that there wasn’t an item in the FAQ to that effect.  If you’d like to see their policy clarified on the website, please leave a comment via their form.

The machines in use in Canada are ProVision Advanced Imaging Technology millimeter wave scanners.  As I understand it, these are different from the backscatter scanners being deployed in the US.  I looked over the product documentation and it indicates that recording or immediately deleting images is a customizable option.  When I brought this up with Mathieu, he said that he wasn’t sure of the technical details, but explained that there’s some additional piece of hardware which CATSA has not implemented, which is required for storing images.  One assumes it’s some kind of hard drive or flash-based storage setup.  I’m working on filing an Access to Information request to obtain the procurement information around CATSA’s order for these machines to confirm this as well as hopefully obtain more specific information about the implementation details of these machines.

That said – it doesn’t really matter how the storage stuff is implemented.  A malicious agent with a cameraphone will still be able to snap a photo of the screen.

So yes, scanning is here in Canada, and yes, you’re entitled to opt out and have a physical search instead, even if CATSA hasn’t bothered making that very public on their website.  If you opt for the physical search, you have the right to ask for it to be conducted in private, with an agent of the same gender, and with a second (same-gender) agent witnessing.  There are plenty of good reasons to opt out – radiation, religion, privacy, being creeped out by the process, or just not thinking it’s an effective method of doing security, as pointed out by a leading air security expert.

The Privacy Commissioner of Canada has weighed in on this and other issues of travel privacy in a lengthy, informative post, which Mathieu confirmed is an accurate assessment of current CATSA policy.

Thanks to Lisa for pointing me to the Privacy Commissioner’s report, and to Peter for pointing me at several of the CATSA links.

Oh, and if you’re thinking of commenting?  Please don’t post stuff about US TSA policy; it’s off-topic, and I’ve seen it already, trust me.  More importantly, I really don’t want to hear about how you think this is not a big deal, so don’t even bother with comments to that effect 🙂

Vulnerability Disclosure for Open Source projects

These are the notes and some links for a brief talk I gave a few weeks ago to my classmates in the summer CS project class I’m taking at U of T.  We’re working on the Basie and Markus projects.  Both are web apps; Basie is a software project management app built on Django, and Markus is a CS-specific marking / grading app built on Rails.

The debate over full disclosure goes back hundreds of years in the locksmithing world.  Locksmiths were historically very secretive about weaknesses in their products; interestingly, they still are – here’s an interesting note on the subject from a few years ago.

There’s nuance and detail to the recent history of disclosure practices which Wikipedia does a good treatment of, but it’s fair to say that today there are three broad categories of practices:

  • silent patching (no disclosure) – this is a bad idea for fairly obvious reasons, except (some argue) in edge cases like the Linux kernel (the “every kernel bug is a security bug” argument) (one discussion of this, another)
  • partial disclosure, where one issues the patch before explaining full details of the vulnerability
  • full disclosure, where vulnerability details (and sometimes exploit code) are released at the same time as the patch is issued

Aside from how much is being disclosed, there’s the question of  responsible disclosure on the part of security researchers, which is in a nutshell the idea of giving software vendors a set amount of time to respond to security issues before going public with them.

How to Screw Up Disclosure

  • don’t give credit in your vulnerability advisories
  • don’t even bother publishing advisories (silent patching)
  • be unresponsive
  • demand excessive, unreasonable timeframes for patching (this is of course subjective)
  • make people sign NDAs (!)
  • threaten to sue people

The last two aren’t generally screwups committed by Open Source projects, of course 🙂

How to do it right – best practices

  • have a clear security contact on your site, no more than a click away from the homepage, and easily googlable with the string “$projectname security”
  • have a gpg key posted, with a good web of trust, for that contact
  • have email to that contact go to an email list with a clear process for dealing with it so that you don’t drop the ball, or have it filed into the bugtracker automagically (in a private bug!!11)
  • have an announce-only security mailing list for your users, and post issues to it ASAP when they come out!  An RSS feed works too.  Do both!
  • ensure that someone in your project monitors lists such as full-disclosure and bugtraq for issues in your project, upstream frameworks, and your infrastructure.  For just monitoring your project, a Google Alert on something like “project name + bug or vulnerability or security” works well too.  People sometimes announce vulns without ever reporting them to the project; you want to catch these (see the sketch after this list).
  • if the project ends up getting abandoned at some point in the future, at the very least post a warning that it’s deprecated and unmaintained even for security issues, and possibly take down the code.
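
Here’s the sketch mentioned above: a quick pass over a locally mirrored mbox of a list like full-disclosure, flagging anything whose subject mentions your project or its frameworks. The archive filename and keyword list are placeholders; how you mirror the list is up to you.

```python
import mailbox

# Placeholders: a locally mirrored archive of a disclosure list, plus the
# terms we care about (project names and the frameworks underneath them).
ARCHIVE = "full-disclosure.mbox"
KEYWORDS = ["basie", "markus", "django", "rails"]

def interesting_messages(path, keywords):
    """Yield (subject, sender) for messages whose subject mentions a keyword."""
    for msg in mailbox.mbox(path):
        subject = (msg["subject"] or "").lower()
        if any(k in subject for k in keywords):
            yield msg["subject"], msg["from"]

for subject, sender in interesting_messages(ARCHIVE, KEYWORDS):
    print(f"{sender}: {subject}")
```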

Specific Issues for web apps

  • you may have a widely deployed base of users.  An auto-update system such as WordPress’s is awesome for getting them to $%^$&&* patch!
  • the framework you’re building on may have (security) bugs too.
  • your code may be customized by users, which makes them lazy about patching – a good plugin architecture can help mitigate this.

meshU 2009 – writing (more) secure software

Today at the meshU conference I gave a talk about secure programming, with a focus on the web.  There were 2 token slides for the C and C++ devs out there, which ended up working perfectly because there were only two people in the room who wrote C/C++ 🙂

I mostly touched on stuff from OWASP’s vast collection of resources, specifically their top ten principles of secure programming, and their top ten web application vulnerabilities.  Slides are after the jump, but I wanted to include some related links to things which came up during the talk:

Enjoy the slides!  Slideshare messed up the formatting of the additional notes, so for full effect I’d download them from here.


Career talk at SpoofIT

I gave a talk a few weeks ago at SpoofIT, the IT Security club at UOIT.  I referred to a number of links and resources during the talk but haven’t had a chance until now to post a list of them.  I’ve also written up a little summary of the talk for those who missed it.  I owe a huge debt of inspiration to James Arlen’s talk at The Last Hope, which you can download at the hackermedia archive or on bittorrent at the HOPE tracker.  It’s the one titled “From Black Hat to Black Suit”.  He’s been doing this a lot longer than I have, so go watch his talk too 🙂

25C3 Day 3

Paul and I turned in pretty early on Day 2 and managed to make the first talk on Day 3, though not without the assistance of Club Mate and Starbucks.  Day 3 was where things started to get really hairy in terms of being able to get into rooms to see the talks I wanted to see; I ended up missing the RFID talk I really wanted to see in favor of getting to the room for the Storm talk half an hour early.  But that’s what conference recordings are for, isn’t it!

As before, be sure to also check out Security4All’s post on Day 3 for a more Belgian perspective on things.


25C3 Day 2

Continuing on from my post from a couple of days ago, here are my notes from Day 2 of the 25th Chaos Communications Congress in Berlin.  I’ve been slow with getting these posted – Day 2 was December 28th.  Better late than never, right?

[Photo: A lighthearted moment from Soviet Unterzoegersdorf]

As with the previous posts, for a different perspective and selection of talks I highly recommend checking out Security4All’s blog post about Day 2 as well.

Finally, if you’re particularly interested in anything I’ve written about, you should check out the official recordings here.  Most of the talks have been posted both as direct downloads and torrents.  I can’t even begin to say how amazing this is given that the conference is barely over.  From what I hear, the live streams coming from the conference while it was running were totally solid as well.

And now for the actual comments about this day’s talks!


25C3 Day 1

Finally sitting down at Paul’s laptop to write up some notes on the talks I’ve seen so far.  I’m going to break it up into days because I’ve taken a lot of notes 🙂  Here goes, with comments in brackets:

Gadi Evron on Cyberwarfare

  • EU security operations / CERTs are not very organized
  • cyber warfare is mostly bull****

iPhone hacking

  • They’ve fully soft-unlocked the phone, but it’s been done in such a way that Apple can still fix it with a software update

Memory Forensics with the Cold Boot Attack

  • attack has been fully weaponized to USB keys (or functional iPods) and PXE boot
  • Jake has found a somewhat unrelated bug in Mac OS X’s Login.app which results in logged-in users’ passwords being stored in RAM; Apple is aware of the issue and not fixing it.  Same for FileVault keys [o_0]
  • Linux dm_crypt is vulnerable
  • loop_aes devs thought they weren’t vulnerable because of some key-shifting stuff they do; turns out it just means that they store twice the keydata 🙂
  • Nadia, a co-author of the USENIX paper, wrote an awesome keyfinding tool which can grab keys from RAM even with something like 75% corruption
  • Bitlocker default / simple mode is totally pwned
  • Even with TPM in use Bitlocker is still vulnerable if precise timings are used

Dan Kaminsky – Why were we so vulnerable to the DNS vulnerability?

  • random person named Paul sitting beside me on the couch by the Go boards describes it as “+5 insightful”
  • My Paul is all excited that Dan is now publicly in favour of DNSSEC 🙂

[Photo: dns pwnage]

Edited to add:  For some additional perspectives on Day 1, have a look at my Belgian friend Security4All’s blog post, which has a different selection of talks.