TryChooser is a small Mercurial extension that makes sending jobs to TryServer simple. Here’s a transcript of sending some mq patches to Try.

$ hg trychooser
Run everything?
[Ynh?] n
Both optimized and debug?
[Ynh?] n
Just optimized?
[Ynh?] y
All platforms?
[Ynh?] n
[Ynh?] n
[Ynh?] n
[Ynh?] n
[Ynh?] n
[Ynh?] n
[Ynh?] y
[Ynh?] n
[Ynh?] n
All Unit tests?
[Ynh?] y
All talos tests?
[Ynh?] n
Any talos tests?
[Ynh?] n
The following try message is going to be used:
try: -b o -p android-r7 -u all -t none
Creating the trychooser mq entry...
pushing to ssh://
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 1 changes to 1 files (+1 heads)
remote: Looks like you used try syntax, going ahead with the push.
remote: If you don't get what you expected,
        for help with building your trychooser request.
remote: Thanks for helping save resources, you're the best!
remote: Trying to insert into pushlog.
remote: Please do not interrupt...
remote: Inserted into the pushlog db successfully.
popping trychooser
now at: fix_android
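If you want to try it, the extension is enabled like any other Mercurial extension, via the `[extensions]` section of your `~/.hgrc`. A sketch of the relevant section (the checkout path here is an assumption; point it at wherever you cloned trychooser):

```
[extensions]
# path is hypothetical -- use your own trychooser checkout
trychooser = ~/src/trychooser/trychooser
```

After that, `hg trychooser` in your working repository starts the interactive prompts shown above.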

Get it here.

Posted in Uncategorized | 4 Comments

CC-lists and openness

Should “private” emails be CCed to public mailing lists? [1]

Suppose I email Jon about the register allocator in the DragonMonkey branch [3]. I’ll just email him directly, and that discussion won’t be open. This happens dozens of times a day—one colleague emails another asking a question or proposing a solution [4], and nobody else gets to see it [5]. We should fix this and open that stuff up.

Time wasters

The closest thing that currently exists is that I could CC the relevant mailing list. I don’t want to do that. When I mail a list like dev-platform, I consume the time of hundreds of people who read that list. Often, I write and rewrite an email to ensure clarity [6] before I send it. It’s important to explain context, history, project roles and aims, and all the other things people need to understand for my email to make sense [7]. Actually, most of the time, I end up not sending that mail.

Cheap is good

So we need something which isn’t so expensive. In the forum discussion, I suggested a CC-list – an address that you CC on emails that should be open, but which you don’t want to send to the relevant mailing list. Interested parties following the CC-list can then read the conversation, skim it, ignore it, or participate in it.

So if I want to understand how that thing works, and I need to ask Jon about it, I would send Jon an email as normal, and throw in the CC for good luck. I’d ignore clarity, wouldn’t address it to a wider audience, and wouldn’t do anything any different than normal. Just “Dear Jon”, and be done with it [8].

Practical considerations

So if we build it, will they come? I don’t really know. I see lots of problems. Maybe the way is to just try it.

One CC-list for the whole project is too few. If you work in marketing, you don’t want to lurk on the engineering list, any more than I want to lurk on Metrics or Graphics or Jetpack [9]. So I would say that if you currently have a niche list, the CC-list should just be an extension of that [10].

Ridiculous alternative

You know what would rock? If all mail to was automatically public, and anyone who wanted to send private mail (my boss, HR, legal, etc) would send it to Default to open! I think that might actually be too hard to make work, but it would be too cool. I’d definitely support creating a account though [11].

[1]Thanks to Joe Drew for starting this discussion on the intranet forums [2].
[2]An employee-only link? I know right? A discussion about openness on the one part of the entire Mozilla infrastructure which isn’t open! But he, I, and pretty much every Mozillian (employee or not) wants everything we do at Mozilla to be open, so please forgive us when we occasionally fail at it.
[3]Neither Jon nor DragonMonkey exist.
[4]Note that this isn’t strictly an employee thing, maybe non-employee core-contributors are talking.
[5]Note that this is not the same problem as when we want to quietly do some design work on something before announcing it more widely. That’s a different thing.
[6]At least, I should.
[7]Mostly, I don’t want people to think I’m an idiot.
[8]Why is this cheap when sending it to dev-platform isn’t? By construction. If we define the list to be one which should be skimmed at best, then we treat it that way regardless of how it’s used in practice.
[9]Fine people though they are, I don’t need to read their semi-private email.
[10]We can’t just repurpose the original niche list either, since that’s not the contract the subscribers agreed to, and also it would drown out other email.
[11]I could do that right now with, but if it isn’t a widely used convention, it won’t happen.
Posted in Uncategorized | 2 Comments

Helping new contributors – Part 2 – Mentoring

Getting into the Mozilla code base can be pretty hard. Even experienced developers may need help finding a good place to start.

Up until now, we’ve been using the good first bug whiteboard annotation in bugzilla to tell contributors where to start, and we’re trying to iterate on the model based on the experience of Mozilla developers. After some discussion, we’ve started using a new whiteboard annotation, which we hope will come to replace the existing good first bug annotation:


Add this to a bug—with your own bugzilla ID of course—to tell a newcomer that you will mentor them through this bug. By having existing contributors take personal responsibility for new contributors, we hope to make it easier to shepherd them through the difficult early stages of contribution.

When marking a bug as mentored, you should also add a comment, telling newcomers where to start, where to look for docs, etc. Avoid jargon if possible; newcomers are going to have trouble understanding things we take for granted. Some community members have already started creating mentored bugs, which I hope demonstrate what we’re going for [1].

When a contributor starts working on a bug you mentor, be as friendly and outgoing as possible. Email them privately to thank them, and tell them to contact you with any problems they have or any jargon they may not understand. Your aim should be to help them with anything that they wouldn’t know about Mozilla, which is quite a lot.

Find a reviewer for them, push their patches to tryserver if necessary, and ping their reviewer if required [2]. Get them onto IRC so they can get help when you’re not around, and point them at the right documentation. If they produce a good patch, apply for level 1 access on their behalf, and vouch for them. After review, push their patch, and comment to mention what a great job they’re doing. If you can, suggest a slightly harder bug for them to do next.

As well as adding new mentored bugs, it’s useful to triage old good first bugs. If they’re still useful, convert them to mentored bugs. If they’re out-of-date, close them, and optionally start a new bug with updated information.

What makes a good mentored bug?

Mentored bugs are basically the same as good first bugs, but with a mentor. That means we can use the same criteria as before:

  • Easy:

    The best possible first bug, from a procedural point of view, would be a typo fix [3]. It would be trivial to fix, but would serve to teach you how to use bugzilla, build firefox, meet other developers, format your patch correctly, ask for review, and so on. These procedural issues are what make contribution difficult, and a perfect first bug would simply guide people through these steps.

  • Self-contained:

    Bugs which depend on other bugs, or which span many modules, are probably not great first bugs. The contributor will be unused to Mozilla’s practices, and having to split work across multiple trees, figure out super-review, etc, are not going to encourage them to stay.

  • Well-understood:

    You should be able to guide the contributor down the right track, which means you should understand the bug yourself. Similarly, the bug should have a good start and finish – it should be obvious when the bug is fixed. Bugs which contain a reduced test case are perfect for this; intermittent oranges are not. In general, you should know how you’d approach it, and you should add that as a comment: how to reproduce the problem, the file and function to fix, how you’ll recognize the required fix, etc.

  • Apolitical:

    The last thing you want is for the contributor to pour hours into fixing a bug, only for it to be nixed by the reviewer or module owner. You should be 100% certain that fixing the bug is desired, and that there won’t be any dissent at a later date. We don’t want newcomers leaving because their time was wasted.

Although we say “easy” above, we can also mentor bugs that would serve as good second and third bugs. Second and third bugs can add a little more difficulty and discovery. They should still be conceptually easy, but hold a small challenge for someone who doesn’t know the codebase. They’ve jumped through the hoops; now reel them in with something slightly meaty and interesting [4].

What kinds of bugs are useful to mentor?

In general, they should be bugs you’d like to see fixed, but ones that aren’t so important that you’ll do them yourself. Good examples are:

  • Small bugs you don’t have time for:

    For example, the JS team wants to integrate our two testing frameworks, but we probably won’t do it ourselves.

  • Bugs you’ve started, but won’t finish:

    I’ve started a few bugs that I ran out of time to finish, but I still want them to get done. In those cases, I attached what I had done and marked them as mentored. These bugs are particularly good since they are really close to being finished, and the trajectory a new contributor needs to take is unambiguous.

  • Broken windows:

    Mozilla is an old project, and has accrued many broken windows – things which are an incredibly low priority to fix, but make the place look ugly and run-down. These are ideal starting points, especially if they only involve deleting code. Beware that this can lead to scope creep: a new contributor recently removed WinCE support from SpiderMonkey, which had to be split into 5 separate bugs requiring 5 separate reviewers. This can scare new contributors off [5].

  • Refactorings:

    Refactorings are a very good way to learn a code-base, though they won’t always be good first bugs (perhaps second or third bugs). It’s always great when a contributor can add something of beauty, instead of a hacky fix.

  • Local optimizations:

    If you have a single function that could be optimized, that makes for a very self-contained bug. The new contributor can write a micro-benchmark and optimize it, and really see the results of their work.
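As a hypothetical instance of that last kind of bug, the contributor’s loop might look like this; the functions here are invented stand-ins for whatever is actually being optimized, not real Mozilla code:

```python
# Micro-benchmark sketch: time a function before and after optimizing it.
# Both functions are made-up examples of a "slow" and "fast" version.
import timeit

def build_string_slow(parts):
    out = ""
    for p in parts:
        out += p            # repeated concatenation
    return out

def build_string_fast(parts):
    return "".join(parts)   # single join

parts = ["token"] * 10_000
before = timeit.timeit(lambda: build_string_slow(parts), number=50)
after = timeit.timeit(lambda: build_string_fast(parts), number=50)
print(f"before: {before:.3f}s  after: {after:.3f}s")
```

The point is that the contributor can see their own result directly, without needing tryserver runs or a full benchmark suite.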

Next steps

We’re currently directing newcomers to an introductory page, where we link to mentored bugs, and encourage newcomers to pick a bug they’d like to work on. It seems that giving newcomers bugs directly is far more effective than leaving them to find one they like, so we’d like to move to that if possible. Unfortunately, we don’t have enough mentored bugs, or a wide enough variety of them, to make this work just yet.

This is particularly true of non-C++ bugs. We have lots of newcomers who don’t know C++, or who barely know it. We currently have an introductory page for people who don’t know C++, but we’d like to do a lot better here. If you work on an aspect of Mozilla that doesn’t use C++, whether it’s RelEng, a website, or even a small tool, we’d love feedback on how to help newcomers work with you.

Speaking of feedback, we want lots of it. Mentored bugs are new to Mozilla, though they’ve been used in other projects before. We’d love to hear your experiences with them, here and elsewhere, to see if we’re really improving things. Our contact details are in the first post, and we’ll keep the mentors page updated as we go.

[1]We’re in early stages here, so feedback welcome.
[2]We don’t want new contributors to be frustrated by having to wait days or weeks for a review. “John here is a new contributor, would you be able to prioritize this review?” works wonders; even if it doesn’t speed up the review, it makes the contributor feel special. I’ll get into this more in a later installment.
[3]However, since it wouldn’t give the contributor a warm fuzzy feeling, probably best not to file these.
[4]For example, here’s a bug I wrote specifically for a new contributor. It’s a challenging bug, but is self-contained and well-understood.
[5]Though it didn’t in this case, because he is awesome. He actually then went on to delete WinCE from the whole damn tree!
Posted in Uncategorized | Comments Off

Helping New Contributors – Part 1 – Meta

There has been a lot of recent interest in helping new members of the community contribute to the Mozilla code base. A few of us met up during the All-hands, and have started doing “Coding Contributor Engagement”, in cahoots with the existing Contributor Engagement team.

The direct goal is to substantially increase the number of contributors to the Mozilla code base. In the short term, we’re looking at removing bottlenecks to bring in new contributors, ensuring that we aren’t making life hard for them, and avoiding losing them once they arrive. For example, we’re trying to fix small nuisances, write documentation aimed at new contributors (and improve existing docs), and create a mentoring system to help new contributors with their first steps.

An indirect goal is to also improve things for all contributors. Removing bottlenecks to contribution, for example by improving processes and documentation, helps everybody. By examining where we lose new contributors, we can learn how best to retain existing community members. By improving communication for newcomers, we help improve communication throughout the community. Finally, we want to ensure that information and resources available to employees of Mozilla Corp are available to all contributors. These goals are shared by most of Mozilla, many people across the community already work on this, and we’ve already had great support in this area.

I’m putting out a series of posts over the next few days discussing specific short-term actions that everyone can help with. First, here’s the meta-information around what we’re [1] doing:

[1]“we” in this context means anyone who will help, so by “we” we really mean “you”.
Posted in Uncategorized | Comments Off

Summary of Contributor Engagement threads on dev-planning

There have been a few threads regarding Contributor Engagement on the dev-planning list, focussing specifically on contributions to the tree. In order to follow up on these ideas, and others that different folks have been brewing, we’re discussing this in a session during the All-Hands next week.

Challenges for contributors

  • What does a new contributor have to go through to build FF:
    • no readme
    • --enable-application=X
      • no warning
      • needs default
    • directory is called mozilla-2.0, not firefox4.0
    • cvs is mentioned in the docs!
    • build instructions prose is poor
    • not configure/make based
    • ac_add_option / .mozconfig is confusing
    • patch submission is tortuous
    • even simple stuff has jargon
  • Getting feedback:
    • there isn’t a good way to ask “what do you think of this approach?”
      • f? only works on attachments
      • some people put the proposal as an attachment, then ask for feedback
      • there was a suggestion of a ‘request review’ mailing list.
  • Tinderbox is scary:
    • new contributors are scared by oranges, and we currently have a problem with random oranges
      • there is automated oranging – is it sufficient?
      • ArbPL might be easier in this regard
  • Submitting bugs:
    • finding a committer is tough
      • we should have a script/query/dashboard following [commit-needed]
      • add a “job” like sheriffing, for the person whose job it is to land patches
    • getting tryserver access takes a day
      • arbitrary try server access
        • solvable, but security concerns
    • we’d like to prioritize landing from new contributors and “people who aren’t paid to work on this thing”
    • currently work has to be ‘negotiated’ into the tree
      • new contributors should have ‘greased path’ into the tree
  • So many tools to learn:
    • tbpl
    • tryserver
    • qimportbz
    • bzexport
    • gdb-archer
    • lithium
    • irc
    • mercurial queues
    • bugzilla fastness
    • weird unique build system – autoconf213, mozconfig
  • Time zones:
    • hard to get an answer if you contribute outside 9-5, Monday-Friday PST
  • Working independently: Problems should be solvable by external contributors independently. Each new person who is introduced can potentially scuttle what you’re doing. If you email someone and wait a week for a response, it makes it less fun. If they never respond you’ll probably stop contributing. Related: tryserver access, reviews, approvals, commit-needed

Experience reports

  • Long form:
  • briefly:
    • seamonkey does a lot of hand-holding, and has been successful
    • l10n too
    • Gnome Love project
      • (gnome-women is also copyable)
    • maybe get reports from professors, Seneca, etc
      • relationships are more important than resources
      • you need to “protect” new contributors from “normal” interactions
  • Experiences summary:
    • idiosyncratic build infrastructure
    • submission process (including bugzilla)
    • contributor agreement
    • r? with no target are ignored
    • requesting reviews
    • learn irc, our channels and etiquette
    • blame logs
    • finding related bugs in bugzilla
    • [checkin-needed]
    • project branches
    • watching the tree + tinderbox
    • approvals process
    • blocking/wanted flags
    • forgetting the -u flag [editor’s note: NEVER EVER FORGET THE -u FLAG!!!]


  • Contributor engagement team (ref David Boswell)
    • high level, need coding-specific help
    • 100 people contact us per week to help
      • -> list [needs volunteers to answer the queries - contact]
      • contribute@ list and canned responses
      • poor success ratio (10% is a good result)
      • getting started information
      • #mozillians channel
      • track what works and doesn’t
      • point them to first bugs
      • contributor directory can help
  • Mentoring:
    • we should have an outreach team
    • mentors would guide patches through the process
    • danger: watch out for people not going the distance
    • people who learn processes will be good in the long run
  • What do we do when potential contributors email existing contributors directly?
    • mentor?
    • send canned response?
    • redirect to Contributor Engagement
  • terminology is important:
    • “internal” vs “external” – “internal” might have bad connotations
    • volunteers is bad
    • “greased”?

Tools to help us

  • we need dashboards to find:
    • unspecified r?
    • forgotten [checkin-needed]
    • patches from new contributors
    • [good first bugs]
  • identify new contributor (metrics):
    • age of bugzilla account
    • number of patches submitted by this submitter
    • whether a submitter is employed by Mozilla Corp
    • whether a submitter is paid to work on Mozilla
    • age of submitter’s first-ever patch
    • timezone of submitter
    • native language of submitter
    • age of submitter
    • number of bugzilla comments written by submitter
    • whether a submitter has commit privs
    • fuzzy heuristic combinations of the above to bucket people into “new unpaid volunteer” or whatever
    • dmose/metrics have built something like this
  • development appliance?
  • documentation:
    • live help?
    • feedback button (there is a talk page – is that the same thing?)
  • a stackoverflow instance like infomonkey for mozilla questions
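The “fuzzy heuristic” item above could be as simple as a few thresholded signals. A sketch of the idea; the field names, thresholds, and labels are all invented for illustration, not from an existing metrics system:

```python
# Sketch of bucketing a patch submitter from the metrics listed above.
# Thresholds and labels are made up; a real system would tune them.

def bucket_submitter(account_age_days, patches_submitted,
                     is_paid, has_commit_privs):
    """Return a rough triage label for a dashboard."""
    if is_paid or has_commit_privs:
        return "established contributor"
    if account_age_days < 90 and patches_submitted <= 2:
        return "new unpaid volunteer"
    return "returning volunteer"

# A dashboard would prioritize review and landing for this bucket:
print(bucket_submitter(14, 1, False, False))  # prints "new unpaid volunteer"
```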


  • Is this just a documentation problem? [editor’s note: No]
  • many docs are obsolete
    • the important ones are the ones which appear high on google
    • hard to update
    • updating guidelines
Posted in Uncategorized | 2 Comments

Great commit messages

roc and jimb write great commit messages. As well as fulfilling the basic requirements of listing the reviewer, bug number, etc, they add enough context that you don’t need to crawl through bugzilla to get a reasonable understanding of the problem.
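For a made-up illustration (the bug number, function, and reviewer here are all invented), the difference is roughly:

```
Terse (typical):
Bug 987654 - Fix crash in nsFoo::Bar. r=someone

Descriptive (the roc/jimb style):
Bug 987654 - Fix crash in nsFoo::Bar when the cache is empty. r=someone

nsFoo::Bar assumed the cache always held at least one entry, which
stopped being true after lazy initialization landed. Check for an
empty cache and fall back to a full lookup, so first-use callers no
longer crash.
```

The second form costs a minute to write, but gives a reader of the log the gist of the problem without opening the bug.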

Not that linking to bugzilla is bad – there is all manner of context in there that is really important to track. But there are also dead-ends, dups, noise, discussion, politics, reviews, and other things that don’t really help get a high-level overview of the problem.

I used to write better messages in projects I committed to in the past (mostly phc, following to a lesser extent the example I saw on the gcc-patches list), but since it wasn’t the Mozilla way, I felt like I was freed from the burden. And it is a burden: writing concise but descriptive log messages takes time and effort. But considering the work that goes into a patch, the few minutes to edit the message seem worth it.

The gcc-patches list is the best place I’ve seen this done; for an epic example, consider a 2004 post by Jakub Jelinek. Bear in mind that long messages like this are for code reviewers, not for posterity, so it’s solving a slightly different problem. However, for people following along at home, they have the same benefit.

One problem with writing better messages is that they aren’t very visible. In hg log, you need the -v flag to even make the full message appear, and the default view over at doesn’t show it. However, it’s pretty apparent in the log view how superior these messages are.

Good revolutions start from the bottom, so I’m going to follow roc and jimb’s examples.

[1] For the non-Mozillians in the audience, Mozilla’s coding standards require that commit messages include the bug number, the reviewer name, and some other information such as a super-reviewer, if they exist.
Since they don’t specify that you need to describe the bug in any detail, we end up with terse, one-line descriptions of the commits.
However, our tools link to bugzilla number, so we do have easy access to the full history of the problem.
Posted in Uncategorized | Comments Off

Why we shut NewsTilt down

Last updated: October 7th, 11:55pm GMT

NewsTilt was a YCombinator-backed startup which aimed to provide services to help journalists become entrepreneurs and earn a living off their work online. It closed down in July 2010, a total of 8 months after it was founded.

A while back, we announced to our journalists that we were shutting NewsTilt down, only two months after we launched. I think people are interested in why it failed, and there are some interesting lessons in our demise, at least for me.

This piece focuses on my own view of why it failed, as opposed to answering the questions and comments which appeared following the publication of our announcement by Romenesko. I’ll probably take some time in the future to respond to critics, and to get back to those who contacted me for comments.


Following the launch, everything started going to shit, and a huge number of challenges to the success of the company arose. The biggest of these were the lack of traction from launch, the fact that we had lost the faith of our journalists, and the communication issues between Nathan (my co-founder) and me. This combination also killed our motivation.

As a result, I made a carefully thought-out decision to shut down the company, and return as much money as we had left (about 50%) to the investors. Nathan believed the best thing to do would be to pivot our company, and so I agreed to step down to allow him to do that. After some work, he agreed that it was best to shut it down (hence the email above), and we are currently going through the steps of winding down the company, and returning the remaining money to investors.

Problem overview

NewsLabs failed because of internal problems and problems with the NewsTilt product. NewsTilt failed because:

  • journalists stopped posting content,
  • we never had a large number of readers,
  • we were very slow to produce the features we had promised,
  • we did not have the money to fix the issues with NewsTilt, and it would have been tough to raise more.

None of these problems should have been insurmountable, which leads us to why NewsLabs failed as a company:

  • Nathan and I had major communication problems,
  • we weren’t intrinsically motivated by news and journalism,
  • making a new product required changes we could not make,
  • our motivation to make a successful company got destroyed by all of the above.

Overall, the most important of these are that Nathan and I had difficulty communicating in a way which would allow us to save the company, and that this really drained our motivation.

NewsTilt problems

NewsTilt wasn’t a bad idea, but we certainly faced a ton of problems with it. Most of them could be overcome, but it’s instructive to go through what they were.

Who are you writing for?

NewsTilt was a news destination website, but we very quickly ran out of news. We relied on journalists posting news, and they stopped posting because they largely no longer believed that NewsTilt was good for them.

Journalists felt that they were writing for us, instead of writing for themselves, for their own brands. How could they feel anything else, since that’s the impression we gave them by the design of The most important part of NewsTilt–that journalists would have their own brands and own domains–got cut from the minimum viable product in order to make launch date [1]. It was never re-added because of technical issues with Facebook.

As a result, it looked like NewsTilt was trying to be another Huffington Post [2], that is, a news company to compete with existing news organisations. As well as convincing the journalists that they were contributing to us for nothing in return, this had knock-on effects: other venues to which a journalist might sell their news refused to allow the same stories to be posted to NewsTilt. This is obviously the right thing to do if you perceive NewsTilt to be a competitor. If they had perceived each journalist’s NewsTilt site to essentially be a personal blog, as we perceived it, we wouldn’t have had this problem.

Since we weren’t making any money for them, and it appeared to them–correctly–that no-one was even reading their content, there was no earthly reason they would keep writing “for us”.

Worse is better

Somewhat surprisingly, the journalists we picked were too good. We made a big deal of only hiring the “best journalists”, something we spent a great deal of time getting right. We had a guy with a Pulitzer, one with an Emmy, and overall a great deal of talent writing for us [3].

In hindsight, this may have been a big mistake. The kind of writer we actually needed was one that was hungry to succeed. Someone who would write five pieces a day, and who wanted nothing more than to be a big-time journalist. We needed a young Perez Hilton or Michael Arrington, people who wrote for 18 hours a day in order to make their names.

Instead, we got journalists who were already successful in their day jobs, and who already had families and other commitments. They were checking out the latest thing in news, not hungry to make something of themselves. Why would they be? They had already made something of themselves. Unsurprisingly, they didn’t write 18 hours a day, instead just dipping their toes in to try NewsTilt out. They applied, and either never started or posted only a small number of articles.

I think it’s important to say that we really failed because of a lack of content. But that was a symptom of having the wrong kind of journalists. All the problems the journalists faced, not writing enough, their distrust of Facebook, their unwillingness to socially promote their work, would not have been problems for a young journalist eager to make a name for himself. If we had the sort of people who gave up everything to succeed at their dreams, these problems could have been blown past. But as established successes in their field, it was unreasonable to expect them to make giant changes for uncertain return.

We never made it clear how hard it was going to be to create an online presence, and so when articles went nowhere, there was little motivation to continue. Building a brand online is akin to doing a startup – it’ll take five years. But we failed to prepare them for this [4], and we failed to recruit people with that kind of dedication to “making it”.

Wrong content

The actual content that the journalists wrote wasn’t what we needed either. Content on the site was a very high standard, but it tended to be very long pieces. Long pieces online are difficult to make a success of, as the online attention span is very low.

The problem was that I didn’t really know how the journalists should write their pieces, only a vague sense that it was wrong. I also didn’t really want to tell them what to write, since they were doing us a favour by writing at all. In fact, we actively told them to write what they wanted. This could have been fine, if they had taken their cue from their readers. But we didn’t really know who the readers were, and there weren’t that many of them anyway.

It took a while for me to work out that the first thing I did in the morning to sate my internet addiction wasn’t going to NewsTilt. I was still going to Reddit and Hacker News. When I did read the pieces, I wasn’t terribly interested in them; they were definitely better than what I read in the paper, but they didn’t trigger the dopamine receptors that made me want more and more, and I didn’t know how to fix it.

Who are the readers?

The fact that we didn’t know anything about our readers’ demographics underscores another problem: I don’t understand news readers. I certainly wasn’t one, and I didn’t know many people who really were. My customer development had largely consisted of talking to journalists and figuring out what they wanted. I never really–despite good intentions on lots of occasions–talked to people who loved news about why they loved it. So I was unable to say what was going wrong and why people weren’t sticking around.

I could possibly have fixed this at a certain level by giving a greater role to the editors, but I was very uneasy about having editors in the first place. I didn’t want to tell the journalists what to write, and I felt that greater input for the editors would have made each journalist’s brand less individual [5].

Traffic problems

The major reason the journalists bailed was that we failed them. We didn’t deliver the things that we said we would, and we wasted the content they provided.

One part of the service we offered was that we would get the journalists traffic. Whooops! Getting traffic is really really difficult. We completely underestimated how difficult it would be, largely because I’d never had a problem with it in the past. When I’ve needed to promote some pieces I’ve written, I simply submitted them to Hacker News and Proggit. However, that doesn’t generalise in any way.

In retrospect, it was foolish to offer to do promotion for the journalists. We should instead have built the tools to help them with promotion, and let them do it themselves. We had a couple of ideas of what these would be, but we never built them. We felt that we would understand promotion better if we started by doing it ourselves–standard practice for any company–but it sucked up massive amounts of time, and we never got anywhere with it.

We had no domain experience in promotion, and really had no idea what to do. Worse, we had no idea what to tell the journalists to do. We struck upon the idea that if we had fifty journalists, and they each cross-promoted each other to their social networks, then over time we would get more and more people to read each others’ content. Suffice to say that the journalists were not happy, and didn’t go along with it.

We had pseudo-hired a social media promoter, someone who had gotten bitten by the startup bug, and was interested in community management, social media promotion, etc. She was pretty good, way better than me, but was still relatively new at it. What we really needed was someone who knew this stuff inside out [6], rather than someone who was learning as they went.

We also missed a few opportunities for some traffic. We fluffed our TechCrunch launch by having the piece posted before the site was even launched. This happened because we were sending out old fashioned press releases in an effort to get old media to cover it, and they needed to go out a day early. Old media didn’t cover it, but TechCrunch did, a good 30 hours or so before we actually launched. We got 18,000 hits that day, nearly all of whom saw no news, and probably didn’t come back.

Technical promises

We basically overpromised what we were going to do for the journalists. We showed them a list of things we were going to build quickly, then built things very slowly. Our technical ability was going to be our differentiator, but our technology showing was pretty poor.

One reason is that we misprioritised things: the thing that really, really needed to get built right now changed daily. Another is that building things takes time, and we weren’t just building one product, we were building tons of products to be used in different ways. We needed Facebook integration and Twitter integration, to integrate the latest changes from the designer, to automatically admit journalists and alert the editors upon application, to actually build the product, to build the social promotion tools, etc, etc, etc. Individually, each feature might only take a few hours to a few days. However, they build up very quickly, and soon we had features we promised would be ready weeks ago still sitting in a low-priority slot on our TODO list.

We also suffered from a lack of technical resources. I spent nearly all my time doing CEO things [7], so Nathan was left to do the technical stuff alone. We needed to hire quickly to make up for that, but made a key error with our hiring.

We also made some bad technical decisions, such as failing to choose WordPress, and the whole Facebook thing. The latter also prevented us from moving journalists to their own domains, instead of being under the NewsTilt banner.

The end of NewsTilt

As a result of the errors above, we managed to alienate our journalists. By the time we ran out of content, it was already too late. The journalists were disillusioned and unhappy, and we did not have a product to prevent that.

I realised this when I found myself reluctant to suggest that people visit the site. Journalists were still excited by the concept, readers wanted to check it out, but I realised that I didn’t believe that NewsTilt was a good product for either set of our customers.

There were a few possible solutions:

  • fix NewsTilt,
  • make a different product, aka pivoting the company,
  • close down NewsLabs.

Fix NewsTilt

NewsTilt was definitely a good idea, just one that we hadn’t executed well on. The major flaw was that we couldn’t promote the stories well enough, and the journalists’ stories went into a black hole where no-one read them. The clear way to fix this would be to hire someone who knew what they were doing in this regard. We had a couple of people in mind who could possibly be brought in, and if it required stepping down as CEO, so be it [8].

The second flaw was that Nathan and I hadn’t been working well together since about February. There probably isn’t any blame to be apportioned, except that we should have tried working together on something before building a company together. So one or the other of us should probably leave the company.

Presumably most other things could be fixed. We could shut down NewsTilt for a few weeks or months while we fixed everything, focusing instead on the personal sites of one or two journalists who were dedicated to making it big [9]. We’d fix our technical mistakes, build the tools we really needed, and relaunch later in the year to take in another 10 journalists. Over the next two years, we’d expand to 1000 journalists, and to 10000 over the next five years. At some point, we’d reopen NewsTilt as a news aggregator, when it actually made sense, instead of when we were too small to do anything valuable with it.

We’d need more money, but the new CEO could handle that. We’d need more tech, but the new money could fix that, and whichever one of us stayed would make it happen. Everything could be fixed.

Well, maybe. This assumes we could find a CEO, and that we hadn’t burnt our journalist bridges. It assumes that we could find anyone to give us more money, especially since we launched once and got no traction at all [10]. It assumes that we could do something with the first two journalists, especially considering making them money would be very difficult for the first year at least, probably longer, so we’d probably have to pay them ourselves.

Our motivation was sapped from not working well together, and so our ability to be optimistic was pretty sapped too, especially since one of us would be leaving. Working on NewsTilt had never been the fun that startups are supposed to be [11], and the stress was not being counterbalanced by anything positive. We also weren’t all that into news and journalism, so our desire to keep pushing the “mission” was extremely low.

I felt that it was probably possible, though extremely challenging, to make something of NewsTilt or some variation of it. When you combine a lack of interest and motivation with that extreme challenge, I think it’s clear that this wasn’t a good idea.

Make something else

We each had a few ideas of other products we wanted to make. I quite liked the idea of a Heroku for Python, a competitor to Google App Engine. I also had a few products that I knew I wanted to use, so this seemed like the best time to make them.

However, since we weren’t going to work together, whichever one of us stayed needed a new co-founder. To be fair to a co-founder, you have to be able to move to Silicon Valley, especially since my product ideas were all for early adopters [12]. I couldn’t move to California without my wife, and I couldn’t get a visa that would allow me to bring my wife [13]. Take into account that I’d essentially be starting from scratch, and that I’d been working 14 hour days for 8 months and had no motivation left, and you can see why I’m not sure I’d have made a success of this.

Close the company

When deciding how to proceed, we had to consider all the people in the company.

We had one employee, but she was working for free, and it now seemed unlikely that we’d be raising the money to pay her. Our co-founder relationship would have to end either way, so if we decided to part ways we wouldn’t have been screwing our partner. The journalists had already stopped posting, and they and we were not convinced that continuing was the best thing for them. The only thing left to consider was the investors.

We felt a great duty to two great sets of investors who had put up their personal money to finance our idea, even though they both kissed their money goodbye when they invested. They had largely invested because we had clicked as people, and we wanted to make sure we did the best thing for them too. We still had $20,000 of the original $50,000 they gave us, and we had the option to return that to them.

They too favoured shutting the company down and returning what money we had left [14]. They accepted that NewsTilt was not going to work, didn’t like some other ideas we ran past them, and everybody at that point agreed that returning what was left was the best outcome for them.

So far, everyone–founders, employees, journalists, investors–was better served by closing down. The final set of people to consider was YCombinator.

YC had consulted and advised us every step of the way. When we had co-founder problems, they gracefully refused to take sides. When we wanted to make a new product, they advised us not to proceed without co-founders, and that we’d need to move to Silicon Valley to be fair to those co-founders. And finally, they didn’t expect a cent back, telling us to give all the money back to our later investors. Not once in my whole time at YC did I believe that they valued their investment more than they valued us, and they were OK with us closing down. YC is a class act.

Given the options above, it was pretty clear that closing down NewsLabs was the best option.

Would the NewsTilt model actually work?

Despite everything that went wrong, I’m pretty sure that what we set out to do can be accomplished, though perhaps not by us. It is certainly the product that journalists want, but simply one we were unable to deliver.

The biggest thing it needs is a shit-ton of traffic, and that is not easy to bootstrap. Perhaps that’s why bootstrapped technology startups haven’t been very successful in media, and why most of the inroads into content startups have come from people with more money, ability to create traffic, or both: Demand Media, True/Slant [15], AOL’s, or Pierre Omidyar’s Honolulu Civil Beat.

It also needs the ability to build great tech very quickly, and lots of it. There were tons of things we were dying to innovate on that media companies are still doing very badly, but we hadn’t the money to make them happen quickly enough.

Google could have made this work. I believe that if Google applied the same model they could probably succeed. They have the tech ability, they have the traffic, and they already have a massive news property. They also have a big problem with whiny news organisations, and an elegant solution would be to kill them off by enfranchising their journalists to be their own bosses.

One of the things we did right at the start was that the journalists trusted us. They may not have trusted us to do everything right or to be successful, but they felt we would do right by them. There are some companies who could probably try to replicate this model and not succeed because they wouldn’t be trusted by the journalists; Demand Media for example. AOL is still tinged with the scent of the content mill, but they’ve hired cleverly and are probably capable of pulling it off. Google has shown it is able to operate transparently and somewhat benevolently, but lots of people don’t trust it.

Things we did right

So far, I’ve tried to present why we failed, which focuses on our mistakes and is by definition pretty bleak. But we did lots of things right too.

For a start, we developed an idea that people wanted. I spoke to lots of journalists and came up with an idea that they were really interested in. Then I talked to more and more of them, developing the idea until it was something that lots of people got behind. To a certain extent, this was easier as an outsider looking in. We didn’t really know what the problem was with the industry, so we instead looked at what people wanted, and we really nailed the customer development aspect of this, at least initially.

The thing that the journalists wanted was to be in control of their own destinies. They don’t like how their newspapers are essentially fucking up their lives and possibly their pensions, and they don’t like the content mill alternatives. They really loved the model that we only made money if they did, and that a 20% cut was the way to go about it.

We were able to convince editors to come on board, and even to lend their names to the enterprise. Doug, Les and Jon helped NewsTilt no end, and I am very grateful to them. But they got behind us because they believed in the mission, and they believed in us, and getting this right was important early on.

Similarly, we got about 150 journalists to sign up to take part. Of those, 27 actually wrote something on the site. That’s no small number, and I was delighted in achieving that. We also launched, which is a lot more than many startups.

We did OK on the business side. We got into YC, and we ditched our first idea, which was going nowhere fast. In the seven days between demo day and my going back to Ireland, we managed to raise $50K from two small angels. I had a five minute video interview in the Wall Street Journal, which is going on my resume next to my Google Tech Talk. And when the chips were down, we somewhat bravely [16] decided to shut down, saving time and money for our unpaid employee, ourselves, the journalists, and investors.

Personally, I’ve learned so much from working on NewsLabs. The largest, for me anyway, was that before I started I still had a bit of that shyness that lots of geeks have; doing the CEO job quickly cures you of that. I don’t want anyone to come away from this essay with the idea that they shouldn’t do a startup because it might fail. Mine failed, and I still learned more and improved myself more than I probably could have in any other way.

Personal Lessons from NewsTilt

I’ve made a list of what I’ve personally learnt from working on NewsLabs. Not every one of these will generalise, but I hope my mistakes are instructive for other founders.

In no particular order:

Lesson: Deeply care about what you’re working on

I think it’s fair to say we didn’t really care about journalism. We started by building a commenting product which came from my desire for the perfect commenting system for my blog [17]. This turned into designing the best damn commenting system ever, which led to figuring out an ideal customer: newspapers. Along the way, we figured they were never going to buy, and we figured out a product that people were dying to use if it existed.

But we didn’t really care about journalism, and weren’t even avid news readers. If the first thing we did every day was go to, we should have been making this product. But even when we had NewsTilt, it wasn’t my go-to place to be entertained, that was still Hacker News and Reddit. And how could we build a product that we were only interested in from a business perspective?

This compounded when we didn’t really know anything about the industry, or what readers wanted.

Lesson: Don’t be too ambitious

NewsTilt would be a great thing to succeed at. If you assume that newspapers are going out of business [18], all the journalists will become their own bosses, and need something exactly like NewsTilt to help them. As such, there was the potential to be the sole source of news online. Ridiculous, massively ambitious, and very unlikely as a result, but if it worked we’d be billionaires.

Next time, I’m going to make a product which will make me comfortably rich, rather than one with a tiny chance of going supernova.

Lesson: Communicate your idea (and manage it)

Our idea kept changing. The more journalists I talked to, the more I understood what our product should be, and what people needed. Unfortunately, that meant I changed [19] our idea all the time.

Worse, I failed to communicate effectively what changed and why. I communicated this badly to Nathan, and badly to the journalists. The latter was difficult to manage–no-one is going to listen to everything I say when it changes regularly–but the former was very important. The result was that Nathan and I never shared a vision of where the product was going, which was one of our biggest problems.

Lesson: Make sure your minimal viable product is viable

We were greatly influenced by the idea of a minimal viable product. Build less, launch, then iterate when you have customers. This is a great idea, but judging when your product is viable is always a tough challenge.

The conventional wisdom is to cut any feature which isn’t essential. Ultimately though, if you cut features which make your users feel differently about your product, that’s a problem. We cut multiple domains from our MVP, meaning that journalists were publishing under our masthead, which substantially changed how they felt about NewsTilt. They were writing for us, whereas they should have been writing for themselves.

Lesson: Be careful about cool ideas

One of the reasons that switching to domains was cut from our MVP was that it didn’t work with our Facebook integration. I was married to this idea that Facebook integration was really important. It was the only way that we would allow people to comment, because it forced them to use their real names. This would mean high quality comments, and great community interaction, and I was convinced that this was essential for our success.

And we could only get real names by making everybody use Facebook to sign in. Absolutely everybody worried about this, but I was convinced. I was totally wrong. It alienated people who didn’t like Facebook, including some of our journalists. Worse, it caused people to just not comment, meaning they didn’t come back, they didn’t engage with the journalists, and they didn’t start to frequent the site.

This was at the time of renewed interest in Facebook’s privacy policy–they had just changed it and people weren’t happy. Every day there were articles in newspapers about how Facebook was doing a terrible thing; there was massive backlash [20]. Our own John Graham-Cumming even wrote a piece called ‘The Facebook Cull’, and told us the only thing keeping him on Facebook at all was that we required it.

Man, was I stupid. When people asked to sign up without it, I told them no. When the people who did sign up were worried about things being posted to their walls, I didn’t understand the problem. When readers said they wouldn’t sign up to comment, I thought they were just a small minority.

After a few weeks, I realised I had made a mistake, and put it on our “nice to have” list. There were a million other priorities, and how important was this, really? So that was another massive mistake. I should instead have moved it to the top priority, in particular because it held back domains, which should (um, also) have been our top priority.

From a technical perspective, it also prevented us from rolling out one-domain-per-journalist a lot sooner. There is an issue with Facebook where single sign-on doesn’t work across domains, so readers would have to approve each domain separately. As a result, we didn’t introduce what was probably the most important feature for actually making the journalists feel like they were writing for themselves.

Lesson: If you think you should build it, not buy it, you’re wrong

We built our whole platform ourselves. Now, we used lots of scaffolding, built on Rails, hosted at Heroku, using every plugin we could find. I reasoned that the platform was the core of our technology, and we were a technology company, and smart technology companies needed the flexibility that comes from writing the core of their platform themselves. In retrospect, this could only be considered premature optimization.

The natural thing to do was build on WordPress instead, but I wasn’t having any of it. The major problem with WordPress is that it’s written in PHP [21]. I hated PHP with a passion, and couldn’t fathom building my company on it. How would we attract good developers? How could we live with ourselves?

Really, I shouldn’t have worried. It was far more important to just get it built, and nothing could have helped that more than just using WordPress. We could easily have given journalists distinctive styles, so they didn’t feel like they were writing for us, and we could have built things really quickly by just plugging them together.

If I was worried about how productive we’d be with PHP, well, it’s not like we had to build everything in PHP. We could just have done data collection in Python, and made it available to the rest of our app through either a web service, the database, or some other way. We might not have been happy, but we would have stood a much better chance of being successful.
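To make that web-service idea concrete, here’s a minimal sketch of what I had in mind, using only the Python standard library. We never built this, so everything here is illustrative: `collect_data` is a hypothetical stand-in for whatever the Python side would gather, and the numbers are just placeholders.

```python
# Hypothetical sketch: a tiny Python service exposing "data collection"
# results as JSON, which a PHP/WordPress front end could fetch over HTTP.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def collect_data():
    # Stand-in for whatever data gathering the Python side would do.
    return {"articles": 27, "journalists": 150}

class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serialise the collected data and serve it as a JSON response.
        body = json.dumps(collect_data()).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually serve it (blocks forever):
#     HTTPServer(("", 8000), DataHandler).serve_forever()
# The WordPress side could then read the endpoint with wp_remote_get().
```

The point isn’t the specifics; it’s that a thin HTTP boundary like this would have let us keep the interesting code in Python while WordPress did the heavy lifting on everything else.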

Lesson: Build quickly, little company

The biggest fallout of building our platform ourselves was that we couldn’t build quickly enough. When you roll your own infrastructure, everything takes time, more time than you can afford. And we had promised the journalists that we would very quickly build a large list of features, none of which were produced nearly quickly enough. This was the major cause of disillusionment–we overpromised and underdelivered–and this was an important reason why.

Lesson: Hire well

Since we needed to build so quickly, as soon as we got some money we wanted to hire another technical person. Nathan had a friend he wanted to hire, who was exactly the kind of great programmer he could work well with. However, it took some convincing to get him to try working on a news website, and he wasn’t sure he’d stick with it. We were sure we’d be able to convince him to stay, and we even waited two weeks for him to move to work with us.

Unfortunately, we were never able to excite him about the project, and we quickly realised he was never going to be intrinsically motivated the way we needed for a first employee. There was a point when I was over in Cambridge with Nathan and the other developer, and I noticed that the developer wasn’t working on a Sunday. Now, we aren’t the kind of people who think our employees owe us 90 hours a week, but startups need that kind of work ethic from very early employees–exactly the reason that intrinsic motivation is so important. If your first employee doesn’t love what you do, doesn’t wake up each morning dying to work on HIS product, you have likely chosen poorly, and that’s exactly what we did.

Similarly, we hired someone who wanted to learn how to do community management and social media promotion, instead of someone who already knew how. This is a pretty tough area, and I think we made a mistake in not hiring someone much more experienced for such an important role.

Update: The comment about working on a Sunday probably raised more ire than anything else in the piece, largely from developers, so I think I should clarify some things.

Firstly, we’re not talking about death marches or prolonged periods of 100 hour weeks. It was early days in his employment, maybe the first or second week, and we had either just launched, or were coming up to launch, and had a great deal of things to be done.

Secondly, early stage startups are not normal jobs, and early stage employees are not normal employees. Your first employee is almost a founder. While they get less reward as a result of taking less risk, the success of the company depends on them a great deal.

Thirdly, it’s nothing that Nathan and I didn’t do ourselves. We worked 80 hour weeks basically since December. And you can get more done in 80 hours than you can in 40, so long as you don’t prolong it.

Finally, I’m not speaking ill of the developer. The problem is that we tried to convince someone to join a startup who wasn’t really interested. The fault was ours.

Lesson: Distributed teams are hard

At the end of YCombinator, I moved back to Dublin, and Nathan moved back to Cambridge. Neither of us had US visas, and we both had someone keeping us at home: Nathan’s girlfriend, and my soon-to-be wife. Both of us prioritised our significant others over the company, and I stand by that.

However, that meant a number of sacrifices, including that we were in different cities. We already had a communication problem when this happened, but this made it far worse. Our social media optimizer was in San Francisco, our designer was in York, and we struggled to make the whole thing work.

While there is nothing wrong with remote teams, I think your company has to be at a certain point for it to work. Everyone has to be on the same page, everyone’s roles have to be certain, and the communication has to be constant. We had none of these, and we never had the time to implement them.

Lesson: Work with co-founders before starting a company together

You need a co-founder who gets you, and who you work well with. When Nathan and I signed up together, we had not spent any time working together, and that was a big mistake. Nathan is certainly a great coder, but when we didn’t share a vision, and we found it so difficult to communicate, there was no way we were going to get this built.

When I get another co-founder, I’m going to make sure that we spend a lot more time working together on other things before we start a company together.

Lesson: Transparency is tough

It was important to the journalists that we were a very open and transparent company. From the start, we tried to put as much information out there as we possibly could, and the most efficient way was to put every journalist we accepted onto a mailing list. However, this meant that our blunders and critical feedback were visible for all those journalists to see. Lots of them hadn’t started writing, we didn’t know them, and they had simply signed up, so we were always aware that our emails were semi-public. As a result, when we decided to close up shop, our closing down email was “leaked” to Poynter, leading to all sorts of speculation.

It takes a lot of time to be open like this, and a lot of effort to communicate effectively. The lesson here isn’t so much that we did it wrong, but that it’s difficult to do well.

Lesson: Don’t do too much at once

I finished and submitted my PhD thesis a week before the YCombinator application deadline. Three days later I gave a talk to 900 people at StackOverflow London. When I moved to California in January to do YCombinator, I had still to organise my wedding in May, and I had a paper to write in between. My PhD defence in April was 4 hours after we launched NewsTilt. In May I got married and went on honeymoon.

Basically, life happens. There is never a good time to start a company, just like there is never a good time to have kids. Certainly entrepreneurship favours those without other commitments, but it seems like nonsense that people with other commitments shouldn’t start companies.

While I don’t regret doing all those things, I need to stop feeling like I can do everything at once. Everything takes time, and that’s time which could be spent on other things which are really important too. In retrospect, I should have delayed either the wedding or YCombinator.

Lesson: Be very careful how you are presented to the press

When I gave my demo day speech to investors, I explained that there were tons of customers out there; in 2008-2009, 30,000 journalists had been laid off. When I gave an interview to AllThingsD a few minutes later, Peter Kafka focused heavily on the unemployed part of this. I didn’t quite realise the problem–it seemed like a minor detail that he was focusing on a bit heavily–until potential customers kept asking “what about solutions for journalists not laid off”. Even though our product was for all journalists, it had effectively been maligned by what I thought was a minor detail.

This also led to people thinking we were going to take advantage of them, and that we were just another content mill like Demand Media. Even when we made it clear that we were only making money if they did–taking a 20% cut–this kept coming up, even with journalists who we had signed up and were using our service.

Update: Peter Kafka spent some time defending himself when he wrote about this post. I feel there is no need for this. It was my pitch that put unemployed journalists in his mind, and I had the opportunity to correct him afterwards. Clearly, the error was mine.

Lesson: You have the greatest product on earth, and everyone should be lucky to talk to you

My natural way to network is to chat to someone, develop a rapport, and set up another chat to talk about the world, current events, and (given time) some business [22]. But even to chat to someone, you need an in: what do I know about that person, what could I say to get chatting? So this is what I did on demo day.

The way it should be done is to boldly walk up to them and ask them what they thought of your speech. After a few minutes discussing it, they have to talk to someone else, so do you, that’s fine, here’s a card, we’ll chat later.

Irish people tend to self-deprecate. They also don’t like successful people, and certainly not people who talk like they’re the greatest thing in the world. Silicon Valley is the complete and utter polar opposite. Self-deprecation is out. No-one invests in a company whose founders aren’t convinced that they are the greatest thing to ever happen. I’m thinking “my company has a great idea, but most companies fail, probably mine too, but we’ll certainly try as hard as we can to make it work”. Great entrepreneurs never concede that they might fail, and tell everyone how lucky they are to be able to invest in their company.

I was certainly told early on to present ourselves as the greatest thing ever, but I didn’t properly internalise it until demo day was over, which was probably too late.

Lesson: Two sets of customers is hard

NewsTilt was a product designed to connect journalists with readers. As such, we had two sets of customers, which meant we needed to do customer development twice. I spent a great deal of time designing the ultimate solution for journalists, and almost no time on what readers wanted. As such, I didn’t really know what to make, or what to say to the journalists about what they should write.

It’s tempting to look at the lesson as “don’t forget to do both sets of customer development”, but I think it’s many times more difficult to do it twice than to do it once. In the future, I’ll certainly be aiming for tools that only need to appeal to one set of customers.


Update: A final point that should be made is that this is not an attempt to blame anyone. The journalists aren’t to blame: we didn’t make a sufficiently good product for them. The developer isn’t to blame; we tried to hire someone for a startup role who had no interest in startups. No, the only people to blame are us, and more specifically me, since I was at the helm when it all went down.

What’s next?

While I’m still tying up loose ends with NewsLabs, I’ve gone and gotten a real job! It’s great to take a break from the stress of startup life, and I’m loving working on compilers again.

I’ve just started a job with Mozilla, where I work on the JavaScript engine in Firefox. It’s a great job, working with smart people on a product used daily by about 400 million people. It’s the sort of job I was looking for when I decided to do a PhD 6 years ago, and the perfect place for a geek to end up.

Finally, I’m looking forward to working on side projects. All projects get put on hold when working on a startup, and most were on hold during the final two years of my PhD. I have to write a scripting language, learn Haskell, read SICP and Concrete Mathematics, and fix the mess that is Autotools; and I’m currently writing a little language for an itch that needs scratching. I’m also going to start writing a lot more. There are a few NewsTilt observations yet to be made, some lessons from my PhD that were a bit too informal to make it into the thesis, and about 15 half-written pieces which I hope haven’t bit-rotted since I sketched them.

If you’re interested, all things will be posted here, and on my twitter, as soon as I get a chance.


[1]We didn’t consider delaying the launch, as news is rather timely, and journalists had prepared news for our launch that started going stale as we stalled.
[2]Many of the people we spoke to felt that the Huffington Post was no better than Demand Media in that they exploited journalists. And we looked the same because we also didn’t pay journalists.
[3]Did I say “for us”? I meant “on our platform”. Easy mistake to make…
[4]We weren’t hiding this information from them. It just took us a long time to realise it.
[5]Here’s another problem. We thought way way too far ahead of ourselves. We were worrying about brands when there weren’t any readers.
[6]which we didn’t have the money for, etc.
[7]Incidentally, I used to wonder why tech startups had people who weren’t coders. I have a new-found respect for these people.
[8]I was generally careful to call myself “co-founder” rather than “CEO”.
[9]More, um, dedicated founders might have just brought on new journalists, and pretended there wasn’t a problem. No points for guessing why we didn’t do this.
[10]If we did, we probably wouldn’t get very favourable terms, which isn’t the end of the world, but it does lower the potential reward for everyone involved.
[11]When PG lures in a new batch, he always talks about how much fun a startup is. And looking around, all the other startups were having fun as they went. This should have been a bigger clue earlier on.
[12]Without going too much into it, I used to believe that a company didn’t need to be in Silicon Valley to succeed. I still believe that, but not being there is a large disadvantage at the least.
[13]If you support the Startup Visa, take note: if the startup visa does not allow a founder’s significant other to work, then many founders won’t move. I can support my wife on an H-1B because it comes with a high salary, but good luck on a founder’s salary, no matter how good the funding is.
[14]Technically, neither set of investors had any say in what we did. But we had to consider them because we felt a duty to them.
[15]Which, despite being a much larger success than us, also shut down, in circumstances regarded as largely unfavourable.
[16]If I do say so myself.
[17]Despite the fact that 90% of what I write goes unfinished.
[18]My money is still on the newsapocalypse.
[19]There is a challenge here to iterate your idea and change and refine it to be what people want, without making them feel like they haven’t a clue what you do.
[20]Ah, how soon people forget. Haven’t seen anyone care about Facebook’s massive privacy problems since Google’s Net Neutrality thing.
[21]Though it must be said that WordPress’ security problems are undeniable, and that it has a bad reputation for performance and maintainability.
[22]I’m going to say this is because I’m Irish, but no doubt real Irish businessmen know how to do this better.
Posted in Uncategorized | Tagged , , , , , | 40 Comments

A rant about PHP compilers in general and HipHop in particular.

I’ve worked on phc since 2005, and been its maintainer since 2007. I wrote the optimizer, and nearly everything performance related.

I had mixed reactions upon hearing about the release of HPHP [1], the new PHP compiler from Facebook. There are a few aspects to this so I’ll start with the technical stuff. I always love the social aspect, so skip to the bottom if you like whining and tears.

How does it work?

I don’t know the answer to this. I haven’t seen anyone even mention PHP’s references, which are incredibly gnarly for a static analyser [2]. HPHP might just ignore them, which isn’t necessarily a bad idea. I’m wary of ignoring edge cases, as they tend to interact in horrible ways, but I guess Facebook already runs all their code on it, so it can’t be that bad.

In general, I’ve found that ignoring the edge cases is bad when compiling PHP. There are a million of them, and they all interact. They interact worst of all in static analysis, because you have to consider all possible paths. It’s the sort of thing where, if you nail it 100%, you have something amazing and widely applicable, so that’s what I was aiming for with my PhD. I suspect HPHP doesn’t consider all paths, and makes all sorts of hacky assumptions. [3] This is probably a really good idea. I did the opposite in the optimizer, and the result is instead immature and slow.


Facebook said HPHP halves their number of servers. PHP’s libraries are already written in C, which gives it the appearance of being fast, even though the interpreter is dog-slow. This implies that HPHP-compiled PHP code is much more than twice as fast as the PHP interpreter. It probably means HPHP is way faster than phc’s compiled code as well.
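The arithmetic behind that implication can be made explicit with a little Amdahl’s-law-style sketch. The fractions below are my own illustration, not Facebook’s numbers:

```c
/* If PHP execution is only a fraction f of total server CPU (the rest
 * being DB waits, I/O, and the C library calls), then halving the
 * server count needs a PHP-code speedup s satisfying
 *   (1 - f) + f/s = 1/2,   i.e.   s = f / (f - 0.5).
 * Only meaningful when f > 0.5; below that, no PHP speedup can
 * halve the machines on its own. */
double needed_php_speedup(double f) {
    return f / (f - 0.5);
}
/* e.g. needed_php_speedup(0.75) == 3.0: even if PHP is 75% of the
 * load, halving servers needs the compiled code to run 3x faster. */
```

So "half the servers" quietly implies a per-request PHP speedup well beyond 2x, which is the point being made above.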

They could have built a JIT!!

I saw some criticism of Facebook for not building a JIT on LLVM. But:

  1. LLVM isn’t mature enough for a proper dynamic JIT yet, as the Unladen Swallow team found out.
  2. JITs are very hard to build. They ratchet up the complexity of building a compiler by about 10 times, so it’s probably best to avoid them if you can. [4]
  3. PHP doesn’t really need a JIT. Server side programs in PHP don’t do a great deal of dynamic stuff, and it would be incredibly rare to load some random code at run-time, so a JIT wouldn’t be all that useful.

PHP is not like other dynamic languages. Duck-typing is possible, but most of the community’s best practices come from Java (along with the class system), so it’s not used that much. Monkey-patching — switching out classes and methods from objects at run-time — isn’t possible, except with the hackiest of hacky unsupported extensions. Dynamism in PHP tends to involve templates instead, as with Smarty. If you want to analyse it, you just need to run it a few times, get all the templates instantiated, and compile all the generated PHP code. I’ve started calling this "deployment-time analysis", since on the server side you probably know all the code you’re going to compile at deployment time. So a compiler is a perfectly reasonable approach for PHP, and a JIT is probably not needed.

Will it be useful for me?

People seem to want to know if HPHP is widely useful outside of Facebook, and some people are saying "no". I disagree strongly. In order for HPHP to be useful, you need to have a PHP application which is suffering due to PHP interpreter performance. That matches Facebook perfectly, and they’ve always been the canonical example I use to explain why PHP compilers are interesting. But you don’t have to be Facebook size or scale to have performance problems.

Do you really need more speed? [5]

I’ve heard the argument "you don’t need a compiler, since PHP is rarely the bottleneck" for many years. I think it’s complete bollox. But then, I wrote a compiler for PHP, so I would say that.

Unless your PHP server is sitting there idling (which is probably the case for many PHP servers out there), you could make use of a PHP compiler. For small-timers, all components of your application are going to be sitting on the same box, contending for the same resources. Even if you assume the DB is the bottleneck, the resources the interpreter consumes could be more profitably spent on the DB.

The PHP interpreter is also quite memory-hungry, as interpreters go. Any PHP value in your program carries 68 bytes of overhead [6]; an array of a million values takes over 68 MB. If HPHP is able to convert your million-value array to native C types, it will take only 4 MB. I’m sure your caches could make good use of those savings.
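The arithmetic, spelled out as a tiny C helper (using the 68-byte 32-bit overhead figure quoted above and a 4-byte native int, in decimal megabytes):

```c
/* Bytes-to-decimal-megabytes for an array of nelems values, each
 * costing bytes_per_value.  Numbers are from the text: 68 bytes of
 * interpreter overhead per zval on 32-bit, 4 bytes for a native int. */
long array_megabytes(long nelems, long bytes_per_value) {
    return nelems * bytes_per_value / 1000000;
}
/* array_megabytes(1000000, 68) -> 68  (interpreted zvals, overhead alone)
 * array_megabytes(1000000, 4)  -> 4   (unboxed native ints)            */
```

A 17x reduction before you even count the payload, which is why the caches notice.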

However, optimization isn’t only about speed. The main value is the freedom it gives you in how you code.

There is a meme in the scripting language communities that {PHP,Ruby,Python,Perl,etc} are "fast-enough". If you need it to go faster, then you should take your hot-loop and rewrite it in C. HPHP will free you from such concerns.

You should also note that PHP is considered relatively fast. It’s not — the interpreter is dog-slow — but programs written in PHP are typically not that slow. This is because most of PHP’s huge standard library is written in C, with a thin layer of PHP over it. Any time you call a string function, your PHP string is passed into the C library, the pointers are manipulated and the bits are twiddled, and then it’s handed back to your code. It’s a bit like driving in America: it takes a few minutes to get on the freeway, but once you’re on it you’re there in no time.

This is not necessarily a good thing:

  1. if you want to write a library, and it needs to be fast, then it needs to be written in C,
  2. if there is a PHP function that does almost what you need, and you write your own version instead, it will be slow.

Believe me, if your entire application just ran PHP interpreted code, it would not be fast at all. But people who write PHP functions and libraries don’t want to write C. They like PHP, are productive in it, and any time spent arsing around in C is wasted when there’s a website falling apart and a long list of features due yesterday. HPHP will free you from such concerns.

Compilers also provide other niceties. You don’t have to unroll your own loops, or move constant expressions out of loop headers. I don’t know if HPHP supports these, but I’m sure it could.

Allowing your existing code to go faster is hardly the point, though. Really, the point is that you can do more in less time. Suppose you’ve decided that your application needs to respond to the user in 500ms. The DB takes 200ms, the request takes 200ms, the framework takes 50ms, and your code only has 50ms to run [7]. That’s quite a constraint. This leads to people using PHP as a simple templating layer, instead of as the Turing-complete language it is. I expect we’ll hear a lot more about HPHP, simply because of how freeing it is to the user.

So even if you only have a small VPS, instead of massive server farms like Facebook, you’re likely to find a use for HPHP. I’m sure shared hosts will set it all up soon for their users, and everyone will be happy.

Dynamic constructs

A better question is: how widely applicable is a compiler which doesn’t support all of PHP’s dynamic constructs? Funnily enough, I did a bit of research on this. We chose the opposite tactic for phc: trying to stay 100% compatible with Zend, all the extensions (even the 3rd-party, unpublished, top-secret, proprietary ones), etc. You would expect we would first have researched whether this was useful? Well, no. We built it, and then I did some analysis to check whether it was useful. You can read it in depth [8]. Basically, I downloaded 700 packages off SourceForge, and wrote some phc plugins to check for evals and dynamic includes (a dynamic include uses a variable as its parameter, instead of a constant or literal).

Result: 40% of PHP packages use dynamic constructs. Now this isn’t quite as scientific as it should be. Lots of those programs were old, and styles have changed. eval is discouraged these days, but it’s probably still used, if only to get around the weaknesses of the PHP parser. In particular, this doesn’t imply that HPHP is somehow fatally flawed for not supporting dynamic constructs. It just means it might not be so useful to you, the common PHP programmer.

An easy way to get round dynamic includes is to just consider the PHP files in the directory structure. There was a good research paper by Wassermann on supporting that, but I find it very hard going, so you probably will too. Still, a naive approach is to just stick a switch statement in, and compile everything that makes sense. This is how you would deal with things like WordPress plugins. It does mean that if you change your plugins, you’re just going to have to recompile. If you’re using a compiler, I doubt you would find that a problem.

Social stuff

But not everyone is happy with this new compiler, such as, well, me.

Let’s start with a quick whine.

  • I contacted Facebook two years ago to test and demo phc,
  • I went and gave a talk at Facebook, and met a number of engineers,
  • I know they’ve used phc internally in the past,
  • They’re releasing a PHP compiler.

You would think they could at least invite me to the party. Those bastards.

More seriously, I was actually annoyed at all the news reports about HPHP, principally because they were largely bullshit. And I knew it, and I couldn’t say more because I told Facebook I wouldn’t. There is very little more irritating than idiots being wrong on the internet, and the news stories brought out thousands of them! Reddit and Hacker News were literally covered with stories about HPHP. Hundreds of trolls emerged from under their bridges, not knowing the difference between a bytecode-based interpreter, a caching PHP accelerator, and a native compiler (which is fine, until they start saying they’re all the same). Thinking about it now still makes me angry.

I’m also slightly annoyed that people all of a sudden care about PHP compilers. I worked on one for 4 years and I could not convince anyone to give a shit. But now that it’s got the Facebook logo on it, all of a sudden PHP compilers are the greatest thing ever. Bah.

One saving grace is that they didn’t patent it. I have an email in my inbox from one of the HPHP developers saying he couldn’t talk to me about the compiler because they might patent it. That’s pretty shitty. Thank god they open-sourced it instead [9]. It sounds like it was a bit touch and go for a while.

The most important question!

Which brings us to the question: should they have used phc?

Obviously it would be great if they had used phc, and I’m not privy to the reasons they didn’t [10]. The design decisions we made in phc were aimed at maximum compatibility, and the performance suffered [11] as a result. The optimizer was designed to solve these problems, and I believe it would have, but it is not mature enough even now, and was still a twinkle in my eye when HPHP started two years ago.

Facebook was solving their performance problems, not building a PHP compiler for general use. If they were doing the latter, it would be much easier to criticise their approach, but for now I can’t say I would have advised them otherwise. On the other hand, they probably didn’t need to build their own parser – it’s a tricky problem and phc’s parser and front-end are excellent. Had they gone another way, they could probably have started to use phc’s optimizer [12], which, while immature and slow to compile, is pretty state-of-the-art and has great potential (if I do say so myself).

A better approach would probably have been to hire all the programmers who worked on PHP compilers, to get that expertise in-house. They did try to hire me, but only recently. I’m honestly surprised that they haven’t tried to hire Shannon Weyrick, who is currently working on rphp, his second PHP compiler [13].

What does this mean for phc?

When they announced HPHP, I would have said it was phc’s death knell. The original phc authors, Edsko and John, have moved on to other projects, and I’ve run it mostly solo for about two years. But I haven’t worked on phc in about 6 months, and my hatred of PHP makes it unlikely I will again. My requests for new contributors to step up have fallen on deaf ears, and my summer intern hasn’t decided to take over either.

Since no-one wants to take on the compiler, the new competition from Facebook should probably kill it, right? Maybe not. Over the last week, traffic to the phc website has increased by five times [14]. Facebook has unleashed some sort of latent interest in PHP compilers that I haven’t been able to extract from people. So perhaps this might be the rebirth of phc, not its death.

And phc is better than HPHP in some ways. HPHP is almost certainly faster because they didn’t have to deal with eval, dynamic stuff, and because they don’t use the Zend libraries. But phc was specifically designed to work with the Zend libraries, with eval, with everything. So it’s probably a better fit for most projects than HPHP.

If you want to take over phc, then join the mailing lists, download the source code, read the death notice and contribution page, and email me for commit access.

phc will likely live on anyway. The front-end is pretty slick: Facebook ran it over their million lines of code and only had one or two problems. It gives a lovely AST for all sorts of code transformation tools, has a nice plugin interface (for C++ lovers) and an XML interface (for the rest of you), and will spit your code out largely as you put it in. It’s certainly the most mature and well-tested part of the whole project.

The optimizer is pretty slick as well, but in a different way. I’m pretty sure it’s the most advanced static analyser for PHP, and it’s waiting to be put to good use. That said, it’s damn slow, not mature (read: pretty buggy), and itself doesn’t support eval and dynamic includes (surprise!!). The optimizer is waiting for some love — I could imagine it making a pretty nice "automatically find out what types your function may be passed" kind of linter.

Otherwise, phc will only live on as part of the Roadsend Raven compiler. I understand that they’re going to take the optimizer and the parser from phc, and that will be really interesting.

Finally, what does it mean for me? Well, I’ve left that ship already. I’ve hated PHP for a long time, and have no desire to go back to it. I’m doing a startup now, but when I go back to regular employment I will be looking for another scripting-language run-time. There are plenty to choose from, in particular Unladen Swallow and TraceMonkey. Mozilla looks like an amazing place to work, so I think I’ve worked out my backup plan.

[1] I can’t bring myself to call it Hiphop. Worst name ever. Rumor has it that the ‘H’ in HPHP stands for "Haiping", the author of HPHP, so I like that name better.
[2] See chapter 6 of my thesis
[3] This is just a hunch. Obviously I have no way of verifying this, and I don’t really want to read the source when it comes out to check. So as accusations go, this one is obviously pretty baseless.
[4] Says a guy who wrote a PHP compiler. I may be biased.
[5]I’m trying to imply you don’t, but everyone loves more speed. Plus, compilers are cool toys, even if you don’t need them. So 99% of the people who start using HPHP will use it because it’s cool and they love to go fast, not because they’ve carefully considered the design of their project and determined that a compiler would solve something.
[6]68 bytes is on 32-bit systems. I think it’s 96 bytes on 64-bit.
[7] I pulled these numbers out of my arse.
[8] You can read the paper (see Section 7.5) for my method, results, etc.
[9] Open-sourcing and patenting aren’t strictly mutually exclusive, but presumably they won’t patent it now. A firm word on the topic from Facebook would be nice. And yes, I’m aware of the patent climate in America, and how you have to patent everything you can get your hands on, and it doesn’t make you evil. It still fucks with people who write compilers though.
[10] If you know why, let me know. This is open source, I can take it.
[11] I went into a bit of detail in my Google Tech Talk, if you’re interested.
[12] Interested parties can find more information in chapter 6 of my PhD thesis
[13] This is pronounced RoadsEnd, apparently. I’ve been mispronouncing it for years.
[14]The phc website used to get 200 visits per day. On Tuesday it got 1100 visits, going up to over 2000 on Wednesday. And 1000 downloads, by the look of it. Does anyone know how to find out how many people check the code out of a Google Code svn repository?
Posted in Uncategorized | Tagged , , , , , , , , , , , , , , , , , , | 22 Comments

Introducing Malicious Code Reviews

I’m inventing a new sport today, which I call “malicious code reviews”. I spent a few hours reading some really very bad code, and in retaliation against its author(s), I’m going to code review it [1]. The code comes from PHP version 5.2.8, the latest stable release. This particular file is Zend/zend_operators.h [2]. You might want to open it in a new window, or in a popup, so that you can follow along.

I’ll start [3] at the top:

#if 0&&HAVE_BCMATH
#include "ext/bcmath/libbcmath/src/bcmath.h"
#endif

I’ve skipped a few minor problems to go straight to the laughably poor, the #if 0. Funny story though: I saw a “code review” in PHP recently which chastised the addition of an #if 0. I initially thought that, finally, someone was actually stepping up to stop the rot within the PHP engine. Sadly, they instead complained that, according to the rules of the PHP project, an #if 0 must also have the author’s name added to it. The mind boggles.

There is a limited amount of reasonable code, which I’ll skip, followed by the is_numeric_string() function [4], quite simply one of the finest examples of poor code I’ve ever seen. I linked to it above, so I recommend that you actually read along as I go. This will be no fun unless you can actually see it.

It starts off, surprisingly, with the most thorough comment I have yet seen in PHP. It is merely average in terms of what you might read in a gcc source file, but here it is a shiny gold nugget floating in a murky brown sea. However, it degrades fairly rapidly. You might notice this giant function is in a header, and that it is declared static inline. This is a prelude to what’s to come.

The function starts with some whitespace skipping [5]:

while (*str == ' ' || *str == '\t' || *str == '\n'
    || *str == '\r' || *str == '\v' || *str == '\f') {
    str++;
}
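As an aside, a whitespace macro in the style of Zend’s nearby ZEND_IS_DIGIT would collapse that six-way condition to one readable line. The name and definition here are my own sketch, not Zend’s:

```c
/* Hypothetical ZEND_IS_WHITESPACE, written in the same style as
 * Zend's ZEND_IS_DIGIT / ZEND_IS_XDIGIT macros.  Not real Zend code. */
#define ZEND_IS_WHITESPACE(c) \
    ((c) == ' ' || (c) == '\t' || (c) == '\n' || \
     (c) == '\r' || (c) == '\v' || (c) == '\f')

/* The skipping loop then reads as a single obvious line. */
const char *skip_whitespace(const char *str) {
    while (ZEND_IS_WHITESPACE(*str)) {
        str++;
    }
    return str;
}
```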

Just two lines above the function they have two macros, called ZEND_IS_DIGIT and ZEND_IS_XDIGIT. Could they not have added ZEND_IS_WHITESPACE? A pity, but a tiny flaw compared to the gaping maw of despair that follows a little later. The code continues fine: check for a digit, check if it’s hex (with comments, very good), until we come to this loop header:

for (type = IS_LONG;
     !(digits >= MAX_LENGTH_OF_LONG
        && (dval || allow_errors == 1));
     digits++, ptr++)

I’d like somebody to come forward and explain why

type = IS_LONG

is in the loop initialization statement. And why the loop condition is so unreadable. And why the elements of the loop header are not related in any way at all!!! But this is just the start. The next line is a doozy:

check_digits:


Do you feel the fear? I feel the fear. It’s a label. That means that somewhere in this function, there is a goto. Not that there’s anything wrong with gotos. Sure, if misused they can lead to unreadable, spaghetti co– OH MY GOOD GOD. I’ve found some gotos, but they go to a different label. Two labels. And the first one is in a for-loop! Don’t panic, maybe it’s readable. Maybe the second one is also in the for-loop. Please? Pretty please?

Fuck. Fuck fuck fuck. I’d like to suggest you try to work it out yourself, but you’d probably prefer not to. If you haven’t looked at the code yet, now is the time. Don’t worry if you don’t know C — with code like this, knowing the language is not the advantage you’d expect.

The first label, check_digits, is in a for-loop. That for-loop has two gotos (and one continue and one break, just to make things more readable), which both go to the other label, process_double. process_double is outside the loop, deeply nested in a completely separate series of if-else statements. (They have also given up on comments by now.) After checking a few more conditions, only then do you jump back into the previous for-loop!!!! Oh wait. No, that’s not right. They’re in different paths. They actually added control-flow edges from an else-body to within a for-loop in the if-body. Just wow.

I’d like to say that this horrendous function is over. After all, this is just the first function I’ve come across. But while there is only simple (but uncommented) code remaining in the function, I have a final nit to pick.

if (ptr != str + length) {
    if (!allow_errors) {
        return 0;
    }
    if (allow_errors == -1) {
        zend_error(E_NOTICE,
            "A non well formed numeric value encountered");
    }
}

There is a check to see if allow_errors is -1, even though the comment only mentioned two possible values for allow_errors. So what does it mean for it to be -1? We’re saved from figuring it out because the check can’t even trigger. If allow_errors was non-zero, the function would have returned already.

Now, you might consider this a minor nit, and it’s easy to see why it wasn’t fixed when you consider how deeply it was nested [6]. But this sort of thing is the rule, not the exception, in the PHP sources. Broken windows built on top of other broken windows.

The rest of the header is OK, considering. There is a macro that’s not appropriately guarded [7], and a few with no guards and no comments. A few macros use their parameters more than once (the bane of post-incrementers everywhere), and some duplicate code between them, or write the same code in multiple ways [8]. There are then a ton of macros which can be used as lvalues:

#define Z_DVAL(zval)            (zval).value.dval
#define Z_STRVAL(zval)          (zval).value.str.val
#define Z_STRLEN(zval)          (zval).value.str.len

except one

#define Z_BVAL(zval)            ((zend_bool)(zval).value.lval)

where an enterprising soul mustn’t have thought hard before committing. PHP has long been lambasted for its inconsistency; it turns out that this inconsistency is not limited to just its libraries, syntax or semantics.
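To see concretely why the odd one out breaks, here is a minimal sketch with simplified stand-in types (mine, not the real Zend structs):

```c
/* Simplified stand-ins for zval and zend_bool; the real Zend types
 * have more fields, but the lvalue behaviour is the same. */
typedef struct { union { double dval; long lval; } value; } my_zval;
typedef unsigned char my_bool;

#define MY_Z_DVAL(z) (z).value.dval             /* member access: an lvalue  */
#define MY_Z_BVAL(z) ((my_bool)(z).value.lval)  /* cast: NOT an lvalue in C  */

double dval_roundtrip(double d) {
    my_zval z;
    MY_Z_DVAL(z) = d;       /* fine: you can assign through the macro */
    /* MY_Z_BVAL(z) = 1;       error: assignment to a cast expression */
    return MY_Z_DVAL(z);
}
```

A plain member access is an lvalue, so `Z_DVAL(zval) = x` compiles; wrapping the access in a cast produces an rvalue, so the same assignment through Z_BVAL is a compile error.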

To finish off this file is a frankly baffling piece of code. Let this be a lesson about commenting: sometimes the ‘why’ of the comment is not sufficient – occasionally, you will need to explain what the code is doing.

#if HAVE_SETLOCALE && defined(ZEND_WIN32) && !defined(ZTS) \
    && defined(_MSC_VER) && (_MSC_VER >= 1400)
/* This is performance improvement of tolower() on Windows
 * and VC2005
 * GIves 10-18% on bench.php
 */

And we’re done

Unfortunately, this file is not an isolated incident. The entire Zend/ directory — the core of the whole PHP implementation — is a filthy mess. While the is_numeric_string() function might be the worst code I have ever seen, most of the Zend/ files contain a lot of hideous code: poorly organized, badly written, badly documented, unreadable messes.



I’m aware that as a ‘code review’, this is actually pretty poor, primarily due to its lack of constructive criticism. So this is more of a detailed, flame-bait-y rant about the quality of code in this file, which I will use to assure people that the rest of the code I’ve seen in the PHP project is of similar caliber. And none of that is really very constructive.

As it happens, I am preparing a much more constructive piece about why the code in PHP is so bad, how it got this way, and how to fix it. But first I’d like to demonstrate how poor the code currently is, and it’s difficult to do this without some bile and vitriol.

[2] I was originally planning to do zend_operators.c, and thought I’d quickly do the header first. But this was so poor that I ended up writing about it.

[3] I didn’t want to bog down the intro with my method, but it should probably be here anyway.

  • I’m analysing the latest release, PHP 5.2.8. It’s a little difficult to choose which version of PHP to pick on, but the latest stable release is probably not too unfair.
  • All files that I’ll choose come from the Zend/ directory, which makes up the core of the PHP interpreter.
  • I could do some code archaeology, and find out how the code got into the state it did, and perhaps personally hunt down the person who did it, but I won’t. Even if I could find a sole contributor to blame, there are so many broken windows in PHP that I feel the blame should be spread across all the PHP internals developers.
[4] Unfortunately, is_numeric_string() was cleaned up at some point (though the version I review here is what’s in all the 5.2.x releases, and is scheduled to be in 5.3), so this post loses a smidgen of its sting.
[5] I should point out that I’ve tidied up the code to fit in the blog. So if you’re thinking “at least the lines aren’t too long”, well, they are.
[6] When they fixed this particular piece of code, the allow_errors checks became much less obfuscated, but still was not removed, sadly.
[7] A macro guard (if that is indeed the right name, and not one I just made up) is when you wrap a multi-statement macro body in a do-while(0) loop.
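For readers who haven’t met the idiom, here is a small illustration of why the guard matters (my own example, not Zend code):

```c
/* An unguarded multi-statement macro silently breaks under a
 * braceless if; wrapping the body in do { ... } while (0) makes the
 * expansion a single statement, so it behaves like a function call. */
#define SWAP_GUARDED(a, b) \
    do { int tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

/* Return the smaller of two ints by conditionally swapping.  The
 * guarded macro expands to one statement, so the braceless if works.
 * An unguarded `int tmp_ = (a); (a) = (b); (b) = tmp_;` here would
 * put only the declaration under the if, and adding an else would
 * not even compile. */
int smaller_first(int a, int b) {
    if (a > b)
        SWAP_GUARDED(a, b);
    return a;
}
```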

[8] If you know Zend internals, spot the difference:



if (!(*ppzv)->is_ref) SEPARATE_ZVAL(ppzv);
Posted in Uncategorized | Tagged , , , , , , , | 10 Comments