Open vs. Fast, Good vs. Evil, Google vs. Facebook

The landscape of the community-engineered social web, the one based on open technologies, has changed dramatically over the past few months. If you took a year off and just came back, you would probably not recognize it at all.

The movement that started with protocols such as OpenID, OAuth, and Activity Streams is now mostly gone. All the cool kids got grown-up jobs, and the market is once again driven by a small number of corporations. In fact, the number is so small it can be counted on two fingers. A year ago, a meeting with Chris Messina, David Recordon, Joseph Smarr, Monica Keller, Will Norris, Luke Shepard, and John Panzer represented seven different organizations or communities – a well-balanced mix of big and small, corporate and independent.

Today it’s just Facebook and Google, and that has significant implications. But when you examine how these two companies engage in the development of open technologies, the findings are quite surprising. On the product side, Google is famous for their openness while Facebook is notorious for their walled garden. But when it comes to community engagement, the two giants behave in precisely the reverse fashion.

Open technology is slow by definition.

That’s assuming you accept my definition of Open Technology:

Developed in the open with full transparency
Open process for anyone to participate freely
Everyone is free to implement
Decisions are made based on technical merit

Open technology doesn’t have to happen in a standards body or formal open source organization, but important work usually does, because if it’s important, there are simply too many people collaborating to be successful without a well-defined process and governance. Successful open technology acts very much like proprietary technology – both tend to block or delay new innovation. As soon as something becomes successful, it poses a high risk to those absent from the process.

OAuth 1.0 created a myth that it is possible to develop open technology fast. But in reality, OAuth 1.0 took a little over a year, and that’s with a tiny, well-aligned group of people almost exclusively from one town, with very little at stake. The numbers are even less impressive if you include getting to revision A as part of the 1.0 lifecycle.

OpenID had a very similar experience where version 1.0 (really 1.1) took little time (mostly created by two people) while version 2.0 took a very long time. Deciding what the next version of OpenID looks like seems to take even longer.

Two years ago I was one of the leaders of this movement, which was the main driving force behind creating the Open Web Foundation (essentially a lightweight legal framework suited for small, community-driven projects). The problem with this approach is that it ignores why and how companies develop open technology, and the real reasons why it takes time.

People new to standards like to blame the excessive process and legal framework for the time it takes to produce a standard. But this argument is simply not grounded in reality. For example, XRD 1.0, which is quickly becoming one of the building blocks of the social web, took over two years to mature, from XRDS through XRDS-Simple to XRD. During those two-plus years, the OASIS process (where the work was done) did not slow us down by more than a week or two. What slowed XRD’s development was a lack of feedback, review, and editorial time. But even these factors were not the primary reason.

There are two reasons why specifications take time: building consensus and reaching maturity. Consensus time is usually a function of the group size: the bigger the group, the longer it takes. Maturity, however, is a function of taking intentional breaks during the process to let the technology sink in and allow time for experimentation. Maturity is what differentiates a useful technology from an essential one.

If I had listened to the many calls to get XRDS-Simple done two years ago because people wanted to ship stuff “now”, we would have ended up with a broken discovery architecture and a complex format that no one really needed. On occasion, these demands for finalizing specifications took the form of subtle threats to develop competing proposals. Getting the right people to read specifications takes time, and it often means waiting for the use cases to catch up with the vision.

If you compare the initial proposal of site-meta with the final RFC for Well-Known URIs, you immediately notice a remarkable transformation, the result of a year-long discussion and collaboration across three standards bodies. My discovery work follows the same pattern, from the initial OAuth Discovery work to the current, much simplified host-meta proposal (and the retirement of the stand-alone LRDD specification).
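To give a rough sense of where that discovery work landed, here is a minimal sketch of what host-meta lookup amounts to: fetch an XRD document from a well-known location on the host and pull out a link template for per-resource descriptors. This is not taken from any of the specifications themselves; the host name is hypothetical, the lrdd relation and XRD namespace reflect my reading of the drafts of that period, and Python is used purely for illustration.

```python
# Illustrative sketch only: roughly what host-meta discovery boils down to,
# assuming the XRD 1.0 namespace and an "lrdd" link relation as described in
# the drafts of the time. The host "example.com" is hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

XRD_NS = "{http://docs.oasis-open.org/ns/xri/xrd-1.0}"

def discover_lrdd_template(host):
    """Fetch https://<host>/.well-known/host-meta and return the LRDD
    link template used to locate per-resource descriptors, if present."""
    url = "https://%s/.well-known/host-meta" % host
    with urllib.request.urlopen(url) as response:
        xrd = ET.parse(response).getroot()
    for link in xrd.findall(XRD_NS + "Link"):
        if link.get("rel") == "lrdd":
            return link.get("template")
    return None

# Hypothetical usage:
#   discover_lrdd_template("example.com")
#   might return something like "https://example.com/describe?uri={uri}"
```

The point of the simplification described above is that discovery ends up being roughly this small.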

As with anything else, early adopters have to accommodate this reality, and it can often lead to frustration. The Open Web Foundation is one example of how this frustration with standards bodies materialized into an (unsuccessful) attempt to create a whole new framework. While the foundation was very successful in creating a useful legal license, it was not significant enough to get new technology out faster. Ironically, it made it much easier for companies to develop new technologies internally, then open them up when done.

In order to understand the forces that drive the development of open technologies, we must first examine how companies address these issues. There are generally three categories of companies when it comes to interaction with open technology: traditional, agile, and delayed-open. It is important to understand that these categories are specific to how companies interact with open technology, and are not an indication of how they interact in the marketplace.

Traditional companies are rarely first to adopt new technology, wait for final versions before implementing, and actively participate only when a specification takes a wrong turn. They are usually absent from the process or lurk around, but engage with some form of indirect support (such as sponsoring events or supporting the community). Traditional companies tend to focus on building business relationships with other similar companies and establishing back-channel alignment, which enables them to let others represent their interests. Yahoo! and Twitter are good examples of traditional companies.

Agile companies live on the cutting edge, deploy early drafts, and are not afraid to change quickly. They are usually very hands-on, often leading the development efforts, and maintain some level of control over the process, which helps reduce risk by giving them a better understanding of the proposal’s stability. Agile companies are usually very small, where being agile provides a necessary edge, or very big, where they can afford to take more risk and experiment because of their market dominance. Facebook and (the now defunct) Pownce are good examples of agile companies.

Delayed-open companies are very selective in what they choose to participate in, and are rarely involved in efforts they do not control. The basic strategy of delayed-open companies is to develop as much as possible internally and confidentially. They only share their work when they are ready to ship a product or when they are confident in the stability of the technology. While these technologies end up being open (typically by opening their development to final enhancements, or by contributing them to a standards body), by the time they are opened up, very little can really change. Google and Microsoft are good examples of delayed-open companies.

Companies looking to engage in the development and utilization of open technologies must find the right balance that fits their culture and market needs. Google and Facebook provide important case studies on how companies approach open technology from opposite ends of a given market. When it comes to social web applications, Facebook is the undisputed market leader, while Google is one of the hardest-trying but least successful players.

Both companies recently made high-profile hiring moves, grabbing every free-agent open technologist in the space. Facebook hired David Recordon and Monica Keller (joining Luke Shepard and others), while Google hired Chris Messina, Joseph Smarr, and Will Norris (joining John Panzer, Brad Fitzpatrick, and others). But their reasons for doing so are fundamentally different, dictated by their involvement category.

Facebook, with time on their side, hired people to help open up their technology and use their market dominance to influence and lead new efforts. Their position allowed them to deploy an early draft of OAuth 2.0 in a highly publicized fashion, and to promise to keep their production code up to date as the specification matures. To accomplish that, their engineers engaged early and intensely, contributing the first draft and running code. The more Facebook invests in open technologies, the longer it takes these technologies to mature (due to their sudden importance and popularity), but with a higher likelihood that they actually will.

Google on the other hand, hired top caliber talent for very different reasons. Google is significantly behind Facebook and cannot afford to take years (or even months) for new technologies to mature. They need to build products and ship them quickly, and that requires much more control than is available in open development. By hiring highly respected individuals like Chris Messina and Joseph Smarr, Google is hoping to get away with doing more work delayed-open. Salmon, PubSubHubbub, WRAP, and their OpenID extensions are all examples of past technologies developed successfully using this method.

While Facebook shipped an early draft of OAuth 2.0, Google shipped WRAP. Both promised to ship the final OAuth 2.0 protocol.

It is hard to say which category ends up making the biggest contribution to the development of open technologies. Google is clearly a leader in adopting open technology in general, but the way in which they get comfortable doing so isn’t always the most open or polite. Google’s style helps get more free technology out faster, but it also makes it much harder for other companies to participate once the foundation is established. Google is leading an Open war, and in war, there are often innocent casualties.

There is a direct correlation between a company’s ability to control the process and their embrace of the technology. Facebook came in late to the WRAP party, leading them to favor the (at the time) less prominent and less successful OAuth 2.0 effort at the IETF (while Google did their very best to derail the IETF effort). In the same spirit, Google’s bear hug of HTML5 came only after they guaranteed their strong control over the process by hiring the specification editor (for the sole purpose of getting HTML5 done fast).

The manifestation of these two approaches is sometimes ironic. Facebook, which is not known for their embrace of open protocols and technology, is the one enabling open communities to take their time and spend a little longer to get the details right. On the other hand, Google, which is considered the biggest champion of openness, is doing their best to push efforts to a quick conclusion, strongly aligned with their immediate needs.

While this analysis includes a slight bias in favor of the contribution of agile companies, delayed-open companies play an important role in creating alternatives not possible by purely open means alone. Facebook only turned the page and joined the open community when it felt the open community no longer posed a threat to their dominance, and their behavior depends greatly on the individuals leading the effort. The Facebook influence at the hands of David Recordon (who at this point is by far the most influential voice in advocating open technologies) is an extremely constructive force. However, it is clear that without Google chasing their tail and portraying them as closed, Facebook would be much less motivated to open up.

It is also important to remember the significant contribution of traditional companies. They provide the final seal of approval for new technologies and are the key to mass adoption. They are also critical in getting enough community sponsorship to allow work to continue.

In the end we rely on a few individuals to do the right thing. On the Google side, we rely on people like Chris Messina and Joseph Smarr to know where to draw the line between delayed-open and fully open. Because delayed-open development leaves very little for the community at large to influence, they must find the right balance between pragmatic time-to-market and forcing their own ideas on the rest of us. When Chris and Joseph joined Google, I predicted exactly this kind of shift. I have confidence that their work will continue to be innovative and inspiring, but the silence and disengagement coming from the Google team over the past six months is a real cause for concern.

6 thoughts on “Open vs. Fast, Good vs. Evil, Google vs. Facebook”

  1. Interesting analysis.

    Though I’d push back a little on the last paragraph, especially the part about Google going silent. Many of the most active participants on the technology mailing lists you refer to (Activity Streams, OpenID, OAuth, PubSubHubbub, Salmon, etc.) are still the same people as before, such as Eric Sachs, Brian Eaton, Breno de Medeiros, Chris Messina, John Panzer, Dirk Balfanz, Will Norris, Brad Fitzpatrick, Bob Wyman, Brett Slatkin, etc.:

    http://groups.google.com/group/oauth/about

    http://groups.google.com/group/activity-streams/about

    http://lists.openid.net/mailman/listinfo

    http://www.ietf.org/mail-archive/web/oauth/current/maillist.html

    http://groups.google.com/group/pubsubhubbub/about

    http://groups.google.com/group/salmon-protocol/about

    http://groups.google.com/group/webfinger/about

    If you look at the above lists, many of the most prolific contributors to the public mailing lists do indeed work for Google, adding value very much out in the open. And they’re posting on the public lists for a reason: we have a strong in-house rule that we’re not to discuss open technologies privately if there is a preferred public forum. (A nod toward the ‘agile’ model.)

    I should know; I’m one of the many teammates who help enforce that rule.

    It’s designed to be the opposite of delayed-open — we’re telegraphing as clearly as possible exactly what we’re going to do (see my early posts on Buzz, for example), and snapping to, and contributing openly to, external specs whenever they’re available. (And yes, OAuth 2.0 will be among them, conspiracies be damned.)

    On a rhetorical note, it’s not really appropriate for people on the outside to state why Google hired particular individuals. Speaking as someone who does in fact know (by and large, I helped hire many of them), I’m 100% certain you don’t have the full story here. : ) You are free to speculate, of course (just as I am free to correct you), but please do qualify statements like that, lest they be taken out of context and misunderstood.

    I also of course applaud David and Luke and Monica getting increasingly involved in an official FB capacity. David and Monica of course have always been actively involved in their previous jobs, and it’s *great* that they are helping FB do the same more broadly. And if Google helped pressure Facebook to be more open and engaged with the community spec process, I can’t say that was an accident. : ) But it wasn’t just Google — it was the entire cottage industry of open protocols and the excitement that those communities have brought to the table.

    But more importantly than anything else, what are you suggesting should be done? Both by Google and by Facebook, but also by other companies and individuals? I’d love to hear your ideas here.

    • PubSubHubbub, Salmon, Wave, and other proposals have followed the delayed-open model to the letter. Google did not share their plans for the OpenID extensions and proprietary discovery work until it was done and deployed with a few partners, and even that was a slip by Eric Sachs sent to the wrong board list.

      The common theme at IIW was Google employees criticizing new efforts and proposals based on extensive internal work and hinted-at new products and efforts. The spirit was very much that of an outsider rather than an integral part of the community. And not everyone was busy with I/O.

      What motivated Google to hire any particular individual is of no interest to me. Late last year I was told about some high-level meetings inside Google, with at least one of the co-founders, discussing how to be more competitive in the social space and coming up with a strategy of shifting the focus to products instead of individual technologies. These hires, as a whole, can only be explained as a direct continuation of this internal strategy.

      Google is an extremely savvy company, and one of the things it does better than most is sell the idea that what is good for Google is good for the web.

      • I think we need to define “delayed-open” a little bit better, as that seems to be the heart of the matter.

        (And as an aside, and with all due respect, it remains awkward — not just here, but anywhere — to be told by someone on the outside why we did the things we did. From where I sit — as someone directly involved in these decisions — you are mistaken on several important points, but it is hard for me to correct the incorrect statements without sounding argumentative. So while I may not be able to convince you personally, I do hope to clear up any lingering misperceptions.)

        Regarding Wave, we announced the product and launched it to beta testers a full year before shipping it for everyone, releasing a freely licensed specification and community-driven process on the very day we announced it (see waveprotocol.org). Since the beta and subsequent open source launch we’ve done nothing but iterate out in the open with a community of thousands (and hundreds on the protocol specs), to the point that, if I’m remembering this correctly, we ourselves first found out about a major third-party implementation of the federated protocol the day the public did. We of course waited until we had a beta and running code before announcing in 2009, but that’s hardly something we should be faulted for. (Just imagine the reaction if we had announced only the *plan* to build Wave several years back, but hadn’t actually shown anything!)

        Regarding the OpenID extensions you’re referring to, I don’t know the backstory, so I can’t say that we didn’t do something wrong there. I’ll have to check; you may well be correct in voicing a frustration here. I’m not that enthused about how much work in the OpenID space has in the past been negotiated in private, either, though lately (just my impression) things are turning around, knock on wood.

        Regarding both Salmon and PubSubHubbub, we need only look at the mailing lists to see the timelines and evolutions of the protocols. There you will see that for Salmon, John Panzer clearly and repeatedly talked publicly about the need for the protocol (initially at IIW and on his blog), and did every last ounce of that work publicly, before even a single line of code was written and before any decisions were locked in. Regarding PSHB, Brett and Brad had a working implementation running to prove its viability (and to prove Google’s commitment, lest it sound like we were simply asking other people to be guinea pigs), but again, the entire thing has been designed and discussed openly, with tons of contributors from around the web.

        But most importantly, what would you have us do differently than the above? <– sincere question; it is part of my job to fix such things, and I do want to help.

      • Some of the differences can be attributed to having completely different points of view. Others are based on personal experiences. If you read my analysis carefully, you will see that I am not sure there is an alternative. Companies trying to catch up from way behind are required to play by different rules.

        I am strongly biased against working in secrecy on something that is meant to be open, and only then revealing it. I have been very successful talking about ideas when I didn’t have anything written up.

        My personal experience interacting with Google over the past year has been very mixed. John Panzer is a great example of someone who consistently reaches out and engages where the work is done. Others, whose names I won’t mention, are more likely to make threats of the sort “if you don’t publish a draft by next week, we are going to go with our own thing”. I was told that more than a handful of times, and always by someone from Google.

        I think the best example is Google’s behavior with regard to OAuth and WRAP. Even when they were somewhat engaged (there was a long silence during the first half of the year while the Google team was working hard behind the scenes trying to shift attention towards WRAP), there was a clear undertone. This is an example where I am not at liberty to share everything I know.

        So is there anything you can do to fix this? Being aware of it and acknowledging the challenge is a start. Beyond that, I don’t have answers. This is just how things are. A year ago Google was my first stop with every single project. These days talking to Google feels like a waste of time.

  2. For a while now I have been complaining that Facebook’s blatant disregard for “open standards” (beyond using it as a marketing term) is doing greater damage than even their position on privacy.

    However, your post has broadened my perspective even further and helped me realize I am just a newbie when it comes to open definitions and the process that goes with them.

    Good stuff. Thanks. Still processing it.

  3. Hello,
    I’m just getting up to speed with OAuth and all the issues involved with the topic, and I wanted to say thank you for your writings. Rarely when learning new web technologies do I find such accessible explanations of “How we got to where we are” — I wish it happened more.
