OAuth 2.0 and the Road to Hell

Update: three years later I wrote something new… introducing Oz.

They say the road to hell is paved with good intentions. Well, that’s OAuth 2.0.

Last month I reached the painful conclusion that I can no longer be associated with the OAuth 2.0 standard. I resigned my role as lead author and editor, withdrew my name from the specification, and left the working group. Removing my name from a document I have painstakingly labored over for three years and over two dozen drafts was not easy. Deciding to move on from an effort I have led for over five years was agonizing.

There wasn’t a single problem or incident I could point to in order to explain such an extreme move. This is a case of death by a thousand cuts, and as the work was winding down, I found myself reflecting more and more on what we actually accomplished. In the end, I reached the conclusion that OAuth 2.0 is a bad protocol. WS-* bad. It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career.

All the hard-fought compromises on the mailing list, in meetings, in special design committees, and in back channels resulted in a specification that fails to deliver its two main goals – security and interoperability. In fact, one of the compromises was to rename it from a protocol to a framework, and another was to add a disclaimer warning that the specification is unlikely to produce interoperable implementations.

When compared with OAuth 1.0, the 2.0 specification is more complex, less interoperable, less useful, more incomplete, and most importantly, less secure.

To be clear, OAuth 2.0 in the hands of a developer with a deep understanding of web security will likely result in a secure implementation. However, in the hands of most developers – as has been the experience of the past two years – 2.0 is likely to produce insecure implementations.

How did we get here?

At the core of the problem is the strong and unbridgeable conflict between the web and enterprise worlds. The OAuth working group at the IETF started with a strong web presence. But as the work dragged on (and on) past its first year, those web folks left, along with every member of the original 1.0 community. The group that was left was largely all enterprise… and me.

The web community was looking for a protocol very much in line with 1.0, with small improvements in areas that had proved lacking: simplifying signatures, adding a light identity layer, addressing native applications, adding more flows to accommodate new client types, and improving security. The enterprise community was looking for a framework they could use with minimal changes to their existing systems, and for some, a new source of revenue through customization. To understand the depth of the divide – in an early meeting the web folks wanted a flow optimized for in-browser clients, while the enterprise folks wanted a flow using SAML assertions.

The resulting specification is a designed-by-committee patchwork of compromises that serves mostly the enterprise. To be accurate, it doesn’t actually give the enterprise all of what they asked for directly, but it does provide for practically unlimited extensibility. It is this extensibility and required flexibility that destroyed the protocol. With very little effort, pretty much anything can be called OAuth 2.0 compliant.

Under the Hood

To understand the issues in 2.0, you need to understand the core architectural changes from 1.0:

  • Unbounded tokens – In 1.0, the client has to present two sets of credentials on each protected resource request, the token credentials and the client credentials. In 2.0, the client credentials are no longer used. This means that tokens are no longer bound to any particular client type or instance. This has introduced limits on the usefulness of access tokens as a form of authentication and increased the likelihood of security issues.
  • Bearer tokens  – 2.0 got rid of all signatures and cryptography at the protocol level. Instead it relies solely on TLS. This means that 2.0 tokens are inherently less secure as specified. Any improvement in token security requires additional specifications and as the current proposals demonstrate, the group is solely focused on enterprise use cases.
  • Expiring tokens – 2.0 tokens can expire and must be refreshed. This is the most significant change for client developers from 1.0 as they now need to implement token state management. The reason for token expiration is to accommodate self-encoded tokens – encrypted tokens which can be authenticated by the server without a database look-up. Because such tokens are self-encoded, they cannot be revoked and therefore must be short-lived to reduce their exposure. Whatever is gained from the removal of the signature is lost twice in the introduction of the token state management requirement.
  • Grant types – In 2.0, authorization grants are exchanged for access tokens. A grant is an abstract concept representing the end-user’s approval. It can be a code received after the user clicks ‘Approve’ on an access request, or the user’s actual username and password. The original idea behind grants was to enable multiple flows. 1.0 provides a single flow which aims to accommodate multiple client types. 2.0 adds a significant amount of specialization for different client types.
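The expiring-token change in particular pushes new bookkeeping onto every client. As a rough sketch only (the class and callback names here are hypothetical, not from any spec or SDK), the state management a 2.0 client now carries looks something like this – where a 1.0 client simply signed each request with its long-lived credentials:

```python
import time


class TokenManager:
    """Minimal sketch of the client-side token state management that
    OAuth 2.0's expiring tokens require. Names are illustrative."""

    def __init__(self, access_token, refresh_token, expires_in, refresh_fn):
        self.access_token = access_token
        self.refresh_token = refresh_token
        self.expires_at = time.time() + expires_in
        # refresh_fn stands in for a call to the provider's token
        # endpoint; it returns (access_token, refresh_token, expires_in).
        self.refresh_fn = refresh_fn

    def get_token(self):
        # A 1.0 client never needed this check: tokens did not expire.
        if time.time() >= self.expires_at - 30:  # 30s clock-skew margin
            token, refresh, expires_in = self.refresh_fn(self.refresh_token)
            self.access_token = token
            self.refresh_token = refresh or self.refresh_token
            self.expires_at = time.time() + expires_in
        return self.access_token

    def auth_header(self):
        # A bearer token is presented as-is: nothing cryptographically
        # binds the request to the client, which is why TLS is mandatory.
        return {"Authorization": "Bearer " + self.get_token()}
```

Note what is absent: no signature, no client credentials on the resource request – whoever holds the token string holds the access.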

Indecision Making

These changes are all manageable if put together in a well-defined protocol. But as has been the nature of this working group, no issue is too small to get stuck on or leave open for each implementation to decide. Here is a very short sample of the working group’s inability to agree:

  • No required token type
  • No agreement on the goals of an HMAC-enabled token type
  • No requirement to implement token expiration
  • No guidance on token string size, or any value for that matter
  • No strict requirement for registration
  • Loose client type definition
  • Lack of clear client security properties
  • No required grant types
  • No guidance on the suitability or applicability of grant types
  • No useful support for native applications (but lots of lip service)
  • No required client authentication method
  • No limits on extensions

On the other hand, 2.0 defines four new registries for extensions, along with additional extension points via URIs. The result is a flood of proposed extensions. But the real issue is that the working group could not define the real security properties of the protocol. This is clearly reflected in the security considerations section, which is largely an exercise in hand waving. It is barely useful to security experts as a bullet-point list of things to pay attention to.

In fact, the working group has also produced a 70-page document describing the 2.0 threat model, which does attempt to provide additional information but suffers from the same fundamental problem: there isn’t an actual protocol to analyze.

Reality

In the real world, Facebook is still running on draft 12 from a year and a half ago, with absolutely no reason to update their implementation. After all, an updated 2.0 client written to work with Facebook’s implementation is unlikely to be useful with any other provider, and vice versa. OAuth 2.0 offers little to no code reusability.

What 2.0 offers is a blueprint for an authorization protocol. As defined, it is largely useless and must be profiled into a working solution – and that is the enterprise way. The WS-* way. 2.0 provides a whole new frontier for selling consulting services and integration solutions.

The web does not need yet another security framework. It needs simple, well-defined, and narrowly suited protocols that will lead to improved security and increased interoperability. OAuth 2.0 fails to accomplish anything meaningful over the protocol it seeks to replace.

To Upgrade or Not to Upgrade

Over the past few months, many asked me if they should upgrade to 2.0 or which version of the protocol I recommend they implement. I don’t have a simple answer.

If you are currently using 1.0 successfully, ignore 2.0. It offers no real value over 1.0 (I’m guessing your client developers have already figured out 1.0 signatures by now).

If you are new to this space, and consider yourself a security expert, use 2.0 after careful examination of its features. If you are not an expert, either use 1.0 or copy the 2.0 implementation of a provider you trust to get it right (Facebook’s API documents are a good place to start). 2.0 is better for large scale, but if you are running a major operation, you probably have some security experts on site to figure it all out for you.

Now What?

I’m hoping someone will take 2.0 and produce a 10 page profile that’s useful for the vast majority of web providers, ignoring the enterprise. A 2.1 that’s really 1.5. But that’s not going to happen at the IETF. That community is all about enterprise use cases and if you look at their other efforts like OpenID Connect (which too was a super simple proposal turned into almost a dozen complex specifications), they are not capable of simple.

I think the OAuth brand is in decline. This framework will live for a while, and given the lack of alternatives, it will gain widespread adoption. But we are also likely to see major security failures in the next couple of years and the slow but steady devaluation of the brand. It will be another hated protocol you are stuck with.

At the same time, I am expecting multiple new communities to come up with something else that is more in the spirit of 1.0 than 2.0, and where one use case is covered extremely well. OAuth 1.0 was all about small web startups looking to solve a well-defined problem they needed to solve fast. I honestly don’t know what use cases OAuth 2.0 is trying to solve any more.

Final Note

This is a sad conclusion to a once promising community. OAuth was the poster child of small, quick, and useful standards, produced outside standards bodies without all the process and legal overhead.

Our standards making process is broken beyond repair. This outcome is the direct result of the nature of the IETF, and the particular personalities overseeing this work. To be clear, these are not bad or incompetent individuals. On the contrary – they are all very capable, bright, and otherwise pleasant. But most of them show up to serve their corporate overlords, and it’s practically impossible for the rest of us to compete.

Bringing OAuth to the IETF was a huge mistake. Not that the alternative (WRAP) would have been a better outcome, but at least it would have taken three fewer years to figure that out. I stuck around as long as I could stand it, to fight for what I thought was best for the web. I had nothing personally to gain from the decisions being made. In the end, one voice in opposition can slow things down, but can’t make a difference.

I failed.

We failed.

Some more thoughts…

153 thoughts on “OAuth 2.0 and the Road to Hell”

  1. I can’t decide if I should feel guilty for dropping out immediately after IETF San Francisco, or if I should feel grateful I didn’t waste any time on the OAuth 2.0 fight. Every now and then I would see it discussed at an IIW, and be invited back into looking at the mailing list again, but the energy and focus I loved about the OAuth 1.0 experience just wasn’t there, and I could see all the crap forming (what you call the WS-* way) that we intentionally kept chopping away when writing 1.0.

    Eran, when we meet again next, I will buy you a beer for your heroic efforts.

    To all that come after, who have to implement 2.0, we are deeply sorry. It wasn’t our fault.

  2. >a new source of revenues through customization

    This is your key point and the driver factor to fabricate this standard the way it gets as per your article. This goal is perfectly aimed and hit with the mix of details and the lack of that and with the anticipated coming different standards and non-standard solutions.

    • Peter, I have no idea what you are trying to say.
      Are you writing in human-readable encrypted code that only your intended audience would understand?

    • I think I managed to decode it:

      Literal translation:
      “>a new source of revenues through customization
      The OAuth2 specification aimed to help enterprises get moar money, and it succeeded by writing a specification the way enterprises usually do: incredibly detailed while managing to impose no concrete restrictions/specifications”. I think the implication here is that OAuth2 is essentially meant to help enterprises cover their own ass should they get caught with their pants down. The specification is vague enough that it can be implemented shoddily, preferring time and flexibility over actual security if required, but the specification exists so that it can be blamed when something goes wrong. Excellent point.

      • lol’d so hard at Peter Verhas’s comment, trying to understand wtf he was saying! He’s either an epic troll… or . 😀

  3. Eran,

    we are in 2012, and there are two things we are sure of: a) large companies have a vested interest in failing standards (ebXML, WS-*, UML, BPMN/BPEL…), and they have mastered the process to achieve that goal; b) WYOS (write your own spec) works. I don’t see why a well-written spec, with the feedback of its users/implementers, can’t take off. You no longer need “them”

    • I loved this article, by the way. Just curious… what’s the benefit of failing UML?

  4. First, I would like to commend you for admitting that, despite the hard work of yourself and others, OAuth 2.0 is a failure. It takes a strong person to admit failure these days.

    But we also should all acknowledge the biggest failure: the entire web stack. Today, it is nothing more than one hack layered upon another, from the very bottom to the very top. HTTP is full of miserable kludges, from its support for caching to its support for compression. Cookies are an ugly hack. SSL/TLS is another hack. But none of those are as bad as JavaScript, which is among the worst scripting languages ever devised, even if people who only know JavaScript and PHP claim otherwise. HTML5 is a severe regression from the rigor and sensibility of XHTML, and CSS has always made practical layout and styling far more difficult than it ever should be. Given that it’s a patchwork of hacks, the security is horrid.

    Security can’t be tacked on 15 or 20 years later, as has been attempted with OAuth. It is, of course, possible to semi-successfully add another hack to the existing pile of hacks that make up the web, as was done with OAuth pre-2.0. But it’s damn near impossible to go beyond mere hacks, as we’ve seen with OAuth 2.0. To make something like OAuth 2.0 succeed, you need to throw away essentially all of the existing web stack and rebuild it from scratch, taking security into account from the very beginning. If you did that, however, you’d find that hacks like OAuth wouldn’t even be needed in the end!

    • I agree with most of what you say, and the feeling is that the web stack is something that went out of control. I agree that CSS is not the simple joy I would expect it to be (try to align a div horizontally, or god forbid vertically, without 3 visits to Stack Overflow and JSFiddle). HTML5 is not answering many things we would hope it would, there are too many JS frameworks (most of them are great), and even though I disagree about JavaScript (well, I’m just reading JavaScript: The Good Parts, so I’m biased), I agree that having to console.log everything just to learn what an Ajax call returns (or reading the documentation, if there is any) is a pain which doesn’t really exist in Java / Scala or other strongly typed languages. I was a strong supporter of Flex, and hoped JavaFX would succeed, but it turns out the web has a life of its own. The developer community, the Rails one especially in my opinion, has created something very elegant in a world without elegance: things like SASS/SCSS/LESS, things like CoffeeScript, like Ruby on Rails / Grails / Django / Play framework, and let’s not forget jQuery, and even Dojo and the commercial, but great, Sencha / ExtJS. The people who did the web standards might have made errors, but the enterprise and the standards organizations are just a little behind, that’s all; they have much more legacy stuff (people and machines) to satisfy. I think what needs to happen is to have the “hacker community” meet the “software engineer II” community and share ideas and concepts, and not just be “us” and “them”. Corporate software developers solve hard, real problems; they just use XML and SOAP and XSLT and all that WS-* nonsense because they are a little behind, that’s all, and also because no one has shown them the right way. I work in a corporate, a financial corporate, and people do want to learn. They do get excited hearing about Node.js, and about Thrift and protocol buffers; they like seeing the benefits of MongoDB; they appreciate the speed of Ruby on Rails / Grails; and in some rare cases, they eventually adopt new things. I think instead of “hating Java” and overengineered software, hating Windows, SOAP, and relational databases (except MySQL), people should just show the enterprise engineers that there are other ways. Most of them simply don’t know, as they have no time, or simply think the world revolves around Java EE / Spring / Hibernate…

      • Just a helpful nudge with your JS debugging (assuming you’re debugging in the browser and not in a Node.js server app). The WebKit developer tools, available in both Safari and Chrome, allow you to set breakpoints and debug your JS much as you might debug compiled code in an IDE. No need to console.log all that stuff. I believe Firebug in Firefox provides the same functionality, but it’s been a long time since I’ve used it.

      • If you are an Eclipse user, there are packages you can install which give you all the bells and whistles of other IDEs while coding in JavaScript, including syntax coloring, smart indenting, and best of all an AST parser which gives you context-sensitive information, aka IntelliSense.

      • (try to align a div horizontally or god forbid vertically without 3 visits to stackoverflow and jsfiddle)

        …HA! Been there… so true

      • This thread is about “the web stack as one big hack”. The parent comment illustrates one hackish aspect of the stack. At least that is my inference.

    • I used to feel similarly to you until I thought more about why so many heralded better technologies failed:

      XHTML failed because that “rigor and sensibility” never existed: in practice, XHTML was unusable for any real web site unless you gave up all hope of validation – due entirely to the various standards committees in the XML ecosystem trying to build in enormous amounts of complexity in an attempt to solve every problem in advance. In practice, this caused no problems except for people who wasted time trying to produce valid XHTML, and the rest of the web converged on HTML5 because it solves problems authors actually have. It turns out that not penalizing people for moving ahead of a committee is a great way to make new things which normal people want to use.

      HTTP might be miserably kludgey in your opinion but it’s solved more real problems than any other protocol in existence because of its simplicity and extensibility. Remember CORBA? Remember SOAP and the million WS-* protocols? All heralded as better ways to build Serious Applications but so complex that in practice no two implementations worked while simple HTTP applications work routinely on harder problems with an insane number of different clients.

      JavaScript: again, not great but it had enough flexibility that people have been able to bootstrap more advanced constructs than most critics thought were possible. Various self-proclaimed better languages tried to claim that position but again, they all failed because they added huge amounts of complexity to every application with little real benefit. Remember client-side Java, with 10-100x the code for any example, littered with AbstractFactoryFactory and developers at Sun full of hubris deciding that all existing graphics APIs were unusable so they needed to invent their own?

      As far as security goes, again, there are some areas for improvement but no other system in the history of computing has offered millions of people anything remotely as secure as the current web, warts and all.

      The simple fact is that building massive heterogeneous distributed systems is hard: it’s easy to design castles in the air, but actually building them takes longer and costs more than expected, with generally little to show at the end. Much as people like to think otherwise, we’re simply not smart enough to build enormous systems top-to-bottom without constant real-world feedback. You get that by making designs as simple as possible to solve a problem which real people are demonstrated to have, and iterating quickly. Trying to redesign the web stack top to bottom is our field’s quest for perpetual motion.

      • Well said. People keep looking for a silver bullet in a complicated universe. And with security not even built into the infrastructure, it is difficult to retrofit. Protocols like this are great in theory, but there is no mention of the infrastructure required to support them.

    • Yeah, I absolutely agree that JavaScript is suboptimal; it’s unmaintainable and a pain to debug, and you don’t really want to use it for anything spanning more than 20-or-so lines of code. It also used to be slow, though less so these days with V8 engines and optimizations.

      On the other hand… machine language and assembly language are also a pain to maintain and debug, yet machine language is always down there at the very bottom of the tech stack somewhere. As I see JavaScript, it’s evolving more and more into a standard “machine language” of the web. Not a perfect standard, but something that you can build sensible higher-abstraction “compilers” on top of, just as C took over when assembly language became unmanageable.

      Same could be said of HTML and CSS. Neither is perfect, both do the job, and if you don’t like the lack of structure, wrap it in something more structured.

    • Yes! It’s hard to understand why an industry that claims to be rapidly deploying new technologies is stuck hacking up old and worn-out protocols. X.509 is the best example – supposedly the core of web security… It’s ’80s technology that’s been patched and extended, but still uses the ’80s idea of a grand Directory service. We need a clean slate.

    • Why do you think much of existence is exactly the same as what you describe? Layers and layers of good intentions that never quite achieved the “perfection” they aimed for and so ended up “macgyvered”. The world isn’t perfect, we aren’t perfect, so why do we think we can create something that is? It is the major stumbling block for a vast number of technically-minded persons to think that there is such a thing, and in my mind, is exactly what Eran Hammer laments in his post. To strive for something that is perfect for all, leaves you only with compromise.

      Success occurs online with structures that are either designed for a targeted audience or able to survive in a hacked, overloaded, and misused state; that is because the majority of life works in exactly the same manner. It is a coder’s urge (bordering on OCD) to always want to go back and “start again”, but when systems become so large and depended on, you will most likely send yourself mad in doing so.

      Take JavaScript as an example: it wasn’t constructed for the way it is used today, but it has survived because it can be used in so many different ways while still following simple, clear rules. It isn’t perfect, or even anywhere near the best, but I think it’s a language that embodies the way the Internet is…

      Just to add – Mr Hammer, I feel the pain of your struggles, my hat is off to you.

  5. Hi Eran,

    I know all about that game… Been there… Done that… Left as well… And I was on an enterprise forum… It’s always the same game. It’s not about doing what’s right; it’s about doing what will get me money and/or what costs me less.

    At the end of the day, they don’t like sharp young people who don’t care about the business value… They need puppies…

    So, I’m with you 😉
    Cheers,
    Bruno.

  6. Having spent the last few weeks learning about OAuth 2 and implementing / studying it for several providers (Facebook, Google & Twitter [which is really OAuth 1, I guess]) – I have to agree. I’m no security expert, but it felt maddening to learn because each provider can implement it in such a vastly different manner. Glad to know it wasn’t just me missing something and rereading documentation over and over.

    I don’t know much about your work, but I appreciate the post and the hard work it sounds like you’ve put in. Sounds like you fought the good fight, but as most engineers know, you can only swim upstream for so long.

    • Not sure what OAuth 2.0 support would look like. The hard part would be deciding what to implement and how. I would put it on hold until you find use cases that justify the effort.

  7. A few years ago, some members of Mensa informed me that despite their *individual* genius (or higher) IQs, when a group of them experimentally took an IQ test *as a group* the result came out “dull normal.” There’s just something about group effort that destroys the output. So, even if your “capable, bright, and otherwise pleasant” people WEREN’T just “show[ing] up to serve their corporate overlords,” the end product would *still* have been poor. It appears to be an inevitable feature of the Universe.

    • This may help all of us in terms of being able to think “collectively” instead of as individuals, with a nod toward the members of Mensa to help solve the reasons behind their results.

      This medium cannot help to justify my explanation because of its limitations, but I will do my best to translate – and yes this idea can be considered “out there”. Take any idea with a grain of salt, but never exclude that idea solely based on your perception of “it is impossible”. If the idea did not have the slightest possibility of existing, why would you be thinking of it?

      James Cameron’s Avatar would be the best reference to date; I am pointing to the physical connection representing their far more global awareness. Another reference is Native Americans and other tribal cultures consistently saying how we are all connected; they also refer to non-physical connections.

      Has any one of you gone through an experience where – unexplainably – you were able to match the actions of a person you barely knew, or were able to connect without speaking and engage in a game of responsive physical action?

      The central effect being that you still knew who you were mentally, but your mind felt significantly more expanded and you temporarily lost the physical sense of personal identity. Afterwards, you had to rein your mind back in, and you were tired for a time following that experience.

      That is the best explanation I can give for your Mensa friends.

      • I have a simpler explanation:

        Collaboration requires roles for each participant. Without clearly defined and well-allocated roles, the collaboration’s value suffers.

        I suspect that if they had determined the best candidate for each category of IQ question and given them the role of answering those questions, the ‘system’ would have produced far better results.

  8. Timely post. I have been investigating OAuth the past couple of days for a project implementation. Is your opinion that OAuth 1.0a is still a strong and valid approach? Any other recommendations to use for a standardized approach?

  9. I’m an intern at a fairly large web service company, and my summer project was to implement an API for one of our services that supported OAuth so that clients could access private user data in a secure way. I found out that no one there was up to speed on the protocol so I read the OAuth 1.0 spec, then read 2.0, and then asked my managers which one I should use. Since it seemed like our competitors were implementing OAuth 2.0, we went with that.

    Our API clients are historically incompetent, so I insisted we use some kind of signature to avoid risking their credentials being compromised. I ended up using JWT authentication. This required a close reading of the OAuth 2.0 Authorization Framework, the OAuth 2.0 Assertion Profile, the JWT Bearer Token Profiles for OAuth 2.0, and finally the JWT spec itself. And too much head-scratching for a college intern.

    So I get that OAuth 2.0 isn’t something you can just plug in and get the kind of security guarantees you’d like to have (because it’s flexible or something). Was it a mistake to go the route I went? And if the IETF isn’t the place for creating the right spec, what is?

    • Personally, I wouldn’t touch JWT with a 10 foot pole, or any of those messed up assertion extensions. It is hard for me to say if you picked the right solution without having full insight into your existing infrastructure and requirements. The IETF is clearly the wrong place. The right place is probably github. Someone should write a new protocol and control that work, just like open source.

  10. Eran,

    Thank you for your efforts over the years. Even if things have taken an ugly direction, please be proud of all that has been accomplished using OAuth (both 1.0 and various drafts of 2.0) that could not have been accomplished otherwise.

    You did mention that you foresee small communities forming to develop new, simple, secure protocols that address targeted use cases especially well. Might I suggest that you’re a logical candidate for such a community to form around? Your reputation precedes you, especially where OAuth is concerned. If you were to finagle the cooperation of Facebook and Twitter, or other heavy-hitters in the dev community (e.g. DHH), it’s a near certainty that you’d be able to achieve what you’d envisioned, AND guarantee widespread adoption. Just something to think about…

    • Thanks for that trust. I have considered doing that many times over the past year. But the truth is I’m tired of it, and I am not really in a position to lead such an effort given that I am not an active developer working on API authorization security at the moment. I think the key to a successful project, which was a hallmark of OAuth 1.0, is being developed by those who are actually shipping it. But if someone else starts something, I’ll do my best to offer my lessons learned.

      • I’d be interested in contributing to any such effort. I know many others who would be anxious to contribute as well, but I don’t think that any of us have the requisite notoriety to generate widespread community interest. And that more than anything is my point. Few people have that going for them. You do. 🙂

      • But I think that perhaps what people need is an architect who could show us what we need to do, explain why, and give us more or less a good idea to follow; people will enact those ideas themselves. We don’t really expect you to sit down and write it, appearing next week to say “hey guys! download the JavaScript code (or whatever) here”.

        Sometimes people just need to know what to write; with somebody like you, with an instantly recognisable history on the subject, people will follow.

        So perhaps you should offer to steward a process, very lightweight, but focused on directing people as opposed to sitting until 4am writing code… there are lots of people who would do that… we just need a guy to say which direction to walk, and a place to always come to when needing directions. That GitHub project could be this place.

      • I, too, would be interested in contributing. Your knowledge and expertise could be used to guide multiple developers and quickly produce a product that is everything OAuth 2 was supposed to be. Just because you yourself are not typing the code doesn’t mean you aren’t actively developing the product.

        Think on it. If you do decide to start up a new project, you will have many people who will jump at the opportunity to assist and contribute.

  11. Eran,

    Thanks for all the work and trying to fight the fight.

    Why not just go the Douglas Crockford route a la JSON and publish your own “spec”? What you thought OAuth 2 should have been. I, for one, would appreciate that tremendously.

    • +1.

      This is a huge bummer. I know I, and many others, in the web space, saw what you were doing and said, “This guy knows what’s up. I can chill out, wait for the spec, and then full steam ahead.” I’ve told this story to many coworkers over the past two years. To hear that people like us backing away left you to the wolves is really a shame.

      Publish OAuth 1.1 directly on GitHub, and if you want an “official” version, run a personal submission to the IETF (assuming it hasn’t been fucked by IPR…)

      • +1 as well! Eran, you are clearly far more rational than your IETF counterparts – it’d be a delight to see your work done in the cleanest way possible, solving a specific problem with a specific protocol in mind. I don’t want to say it’s the best path to redemption – it’s entirely up to you whether even continuing the OAuth brand is worthwhile – but you would be met by many supporters and co-contributors on GitHub, myself included.

  12. Eran,

    This is truly unfortunate. I was unable to follow the work on OAuth due to the volume of messages and my work emphasis placed elsewhere, but what you tell me here is definitely not a surprise. Well, almost. I knew for a long time that you had a strong personal desire to see this through. I’m not surprised the IETF process has drained you.

    Perhaps rather than entirely abandon the work, you just step out of the IETF process. Go define that profile of which you speak, or perhaps even step back to 1.0 and define 1.5. Call it 3.0.

    Paul

    • I’d actually propose to align the numbering with the history of HTML and instead of calling it 1.5 or 3.0, it should be called OAuth 5. Half-serious about it 🙂

  13. I wish you had had that experience before you convinced me to let the IETF get their hands on WebSocket. 😛 (Same thing happened there, I ended up getting my name removed from that spec too. What a disaster.)

    • All I can say is that I’m truly sorry! The biggest problem with the IETF is that you are completely at the mercy of the working group chairs and area director. When the effort is short, you know who you are dealing with, and if there is good chemistry and consensus, it is smooth sailing. But if the effort takes longer than a year, chairs and area directors change and there is no telling who you will end up with. In the OAuth case, the original team was fantastic and fully aligned. But three years later, with a whole new set of characters (as well as the move from the applications area, where it belonged, to the security area), it was clear which direction the wind was blowing. It got so bad process-wise that compromises made at huge expense (over hours of conference calls) were later tossed aside by a new chair who was more aligned with the other side of the issue.

      • Oh don’t worry, not your fault. Honestly I was as shocked as the next person about how much of a mess that work became. The biggest problem in retrospect was mostly just the assumption from the people trying to sell us the IETF (Lisa, mainly, but also others) that the IETF had the expertise and that the WHATWG did not. But in practice, I think the feedback we were getting at the WHATWG was of higher quality; the best people in the WG were all already commenting while we were just a WHATWG effort. (We’ve since seen this again for other specs, like the Origin spec and the MIME sniffing spec.)

        Really it says more about how quickly the WHATWG has matured than about the IETF.

        Anyway, WebSocket’s design is worse now (IMHO) than when it arrived at the IETF, but it’s nowhere near as bad a situation as what you’ve described for OAuth.

  14. As a developer but not a security expert, I had to read this and provide a summary recently of what changed from 1.0. Even without security experience it was clear to me that the latest version was ridiculous, and I called out a subset of what you do here.

    I’m sorry to hear that your work is now (somewhat) for naught, and thank you for your efforts, sir. Sucks to be in that position.

    As suggested below, it’d be good to see your unadulterated proposed security spec, and hopefully the power of the OSS community could help drive adoption, if so.

  15. Thanks for all the effort, but this sounds emotional. What you were claiming were tactical issues, not just technical ones. For example, have you ever reported a specific security hole, or do you not want to disclose it here? Or, finally, do you think that you are responsible for developers’ knowledge, with no disclaimer?

    Regards,
    K .N.

  16. Eran,

    Just wanted to add another Thank You for your hard work.

    One day, we will fix it. One day, people will forget that the internet wasn’t always safe and simple and within their control. When it happens, it will be because of effort like yours. Well done.

  17. ++Nick’s comment about joining the Persona/BrowserID work over at Mozilla. To me, this approach is the best concept in web security (and privacy!) I’ve seen yet. I feel like if we’d thought of it back in the mid-90’s we could have solved this once and for all. As it is, we’re dealing with enterprise v. web and losing regularly. I have some hope that NSTIC might not get taken over by enterprise, but it’s a slim hope. Anyway – hop onto the Persona work if you can stand another go, I know they could benefit from your expertise.

  18. I like it when specs are made by Google and Facebook rather than “web” theoreticians, because it makes them much more practical. OAuth 1 was difficult for an average developer because it required client-side signing of MAC tokens. You have to compromise between usability and security.

    • Slon, we knew when we wrote 1.0 that the client-side signing was going to be difficult. Several different approaches to avoid it were proposed, and IIRC we even went pretty far with one, until someone with good crypto experience weighed in and showed how to break it. To do what OAuth 1.0 does, client-side HMAC is mandatory. Prove otherwise, and you will deserve the Fields Medal you will get for it.

      • I don’t think the problem is difficulty; that can be abstracted away with libraries. The problem is, nobody stepped up to make those libraries. Lots of web developers are quite low-skilled in the programming game, and if they are the people trying to do this encryption “thingy”, then I reckon that’s why it’s all of a sudden so hard to accomplish.

        To get around this, we just need a simple 1-2-3-4 type library which doesn’t impose so much policy, so that it’s malleable into different forms and use cases. Then the problem disappears.

        But yeah, we need to do encryption on the client; we can’t just sit back and blindly trust TLS. I don’t know about you, but I don’t trust the person saying they are who they say they are; I need to force them to prove it. If I need to encrypt stuff on the client to get that, then so be it.

      • Actually, OAuth 1.0 does support sorta-bearer-tokens with PLAINTEXT “signatures”. However, no one (other than Yahoo!, IIRC) was willing to deploy it. IOW, when given the option between ease of deployment and improved security, most 1.0 vendors chose security. 2.0 doesn’t give them that option anymore.

    • We security types agree there needs to be a compromise between security and usability, but that goes both ways. Your comment displays an inability to comprehend security at a level that should render you prohibited from touching code exposed to the Internet at any time and under any circumstance.
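[Editor’s note] The client-side signing being debated in this thread can be made concrete with a short sketch of OAuth 1.0’s HMAC-SHA1 signature (per RFC 5849). This is an illustrative reduction only; it omits header parsing, nonce generation, and URL normalization that a real library must handle:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0 HMAC-SHA1 signature (RFC 5849, simplified)."""
    # 1. Percent-encode each parameter name and value, then sort the pairs.
    encoded = sorted(
        (quote(k, safe=""), quote(str(v), safe="")) for k, v in params.items()
    )
    param_string = "&".join(f"{k}={v}" for k, v in encoded)

    # 2. Build the signature base string: METHOD & URL & parameters,
    #    each component percent-encoded again.
    base_string = "&".join(
        [method.upper(), quote(url, safe=""), quote(param_string, safe="")]
    )

    # 3. The signing key is consumer secret + "&" + token secret
    #    (the token secret may be empty before a token is issued).
    key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"

    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The friction the commenters describe lives mostly in steps 1 and 2: every client and server must produce a byte-identical base string, and any normalization mismatch yields a signature failure with no useful error message.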

  19. Yes, this is sad. And it happens all the time.
    Exactly the same thing happened to the whole Semantic Web effort at W3C. It basically got overtaken by enterprise and now it is of little interest or use to regular web developers.

  20. Thanks for the interesting story. I had a few discussions with the OAuth WG, and I have to agree that it wasn’t too pleasant.

    That being said I think it is unfair to say that what happened here is what happens throughout the IETF. I’ve seen all kinds of Working Groups, some dysfunctional, some working great but slowly (ahem), some working just right (appsawg comes to mind).

    Best regards, Julian

  21. Eran,

    I really appreciate all you did and the challenge you faced. As I have stated before – we try to cram transactional assurance into one of the three A’s, and the best candidate unfortunately becomes Authn, since few think of Assurance as its own “4th A”. But PLOA is gaining some huge traction and we welcome folks that want to work on off-loading Assurance to its own decoupled standards-based architecture.

    I know you already know this stuff, but for the other readers here are some links to get started:

    Video close-up of a demo – watch both parts to get complete story.

    As follow-up on the PLOA videos, these links might prove helpful.
    • PLOA White Paper – http://openidentityexchange.org/sites/default/files/PLOA%20White%20Paper%20-%20v1.01.pdf
    • Donovan PLOA Blog – http://www.attinnovationspace.com/innovation/story/a7779392
    • Drummond Reed PLOA Blog – http://equalsdrummond.name/2012/04/08/ploa-just-what-you-need-to-know/

    =Jay

  22. I am glad to know that I am not the only one who has hit the wall that is OAuth 2.0. I gave up on implementing after only a few days of research.

  23. One problem with OAuth 2.0 is the mismatch between the original design goals of OAuth 1.0 and how OAuth 2.0 is being used today by Facebook, Twitter, LinkedIn and other social networks. OAuth 1.0 was an authorization protocol designed for traditional, server-side, pre-Web 2.0, Web applications. Most actual usage of OAuth 2.0 is for social login, which combines authentication and authorization; and we now live in the age of Javascript and native mobile applications. Recent discussions on the OAuth mailing list have reported that mobile application developers are using undocumented Facebook extensions to prevent user impersonation attacks, and that developers who do not know of the undocumented extensions are producing vulnerable implementations.

    What would it take to design from scratch a social login protocol for the age we live in? If you are curious, have a look at http://pomcor.com/2012/06/25/a-protocol-for-social-login-in-the-age-of-mobile/ .
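[Editor’s note] The user-impersonation attack referenced above comes down to a missing audience check: a bearer access token proves the user’s identity to an app only if the provider issued that token *to that app*. A minimal sketch of the check, with hypothetical field names since each provider’s token-inspection response differs:

```python
MY_CLIENT_ID = "my-app-id"  # hypothetical value for illustration


def token_usable_for_login(token_info, my_client_id=MY_CLIENT_ID):
    """Decide whether a token-inspection response proves the user to *us*.

    token_info: dict returned by the provider's token-inspection endpoint
    (field names here are illustrative; every provider names them differently).
    Skipping the client_id comparison is the impersonation bug: any app the
    user once authorized could replay its own token against our backend and
    log in as that user.
    """
    return token_info.get("valid", False) and token_info.get("client_id") == my_client_id
```

The undocumented Facebook extensions the mailing list discusses exist precisely to let mobile clients perform this kind of audience verification; developers who skip it ship the vulnerable flow.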

  24. Wow. You insisted on total editorial control, restructured the document several times, dragged your heels on submitting changes, ripped out the bearer token to a separate spec because you don’t like that mechanism, started the MAC spec for a signed token, but then resigned from that spec. Now you resign and blame enterprise use cases for a spec you . Herding the cats to end up with a simple specification is hard, and it is the job of the editor.

    • I’m not surprised that you are upset, and I am truly sympathetic to your frustration. You invested much time and energy working on the WRAP specification only to have it put aside in favor of the OAuth 2.0 effort. You were then hired by the lead enterprise player to represent their interest in the working group and give them top billing on the specification. At this point, regardless of the process and how it ended where it is, OAuth 2.0 is not a good specification. Like many others, you bailed out early, as soon as your contract ended. I have spent the last year working on this on my own dime. I take full responsibility for my actions and failures. I think I have made that pretty clear. The outcome is an honest representation of the working group consensus and compromises. It is the best result any other editor could have accomplished given the working group makeup. But that doesn’t make it good. Since your name is the one left listed at the top, I will leave it up to you to make up your own mind about the quality of the work and whether you are proud of your affiliation with it. But you can’t have it both ways: it cannot be my fault and a quality specification at the same time. It’s also telling that you are the only person who turned this into a personal attack.

      • I also feel your frustration dealing with the IETF “security mafia”. Every IETF meeting I have been at has been frustrating. I was at 3 meetings trying to get what became OpenID work started at IETF. Could not get it started there so helped form the OpenID Foundation to work on identity problems.

        I’m ok with my name being on the spec as OAuth 2.0 is essentially WRAP which is simple to implement and is great for authorization. (btw: WRAP was not put aside for OAuth 2.0 — it IS the core of OAuth 2.0)

        An unfortunate twist is that OAuth 2.0 is also used for authentication, something it was not designed for.

        And I agree that client signing is better than bearer tokens — but bearer tokens are good enough for many use cases. The JWT work is hopefully the future of standard tokens.

      • I had no issues with the IETF “security mafia”. In fact, my problem was that they were completely absent. The great promise of experts coming to help from the security area was all promises, no action. The fact that you consider the JWT work the future is exactly where we disagree. I find it overly complex and more focused on enterprise use cases than on simple consumer web applications.

      • The article is harsh, IMHO. The OAuth protocol itself is sound (I will not call it a security protocol under any circumstance). The interesting part is the bearer token in 2.0; the problem is that there is no well-defined format for what it looks like (whether it should be cryptographically protected, both encrypted and signed, and it should be), and the client should authenticate itself again before using the token. 1.0a is not much better, IMHO. The weakness is in how the token is protected, not in the protocol itself.
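[Editor’s note] Since JWT is raised in this exchange as the future of standard tokens, a minimal sketch of minting and verifying an HS256 JWT may ground the discussion. This is illustrative only; a real deployment should use a vetted library and also validate the header and registered claims (`exp`, `aud`, and so on):

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    # JWT uses base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def make_hs256_jwt(payload: dict, secret: str) -> str:
    """Produce header.payload.signature, each segment base64url-encoded."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"


def verify_hs256_jwt(token: str, secret: str) -> dict:
    """Return the payload if the HMAC checks out, else raise ValueError."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        raise ValueError("signature mismatch")
    return json.loads(_b64url_decode(body))
```

The mechanics are simple enough; the complexity Eran objects to lives in the surrounding JOSE machinery (algorithm negotiation, nested encryption, key resolution), not in this core.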

  25. Hi, Eran:

    I also believe there’s a very real and problematic separation between web and enterprise worlds.
    At the same time I think this is a natural outcome given their different motivations.
    If OAuth2.0 has failed (and I have my doubts) there are still a few alternatives to investigate.

    If I were you:

    1. I would rethink the “bastard protocol” on my own and focus solely on the web, APIs and cool things.
    2. I would name it “NobleAuth1.0”.
    3. I would work on it (seeking my own peace of mind) until the following questions become easy to answer:

    a) Does it make sense to upgrade from OAuth* to NobleAuth*? (YES, absolutely)
    b) Does it provide better security? (YES, absolutely)
    c) Do I have more than an 80% chance to get it all right if I’m a fine programmer with zero experience in API integration? (YES)
    d) Is there an alternative to NobleAuth* (Not at all, because of… [endless feature list and comparison table])

    I would put it out there and wait for it to become the de facto authentication standard for web systems.

    Do you (we) really need the IETF to develop (or finish) something great?

    Why?

    Good luck and thanks for this post and all your efforts.

  26. I think the OAuth brand is in decline. This framework will live for a while, and given the lack of alternatives, it will gain widespread adoption. But we are also likely to see major security failures in the next couple of years and the slow but steady devaluation of the brand. It will be another hated protocol you are stuck with.

    OAuth is a frigging mess of half-witted insecurity; doesn’t matter whether we are talking 1.0 or 2.0. Everyone implements it wrong, it’s taxing on RNGs (which results in easily guessed tokens), et cetera, et cetera. Way to set web security back a decade by being dummkopfs. Your major security failure will be coming sooner than you think.

  27. The IETF process at this point is pretty much guaranteed to produce crap. None of the people there are responsible for shipping a real product within a real time frame. Over 15 years ago, a colleague and I at Nortel tried to introduce a _simple_ protocol for dealing with computer telephone interactions, which could have turned into a nice VoIP protocol. Instead, we got SIP, which, to understate things mildly, is not easy to implement. Simplicity and usability were never what determined winners at IETF; it was all about academic credentials and academic/industry politics. I haven’t attended an IETF meeting in years, and haven’t looked back. Sorry you got burned by the process, wasted time, etc.

  28. I can only second others. Given your knowledge, experience, and reputation, it sounds only natural and logical to get back to Getting Things Done™. Idealists like us just don’t give up. 😉

    Very naturally, it will definitely take a vacation (or two) as well as quality time with friends and family to get past the frustration and resignation state (of fighting a doomed uphill battle). But based on the mindset you’re communicating, I cannot believe that you’re going to let this truly pass. 😉

    Having looked into both specs, I’d see a high potential for OAuth 1.5 (or 3.0, or 2.0NG ;)). I fully agree with your analysis/evaluation on the current 2.0 proposal.

    I somewhat disagree with the mentioned general stance of 1.0 being insanely hard to grasp/implement though. IMHO, the problem of 1.0 is two-fold: 1) The spec goes to great lengths in trying to explain how implementations are supposed to work, but is lacking essential details all over the place, and for the average developer, the entire spec is cryptic in the first place. 2) On top of that, there are/have been multiple reference implementations for individual platforms/languages/interpreters, consisting of very poor and unmaintained code for some — which, in turn, made “innocent” developers study the cryptic OAuth spec/implementation in the first place. So while I agree that (primarily the signature part of) the OAuth 1.0 spec was/is complex, I think that the spec itself only takes a minor part of the blame. The major part rather is that an open protocol requires valid, working, and co-maintained quality implementations for most/all platforms.

    In any case, these issues are resolvable. Even more and even better with a revised and improved spec. Even if you’d only take on a minimum “responsibility” of setting up a repository and inviting/shaping a team of maintainers to get the OAuthNG kick-started as a newborn FOSS project, I’m seriously and truly confident you’d have a critical impact on the web’s future. Keep it KISS, incorporate the OpenID Connect and other proposed enhancements; it will prosper.

    First, it’ll take time to get over it though. (no pun intended, I’ve been there before)

    Thanks for your hard work, your patience, and vision!
    sun

  29. Some people in these comments and elsewhere have made a big deal out of the fact that they consider OAuth 2.0 to be much easier to implement than OAuth 1.0. But the point they are missing is that these experiences are largely based on proprietary profiles of the early drafts. It is true that 2.0 can be profiled into a simple, straight-forward protocol. But every single one of these profiles is slightly different. Developers have to relearn it for each vendor. And of course, there is no interop across vendors. So sure, 2.0 can be simpler, but so can any other home-brewed alternative. If ease of development comes at the expense of interop and baseline security, what’s the point of standards?

    • > It is true that 2.0 can be profiled into a simple, straight-forward protocol. But every single one of these profiles is slightly different.

      If interoperability between profiles is the only real issue, it’s a bit dramatic calling the standards process “the road to hell”. Interoperability issues can be explicitly fixed as a next step the way the SOAP guys did with WS-I(nteroperability) Basic Profile, which pretty much put that issue to rest (pun not intended).

      I can empathise that the “design by committee” approach has produced something less elegant than what any single committee member would have liked, but pragmatically, can we now start to use this to build real apps? What I’m reading from your post even after all the criticism is that it will indeed be possible, once interop is fixed.

  30. Hi Eran, I cannot say I am surprised to read this given our discussion at last year’s OSCON, but it is disappointing all the same. Thanks for the effort, and I hope the next or current project has a more satisfying conclusion.

  31. Hiya,

    Why doesn’t someone just take the protocol that worked, and the features that worked, and then start a new protocol-based OAuth fork?

  32. I feel your pain 😦 Thanks from all of us for trying to hold the fort, your contributions are very much appreciated.

    I was considering 2.0 for a large-scale project but now will not, as we can’t afford the risk (and we have some pretty good security peeps). Hopefully this obvious cluster fu** will somehow get pulled back to reality, but from your post it looks doubtful.

    Onwards and upwards Eran !
