Auth to See the Wizard

(or, I wrote an OAuth Replacement)



It’s me again.

The fuck OAuth guy.

Before that I was the guy who wrote this and then this (and then I took my name off it).

I wrote a replacement protocol and thought you might want to check it out.

Well, sort of. I didn’t write a protocol. I wrote a JavaScript module providing a full authentication and authorization solution for building web applications. I am done with protocols and specifications. At the end of the day, I needed a working solution I could deploy and trust. The problem with security protocols is that they are useless without an equally solid implementation. The only point in a protocol is interoperability and I don’t care about interoperability. I just want to build great products.

I actually wrote three modules.

Iron is a simple way to take a JavaScript object and turn it into a verifiable encoded blob. Hawk is a client-server authentication protocol providing a rich set of features for a wide range of security needs. Oz combines Iron and Hawk into an authorization solution. Together these three modules provide a comprehensive and powerful solution.
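
To make the Iron piece concrete, here is a minimal sketch, assuming the promise-based API of the current @hapi/iron module (earlier versions of iron used callbacks); the password value is made up:

    const Iron = require('@hapi/iron');

    (async () => {

        // The password must be at least 32 characters; Iron derives
        // separate encryption and integrity keys from it.
        const password = 'an_example_password_that_is_at_least_32_characters';

        const session = { account: 456, scope: ['profile'] };

        // Seal: encrypt the object and attach an HMAC, producing an
        // encoded string safe to hand to a client (cookie, token, etc.).
        const sealed = await Iron.seal(session, password, Iron.defaults);

        // Unseal: verify integrity, then decrypt back into the object.
        const unsealed = await Iron.unseal(sealed, password, Iron.defaults);
        console.log(unsealed);    // { account: 456, scope: ['profile'] }
    })();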

I’ll take some questions now.

How is Oz different from OAuth?

OAuth, especially 1.0, is based on solid, well established security best practices. There is no reason to invent something new. OAuth 2.0 added the foundation for building highly scalable solutions. Any new protocol should be based directly on this existing body of work and Oz does just that. It throws out all the silly wire protocol parts because they add no value. Oz makes a lot of highly opinionated decisions about how to implement the things that actually matter. If you understand OAuth well, you should be able to pick up Oz and Hawk pretty quickly.

What’s so cool about it?

Oz provides a complete solution with full support for access scopes, delegation, credential refresh, stateless server scalability, self-expiring credentials, secret rotation, and a really solid authentication foundation. Some would say Oz goes a bit overboard in layering security, but I don’t think there is ever enough of that. The implementation is broken up into small utilities which can be composed together to build other solutions with different properties. And by breaking it into three modules, you get to use just the bits you want.
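
As a hedged illustration of what self-expiring, stateless credentials mean here (this is not Oz’s actual ticket format, just the underlying idea built on @hapi/iron): the expiration travels inside the sealed credential itself, so any server holding the password can validate it without a database lookup. Iron also has a built-in ttl setting that enforces this for you; the explicit exp field below is only to make the mechanics visible.

    const Iron = require('@hapi/iron');

    const password = 'an_example_password_that_is_at_least_32_characters';

    // Issue: embed the expiration time inside the sealed blob itself.
    const issueTicket = async (account, scope, ttlMs) => {

        const ticket = { account, scope, exp: Date.now() + ttlMs };
        return Iron.seal(ticket, password, Iron.defaults);
    };

    // Validate: no shared state, no token database - the credential
    // carries and proves everything needed.
    const validateTicket = async (sealed) => {

        const ticket = await Iron.unseal(sealed, password, Iron.defaults);
        if (ticket.exp <= Date.now()) {
            throw new Error('Expired ticket');
        }

        return ticket;
    };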

Does it require client-side cryptography?

Yes. Building a solution without security layers is irresponsible and stupid. Don’t do that. Bearer tokens are a bad idea. That said, Hawk, the layer providing the authentication component, is trivial to implement. It’s a simple HMAC over a few strings. No sorting and encoding and all that nonsense.
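
To show how trivial, here is roughly what the Hawk version 1 MAC computation looks like, sketched with Node’s built-in crypto module rather than the Hawk module itself (the field order follows Hawk’s normalized request string; treat the module as the authoritative reference):

    const Crypto = require('crypto');

    // A Hawk-style request MAC: HMAC-SHA256 over a fixed,
    // newline-separated list of request fields. No parameter sorting,
    // no percent-encoding dance.
    const mac = (key, { ts, nonce, method, resource, host, port, hash = '', ext = '' }) => {

        const normalized = 'hawk.1.header\n' +
            `${ts}\n${nonce}\n${method.toUpperCase()}\n${resource}\n` +
            `${host.toLowerCase()}\n${port}\n${hash}\n${ext}\n`;

        return Crypto.createHmac('sha256', key)
            .update(normalized)
            .digest('base64');
    };

    console.log(mac('werxhqb98rpaxn39848xrunpaw3489ruxnpa98w4rxn', {
        ts: 1353832234,
        nonce: 'j4h3g2',
        method: 'GET',
        resource: '/resource/1?b=1&a=2',
        host: 'example.com',
        port: 8000,
        ext: 'some-app-ext-data'
    }));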

Who should use it?

Me, mostly. I wrote this for myself because OAuth 1.0 is based on obsolete requirements, and I’d rather stick pencils in my eyes than use OAuth 2.0. If you are a happy OAuth user (regardless of the version), I say stick with it. But if you don’t like it or are looking for an alternative (and are using JavaScript), to the best of my knowledge, Oz is the only other option. It is a particularly smooth experience when also using hapi.

Is it done?

Yes and no. The core protocol is done and is in great shape. It has been stable for over two years. You can expect the same quality engineering I’ve put into hapi. The code is lean, clean, and it goes out of its way to protect against developer mistakes. What’s not done are the workflows such as the OAuth 2.0 implicit grant. Right now Oz provides an OAuth 1.0-like workflow, but more workflows (especially for mobile) will be added soon. Oz is in active development and will be the core security component of my new project. Expect it to get better as I continue to use it myself.

Is there going to be a specification?

Not if I have to write it. Honestly, I think a specification is a waste of time. I don’t care about Oz on platforms other than JavaScript. While Hawk and Iron have already been ported to other platforms, I am not aware of any Oz ports yet.

What’s the background behind Oz?

Oz was initially an OAuth 2.0 higher-level protocol developed for the Yahoo Sled project (now open sourced as Postmile). In fact, Postmile turned out to be the beginning of a lot of cool stuff, including the entire hapi ecosystem. However, it turned out that the OAuth bits were adding no value, and compliance just made development slower and more complicated. My initial focus was on the authentication bits, which resulted in Hawk. Hawk is actually widely used already and was the foundation of the Mozilla identity API. Iron followed, providing the token format needed to send self-encoded information securely (and is heavily used by hapi users). I then got stuck on Oz for about three years because I didn’t have a use case for it. I left it alone for a while until it was time to put the final touches on it.

Got more questions?

Just open an issue and I’ll do my best to answer.

OAuth 2.0 and the Road to Hell

Update: three years later I wrote something new… introducing Oz.

They say the road to hell is paved with good intentions. Well, that’s OAuth 2.0.

Last month I reached the painful conclusion that I can no longer be associated with the OAuth 2.0 standard. I resigned my role as lead author and editor, withdrew my name from the specification, and left the working group. Removing my name from a document I have painstakingly labored over for three years and over two dozen drafts was not easy. Deciding to move on from an effort I have led for over five years was agonizing.

OAuth 1.0 Blog Cleanup

As I’m getting ready to finish work on OAuth 2.0 and add new content to this site, I decided it was time to finish the OAuth 1.0 chapter of this site. I’ve finally cleaned up the OAuth 1.0 guide and other pages. The guide is now updated to reflect RFC 5849 as well as some bug fixes in the scripts used to generate the signature base string tutorial. If you are linking to this site for OAuth resources, please link to the OAuth page.

OAuth 2.0 Redirection URI Validation

Why do we require clients to include the redirection URI when exchanging an authorization code for an access token in OAuth 2.0 (section 4.1.3)?

Consider the following scenario:

  1. Evil user starts the OAuth flow on a legitimate client using the authorization code grant type flow.
  2. Client redirects the evil user to the authorization server, including state information about the evil user account on the client.
  3. Evil user takes the authorization endpoint URI and changes the redirection to its evil site.
  4. Evil user tricks victim user to click on the link and authorize access (using phishing or other social engineering methods).
  5. Victim user, thinking this is a valid authorization request (it looks kosher), authorizes access. Access is granted to the right legitimate client. So far nothing is wrong.
  6. Authorization server thinks it is sending victim user back to the client, but since the redirection URI was changed, victim user is sent to the evil site.
  7. Evil user takes the authorization code and gives it back to the client by constructing the original correct redirection URI.
  8. Client exchanges the code for access token, attaching it to the evil user’s account.
  9. Evil user can now access the victim user’s data via his client account.

The way this works, the attacker does not get direct access to protected resources, but instead tricks the client into attaching the victim’s access token to the attacker’s account.

Pre-registration of the redirection URI can help a lot, but its effectiveness depends on the matching rules. Since many large providers operate open redirectors, an attacker can use those to construct a redirection URI that passes the authorization server’s validation.

Requiring clients to register their full redirection URI without allowing any variations or partial matching is highly recommended.
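
To make that recommendation concrete, here is a minimal sketch with hypothetical names (clients, issuedCodes, and the helpers are made up): validate the redirection URI by exact string comparison at the authorization endpoint, and require the same value again at the token exchange so a code obtained through a tampered redirect cannot be redeemed through the legitimate one:

    // Registered at client registration time: the full redirection URI,
    // not a prefix or a pattern.
    const clients = new Map([
        ['client-123', { redirectUri: 'https://client.example.com/cb' }]
    ]);

    // Authorization endpoint: exact match only. Prefix and wildcard
    // matching are what open redirectors exploit.
    const checkRedirectUri = (clientId, redirectUri) => {

        const client = clients.get(clientId);
        if (!client || client.redirectUri !== redirectUri) {
            throw new Error('Invalid redirection URI');
        }
    };

    // Token endpoint: the redirect_uri sent with the code exchange must
    // equal the one used on the authorization request (section 4.1.3).
    const exchangeCode = (clientId, code, redirectUri, issuedCodes) => {

        const grant = issuedCodes.get(code);
        if (!grant || grant.clientId !== clientId || grant.redirectUri !== redirectUri) {
            throw new Error('Invalid authorization code grant');
        }

        // ... safe to issue an access token for the grant here ...
    };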



OAuth Bearer Tokens are a Terrible Idea

Update: three years later I wrote something new… introducing Oz.

My last post about the lack of signature support in OAuth 2.0 stirred some very good discussions and showed wide support for including a signature mechanism in OAuth 2.0. The discussions in the working group focused on the best way to include signature support, and a bit on the different approaches to signatures (more on that in another post).

However, the discussion failed to highlight the fundamental problem with supporting bearer tokens at all. In my previous post I suggested that bearer tokens over HTTPS are fine for now. I’d like to take that back and explain why OAuth bearer tokens are a really bad idea.

More OAuth Nonsense

ComputerWorld is the latest to run a scary story about OAuth 2.0 and how insecure it is. Unfortunately, instead of doing their homework and paying attention to my post, they borrowed a bunch of my quotes (almost half the article), added some original nonsense, sprinkled in a few errors, and gave it a sensational headline: “OAuth 2.0 security used by Facebook, others called weak”.

OAuth 2.0 without signatures works just fine for companies like Facebook, because their developers hard-code their API endpoints. There is no danger whatsoever of an access token leaking or getting phished (any more than if they used signatures) because Facebook uses HTTPS for their OAuth 2.0 API. This is why I titled one of the subsections: “Why None of this Matters Today”. My post was about discovery and long-term security improvements of the web – not that there is anything broken about today’s implementations.

OAuth 2.0 (without Signatures) is Bad for the Web

OAuth 2.0 drops signatures and cryptography in favor of bearer tokens, similar to how cookies work. As the OAuth 2.0 editor, I’m associated with the OAuth 2.0 protocol more than most, and the assumption is that I agree with the decisions and directions the protocol is taking. While being an editor gives you a certain degree of temporary control over the specification, in the end decisions are made by the group as a whole (as they should be).

And as a whole, the OAuth community has made a big mistake about the future direction of the protocol. A mistake that is going to make OAuth 2.0 a much less significant agent of change on the web.

Twitter a Hot Princess, Google an Empty Castle

Over the past two years I have been arguing that the problem with supporting OAuth 1.0 signatures wasn’t with the signatures, but with the lack of value in trying to figure them out. The primary argument made by the WRAP authors and now the majority of OAuth 2.0 contributors is that signatures are hard and developers are stupid. This combination, they argued, is costing them developers.

To address this, they argued that the only solution is to remove signatures. I countered that instead of creating a new protocol, the companies complaining (primarily Google, Microsoft, and Yahoo!) should invest in quality libraries and debugging tools.

My point was (and still is) that if you give developers value, they will fight to figure out the signatures. A couple of weeks ago Twitter discontinued its support for Basic authentication, and what these people said could not happen, happened. All these developers figured out how to migrate their applications to OAuth 1.0, despite Twitter’s lacking developer support, alleged bugs, and other complaints about Twitter’s implementation.


Don’t expect knights to battle dragons if your castle is empty. Twitter put a hot blond (or brunette) princess (or prince) in their castle, and their (API) knights fought the evil (signature) dragon and got their reward. Google and the rest of the big web providers, with their useless offerings of boring APIs, left their castles empty.

Guess what! The kind of knights who come to fight dragons living in an empty castle are there for the fight, not to do something useful. Yes, battling dragons is a bitch, but knights tend to forget that once they get their happily-ever-after. Give them an empty castle and they will do nothing but obsess about their battle scars.

Why does this matter? Because without signatures there can be no secure discovery, and that means a less open web.