OAuth 2.0 (without Signatures) is Bad for the Web

OAuth 2.0 drops signatures and cryptography in favor of bearer tokens, similar to how cookies work. As the OAuth 2.0 editor, I’m associated with the OAuth 2.0 protocol more than most, and the assumption is that I agree with the decisions and directions the protocol is taking. While being an editor gives you a certain degree of temporary control over the specification, in the end decisions are made by the group as a whole (as they should be).

And as a whole, the OAuth community has made a big mistake about the future direction of the protocol. A mistake that is going to make OAuth 2.0 a much less significant agent of change on the web.

All this Crypto Business

OAuth 1.0 allows application developers to sign requests. Signatures remove the need to send plain-text secrets over insecure (or secure) channels. Instead of sending the secret with the request for the other side to compare against their copy (similar to how passwords work), the secret is used to calculate a value which cannot be converted back into the secret itself, but which can be verified by anyone holding a copy of the secret.

By performing an irreversible calculation that the other side can verify, signatures protect secrets by simply never sending them on the wire. Since no secret is transmitted, applications don’t need to rely on other protocols (such as SSL/TLS) to protect plain-text secrets. That’s not all signatures provide, but more on that later.
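
To make that concrete, here is a minimal sketch of the principle, using HMAC-SHA256 purely for illustration (OAuth 1.0 itself standardizes HMAC-SHA1 over a canonical base string, which I get to next); the secret and request values are made up:

```python
import hashlib
import hmac

def sign(message: bytes, secret: bytes) -> str:
    # The secret is used as an HMAC key; only the resulting digest travels
    # with the request, never the secret itself.
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, secret: bytes) -> bool:
    # The verifier recomputes the digest with its own copy of the secret
    # and compares the two values in constant time.
    return hmac.compare_digest(sign(message, secret), signature)

secret = b"kd94hf93k423kf44"                 # shared out of band at registration
request = b"GET /photos?file=vacation.jpg"   # what the client wants to send

sig = sign(request, secret)                  # transmitted alongside the request
assert verify(request, sig, secret)          # checked by the server
```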

To sign a request, developers have to follow a list of steps in a very specific order and with much care (which often feels like battling a dragon). The smallest mistake causes the entire request to fail. While the OAuth 1.0 signature process could have been somewhat simpler (no double encoding, different sorting, no URI parsing into query parameters, etc.), any time developers need to canonicalize data, stuff breaks. Even beyond the complex math, cryptography is hard because it is generally unforgiving. It does not tolerate mistakes.
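
Here is a rough sketch of that canonicalization, with the usual failure points called out in comments. It is a simplification of the real rules in RFC 5849 (which add edge cases for duplicate parameters, body parameters, default ports, and more), not a drop-in implementation:

```python
from urllib.parse import quote

def oauth_escape(value: str) -> str:
    # OAuth 1.0 mandates strict RFC 3986 percent-encoding: everything except
    # unreserved characters is escaped, so Python's default safe set ("/") is overridden.
    return quote(value, safe="")

def signature_base_string(method: str, base_url: str, params: dict) -> str:
    # 1. Percent-encode every parameter name and value.
    pairs = [(oauth_escape(k), oauth_escape(v)) for k, v in params.items()]
    # 2. Sort by encoded name, then by encoded value.
    pairs.sort()
    # 3. Join into a single normalized parameter string.
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    # 4. Concatenate the uppercase method, the encoded base URL (no query string),
    #    and the encoded parameter string.
    return "&".join([method.upper(), oauth_escape(base_url), oauth_escape(param_string)])

# Sort in the wrong order, encode twice, or leave a query parameter in the URL,
# and the resulting signature will not match the server's.
print(signature_base_string("get", "http://photos.example.net/photos",
                            {"oauth_nonce": "kllo9940pd9333jh", "file": "vacation.jpg"}))
```

The base string is then signed with HMAC-SHA1 (or RSA-SHA1), keyed on the consumer secret and token secret.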

WRAP and the Stupidity Threshold

After deploying OAuth 1.0, many companies discovered the cost of supporting developers struggling with mismatched signatures. OAuth 1.0 looks simple enough for developers to code from scratch instead of using a library (as opposed to SSL or TLS, which no one in their right mind would try to write from scratch). When developers write their own code, they are likely to get one small detail wrong. It didn’t help that the specification was vague and implicit about many important details (which has since been corrected in the RFC).

Ironically, the bigger the company, the more resources it had, and the less interesting and useful the API it offered, the louder the complaining about OAuth signatures was. This had an easy and straightforward solution: provide better libraries to your developers as well as better (or any) debugging tools. Alternatively, make your API so valuable that developers will be motivated to struggle through it and figure it out. Unfortunately, this was not the solution the people behind WRAP had in mind.

At the heart of the WRAP architecture is the requirement to remove any cryptography from the client side. The WRAP authors observed how developers struggled with OAuth 1.0 signatures and concluded that the solution was to drop signatures completely. Instead, they decided to rely on a proven and widely available technology: HTTPS (or more accurately, SSL/TLS). Why bother with signatures if the developer can instead add a single character to their request (turning http:// into https://) and protect the secret from an eavesdropper?

Much of the criticism that followed focused on the fact that WRAP does not actually require HTTPS. It simply makes it an option. This use of tokens without a secret or other verification mechanism is called a bearer token. Whoever holds the token gains access. If you are an attacker, you just need to get hold of this simple string and you are good to go. No signatures, calculations, reverse engineering, or other such efforts required.
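
For contrast, this is roughly all a bearer-token request amounts to (hypothetical host and token, and the exact Authorization header syntax has varied between drafts):

```python
import http.client

# The token string is the whole credential: anyone who captures "vF9dft4qmT"
# can replay it until it expires or is revoked. No secret, no signature,
# no proof of possession.
conn = http.client.HTTPSConnection("api.example.com")   # hypothetical endpoint
conn.request("GET", "/photos?file=vacation.jpg",
             headers={"Authorization": "Bearer vF9dft4qmT"})
print(conn.getresponse().status)
```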

As Secure As a Cookie

WRAP was based on a simple and powerful argument: bearer tokens are already a core part of the web architecture. While far from ideal, the WRAP security model was directly based on cookies – the authentication layer behind almost every web application. Why bother to create something more secure if it makes things harder for developers, while not actually improving the overall security of the service? As long as a site offers both an OAuth API and a human web interface (i.e. a web site), the overall service will only be as secure as its weakest part – the cookie-based authentication system.

The problem with this argument is not today, but five years from now. When someone tries to propose a new cookie protocol, developers will make the same argument, only this time pointing the finger at OAuth 2.0 as the weakest link. Removing signatures and relying solely on a secure channel solves the immediate problem and maintains the same existing level of security. But it lacks any kind of forward-looking responsibility, and the drive to make the web more secure. It’s a copout.

What makes this more frustrating is that the people behind WRAP are some of the brightest security minds on the web. These guys know exactly what they are doing, and it’s not like they don’t care. They just gave up and decided that the best they can do is maintain the status quo. They also represent a large and powerful coalition of big companies too lazy to work a little harder at helping their developers use signatures successfully.

Doesn’t HTTPS Solve Everything?

HTTPS guarantees an end-to-end secure connection. The implementation and deployment details are critical to ensuring that, but when done correctly (which is not always the case), it is a great solution. What HTTPS provides is a secure channel. Any secret, password, or bearer token sent over HTTPS is protected and cannot be compromised by an attacker listening in on the line. HTTPS allows a client to send a secret to its desired destination securely.

However, HTTPS can’t help if the client’s desired destination is a bad place. HTTPS doesn’t prevent phishing attacks, because anyone can get an SSL certificate and show the secure icon in the browser. The fact that you are using a secure channel doesn’t mean the entity on the other side is good. It just means that no one else can listen in (except the bad guys at the other end). If a client sends its bearer token to the wrong place, even over HTTPS, it’s game over.

Another issue is that the OAuth working group could not even reach consensus on actually requiring HTTPS, leaving it as a recommendation for services to decide. Even OAuth 1.0 requires HTTPS for its plain-text flavor (a requirement added to get it published as an RFC). OAuth 2.0, by contrast, is satisfied with just a warning. Its other mitigation is to allow (but not require) access tokens to be short-lived. By limiting the bearer token’s lifetime, stolen tokens are only useful for a short period of time, limiting the potential damage.
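
On the client side, that mitigation looks roughly like the sketch below. The expires_in and refresh_token parameters are in the specification; the values and the bookkeeping around them are invented for illustration:

```python
import time

# A token response from the authorization server (invented values).
token_response = {
    "access_token": "vF9dft4qmT",
    "expires_in": 3600,              # lifetime in seconds; optional in the spec
    "refresh_token": "8xLOxBtZp8",   # used to obtain a fresh access token
}
issued_at = time.time()

def token_still_valid() -> bool:
    # A stolen bearer token is only useful until this window closes,
    # which is the full extent of the protection a short lifetime provides.
    return time.time() < issued_at + token_response["expires_in"]
```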

Why None of this Matters Today

OAuth today is used together with proprietary web service APIs. There is little to no interoperability across these services (the Facebook API is used only on Facebook, etc.), and almost no clients perform discovery of any kind. Because the API endpoints are hard-coded into the client and combined with HTTPS, there is little risk of leaking tokens. In this setup, the client does not need to do much thinking about where to send tokens and how to protect them.

Unlike cookies, which are sent to the server based on a somewhat complex set of client rules, OAuth clients today don’t use any rules. Instead they use a single token for an entire service, with all API endpoints preconfigured. There are no new subdomains to handle, or really any kind of unexpected or dynamic interaction. In this environment, bearer tokens over HTTPS are just fine.
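
Put differently, today’s client boils down to something like this sketch (hypothetical names and values), with nothing left for it to decide:

```python
# Everything is baked in at development time (hypothetical values).
SERVICE = {
    "api_base": "https://api.example.com",   # the only host the token is ever sent to
    "access_token": "vF9dft4qmT",
}

def api_url(path: str) -> str:
    # No discovery and no dynamic hosts: the token cannot end up
    # anywhere other than the preconfigured endpoint.
    return SERVICE["api_base"] + path
```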

Why All of this will Matter Soon

As soon as we try to introduce discovery or interoperable APIs across services, OAuth 2.0 fails. Because it lacks cryptographic protection of the tokens (there are no token secrets), the client has to figure out where it is safe to send tokens. OAuth 2.0’s reliance on the cookie model requires the same solution – making the client apply the security policy and figure out which servers to share its tokens with. The resource servers, of course, can ask for tokens issued by any authorization server.

For example, a protected resource can claim that it requires an OAuth access token issued by Google when in fact it has nothing to do with Google (even though it might appear to be a Google subdomain). The client will have to figure out whether the server is authorized to see its Google access token. Cookies have rules regarding which cookie is shared with which server. But because these rules are enforced by the client, there is a long history of security failures due to incorrect sharing of cookies. The same applies to OAuth 2.0.

Any solution based on client side enforcement of a security policy is broken and will fail. OAuth 1.0 solves this by supporting signatures. If a client sends a request to the wrong server, nothing bad happens because the evil server has no way of using that misguided request to do anything else. If a client sends an OAuth 2.0 request to the wrong server (found via discovery), that server can now access the user’s resources freely as long as the token is valid.
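
A rough sketch of the difference, reusing HMAC as a stand-in for the OAuth 1.0 signature, with invented host names and secrets:

```python
import hashlib
import hmac

token_secret = b"pfkkdhi9sl3r4s00"   # known only to the client and the legitimate server

def signed_request(method: str, host: str, path: str) -> dict:
    # The signature covers the destination, so it is only meaningful
    # for this exact request to this exact host.
    base = f"{method}&https://{host}{path}".encode()
    return {"method": method, "host": host, "path": path,
            "signature": hmac.new(token_secret, base, hashlib.sha256).hexdigest()}

# Signed client misled by discovery: evil.example receives a signature that is
# only valid for "GET https://evil.example/contacts". Without token_secret it
# cannot construct a valid request to the real API, so nothing else is exposed.
leaked_signed = signed_request("GET", "evil.example", "/contacts")

# Bearer client misled the same way: evil.example now holds the token itself
# and can replay it against the real API for as long as the token stays valid.
leaked_bearer = {"host": "evil.example", "authorization": "Bearer vF9dft4qmT"}
```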

It is clear that once discovery is used, clients will be manipulated into sending their tokens to the wrong place, just as people are phished. Any solution based solely on a policy enforced by the client is doomed.

No Discovery for You

Without signatures, OAuth 2.0 cannot safely support discovery. It is a waste of time and a risky business. Clearly, the OAuth community today does not care enough about discovery and interoperable services to do something about it. The cryptographic solutions proposed so far are focused on self-encoded tokens and other distributed systems, based on narrow use cases promoted by the likes of Google, Microsoft, and a few other enterprise-focused companies.

Without discovery, smaller companies will have a harder time making their services accessible (e.g. when importing your address book from any provider, not just the big four).

Now What?

I am not advocating throwing OAuth 2.0 out, starting over, or requiring signatures. All I have ever advocated for is the inclusion of a basic signature option in the core specification, in the spirit of OAuth 1.0. The 1.0 signature isn’t perfect, but as the Twitter developer community demonstrated, it is clearly within reach.

35 thoughts on “OAuth 2.0 (without Signatures) is Bad for the Web”

  1. So let’s put signatures into the core spec. This was always how I differentiated OAuth 2.0 from WRAP. OAuth 2.0 was meant to be the best parts of OAuth 1.0 and WRAP combined together. I’d rather see the signature use case supported than all of this crazy SAML assertion stuff. :)

  2. I am glad to see this blog post. I read OAuth 2.0 last year for the first time, and the removal of signatures has bothered me ever since. If the OAuth community prefers easy adoption to future security/interoperability in OAuth 2.0, then I’d like to see the road-map for OAuth 3.0 as soon as possible.

    +1 from me for signature in core

  3. I agree wholeheartedly. There are so many applications where signatures are required. I have always been frustrated by the fact that so many people tried to implement OAuth 1.0 themselves. There are now very good libraries for almost all languages. As I’ve argued on the IETF list several times, I would like to keep signatures in; in particular, if this is the hold-up for getting discovery working, it’s a no-brainer.

  4. Hey Eran,

    Here I was, bitching and moaning in private about how you, personally, screwed up OAuth 2.0 by dropping signing. I’m sorry, you’re right, it’s important, and dropping it is a total mistake.

    The problem with signing in OAuth 1.0 is we never made it super clear in the spec that the parameters had to be ordered alphabetically. It’s implicit, and there, but not as clear as it could be.

    Without signing, anybody can throw up a fake cert on ssl, it won’t be checked by the client, and it makes man in the middle attacks trivially easy.

    rabble

  5. Can you give a concrete example of how discovery would fail with OAuth 2.0? Discovery is not well-specified, as far as I know, so when you say they wouldn’t work together I have a hard time blaming that on OAuth 2.0 with any confidence.

    • Discovery is any use case in which the client is talking to an unknown or unfamiliar resource server. The resource server provides some information on how to go and obtain an access token (we have a few proposals on how to do that). At this point, the client follows those instructions, gets the end-user to approve, and comes back with a token. However, because the resource server is unknown, the client has to figure out whether the resource server’s claim about where to get a token from is valid.

      Any resource server can claim to be accepting access tokens issued by one company, regardless of its truthfulness. If the client already has such a token, or if it goes and fetches one, it will be handing an access token to the wrong resource server, allowing it to steal user data from the place where the token is really valid. The only way to solve this without signatures is for the authorization server to tell the client where it is safe to use the token. This has been proposed as the ‘sites’ parameter. The problem, just like years of security problems with cookies, is that it is up to the client to enforce this policy.

      Any security that is based on the client enforcing a policy is broken and will never work. Using signatures, token secrets are never leaked and sharing them with the wrong resource server does little to no harm.

      • The premise that we can’t rely on the client to enforce a policy is too strong. We have to trust the client not to make the user’s readable data public, and we have to trust the client not to implement instructions from unknown parties about overwriting the user’s writable data. These requirements that the client must safeguard the user’s data are policies.

        There might be some other reasonable premise about not trusting the client to play its role competently. I would be curious to know what that would be; right now I can’t imagine it.

        One reasonable fix for the discovery process would be to get it to return all information required to access the resource, instead of just the resource server. This would include both the means to find the resource server as well as the credentials needed to access it on behalf of this client and user. This is the usual story about making things be capability-based. If both the credentials and the identity of the resource server are found by discovery, all we have to trust the client to do is use those credentials to access that server, and if that happens we don’t have to worry about starting the process with an impostor resource server.

        Once we’re talking about making things be capability-based, another standard question arises: do you want the client to be able to delegate work? If the client cannot delegate work, then the client may find itself being a middleman to transactions only because it cannot delegate. If it can delegate work, then it has to give the delegatee enough information so it can do its job. I can’t distinguish that from sharing a token secret.

  6. Hi Eran,

    I have never understood why signatures have been dropped from the draft. Let’s add a signature mechanism that helps to prevent token abuse, e.g. based on token secrets.

    But we can do more. IMHO the risk of token abuse could furthermore be reduced by supporting the issuance of different tokens for different services (least privileges).

    regards,
    Torsten.

  7. What do SAML assertions have to do with OAuth signatures? It’s a different axis altogether.
    But yes, I totally agree on putting signatures back into the core spec!

  8. Though I’m not too familiar with OAuth (still learning it) or cryptography, I’d like to suggest that you (and those supporting your point of view) start creating a specification which extends OAuth 2 with signatures (called e.g. “OAuth 2.0 + Signing x.y”). As OAuth 2.0 is still in the making, you might even be able to include a reference to the signing extension in the current draft (stating that signing is an optional module, or that ‘there are other ways for authorization which can be used in conjunction with OAuth, e.g. [the signature spec], indicated by sending xxx and returning yyy …’).

    Though having signatures optional and in a separate spec will never be as secure as having them specified in the core, and it may create a small overhead (by the clients having to say ‘I support signing’ and the servers ‘I require signing’, and providing other required info), there is a benefit: the signature extension can be developed (and incremented) independently, and other authorization techniques can be added in the future.

    What do you think about that?

      • So why don’t we? Maybe I’m wrong but I didn’t get the sense that the opposition was *that* strong to some kind of optional signature support.

      • It was pretty strong, but masked. No one was outright against signatures, just stuck in a long debate about what they should look like. I want to simply go back to the OAuth 1.0 approach, and let others define fancier methods.

      • Yeah, true. But there won’t be any progress if you’re just fighting whether or not to implement signatures into the core. Just go ahead, create a spec, with the goal of having it implemented into the core – if it’s not implemented, try to reference it as an external module. Then we’ll have at least a spec for signatures, instead of having nothing like this at all.

  9. Signed requests sent in plain text – even those that are short-lived – can be stolen and reused in a man-in-the-middle attack within the lifespan of the token. So, to me, any request has to be HTTPS for rock-solid security anyway.

    However, I agree 100% about this: “As soon as we try to introduce discovery or interoperable APIs across services, OAuth 2.0 fails. Because it lacks cryptographic protection of the tokens (there are no token secrets), the client has to figure out where it is safe to send tokens. … Any solution based solely on a policy enforced by the client is doomed.”

    One day I would like to manipulate the Activity Streams generated by my institution’s internal systems’ APIs in popular third party mobile clients like Tweetdeck. If Tweetdeck can be fooled into sending bearer tokens for my API to bad sites, I can’t guarantee the security of my APIs.

    Let’s have signed requests (over HTTPS) back as an option at least…

    • With a nonce, a MITM can only pass along the request once anyway. Signatures don’t offer any secrecy, but the damage from a captured signed request is little to none. Of course, any payload has to be signed as well.

      • There are many workarounds for dealing with nonces and what they are meant to prevent. For example, there is little reason to check the nonce part on read-only requests (which in many cases are the vast majority). Since OAuth doesn’t provide privacy at all, allowing an attacker to read data can be prevented using HTTPS if you really need privacy. You can ignore the nonce part and just check the timestamp within a short window (but should provide a way for the client to sync clocks). Or you can ignore it completely if you are not worried about replay. Either way, there is a huge difference between bearer tokens and signed requests, even if you don’t implement nonce checking.

  10. I read the article carefully and I have to admit that given what I believe are reasonable assumptions about how discovery would probably work in OAuth 2.0 I can’t validate the attack described here. I was hoping you could look at the article and let me know if either I misunderstood the attack you are proposing or if my reasoning for why the attack won’t work is flawed. The article is available at http://www.goland.org/bearer-tokens-discovery-and-oauth-2-0/.

    Thanks,
    Yaron

    • Nope. You got it just right. The problem with your solution is its complexity – though we can argue what is more complex, signing a request or sending the audience information all around. Your approach requires that the client go back to the authorization server on every new protected resource request, unless you make it more complex by adding rules and policy information. In the world of simple web services, it would be much better if discovery worked securely without putting any burden on the client. Your solution is to tell the client to go ask the authorization server every time.

      Also, keep in mind that my issue is not limited to discovery, but to web security as a whole.

  11. Great writeup. I’d like to point out that even before discovery is common on the web, the lack of signatures introduces security vulnerabilities. In an attack where a malicious DNS server is used on a local network to impersonate a legitimate website, the RP will send its users’ tokens to the malicious party, especially since many web service client implementations have certificate verification disabled.

  12. Very interesting and I especially find the bits about sending your token to the wrong endpoint to be valid in a world where we are using many different tokens and endpoints.

    I also think that encrypted tokens, while orthogonal to the OAuth spec itself, can solve issues related to storing the signing secret (it can be encoded inside the token itself). If the endpoint can decrypt the token, then it can validate the signature without having to maintain additional state.

  13. One more comment here. I’ve implemented the OAuth 2.0 v10 spec in Java as a library meant to be deployed as a service, as an exercise in vetting this out. The blog entry really got me thinking about the lack of signatures. After pondering it for a bit, I’m not as worried about the fact that the flows for engaging with the authentication endpoint and requesting the access_token use HTTPS and do not require a signature.

    What I am very worried about is the fact that when the access_token is granted, the spec does not specify that a signing secret may optionally be generated. This means that once an access_token is granted, it must be used over HTTPS as well; there is no provision for using signing over HTTP. Why not simply add a signing secret to the response that grants the access_token? Then a service is free to decide whether its interfaces want a signed request or not.

    I was just discussing use cases with a colleague and

  14. In my mind, OAuth 1.0 went too far. So many developers struggled to get it working. While it (1.0) is technically correct and secure, it is complex enough to be somewhat unpragmatic. I guess that is why 2.0 exists. It seems that 2.0 has swung back in the other direction. It appears to have given up too much.

    There is something to be said for just using a simple access token. In many cases this is fine. However, there should be the option to kick this up a notch. It is unfortunate that this is not the case.

  15. Why not combine bearer tokens with client certificates? A bearer token could be encoded and signed such that it relates to the client certificate of the application which requested it. When the application makes the request to the server, it does so using SSL mutual authentication and the server can check that the token was signed for the client certificate of the caller, reducing the opportunity for a man-in-the-middle attack since a malicious server will not have the private key for the original client certificate.

    BTW, I do think that signatures should be an optional part of the standard. However, having had experience of teams both a) implementing OAuth 1.0 signatures and b) implementing a scheme similar to the one above, b) was significantly easier to get working and to get right (from a security perspective).
