OAuth Bearer Tokens are a Terrible Idea

My last post about the lack of signature support in OAuth 2.0 stirred some very good discussions and showed wide support for including a signature mechanism in OAuth 2.0. The discussions on the working group focused on the best way to include signature support, and a bit on the different approaches to signatures (more on that in another post).

However, the discussion failed to highlight the fundamental problem with supporting bearer tokens at all. In my previous post I suggested that bearer tokens over HTTPS are fine for now. I’d like to take that back and explain why OAuth bearer tokens are a really bad idea.

But first, some live entertainment:

Facebook developer ‘wesbos’ writes:

Hey guys, I’m using the freshly downloaded PHP API

I’ve got eveything setup and when I login to authenticate, I get this error:

Fatal error: Uncaught CurlException: 60: SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed thrown in /Users/wesbos/Dropbox/OrangeRhinoMedia/reporting/src/facebook.php  on line 512

To which ‘Telemako’ replies (emphasis is mine):

[…] try this one:

$opts[CURLOPT_SSL_VERIFYPEER] = false;
$opts[CURLOPT_SSL_VERIFYHOST] = 2;

And ‘philfreo’ asks:

Is there any good reason why these shouldn’t be a part of the core default $CURL_OPTS?

Well, philfreo, Telemako, and wesbos, you have just turned off one of the most important protections SSL provides: defense against MITM attacks.
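
For contrast, here is a minimal sketch of the safe configuration (the CA bundle path below is an example and varies by system):

$opts[CURLOPT_SSL_VERIFYPEER] = true; // verify the certificate chain against trusted CAs
$opts[CURLOPT_SSL_VERIFYHOST] = 2;    // verify the certificate matches the requested hostname
// If the default CA bundle is missing or outdated, point curl at one
// explicitly instead of turning verification off:
$opts[CURLOPT_CAINFO] = '/etc/ssl/certs/ca-certificates.crt';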

Putting All Your Eggs in One Basket

The main problem with using OAuth bearer tokens is that they rely solely on SSL/TLS for their security. Bearer tokens have no internal protections. Just like cash, whoever holds the token is its rightful owner, and proving otherwise is impractical. OAuth 1.0 signatures evolved from the basic requirement not to mandate SSL/TLS, but using signatures goes beyond a mere deployment consideration.
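
To make the cash analogy concrete, here is a sketch of a bearer token call (the endpoint is hypothetical, and the Authorization header scheme name varies between draft versions):

// The token is the entire credential. Anyone who captures this one
// request can replay the token at will until it expires or is revoked.
$ch = curl_init('https://api.example.com/me');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: OAuth ' . $access_token));
$response = curl_exec($ch);
curl_close($ch);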

In his blog post “defending against your own stupidity”, Ben Adida (a highly respected security expert) writes:

Consider the utility of a safety parachute. A determined attacker trying to kill you will obviously sabotage the safety parachute just as easily as he can sabotage the primary one. So, does that mean you might as well jump without a safety parachute? Of course not. You want to take into account not just the worst-case attacker, you want to take into account your own stupidity. A safety parachute means that, if you packed your primary wrong, you can still live. Defense in depth, as it’s more commonly known in the security community, is usually not about building the 12 layers of security around the “Die Hard” vault that a skilled attacker has to vanquish, one by one. Defense in depth is the humble realization that, of all the security measures you implement, a few will fail because of your own stupidity. It’s good to have a few backups, just in case.

No matter what you do, if for any reason the SSL/TLS layer is broken, the entire security architecture of OAuth bearer tokens falls apart. In a world moving steadily towards multiple means of authentication and verification, this is a step backwards.

SSL/TLS is Harder than You Think

It is common knowledge among security experts that implementing SSL/TLS correctly is notoriously hard. This is why very few people attempt to write their own libraries. However, it is equally well established that many clients deploy SSL/TLS in ways that void its most valuable defense: protection against a man-in-the-middle. If anyone can get between your client and the server, they can capture the token and use it at will.

The example above shows one of two common ways in which SSL/TLS security is compromised. I would love to mock the developers quoted, but that was a mistake I made myself three years ago when writing my own OpenID library. I had no problem getting the signatures done, but had a hard time with bad certificate errors. Some were due to local configuration (getting root certificates configured is still a hard thing for most developers to do), but others were due to poorly configured servers. My solution, just like the suggestion above, was to ignore these errors.

The problem with SSL/TLS is that when you fail to verify the certificate on the client side, the connection still works. Any time ignoring an error leads to success, developers are going to do just that. The server has no way of enforcing certificate verification on the client, and an attacker in the middle certainly isn't going to enforce it for you.
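
If a client cannot fix a certificate problem properly, the least it can do is fail loudly. A minimal sketch of the idea:

$result = curl_exec($ch);
if ($result === false) {
    // libcurl reports failed verification as CURLE_SSL_CACERT (60) or
    // CURLE_PEER_FAILED_VERIFICATION (51). Treat these as fatal instead
    // of retrying with CURLOPT_SSL_VERIFYPEER set to false.
    throw new Exception('TLS error ' . curl_errno($ch) . ': ' . curl_error($ch));
}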

This is not new. Ben Adida described this exact issue when reviewing the original WRAP proposal:

But wait, you say, don’t worry, the token is sent over SSL, so it’s all good.

Right. What’s going to happen when someone “forgets to turn on SSL”, which is all too common when security is abstracted out “somewhere down in the stack.” Or when we stop dealing with those pesky certificate errors and just choose not to validate the cert, which will leave the protocol wide open to network attackers who can now literally play man-in-the-middle just by spoofing DNS on a wifi network, capturing the token, and replaying it to access all sorts of additional resources, effectively stealing the user’s credentials.

This might actually be worse than passwords, because at least you can work to educate users about SSL (and after their Facebook account gets hacked, they might actually care), but it’s very hard for users to gauge whether web applications are doing the right thing with respect to SSL certs when the SSL calls are all made by the backend which has trouble surfacing certificate errors.

The second common problem is typos. Would you consider it proper design when omitting a single character (the ‘s’ in ‘https’) voids the entire security of the token? Or when sending the request (over a valid and verified SSL/TLS connection) to the wrong destination (say ‘https://gacebook.com’) does the same? Remember, being able to use OAuth bearer tokens from the command line was clearly a use case bearer token advocates promoted.
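
A client library can at least guard against the missing ‘s’. This is a hypothetical helper, not part of any existing SDK:

function assert_https($url) {
    // Refuse to attach a bearer token to anything but an https URL, so
    // a one-character typo fails loudly instead of leaking the token.
    if (strtolower((string) parse_url($url, PHP_URL_SCHEME)) !== 'https') {
        throw new InvalidArgumentException("Refusing to send a token to: $url");
    }
}

Note that it does nothing for the wrong-destination case, which is exactly the point: with bearer tokens there is no second layer once a check like this is missed.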

Who is in Charge of Your Service Security?

The main difference between using bearer tokens and signatures is in who they put in charge of enforcing the service's security. Signatures ensure that mistakes don't compromise the token, because the secret is never sent with the request. At most, an attacker can use the captured request once. For enhanced security, signatures can and should be used together with SSL/TLS for multi-layer protection.
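
To see why the secret never appears on the wire, here is a simplified sketch of OAuth 1.0a HMAC-SHA1 signing per RFC 5849 (real libraries also handle nonces, timestamps, URL normalization, and duplicate parameter names):

function oauth_hmac_sha1($method, $url, array $params, $client_secret, $token_secret) {
    // Normalize the parameters: percent-encode each pair, then sort.
    $pairs = array();
    foreach ($params as $name => $value) {
        $pairs[] = rawurlencode($name) . '=' . rawurlencode($value);
    }
    sort($pairs, SORT_STRING);

    // Signature base string: METHOD & URL & normalized parameters.
    $base = strtoupper($method) . '&' . rawurlencode($url) . '&' .
            rawurlencode(implode('&', $pairs));

    // The secrets are only used as the HMAC key; the request carries
    // the resulting signature, never the secrets themselves.
    $key = rawurlencode($client_secret) . '&' . rawurlencode($token_secret);
    return base64_encode(hash_hmac('sha1', $base, $key, true));
}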

Yes, developers do stupid things, like posting their client credentials (API keys) on public lists and in open source code. But I have yet to see developers expose token secrets like that. It is really hard to expose OAuth 1.0a token secrets by mistake (unless that mistake is leaving your entire database open for public review, in which case, nothing will help you).

Bearer Tokens are not Just Like Cookies

The winning argument in favor of using bearer tokens has always been cookies. Bearer tokens have the same security properties as cookie authentication: both use plaintext strings without secrets or signatures. However, there is one big difference – the client developer.

For many years, browsers made it insanely easy to ignore bad certificates. It took a lot of time and many attacks to make it hard and scary to approve bad certificate exceptions. Client developers are usually not at the same level as browser security developers. While a JavaScript OAuth client enjoys the same quality code that the browser uses to verify certificates, other clients do not.

The Ironic Solution

We know developers don’t read security considerations sections. That’s a fact of life. Developers read only the parts they need to implement and ignore the parts that ask them to think. Only large companies with dedicated security teams pay much attention to those sections, and they usually do their own analysis. So warning developers to check the certificate is not likely to accomplish much.

What’s the alternative? SDKs!

If your developers cannot be trusted to implement your API correctly, it is common to offer them a library instead. I hope you can see the irony here. If you solve the SSL/TLS deployment issue with an SDK, why not just implement signatures? One argument used against signatures was that even with libraries, people will try to write their own code. Well, that’s a scary proposition.

Security is Hard

Ben Laurie, one of Google’s top security experts and the OpenSSL project lead, writes in his explanation of why WRAP “is just so obviously a bad idea, it’s difficult to know where to start”:

The idea that security protocols should be so simple that anyone can implement them is attractive, but as we’ve seen, wrong. But does the difficulty of implementing them mean they can’t be used? Of course not […] every crypto algorithm under the sun is hard to implement, but there’s no shortage of libraries for them, either.

As I have pointed out before, the ‘signatures are hard’ argument dies with Twitter’s recent deployment of OAuth signatures as a requirement.

But Everyone Else is Doing it

My favorite argument in favor of bearer tokens, and the most ridiculous one, is that Facebook, Google, Microsoft, and Yahoo! are all doing it, so it must be good. After all, they must have reviewed it and decided it was safe.

The other variation of this argument is that bearer tokens are already widely deployed, so we need to support them. All I have to say is to quote the obvious: “if your friend jumped off a roof, would you?”

Facebook is using MD5 in their current API, probably because that is what Flickr used when Facebook copied their API authentication scheme. See the pattern? New services simply follow what established companies are already doing, without questioning the reasoning behind it. Big companies have the resources to make these decisions, but their conclusions rarely apply as-is to others, and it is simply irresponsible of them to ignore their leadership position.
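
That legacy scheme looks roughly like this (reconstructed from memory; treat the exact canonicalization as an assumption):

// Flickr-style MD5 "signature": concatenate the sorted parameters and
// append the shared secret, then hash. No HMAC, no timestamp, no expiry.
ksort($params);
$payload = '';
foreach ($params as $name => $value) {
    $payload .= $name . '=' . $value;
}
$sig = md5($payload . $app_secret);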

This is how we got stuck with Basic authentication and cookie logins. This is what RFC 2617 has to say about Basic HTTP authentication:

The Basic authentication scheme is not a secure method of user authentication, nor does it in any way protect the entity, which is transmitted in cleartext across the physical network used as the carrier.

And yet, it is still one of the most widely deployed authentication schemes on the web. People don’t read specifications; they just follow the market.
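
The RFC’s warning is easy to verify: Basic credentials are reversible encoding, not encryption.

// Building the header:
$header = 'Authorization: Basic ' . base64_encode($username . ':' . $password);

// Anyone who sees it recovers the password in one call:
list($username, $password) = explode(':', base64_decode($encoded), 2);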

Actually, Not Everyone is Doing it

OAuth 1.0 has had bearer token support alongside signatures for three years now, and yet it is barely used. Twitter could have deployed OAuth 1.0 as specified in RFC 5849 section 3.4.4 but chose not to. The claim that bearer tokens are a new feature is false. All OAuth 2.0 does is clean them up and present them in a more accessible way.
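
For reference, the PLAINTEXT method in RFC 5849 section 3.4.4 is what makes OAuth 1.0 a bearer scheme: the ‘signature’ is just the secrets themselves, percent-encoded and concatenated.

// PLAINTEXT per RFC 5849 section 3.4.4: no hashing at all, which is why
// it has the same properties as a bearer token and must rely on TLS.
$oauth_signature = rawurlencode($client_secret) . '&' . rawurlencode($token_secret);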

The problem is both the specification and the market hype. By positioning bearer tokens as the default choice for 2.0 and highlighting the big vendors deploying them, the specification makes developers less likely to think for themselves, especially given the allergic reaction many developers have to signatures.

I still have hope that early and future adopters of OAuth 2.0 will make the right decision and not deploy bearer token support for their core APIs. I have never suggested banning bearer tokens outright. They provide a useful way to experiment with an API, to try out ideas, to test code, and to make API calls that have little to no security requirements. For example, it is perfectly fine to use bearer tokens for any Twitter API call that returns user-specific data that is publicly available (like your followers list). Not so fine to use them to update your status or read private messages.

Now What?

I have concluded that the OAuth working group is not a productive place to have this debate. People are much too entrenched in their positions, and with existing live deployments, a committee is not the way to reach consensus. Instead, I believe that an open public debate and market pressure are the best way to go. This is where I hope others will join me in asking their service providers to take their security more seriously.

With regard to the specification, I have proposed (and so far received wide support) to make an editorial change that will separate the ‘how to get a token’ part from the ‘how to use a token’ part. This is not a new idea. By doing so, we will enhance the modularity of the protocol and remove any bias for or against bearer tokens and signatures from the specification. Instead, developers will be presented with two or more alternatives, and will have to make their own decision about what is right for their platform. This proposal is still pending working group consensus.

I am sure this will not be the last word on the subject.

13 thoughts on “OAuth Bearer Tokens are a Terrible Idea”

  1. Hi Eran– As you know, I support treating message-signing as a “first-class citizen” semantic in the spec. And your analysis about market pressure being the way to go seems right to me. But I don’t get splitting getting-a-token and using-a-token as any kind of a solution. The two steps are inexorably tied together as the purpose of OAuth in the first place, and I can see important pieces falling through the cracks, as it were. (E.g., the security of the whole token-passing chain needs to be considered.) In addition, any use cases where the authorization server and resource server meet dynamically or are otherwise likely to be in separate administrative domains (as is the case in UMA) may want or need request-signing in the “getting” phase, not just the “using” phase.

    • I agree that a single comprehensive specification is the ideal solution, but unfortunately the working group is incapable of reaching such an objective at this time. While splitting the specification is more in line with ignoring the problem than fixing it, I believe that ignoring the parts each side doesn’t like is the only possible course at this point. I am in no rush to conclude this work, but the market clearly is.

      • I guess my point was that dividing the spec in this way isn’t just ignoring a problem in the hope of getting consensus; it literally doesn’t seem to solve anything. We should be prepared for clients to sign their communications with both authz servers (“getting”) and resource servers (“using”). So what do we achieve by splitting the spec in half? There’s modularity and then there’s fragmentation…

  2. I’m an engineer on the Facebook Platform and I think you’ve mischaracterized the risk to the majority of devs. I think in general use, bearer tokens over SSL are easier to use and generally more secure than the previous system that relies on developers to create and check signatures.

    The main objection is that some developers won’t check certificates because they don’t understand what’s happening. First, any API call made from Javascript will benefit automatically from browser protection. And the vast majority of server side developers will use a library that checks certs by default. Even in your examples, the base curl libraries check certs, and developers have to go out of their way to break it. I think that’s a generally good model for a security protocol.

    I think it’s the responsibility of the service providers like Facebook to offer good SDKs and documentation that encourage developers to do the right thing, and we continue to work on education, but in general I think that the SSL approach + bearer tokens is still better for the common case.

    • Easier? Sure. But how is it more secure than 1.0a?

      Don’t forget that if you worry about the limited MITM attacks possible with 1.0a signatures, or if you need privacy, you can and should use signatures with SSL/TLS, as the OAuth 1.0 specification clearly suggests. SSL/TLS and signatures are orthogonal means of protection. Your argument about using libraries and SDKs is exactly why we can continue to rely on signatures.

      Those are not my examples, but those of actual Facebook developers. And I have already been contacted by 4 people who told me this is exactly how they solved similar certificate errors. The problem is not that tools like CURL don’t surface the errors, but that ignoring them does not result in failure. The protocol seems to work just fine and there is no other indication that something else is broken.

      My point is: bearer tokens security falls apart with just a single mistake. One stupid thing and the entire token security is gone.

      I think the responsibility of service providers like Facebook is to promote better web security and help move the web forward, not backwards. The only good thing you can say about bearer tokens is that they are trivial to implement on the client side. Beyond that, it’s all downhill.

      • My organization is looking to implement OAuth as we expand our API beyond public data, and we’re very interested in helping to see the 2.0 spec shored up this year as we start on our project. Along this thread of interest, I want to make sure I understand what you mean by bearer tokens since this term is only mentioned once in the v10 draft.

        As I understand it, bearer tokens are equivalent to access tokens. That is, there is no proposal to change the way access tokens are received (getting-a-token), only the manner in which tokens (be they called access or bearer tokens) are exchanged for data access (using-a-token). When we say we want to introduce signatures into the process for greater security, we’re only talking about signing requests for getting-a-token, not using-a-token, correct? (I’m reusing Eve’s terms to try to keep things clear/consistent).

        Furthermore, I understand this to mean that signatures are only going to be valid for a flow where clients use secrets. In v10 of the spec, this would concern only the Web Server flow, not the User-Agent flow or the Desktop flow (which is built upon the User-Agent flow as well). That is, if an application received its access token via a User-Agent flow, it would not have used a client ID nor a client secret because it’s not able to keep these things secret. Therefore, any requests using this access token in the future can’t be signed as there are no client secrets involved. Thus, we should be able to say, rather definitively, that User-Agent and Desktop applications won’t benefit from the addition of signatures. If that’s true, it helps clarify the scope of the debate over signatures to just for use by Web Servers (and possibly Autonomous clients).

        If the above is a correct understanding of the situation, I’m going to step back and actually contradict one of my previous statements – that User-Agents won’t benefit from signatures. Actually, they can gain, perhaps, some benefit if a new element were introduced – the access secret. This is almost like a request token in that it’s only issued during initial authentication and the UA is required to sign each access request with the access secret when using the token. This provides integrity for the token over the lifetime of the token, but it doesn’t prevent use of the token by an untrusted or anonymous client.

        If this new type of signatures is actually what you are referring to, perhaps splitting the specs is helpful as it restricts the discussion of tokens and signing and such to specific parts of the flow. If however, you are speaking of 1.0-type signatures (where the same client secrets signs both getting and using requests) then keeping specs together makes more sense.

        Btw, I actually advocate having signatures in the spec, especially for Web Servers. I just want to be clear on the scope of this discussion. OAuth 1.0 didn’t go far enough in helping distinguish the different levels of security provided by different types of clients (likely because it focused on one flow). 2.0 seems not to be doing much better at the moment as all flows are lumped together then left w/ line-item exceptions for the user to determine what level of security is actually possible in each flow. If the debate is only about increasing security in the Web flow – let’s call that out clearly.

  3. Eran,

    Thanks for the enlightening articles. I’m not sure if you addressed this already, but something that bothers me about OAUTH 2.0 is that once my application receives a bearer token for a user, I can make client side api calls with that token using jsonp. For example:

    https://graph.facebook.com/100000210709794/friends?format=jsonp&access_token=131575316854920|c0111b0d……

    The access token is now exposed in my client side code. Anyone can view source, steal the access token, and make api calls on my application’s and the user’s behalf. Further, when using Facebook’s graph api, bearer tokens can be long lived.

    Is this an additional concern? Did you already address this?

    I assume in most cases, an application will only expose access tokens for the current user in client side code, but, it’s not unthinkable that an application would want to display some Facebook information about one user, to another user.

    Thank you for your help.
    Sarah
    .NET Software Engineer

    • I am not sure I understand. The token isn’t part of the source code, but is obtained by running the client application. How can anyone other than the local application executing see the token?

      • Only the local application can see the bearer token, but with Facebook offline_access permissions, I thought it would be possible for an application to use a persisted long-lived token in client script for a Facebook account that belongs to someone other than the current user. In this case, the current user could access the other user’s data using the exposed token.

        I guess this isn’t actually something I need to worry about.
        Thank you again!

      • Eran, if the site is compromised at the user’s own web browser or at the site level (session hijacking, sidejacking, JavaScript injection, etc.) in any way, it is most likely possible to grab the access_token from the HTML source or JavaScript memory. If user A is able to insert bad JavaScript into a comment section, they can pick up the tokens for everyone else if the tokens are included in the HTML or JavaScript.

        I’m a huge fan of allowing signed requests as an optional add-on. We’ve already seen sites announcing on the OAuth2 list that they are doing this, either using OAuth 1.0a or some custom version… An optional ‘standard’ that can be required at the discretion of the service provider would be fantastic, rather than going down this fragmented route.

        And for ease-of use, if a company is really concerned they are driving away API developers, I think your argument that there is probably not a lot of interest in your API to begin with is a good one. But also, a company can say that for a dev account, you can use a bearer token, which does make testing the API a *lot* easier, but for production, we require another layer of security.

        I’m eager to see draft 11 with signature support. If there is another way to provide support rather than posting a comment on a blog, please contact me.

        regards.

  4. Hi Eran, a quick question to clarify what I think I’ve learned here (hopefully I’m not missing an obvious point somewhere, forgive my ignorance): You say “the main problem with using OAuth bearer tokens is that they rely solely on SSL/TLS for their security”, and the solution is to use an SDK that includes checking for signatures, in which case MITM attacks are avoided even if SSL is broken or not implemented at all.

    If this is the case, then aren’t you also arguing against other forms of authentication over SSL, such as users entering a username and password on a website’s front page, and not just bearer tokens?

    The reason for my question: I considered abandoning bearer tokens for authenticating remote applications (“slaves” in our case) in favour of 2-legged OAuth, but even that is susceptible to MITM without an SDK with signing. In the end it seems kind of trivial – username/password, bearer token, or other – all have a ‘secret’ that gets sent and can be broken when SSL is broken AND messages are not signed.
