I have to admit, I’ve been surprised by the tone and magnitude of the reaction to my announcement. I’ve been following the reactions on Twitter and every follow-up blog post I could find. I’d like to share some follow-up thoughts about this decision and the reactions to it. First, I didn’t say ‘dead’, I said ‘bad’. Continue reading
They say the road to hell is paved with good intentions. Well, that’s OAuth 2.0.
Last month I reached the painful conclusion that I can no longer be associated with the OAuth 2.0 standard. I resigned my role as lead author and editor, withdrew my name from the specification, and left the working group. Removing my name from a document I have painstakingly labored over for three years and over two dozen drafts was not easy. Deciding to move on from an effort I have led for over five years was agonizing. Continue reading
As I’m getting ready to finish work on OAuth 2.0 and add new content to this site, I decided it was time to finish the OAuth 1.0 chapter of this site. I’ve finally cleaned up the OAuth 1.0 guide and other pages. The guide is now updated to reflect RFC 5849 as well as some bug fixes in the scripts used to generate the signature base string tutorial. If you are linking to this site for OAuth resources, please link to the OAuth page.
Why do we require clients to include the redirection URI when exchanging an authorization code for an access token in OAuth 2.0 (section 4.1.3)?
Consider the following scenario:
- Evil user starts the OAuth flow on a legitimate client using the authorization code grant type flow.
- Client redirects the evil user to the authorization server, including state information about the evil user account on the client.
- Evil user takes the authorization endpoint URI and changes the redirection to its evil site.
- Evil user tricks victim user to click on the link and authorize access (using phishing or other social engineering methods).
- Victim user, thinking this is a valid authorization request (it looks kosher), authorizes access. Access is granted to the legitimate client. So far nothing is wrong.
- Authorization server thinks it is sending victim user back to the client, but since the redirection URI was changed, victim user is sent to the evil site.
- Evil user takes the authorization code and gives it back to the client by constructing the original correct redirection URI.
- Client exchanges the code for access token, attaching it to the evil user’s account.
- Evil user can now access the victim user's data via their account on the client.
The attacker does not get direct access to protected resources; instead, they trick the client into attaching the victim's access token to the attacker's account.
Pre-registration of the redirection URI can help a lot, but its effectiveness depends on the matching rules. Since many large providers have open redirectors, the attacker can use those to construct a redirection URI that passes the authorization server's validation.
Requiring clients to register their full redirection URI without allowing any variations or partial matching is highly recommended.
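The two checks described above can be sketched in code. This is a hypothetical, minimal illustration, not a real implementation: the data structures (`registered_clients`, `issued_codes`) and function names are invented for the example, and real token issuance, expiry, and storage are elided.

```python
# Exact registered redirection URI per client -- no variations, no partial
# matching, as recommended above.
registered_clients = {
    "client-123": "https://legit-client.example/oauth/callback",
}

# The redirection URI in effect when each authorization code was issued.
issued_codes = {
    "code-abc": {
        "client_id": "client-123",
        "redirect_uri": "https://legit-client.example/oauth/callback",
    },
}

def validate_authorization_request(client_id, redirect_uri):
    """At the authorization endpoint: accept only the exact pre-registered
    redirection URI, defeating the evil-site substitution in step 3."""
    return registered_clients.get(client_id) == redirect_uri

def exchange_code(client_id, code, redirect_uri):
    """At the token endpoint (section 4.1.3): the client must echo the same
    redirection URI that was used when the code was issued.  If the attacker
    tampered with the redirection, the URIs will not match and the exchange
    fails -- the client never attaches the victim's token to any account."""
    grant = issued_codes.get(code)
    if grant is None or grant["client_id"] != client_id:
        return None
    if grant["redirect_uri"] != redirect_uri:
        return None  # mismatch: code was obtained via a tampered redirection
    return "access-token-xyz"  # token issuance details elided
```

In the attack scenario, the code was issued against the evil site's redirection URI, while the legitimate client exchanges it using its own correct URI; the mismatch causes the token endpoint to refuse the exchange, which is exactly why the specification requires the parameter.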
My last post about the lack of signature support in OAuth 2.0 stirred some very good discussions and showed wide support for including a signature mechanism in OAuth 2.0. The discussions on the working group focused on the best way to include signature support, and a bit on the different approaches to signatures (more on that in another post).
However, the discussion failed to highlight the fundamental problem with supporting bearer tokens at all. In my previous post I suggested that bearer tokens over HTTPS are fine for now. I’d like to take that back and explain why OAuth bearer tokens are a really bad idea. Continue reading
ComputerWorld is the latest to run a scary story about OAuth 2.0 and how insecure it is. Unfortunately, instead of doing their homework and paying attention to my post, they borrowed a bunch of my quotes (almost half the article), added some original nonsense, sprinkled a few errors, and gave it a sensational headline: “OAuth 2.0 security used by Facebook, others called weak”.
OAuth 2.0 without signatures works just fine for companies like Facebook, because their developers hard-code their API endpoints. There is no danger whatsoever of an access token leaking or getting phished (any more than if they used signatures) because Facebook uses HTTPS for their OAuth 2.0 API. This is why I titled one of the subsections: “Why None of this Matters Today”. My post was about discovery and long term security improvements of the web – not that there is anything broken about today’s implementations. Continue reading
OAuth 2.0 drops signatures and cryptography in favor of bearer tokens, similar to how cookies work. As the OAuth 2.0 editor, I’m associated with the OAuth 2.0 protocol more than most, and the assumption is that I agree with the decisions and directions the protocol is taking. While being an editor gives you a certain degree of temporary control over the specification, at the end decisions are made by the group as a whole (as they should).
And as a whole, the OAuth community has made a big mistake about the future direction of the protocol. A mistake that is going to make OAuth 2.0 a much less significant agent of change on the web. Continue reading
Over the past two years I have been arguing that the problem with supporting OAuth 1.0 signatures wasn’t with the signatures, but with the lack of value in trying to figure them out. The primary argument made by the WRAP authors and now the majority of OAuth 2.0 contributors is that signatures are hard and developers are stupid. This combination, they argued, is costing them developers.
To address this, they argued that the only solution is to remove signatures. I countered that instead of creating a new protocol, the companies complaining (primarily, Google, Microsoft, and Yahoo!) should invest in quality libraries and debugging tools.
My point was (and still is) that if you give developers value, they will fight to figure out the signatures. A couple of weeks ago Twitter discontinued their support for Basic authentication, and what these people said cannot happen, happened. All these developers figured out how to migrate their applications to OAuth 1.0. That, despite Twitter's lacking developer support, alleged bugs, and other complaints about Twitter's implementation.
Don’t expect knights to battle dragons if your castle is empty. Twitter put a hot blond (or brunette) princess (or prince) in their castle, and their (API) knights fought the evil (signature) dragon and got their reward. Google and the rest of the big web providers with their useless offering of boring APIs left their castle empty.
Guess what! The kind of knights who come to fight dragons living in an empty castle are there for the fight, not to do something useful. Yes, battling dragons is a bitch, but knights tend to forget that once they get their happy-ever-after. Give them an empty castle and they will do nothing but obsess about their battle scars.
Why does this matter? Because without signatures, there can be no secure discovery and a less open web.
In a wordy article that could have been much shorter and a lot less sensational, Ryan Paul of ArsTechnica throws mud mostly at Twitter, but saves plenty to throw at OAuth. Unfortunately, Ryan Paul (who clearly is a smart guy) is heavy on the accusation but light on the arguments. Typically, I would go over this article an item at a time, but I’m right in the middle of draft 11 of OAuth 2.0 which is a much better use of my time. If you want to read a great rebuttal, Ben Adida’s response (as always) is a great read. Continue reading