OExchange is a newly introduced protocol stack that allows users to share URL-based content with any service on the web. It covers posting links to social networks as well as sending content to things like online translation and printing services.
The protocol — driven by the folks at Clearspring (where I work) with the support of a long list of online services — builds on several existing open web specifications. It is backed by an open development list, tools for developers, and lots of additional resources.
Metalink is an XML format for describing downloads. Publishers pack information about a download, such as mirrors and checksums, into a Metalink XML file to overcome many common download problems like a server going down or file corruption. Other useful information can be included as well.
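As a rough illustration, here is a minimal sketch of what a Metalink (version 4, RFC 5854) file might look like; the filename, mirror hosts, size, and hash value below are all placeholders, not real data:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<metalink xmlns="urn:ietf:params:xml:ns:metalink">
  <file name="example.iso">
    <!-- placeholder size and checksum; a real file would carry actual values -->
    <size>1073741824</size>
    <hash type="sha-256">0000000000000000000000000000000000000000000000000000000000000000</hash>
    <!-- multiple mirrors let a client fail over or download in parallel -->
    <url priority="1">http://mirror1.example.com/example.iso</url>
    <url priority="2">http://mirror2.example.com/example.iso</url>
  </file>
</metalink>
```

A client that understands this format can switch mirrors when one goes down and verify the finished download against the checksum.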
Metalink/HTTP, or mirrors & hashes in HTTP Headers, is another way currently being developed to improve the download situation. It relies on Web Linking (recently approved for RFC publication as a proposed standard) for mirrors and Instance Digests (RFC 3230) for cryptographic hashes. Continue reading
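To sketch the idea, a server response might carry mirrors and hashes in headers roughly like the following; the hosts and the (truncated) digest value are placeholders, with the mirror expressed as a Web Linking `Link` header and the checksum as an RFC 3230 `Digest` header:

```http
HTTP/1.1 200 OK
Link: <http://mirror2.example.com/example.iso>; rel=duplicate
Link: <http://example.com/example.iso.meta4>; rel=describedby;
      type="application/metalink4+xml"
Digest: SHA-256=placeholderBase64Value...
Content-Length: 1073741824
```

Because the information rides along in headers, an aware client gets mirrors and integrity checks without the publisher having to host a separate metadata file.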
This post was written as a collaboration between Chris Messina (Google), Dick Hardt (Microsoft), David Recordon (Facebook), and me, and was originally published on O’Reilly Radar.
The OAuth protocol and community have seen a lot of changes over the past couple of years. With the recent introduction of WRAP, the IETF working group, and discussions about OAuth 2.0, many developers were left confused as to what was going on.
The OAuth protocol enables users to provide third-party access to their web resources without sharing their passwords; kind of like a valet key for the web. To date, OAuth 1.0a is the most successful such protocol deployed on the web. The origins of OAuth date back to late 2006, when a small group of web engineers, tired of reinventing the API authorization wheel, came together to find a common, open solution. Continue reading
The majority of the time, downloads just work. Most downloads are relatively small files. But, when files are larger, you are more likely to encounter errors. Errors with large downloads can be frustrating and a waste of time. That’s even more true for areas with unreliable Internet connections, such as developing parts of the world. Continue reading
One of the criticisms leveled against OpenID is that it uses HTTP URLs to identify users, and that users don’t understand URLs-as-identifiers (especially those who are not very technical). User Experience research confirms this.
Email addresses, on the other hand, are widely understood to identify people. So why can’t we use email addresses as OpenIDs? The reason is that, in OpenID 2.0, the way you perform discovery on an OpenID makes it necessary for that OpenID to be an HTTP URL: discovery information is obtained either from an X-XRDS-Location HTTP header sent by the server when the OpenID (an HTTP URL) is accessed, or by parsing the document returned when that URL is accessed. Continue reading
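The two discovery paths above can be sketched in a few lines of Python. This is only an illustration of the lookup order, not a full OpenID 2.0 discovery implementation; the function and class names are my own, and network fetching is left out so the logic operates on an already-retrieved header map and body:

```python
from html.parser import HTMLParser


class XRDSMetaParser(HTMLParser):
    """Collects an X-XRDS-Location value from a <meta http-equiv> tag."""

    def __init__(self):
        super().__init__()
        self.location = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("http-equiv", "").lower() == "x-xrds-location":
            self.location = a.get("content")


def find_xrds_location(headers, body):
    """Return the XRDS descriptor URL for an OpenID URL, or None.

    Mirrors the two OpenID 2.0 discovery paths described above:
    1. an X-XRDS-Location HTTP response header, or
    2. an equivalent <meta> tag in the returned HTML document.
    """
    # HTTP header names are case-insensitive.
    for name, value in headers.items():
        if name.lower() == "x-xrds-location":
            return value
    parser = XRDSMetaParser()
    parser.feed(body)
    return parser.location
```

Both branches presuppose that the identifier can be fetched over HTTP, which is exactly why a bare email address cannot serve as an OpenID under the 2.0 discovery rules.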
In my last article I wrote about the differences between user discovery and provider discovery. In this article, I will explain how both of these discovery flows can easily be done using the “Link-based Resource Descriptor Discovery” (LRDD) pattern. A resource descriptor is, for the purposes of OpenID discovery, an XRD document describing metadata about a resource, which can either be a user’s OpenID (i.e., identified by a user identifier) or an OpenID provider (i.e., identified by a provider identifier).
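For a rough sense of what such a descriptor looks like, here is a sketch of an XRD document for a user identifier; the subject URL, endpoint, and link relation value are illustrative placeholders, not taken from any real deployment:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">
  <!-- the resource this descriptor is about: the user's OpenID -->
  <Subject>http://user.example.com/</Subject>
  <!-- a typed link pointing the relying party at the provider endpoint -->
  <Link rel="http://specs.openid.net/auth/2.0/signon"
        href="https://op.example.com/auth" />
</XRD>
```

The same document shape works for a provider identifier; only the subject and the link relation change, which is what makes LRDD a single pattern covering both flows.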
Today, there are two discovery flows in OpenID: Directed Identity and Claimed Identity. Those are not the names the spec uses; in fact, the two flows are not separated in the spec at all (more about this below).
In the Claimed Identity flow, the relying party (RP) knows the user’s identifier (i.e., his or her OpenID URI), and attempts to figure out the OpenID provider (OP) for that user (i.e., the web site that will authenticate the user to the RP).
In the Directed Identity flow, the RP doesn’t know the user’s OpenID, but still figures out the user’s OP endpoint.
OpenID has been around for a while, but has for most of its life been a niche technology. This is not too surprising since it was originally designed for authenticating bloggers wanting to comment on other bloggers’ blogs.
More recently, it has been embraced by some big players like MySpace, Yahoo, Google, and Facebook. Even before it was picked up by the industry heavyweights, the OpenID community revved the version to 2.0, anticipating some of the use cases beyond blogs. For example, users were no longer required to know their OpenID URL and enter it at the Relying Party’s (RP) web site. Instead, they could just tell the RP who their OpenID Provider (OP) was, and log into the RP that way.
Still, OpenID 2.0 is in some ways inadequate for today’s requirements.
Over the past two years this blog turned from a chronicle of my startup adventures to a community resource about many of the open specifications and standards I am interested in. Lately I have been completely invested in discovery and the daily operations of the Open Web Foundation. This left no time for the practical application of all this legal and technological infrastructure building.
Over the last few months I have been trying to turn this blog into a technical resource for developers looking for insight into existing and upcoming open specifications such as OAuth, OpenID, LRDD, and many others. So when, last week, an opportunity presented itself to host guest posts about OpenID, a topic I don't get to write about often enough, I got immediately excited. It inspired me to think about this blog in a new way.