On 11 and 12 February, the IETF’s Applications Area held an architecture workshop, and I promised to post a summary of it here. This will probably interest about 1% of my readers, but it’s also a way for me to put it down for my own reference. I’ll try to find something more stimulating for tomorrow.
I’m going to put the 1% stuff below the click. My apologies to those who read the feed; I don’t know how to get Blogger not to put the whole thing in the feed. So, honk if you’re a geek...
The goal of the workshop, broadly, was to look at possible new work in the Applications Area. We were particularly interested in what new ideas participants have, what they’d like to work on, what they think is missing from the standards right now, and that sort of thing.
We started with about 20 participants and a plenary session to bat the ideas around and to come up with a set of topics for breakout sessions. After each breakout session, we came back into plenary and had brief — well, sometimes not so brief, as the discussion developed — reviews of the breakout work. That there’s a lot of material related to HTTP is partly due to the recent formation of the httpbis working group, aimed at revisiting the HTTP standard and advancing it along the standards track.
We ended up with the following set of breakout topics, here in no particular order:
- Use of HTTP for more than just web browsing. This group looked into the question of updating BCP 56 (RFC 3205), which gives advice on using HTTP. They started to sketch out what such an update would address, and one of the participants has agreed to work on a draft document.
- Protocol layering, looking at how protocols at the applications layer interact with those at the transport and network layers. The group subdivided the applications layer, conceptually, into sub-layers for semantics and protocol, and noted that the protocol sub-layer really needs some knowledge of lower layers, without considering it a layer violation.
- Localization and identity, considering aspects of protocols that need to change with different locales and different user identities. This also looked at discovery of local services. And it blended somewhat into internationalization.
- Synchronization: although there’s years of work behind this and there are many solutions here, we still have real user problems. Consider the personal address book: it’s just one person’s data, yet it can’t get synchronized properly between phones, cars, and multiple laptops. The group looked at separating semantics and conflict resolution from the transportation of changes and metadata, and they discussed what metadata are required for synchronization.
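The separation the group discussed can be sketched in a few lines. This is only my own illustration: changes travel as plain records with a minimal metadata set (a timestamp and a source device, which are my guess at what’s needed, not anything the group specified), and conflict resolution is a pluggable policy applied at the receiving end, independent of whatever protocol carried the changes.

```python
import time

def make_change(key, value, source, ts=None):
    """Package one edit with its sync metadata (timestamp, source device)."""
    return {"key": key, "value": value,
            "source": source, "ts": ts if ts is not None else time.time()}

def last_writer_wins(existing, incoming):
    """One possible conflict-resolution policy among many."""
    return incoming if incoming["ts"] >= existing["ts"] else existing

def apply_changes(store, changes, resolve=last_writer_wins):
    """Transport-agnostic: 'changes' could have arrived over any protocol."""
    for change in changes:
        current = store.get(change["key"])
        store[change["key"]] = change if current is None else resolve(current, change)
    return store

# Phone and laptop both edit the same address-book entry:
store = {}
apply_changes(store, [make_change("alice/phone", "555-0100", "laptop", ts=10)])
apply_changes(store, [make_change("alice/phone", "555-0199", "phone", ts=20)])
print(store["alice/phone"]["value"])  # the later edit wins
```

Swapping `last_writer_wins` for a different resolver changes the semantics without touching how changes are shipped — which is the point of the separation.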
- XML schemas. The basic issue here is that XML itself doesn’t create interoperability. It needs a schema to lay semantics on top of the XML. I didn’t participate in this discussion, and there isn’t much in the way of workshop notes from it.
- URI templates, standardizing a mechanism for plugging functional templates into URIs. For example, each search engine has its own URI that will initiate a search, with the common aspect that the search terms are substituted into the identifier somewhere. This group looked at how to standardize that concept to make it easier for different services to be swapped in and out.
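To make the idea concrete, here is a toy substitution function — my own sketch, not any proposed syntax; the real standardization work has to pin down escaping and operators far more carefully. The two template strings are hypothetical services:

```python
from urllib.parse import quote

def expand(template, values):
    """Substitute {name} placeholders in a URI template, percent-encoding values."""
    result = template
    for name, value in values.items():
        result = result.replace("{" + name + "}", quote(str(value), safe=""))
    return result

# Two hypothetical search services, swapped in and out by changing
# only the template, never the calling code:
engines = {
    "alpha": "http://search.example.com/find?q={terms}",
    "beta":  "http://example.org/search/{terms}",
}

for name, tpl in engines.items():
    print(name, expand(tpl, {"terms": "ietf apps workshop"}))
```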
- HTTP authentication, looking at where to go with it to try to improve on the current state. The main point was to consider how to develop an HTTP authentication protocol that
- reduces the opportunity for phishing and
- is likely to be deployed (no small order, this).
- Develop an authorization framework such that:
- (a) No identifier/password is transmitted, even over TLS.
- (b) Using (a), eliminate the need for a user to ever enter a password in an HTML form. Users would be taught to enter passwords only in a password-entry widget, not in the web page itself. This would be a partial solution, making it easier to keep users from giving the bad guys their passwords.
- (c) Use an authentication mechanism (such as a public key method) that makes a password useful only locally (the password is to the local keychain, for instance, which gives access to the private key). Exposure of the password to the bad guys no longer matters, even if they convince the user to reveal it. This is a more robust solution.
- Firefox is a good starting point. Make a Firefox extension prototype a prereq for real work here. The Firefox extension can “discourage” entry of passwords into HTML forms, to enhance the effectiveness of (b), above.
- Get input about requirements, from “real” phishing targets. To that end:
- Engage the Anti-Phishing Working Group (APWG).
- Look for an HTTP authentication workshop resulting from the “bar BOF” at IETF 70.
- A participant noted that talking to banks and such is useless, and suggested engaging Congress (a senior staffer to the chair of the House Banking Committee, for instance).
- We discussed the idea of “directed identities” built into the browser — the browser would strongly discourage the user from using an identity directed to, say, boa.com on a bogus site such as b0a.com or boa-login.com.
- We discussed the idea of extremely obvious and non-spoofable UI indications, such as colour cues and other OK/warning mechanisms. We noted that anything that will work has to be very hard to miss or misunderstand. We noted also that most have some sort of accessibility issues (colour cues don’t work for the colour-blind, for example).
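The “password useful only locally” idea above can be sketched in a few lines. Everything here is invented for illustration: an HMAC over a server challenge stands in for a real public-key signature, and I assume a per-site secret was stored on both sides at enrollment. The point the sketch shows is the message flow — the password only unlocks the local keychain and never goes to the server, so phishing it gains an attacker nothing without the keychain itself.

```python
import hashlib
import hmac
import os

class LocalKeychain:
    """Toy keychain: the password gates access to per-site secrets."""

    def __init__(self, password):
        self._password = password
        self._keys = {}          # site -> per-site secret

    def enroll(self, site):
        key = os.urandom(32)     # random per-site key, unrelated to the password
        self._keys[site] = key
        return key               # the server stores this at enrollment time

    def sign(self, password, site, challenge):
        if password != self._password:
            raise PermissionError("wrong local password")
        # Stand-in for signing with a private key:
        return hmac.new(self._keys[site], challenge, hashlib.sha256).digest()

# Server side: issue a fresh challenge, verify the response.
keychain = LocalKeychain("hunter2")
server_key = keychain.enroll("bank.example.com")
challenge = os.urandom(16)
response = keychain.sign("hunter2", "bank.example.com", challenge)
expected = hmac.new(server_key, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(response, expected))  # True
```

Note that "hunter2" is only ever checked locally; the wire carries a challenge and a response derived from a key the user never sees or types.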
- Email re-architecture, considering whether it would be wise to do some major redesign of Internet email, and looking at a specific proposal. We started by questioning whether reworking email protocols is even important any more: “Is it the case that we already have more email retrieved over HTTP than by IMAP?”
One answer: HTTP is not being used to retrieve email; it is being used to view email. Many services use IMAP to retrieve the email that is presented over HTTP. That brings us to consider a third piece, using something like Ajax to build the interface between IMAP and the HTML view of the email. The point is that any new email protocol would exist on the back end, regardless of what sort of client sits on the front end.
Significant discussion points (just notes, here):
- Support in the base protocol for optimization for different client devices
- URIs for all messages in the message store
- Standardized set of retrieval expectations (by time received, etc.)
- Extensible set of message properties
- Architectural separation between properties that are immutable and those that may/will change
- Granularity of retrieval vs optimization for bandwidth and client resources
- Alternative storage format — not necessarily RFC 2822, but convertible to/from that
- Submission and delivery — are they clients of this new access protocol?
- Single, flat message store, where any hierarchy or “mailbox” separation is done by message attributes
- Features for mobile synchronization need to be built in
- Email “mash-ups” and other innovative uses of messaging
- Integration of messaging mechanisms: email, IM, blogs, phone messages
- Integration with workflow and other activities
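Two of the notes above — a single flat store with mailboxes as attributes, and a URI for every message — fit together naturally, as this sketch shows. The attribute names and the URI scheme are invented for illustration; nothing here comes from an actual proposal:

```python
class MessageStore:
    """Flat message store: no hierarchy, every message gets a stable URI."""

    def __init__(self, base_uri):
        self.base_uri = base_uri
        self.messages = {}       # uri -> message properties
        self._next = 1

    def add(self, subject, **attrs):
        uri = f"{self.base_uri}/msg/{self._next}"   # a URI for every message
        self.messages[uri] = {"subject": subject, **attrs}
        self._next += 1
        return uri

    def select(self, **attrs):
        """A 'mailbox' is just a query over message attributes."""
        return [uri for uri, m in self.messages.items()
                if all(m.get(k) == v for k, v in attrs.items())]

store = MessageStore("imap-next://mail.example.com")   # hypothetical scheme
a = store.add("lunch?", folder="inbox", flagged=True)
b = store.add("receipt", folder="archive", flagged=False)
print(store.select(folder="inbox"))    # only the first message's URI
print(store.select(flagged=False))     # only the second message's URI
```

Moving a message between “folders” is then just a change to a mutable property, which is why the immutable/mutable separation in the notes matters.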
- Push/pull/notify, considering a proposal for a notification framework as an alternative to polling. HTTP is designed for pulling information, and current ways to simulate pushing using HTTP actually require the client to poll — though an interesting variation is the long poll, where a client will ask for information and a server will not provide it until there’s been a change. The group looked at ways to do active push notifications, and some of the security implications of that.
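The long-poll variation is easy to simulate in-process. This is a sketch of the mechanism only, with a queue standing in for the server’s pending state — no HTTP involved: the “client” request blocks until something changes, instead of getting an immediate “nothing yet” answer as a rapid-fire poll would.

```python
import queue
import threading
import time

updates = queue.Queue()

def server_long_poll(timeout=5.0):
    """Hold the request open until an update exists (or the hold times out)."""
    try:
        return updates.get(timeout=timeout)
    except queue.Empty:
        return None   # the client would simply issue the next long poll

def publisher():
    time.sleep(0.2)              # a change happens some time later...
    updates.put("new message")   # ...and the pending poll completes

threading.Thread(target=publisher).start()
result = server_long_poll()
print(result)   # arrives as soon as the change is published
```

The trade-off the group’s security discussion touches on is visible even here: each held-open request ties up server state for its whole duration, which is exactly what an attacker would try to exhaust.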
- Protocol Architecture Guide, a proposed informational document to record what we’ve learned over the years about what works and what doesn’t. We talked about a number of aspects of the protocols:
- Application bootstrapping: how to find your server, for example
- Application configuration: how to set up your client
- Protocol substrates and how to use them
- Data encoding
- Authentication, encryption, session state
- URN/URI use
- Schemas and XML use
- Extensibility, capability advertisement and negotiation
- Registration and naming issues
- Modular design
This was a good way to spend two days.