Minneapolis is always a good venue for the IETF meetings, and this one was no exception. We’ve been there several times in March, and once before in November. It can get brisk there during both of those months, but the “Habitrail tubes” (officially, the Skyway system) let one get around much of downtown while staying warm. It’s nice to be able to grab some lunch without having to go outside (and, therefore, without having to bring a coat).
There were some attendance issues this time:
- Because of the sudden economic change in the two months before the meeting, we had fewer attendees than expected, and quite a number who had pre-registered cancelled their registrations at the last minute.
- There was a serious visa issue for the Chinese — it seems that visas weren’t being processed, and so a lot of Chinese participants (on the order of 50) had to stay home because they didn’t get visas in time. This was the main topic on the attendees’ mailing list this time, with many suggesting that we stop having meetings in the United States.
I’m going to skip most of the working-group details in the summary this time, since most of it was a matter of presenting status and going over open issues. I’ll stick to calling out specific issues of consequence, with longer discussion of IAB matters and the OAuth BOF.
Internet Architecture Board
The Internet Architecture Board presented the current status of one of our architecture initiatives, an architectural analysis of the IP model, focused on problems that occur when assumptions are made that are not (or are no longer) fully valid. The presentation got some industry press in Network World and elsewhere.
We also made progress on another architecture initiative, about peer-to-peer issues.
We expected some controversy over a document from the AntiSpam Research Group about DNS blacklists, but that seemed to have run itself out on the IETF mailing list. I got together with the IRTF chair and the Applications Area directors, and we agreed that I’d work with the ASRG chair to fill out the Security Considerations section to reflect the discussion.
The geopriv working group has a liaison issue with the W3C. I discussed it with our W3C liaison, the Applications and RAI Area directors, and the geopriv working group chairs, and we agreed on the points that need to be brought to the W3C. The working group chairs have drafted a liaison statement that the area directors will sign, and that our liaison will take over.
The IETF, through the IAB, has approved a new RFC editor structure and new document boilerplate that provides different copyright-release terms for all new documents. These two items generated significant discussion at the plenary meeting — the first about concerns for openness of the RFP process, the second about concerns about updating older documents and changing the terms in the process.
apparea — Applications Area general issues
There are four new proposals for possible working groups in the Applications Area:
- New LDAP extensions.
- New FTP extensions.
- YAM — a proposal to advance a set of email standards along the Standards Track.
- MORG — Message ORGanization, a proposal to standardize a set of extensions aimed at helping users/clients locate and organize email messages.
idnabis — Internationalized Domain Names in Applications, revision
The principal discussion here was about mixing Arabic-Indic numerals, and the main issue is limiting the mixing of three different kinds of numerals, in order to avoid an explosion of different allowable combinations. A subcommittee will do more investigation and recommend an answer to the working group.
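To make the issue concrete: the three digit sets in play are, as I understand it, the ASCII digits (U+0030 to U+0039), the Arabic-Indic digits (U+0660 to U+0669), and the extended Arabic-Indic digits used with Persian and related languages (U+06F0 to U+06F9). Here’s a minimal sketch, entirely my own illustration and not anything the subcommittee has proposed, of the kind of rule under discussion: disallow labels that draw digits from more than one set.

```python
# Sketch of a "no mixed digit sets" check for a domain-name label.
# The sets below are the three kinds of numerals in question; the
# rule itself is illustrative, not working-group output.

ASCII_DIGITS = {chr(c) for c in range(0x0030, 0x003A)}      # 0-9
ARABIC_INDIC = {chr(c) for c in range(0x0660, 0x066A)}      # U+0660..U+0669
EXT_ARABIC_INDIC = {chr(c) for c in range(0x06F0, 0x06FA)}  # U+06F0..U+06F9

DIGIT_SETS = (ASCII_DIGITS, ARABIC_INDIC, EXT_ARABIC_INDIC)

def mixes_digit_sets(label: str) -> bool:
    """True if the label uses digits from more than one of the sets."""
    used = [s for s in DIGIT_SETS if any(ch in s for ch in label)]
    return len(used) > 1

print(mixes_digit_sets("foo123"))        # False: only ASCII digits
print(mixes_digit_sets("foo1\u0661"))    # True: ASCII '1' plus Arabic-Indic U+0661
```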
There was also a smaller discussion of how we can handle the German Eszett character, which used to be translated into “ss”, but which will now be assignable directly. The change could result in significant user confusion. The decision was that it’s an issue that domain-name registries will have to deal with.
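For what it’s worth, Python’s built-in idna codec implements the older IDNA2003 mapping, so you can see the old behavior directly; under IDNA2008 the Eszett becomes encodable on its own, which is exactly where the potential for confusion comes from.

```python
# The built-in "idna" codec follows the older (IDNA2003) rules, which
# fold the Eszett to "ss" before encoding:
print("straße.example".encode("idna"))   # b'strasse.example'

# Under the new (IDNA2008) rules, U+00DF is assignable directly, so
# "straße" and "strasse" can end up as two *different* registered
# names. That divergence is the user-confusion problem the registries
# will have to manage.
```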
sieve — Sieve Mail Filtering Language
We ironed out some minor things here, but there wasn’t anything controversial or difficult. Things are proceeding smoothly: the working group has recently been re-chartered to add new work as it finishes what it had on the old charter.
alto — Application Layer Traffic Optimization
This is a new working group that aims to define protocols and information formats for peer-to-peer applications to do initial peer selection in a “better than random” way. The working group will work with the P2P Research Group, and it will start small and develop extensible standards that can be augmented later. This first working-group meeting was spent presenting and discussing the various starting points for the work.
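To illustrate what “better than random” might mean in practice, here’s a toy sketch. It is entirely my own; the actual protocol and information formats are exactly what the working group is chartered to define. The idea: given a cost map that a network operator exposes for its own view of the topology, a peer-to-peer client prefers low-cost peers instead of choosing uniformly at random.

```python
import random

def select_peers(candidates, cost_map, k=3):
    """Pick k peers, preferring lower operator-advertised cost.

    Candidates are shuffled first so that ties (and unknown costs)
    don't always resolve to the same peers.
    """
    shuffled = random.sample(candidates, len(candidates))
    return sorted(shuffled, key=lambda p: cost_map.get(p, float("inf")))[:k]

candidates = ["peerA", "peerB", "peerC", "peerD"]
cost_map = {"peerA": 10, "peerB": 1, "peerC": 5}   # peerD: cost unknown
print(select_peers(candidates, cost_map, k=2))     # e.g. ['peerB', 'peerC']
```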
vcarddav — VCard and CardDAV
There were two main issues discussed here. The first was an extended disagreement about whether a vcard should have a “timezone” item in it. Pro: it’s a “hint” about what time zone I’m usually in, and it’s useful in many situations. Con: it changes, and it’s better suited to being taken from presence information than from a relatively static vcard. In any case, if we include it, it’s not clear how it should be specified (what document we point to that defines the value). The decision was to include it, but with text suggesting that a timezone from presence information be preferred if it’s available. Consensus isn’t clear, though.
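For illustration, here’s roughly what such an item might look like; vCard 3.0 already had a TZ property with a UTC-offset value, and part of the open question is what specification to point to for the value’s definition. The card below is my own made-up example, not working-group text.

```python
# A made-up vcard carrying a timezone "hint" as a TZ property.
vcard = "\r\n".join([
    "BEGIN:VCARD",
    "VERSION:3.0",
    "FN:Jane Example",
    "TZ:-06:00",   # the contested item: a hint about the usual time zone
    "END:VCARD",
])
print(vcard)
```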
The second major item involved a presentation by Pete Resnick about a proposal for reconciling parameter IDs through two-way and three-way synchronizations. It looked like Pete’s scheme is workable, and reduces back down to the simple case relatively quickly, rather than cluttering the vcard with a lot of extra bytes. In the end, though, generalized sync handling is hard.
dkim — Domain Keys Identified Mail
Document status: there are three documents left to finish. Two — ADSP and “Overview” — are with the IESG, and the third — “Deployment” — is much of the way toward finishing. We had a review of the new document structure of Deployment, which highlighted some questions about which the authors are looking for input (ideas, preferably with suggested text). We reviewed three unresolved issues with errata for RFC 4871 (DKIM base spec). And then we looked toward the future...
We had three brief presentations of topics that the working group might adopt with a re-charter. It seems unlikely that the group will actually take any of them on; if they proceed at all, it will be as individual submissions.
We spent a good bit of time at the end of the meeting discussing the idea of developing standards for domain reputation services — something that DKIM enables for email, but that is limited neither to DKIM nor to email. The consensus is that there is not enough interest or clarity within the (small) community of reputation-service providers to support a standards effort now (or soon).
It’s likely that DKIM will not meet at IETF 74, and might actually be ready to close or become dormant by then.
eai — Email Address Internationalization
We spent quite some time discussing how to downgrade certain header fields, and how to handle some edge cases. No big controversies here, though some of these decisions will be revisited once experimentation gives us more experience with them.
behave — Behavior Engineering for Hindrance Avoidance
This was the big food fight of the meeting: the issue of whether to standardize IPv6-to-IPv6 Network Address Translation (NAT66). There are two warring camps on this one:
- There is no reason to have NAT66, and its presence will be damaging to an all-IPv6 Internet. We must do everything we can to stop NAT66, and we particularly should not standardize it in any way.
- There are good reasons to use NAT other than to cope with address shortage, and some of those reasons carry over into IPv6-only situations. Networks will use NAT66, and if we don’t standardize it we’ll damage an all-IPv6 Internet by having things trip over non-interoperable NATs, just as is now happening with IPv4 NAT.
oauth — Open Web Authentication BOF
OAUTH is looking to standardize a protocol for a three-party authentication and authorization system for web users/sites. Examples of use cases involve importing contacts from one web service (say, Gmail) into another (say, MySpace), and printing photos (using one web service) that reside in another (such as Flickr). There’s quite a list of companies that have already signed on to this, implemented it, and are using it (the list includes the likes of AOL, Yahoo!, and Google).
The idea is that when you want to import your contacts from Gmail into MySpace, you (the “resource owner”) click an “Import from Gmail” link on MySpace (the “resource consumer”). You get redirected to a Google (the “resource provider”) login web page — a real Google login web page — where you log into your Gmail account. You get a “request token”, and you’re redirected to a Google page that tells you about the request token: MySpace wants to import your Google contacts; do you approve? If you grant access, your request token is exchanged for an “access token”, and you’re redirected back to MySpace, which gets the access token and does the deed.
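Here’s a schematic of that three-leg dance from the consumer’s side, in Python. The endpoint URLs are hypothetical, and I’ve elided the oauth_* protocol parameters and the request signing that every one of these requests would actually carry; this is a sketch of the flow, not a conforming implementation.

```python
# Sketch of the three-legged flow, consumer side. Endpoints are made up,
# and request signing is omitted for brevity.
import requests                           # third-party HTTP library
from urllib.parse import parse_qs, urlencode

PROVIDER = "https://provider.example"     # hypothetical resource provider

# Leg 1: the consumer fetches an (as yet unauthorized) request token.
resp = requests.post(f"{PROVIDER}/oauth/request_token")   # signed, in real life
request_token = parse_qs(resp.text)["oauth_token"][0]

# Leg 2: the consumer redirects the user's browser to the provider's
# approval page; the user logs in *at the provider* and approves.
approval_url = (f"{PROVIDER}/oauth/authorize?"
                + urlencode({"oauth_token": request_token}))
print("Redirect the user to:", approval_url)
# ...the provider marks the token authorized and redirects the user
# back to the consumer's callback URL...

# Leg 3: the consumer trades the authorized request token for an access
# token, then uses the access token to fetch the protected resource.
resp = requests.post(f"{PROVIDER}/oauth/access_token",
                     params={"oauth_token": request_token})
access_token = parse_qs(resp.text)["oauth_token"][0]

contacts = requests.get(f"{PROVIDER}/contacts",
                        params={"oauth_token": access_token})
```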
From the user’s point of view, it’s easy: You click a link on MySpace, you get the Google login page, you get a Google “Do you approve this access?” page, and you’re back to MySpace with your contacts having been imported. But are you sure that was the Google login page, and not a spoof? Are you sure the page you started at hasn’t been hacked? Are you sure none of the redirects were intercepted? For that matter, what happens if the redirects don’t all work?
I’m concerned about the redirects. I’m concerned about the blending of the user and the user agent, here, and the extent to which the user has to trust the browser and the clicked links and the displayed login page and so on. And I’m concerned (though not surprised, and I’m not sure what we can really do about it at this point) that it’s not making any attempt to get us away from the “send us your password” model. At least you’re not sending your Google password to MySpace — though how do you really know that you’re not being phished?
In any case, the protocol as currently written is pretty widely implemented and deployed, which means that it is, in fact, working quite well. I’m not sure whether it’s been attacked yet, nor whether anyone’s really done a threat analysis on it and looked closely at the attacks that could be mounted. [Sam Hartman and Eric Rescorla have already done reviews of the spec, so it’s not like there aren’t security folks looking closely at this now.]
It’s also clear that this is important work for the IETF to do, and that there’s a lot of energy to do it. On the other hand, as was the case with DKIM (so I know it very well), this is another situation where there’s a well-defined spec being brought to the IETF and we’re being asked not to futz with it too much. I have sympathy for this, and I actually think we have to be more ready than we have been to accept these requests and to compromise on them. There was a relatively small danger that DKIM would have gone off and ignored the IETF; there’s a much greater danger of that in the case of OAUTH.
So there’s the main charter question of whether the existing OAUTH spec should be the starting point, and exactly what assumptions and requirements should be “baked into” the charter. There are also questions about other protocols and payload formats... SAML and URLauth, for example. What about tying this to channel bindings? And so on.
In the end: We need to do this, and I think a working group will be chartered for it. I think it’s inevitable that the current OAUTH spec will be the starting point, but it’s not clear to what extent changes should be locked down. Whether it’s chartered in the Apps Area or the Security Area, it will need lots of clue from the other, so this is another clear example of a truly cross-area working group.