Saturday, July 28, 2007


IETF 69, Chicago

The Chicago River at night

I've just spent the week at the 69th IETF meeting in Chicago. It was the best of weeks, it was the busiest of weeks. And here's a hint for those arranging meetings: if the hotel is doing serious construction and renovation, do not believe its assurances that the work will not interfere with your meetings. A number of sessions were severely disrupted by the noise of jackhammers. The hotel kept promising that the work would stop during meeting hours, but at least through Wednesday it continued.

And I talked with a Chicago Tribune reporter on Thursday (as did the IETF chair and the IAB chair), and his article about the meeting came out in Friday's paper, on the front page, above the fold. Very cool. Here's the electronic version of the article. It's pretty good.

Anyway, here's my summary of the meeting, hidden below the “click here if you want to read the boring, geeky bits” link.


For me, the week started on Saturday, as the IETF management (the IESG and the IAB) spent the afternoon and evening with our counterparts from ITU-T, the standards body of the International Telecommunication Union. The goal was to meet them, to discuss some technical issues, and to set up a context for working together more effectively. I know that I made some personal contacts in the course of the day that'll be useful to the standards work that I (and those contacts) do, so the meetings served at least part of their purpose. Beyond that, time will tell.

On to the highlights of the standards sessions....

lemonade — Enhancements to Internet email to Support Diverse Service Environments

As usual, we had two busy, full lemonade sessions. We spent the time reviewing document status and updates, and discussing issues with the latest versions. We took a good chunk of the time — about the first hour — with a presentation and discussion about the OMA Mobile Email (MEM) Enabler development status. That was followed by a brief discussion of CONVERT and a more extended discussion of NOTIFY, working on resolving a fairly long list of open issues with the document. There wasn't really serious contention, but a number of things needed to be batted around, so it was a good discussion. Streaming needed essentially no discussion.

We started the second session with about five minutes on Message Events, and then hit my IMAP-Sieve draft, which needed quite some time on the open issues (one of which was discussed in more detail in the Sieve session, summarized later). Issues:

  • Should we allow transient Editheader functions, for use in prepping the message for Redirect? Resolution: This can be added later, as an extension, if we decide we really do need it.
  • Should EXPUNGE be an IMAP-Sieve event? It currently is not. Problems: it introduces inconsistencies and difficult-to-explain (and difficult-to-understand) conditions. But it's not just theoretical; Alexey already has a need for it. No significant support for it in the room, though. Resolution: take it to the mailing list, put a two-week timeout on the discussion, and decide then. If there's no clear choice then, the default will be to consider it as an extension later. (I've already posted the question to the mailing list.)
  • Is it clear enough that the meanings of some Sieve actions are different here than in the base Sieve spec, because the context is different? Resolution: current document text is OK.
  • Should the references to optional features be normative (they currently are not)? Argument: some of the references say things like, “if [action x] is supported, then [action x] is valid in this context,” and that strikes me as non-normative. Chris (Apps area AD) considers that to be in the “grey area”, and suggests taking the less risky path (from a process point of view) of making the references normative. So agreed.

We finished by discussing Profile-bis. Should we add Streaming? No, not a requirement for OMA, and not worth holding Profile-bis up for. Should we drop Idle in favour of Notify? No, already in Profile, so already supported, and essentially it comes “free” with support for Notify anyway. Should we add Quickstart? Needs implementations before deciding; leave it out for now. Some discussion of notifications ensued, with references to Friday's “Biff” BOF.

DKIM — Domain Keys Identified Mail

The meeting's goals were to move the Sender Signing Practices specification along, and to highlight the Overview document and bring the working group's focus to it. We started with a review of document status:

  • DKIM base protocol specification is now RFC 4871.
  • SSP requirements document is with the IESG, working out some last-call comments and an AD "discuss".

Jim gave a review of the SSP specification, the latest version of which has just been accepted as a working group document, draft-ietf-dkim-ssp-00. Jim covered the principal changes to the document since the last review, in Prague, and discussed three main items in detail:

  • Issues involving DNS wildcards.
  • The SSP lookup algorithm, as documented in the current spec.
  • What SSP publishers can say, outlining, in particular, a new option called "strong", in addition to the original "strict" (if it stays, the name will change).

There was some discussion of the algorithm, but most of the discussion was about what can be stated, and what the relative meanings of "strict" and "strong" are. We considered the idea that we have developed statements that we think the signers will need, but we have to validate them with those who will benefit the most from signing and declaring signing practices. Dave and Phill agreed to interface with MAAWG and APWG, respectively, to try to get their opinions on this. Phill also stressed his opinion that SSP statements should be declarative, not imperative ("I am a financial institution," rather than, "I would like you to delete mail that appears to be from me that is suspicious.").
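For concreteness, here's a toy sketch of what a declarative practices lookup might look like. Everything here is my assumption, not the draft's actual algorithm: the _ssp._domainkey record name, the single-tag syntax, and the dict standing in for DNS are all illustrative.

```python
# Hypothetical published practices, keyed by the DNS name that would
# hold the TXT record (the _ssp._domainkey prefix is an assumption).
FAKE_DNS = {
    "_ssp._domainkey.bank.example": "dkim=strict",
    "_ssp._domainkey.blog.example": "dkim=strong",
}

def lookup_ssp(domain):
    """Return the published signing practice for a domain, or None."""
    record = FAKE_DNS.get("_ssp._domainkey." + domain)
    if record is None:
        return None
    # Parse a single tag=value pair; real records would carry more tags.
    tag, _, value = record.partition("=")
    return value if tag == "dkim" else None

print(lookup_ssp("bank.example"))   # strict
print(lookup_ssp("other.example"))  # None
```

The point of the sketch is the declarative shape Phill argued for: the record states what the domain does ("dkim=strict"), and the verifier decides what to do with that fact.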

Tony presented the status of the DKIM Overview document, currently draft-ietf-dkim-overview-05, which has had a lot of work from its authors but which has had relatively little feedback from the working group. The discussion centered on the goals of the document, and strengthened the result from the Prague meeting: the document should be split into multiple ones, to better achieve its several goals. In particular, there's normative text in the document now, which isn't appropriate for an informational document. Some of that will be resolved by changes to the text, but some may be resolved by splitting a BCP or standards-track document off from the informational portion. The authors will work on this, and we'll keep the ADs involved as we consider changes to the charter for this.

In the few minutes at the end, Murray led a brief discussion of his authentication-results draft, which the working group will follow, and will consider adding to its charter if we (and the ADs) decide it's appropriate.

Sieve — email filtering language

We had minor discussion of the base-spec update, Editheader, MIME Loops, Reject/Refuse, Environment, and iHave. The most significant discussion in Editheader involved what requirements we place on which headers should not be allowed to be changed, and which should be. That went on for a while, with arguments back and forth — the proposal is to say that Received headers MUST NOT be changeable, and that changes to other headers MAY be disallowed by local policy; the question is whether we should specify headers that MUST be changeable, so a script can be assured of something it can change. In the end, we decided that the text that's currently suggested is best.
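To illustrate the policy shape we settled on, here's a small Python sketch. The delete_header hook and the LOCALLY_PROTECTED set are my inventions for illustration; Editheader itself defines Sieve actions ("addheader"/"deleteheader"), not a Python API.

```python
from email.message import Message

IMMUTABLE = {"received"}                 # MUST NOT be changeable
LOCALLY_PROTECTED = {"auto-submitted"}   # MAY be disallowed by local policy

def delete_header(msg: Message, name: str) -> bool:
    """Delete a header if policy permits; return True on success."""
    key = name.lower()
    if key in IMMUTABLE or key in LOCALLY_PROTECTED:
        return False                     # policy refuses the change
    del msg[name]                        # removes all occurrences
    return True

msg = Message()
msg["Received"] = "from a.example by b.example"
msg["Subject"] = "[list] hello"
print(delete_header(msg, "Received"))    # False
print(delete_header(msg, "Subject"))     # True
```

Note what the sketch does not have: a set of headers guaranteed to be changeable, which is exactly the question the group decided to leave out of the text.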

We had a status update on Notify, and some discussion of Notify-Mailto and the inclusion of Auto-Submitted headers. Conclusion: I will change Notify-Mailto to have it define an RFC 3834 extension (and an IANA registry) with a “Sieve-Notify” value for Auto-Submitted. Appropriate text will go into the spec to define the inclusion and usage of that field.
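A notification built along those lines might look like the following sketch, using Python's standard email library. The "Sieve-Notify" value reflects the meeting conclusion; the final registered form could differ.

```python
from email.message import EmailMessage

def build_notification(to_addr, subject, body):
    """Build a mailto-style notification marked as auto-generated."""
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["Subject"] = subject
    # RFC 3834 field with the extension value discussed in the session.
    msg["Auto-Submitted"] = "Sieve-Notify"
    msg.set_content(body)
    return msg

note = build_notification("user@example.com", "New mail", "You have mail.")
print(note["Auto-Submitted"])  # Sieve-Notify
```

The field matters because it lets receiving systems (including other Sieve scripts) recognize the message as automatically generated and avoid notification loops.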

There was a good deal of discussion on the changes to IMAP-Sieve, mostly focused on the Metadata changes. My changes have the metadata contain the name of a Sieve script; there was a suggestion (and some consensus) to also allow the metadata to contain the script itself — probably with a level of redirection, where the defined metadata entry gives the name of another metadata entry that contains the script. The main point of contention is the need to have more than one script active at once. Randy gave some good reasons for that, involving multiple clients each needing to have their own scripts active, and each being unable to interpret the scripts of the others. Randy will write text to implement what he's asking for.
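The indirection idea can be modeled with a toy dict standing in for the IMAP metadata store. The entry names below are made up for illustration; the draft defines its own names.

```python
# The defined entry holds the *name* of another entry, which holds
# the script text — one level of redirection, as suggested.
METADATA = {
    "/shared/sieve/active": "/shared/sieve/scripts/vacation",
    "/shared/sieve/scripts/vacation":
        'if header :contains "subject" "ping" { keep; }',
}

def active_script(metadata):
    """Follow one level of indirection to fetch the active script."""
    pointer = metadata.get("/shared/sieve/active")
    if pointer is None:
        return None
    return metadata.get(pointer)

print(active_script(METADATA))
```

Randy's multiple-clients argument would extend this: each client would keep its own script entry, with some convention for which entries are active at once.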

We spent some time on External Lists, continuing the discussion from Prague about whether having URIs makes sense. We batted operational examples around, and considered the question of authentication to the targets of the URIs. We also discussed limiting Redirect, aiming to prevent misuse and fan-out attacks, but having the side effect of eliminating the ability to use Sieve to implement a mailing list. This tied into the External Lists discussion too, considering the idea of using an external list as the set of addresses to which a message would be redirected. We decided that while there are legitimate reasons to use more than one Redirect on a message, we do not want to position Sieve as a way of implementing mailing lists.
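A fan-out guard along the lines we discussed might look like this sketch; the limit value and the silent-stop behavior are illustrative choices of mine, not working-group text.

```python
MAX_REDIRECTS = 4  # illustrative per-message cap, not a spec value

def redirect_all(message, addresses, limit=MAX_REDIRECTS):
    """Redirect to each address until the per-message limit is hit."""
    delivered = []
    for addr in addresses:
        if len(delivered) >= limit:
            break                    # could also raise a policy error
        delivered.append(addr)       # stand-in for the actual resend
    return delivered

sent = redirect_all("msg", ["a@x", "b@x", "c@x", "d@x", "e@x"])
print(len(sent))  # 4
```

A cap like this allows the legitimate few-redirects cases while making large mailing-list-style fan-out impossible, which matches where the group landed.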

We briefly discussed an in-person interoperability event, and decided that it wasn't necessary: any interoperability testing for Sieve can just as well be done over the network.

EAI — email address internationalization

There's not a lot I want to say about this. The session was devoted to discussion of the drafts, as usual, and the open issues that need to be resolved. It was a productive meeting, and the schedule for moving the drafts on in the process looks good.

The main thing I want to record here was an issue in the Downgrade specification, involving what happens to messages with DKIM signatures when the messages are downgraded. There was clear consensus that the document needs to point out the issue. But whether the document should make any suggestions about what to do to mitigate the issue (such as having the downgrader verify the DKIM signature and re-sign on its own behalf) wasn't clear. Chris, as Apps area AD, said that he thinks we have to say something about the interaction, but he isn't going to make a declaration of what that should be. Dave thought that this isn't the place to give DKIM implementation advice. I commented that I'd normally agree with that, but these are experimental specifications, and it would serve us well for them to suggest what experiments might be useful.

I think we decided to include some non-normative suggestions, and that any suggestions that involve DKIM will have to be reviewed by the DKIM working group.
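The verify-and-re-sign suggestion is really just a decision flow, which can be sketched as follows. The three helper functions are hypothetical placeholders; a real implementation would use an actual DKIM library and the EAI downgrade rules, neither of which is modeled here.

```python
def verify_dkim(msg):        # placeholder: pretend verification passes
    return True

def downgrade(msg):          # placeholder: pretend address downgrading
    return msg + " [downgraded]"

def sign_dkim(msg, domain):  # placeholder: pretend re-signing
    return ("d=" + domain, msg)

def downgrade_preserving_dkim(msg, our_domain):
    """Verify the original signature, downgrade, then re-sign as us."""
    if not verify_dkim(msg):
        # Original signature already broken: downgrade without signing.
        return None, downgrade(msg)
    return sign_dkim(downgrade(msg), our_domain)

sig, out = downgrade_preserving_dkim("hello", "relay.example")
print(sig, out)  # d=relay.example hello [downgraded]
```

The key property is that the downgraded message carries a valid signature from the downgrader rather than a broken one from the original signer, which is the trade-off the working group would need to review.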

SAAG — Security area open meeting

The SAAG meeting always starts with reports from all the security-area working groups, and then has some invited talks. There were two invited talks this time:

  1. John Klensin presenting internationalization issues related to security.
  2. Morris Dworkin, from NIST, presenting a proposed variant of the Galois Counter Mode, an authenticated encryption mechanism.

John talked about the problems that Internationalized Domain Names (IDNA) was meant to solve, the problems with the first version of IDNA (IDNA2003), and the changes in the update that's under development (IDNA-bis). There weren't any surprises in his presentation for anyone who's been following the work, and there really weren't any security-specific things in it, apart from occasional references to passwords (and the idea that internationalization problems might actually be beneficial for passwords). There was some discussion after the presentation, but, again, nothing especially notable and nothing really security-related.

The GCM presentation was a crypto-technical one, detailing a modification to an encryption algorithm intended to optimize it. I'm not a crypto expert, so I could follow the presentation somewhat, but only somewhat. After the description of the modification, its effects, and its performance, there were some questions and comments from the audience. The discussion involved tag truncation, key size, the speaker's point about “relaxation of IV validation”, and possible other variants of the algorithm (maybe better optimized than this). At the end, the Security area directors asked who in the room thought they could use this variant in their security protocols. No one raised a hand.

HTTP-bis BOF — Updating and advancing HTTP standards

The purpose of the BOF was to propose a working group — they had a proposed charter — to advance RFC 2616 to Draft Standard, along with associated considerations and updates. The principal associated issues were (1) the HTTP authentication mechanisms, (2) cookies (scope and domains), (3) HTTP caching (and association with "logout"), and (4) ETags (involving CalDAV/CardDAV issues).

There were presentations on each of those issues, some of which were a bit hard to follow because of a combination of jackhammers and quiet speakers. The bulk of the discussion then fell on the scope of the work — how much of the numbered stuff above should be included while updating and advancing RFC 2616. In particular: should the HTTP protocol update include the authentication/security issues? Consensus was that the security-related work should be separated, as those with expertise in the protocol aren't the same as those with expertise about the security aspects.

There was also consensus to include cookies in the HTTPbis work.

It looks like the plans are fairly well baked, and the charter is pretty solid. There was also a good show of hands for people willing to do the work here. It seems that formation of a working group is mostly assured, and that's good.

APM BOF — Application Performance Metrics

The APM BOF's goal was to explain the need for coordination in the definition of performance metrics for higher-layer protocols (SIP was an example given), and to explore the interest in working on such definitions and to look at how to do it (directorate vs working group). Their problem statement points out that the people working on the protocols often do not have the expertise to define performance metrics, and that the documents addressing them do not get much attention (people would rather spend the effort on the protocol).

That sounded good. Unfortunately, once they got past that, everything started unraveling. The details were ill-defined, far more people showed up than the chairs expected, and the chairs asked questions for which the answers were inconclusive or unsatisfying. They proposed a set of options, with apparently no process guidance on how appropriate these options are:

  1. Form an APM Directorate that would consult with and advise working groups on the development of APMs.
  2. Form a long-running APM working group, which would wake up at appropriate times and develop the APMs in partnership with the protocol working groups.
  3. Form a short-term APM working group, which would write a "BCP/framework RFC" and then “evolve” into (1) or (2).

In the end, the chairs had a series of questions that they didn't really have time to get to, which is just as well:

  • Who thinks the IETF should work on developing performance metrics?
  • How should we do it?: (1), (2), or (3), above, or none of the above.
  • How should the choice picked in the previous question operate?

They got many hands up for the formation of a long-running working group, but I'm sure the people “voting” didn't really know what they were suggesting, or what the process issues are with it. It was not an appropriate question to ask a group of BOF participants.

I think what needs to happen with this is that the organizers should work with the Ops area ADs to better define what they want to do and how they aim to do it, and set this up as a more concrete proposal that defines how it fits into the IETF process. If that happens, they might come back for another BOF that can better explain the result it wants to get, and how it'll be implemented.

vCard and CardDAV BOF

The goal of this BOF is to explore a working group to revise/update the vCard standards, and to define a protocol for storing vCards and moving them around. A number of issues with the current vCard definitions were highlighted, and some proposed updates reviewed:

  • Issues:
    • Internationalization (using UTF-8).
    • There are currently too many combinations of parameters for things like TEL(ephone).
    • There's no way to indicate locale for address formats (local differences in the ordering of street address, for example).
    • There's no namespace for vendor extensions.
    • Properties need review — it's been nine years, and things have changed (for example, how many people specifically have car phones now?).
  • Proposals:
    • Merge RFCs 2425/2426 into one document.
    • Merge current extensions into the document too.
    • Define a new MIME type, text/vcard, which defaults to UTF-8 data.
    • Clarify allowed parameter combinations.
    • Provide unique ID on each multi-occurring property to aid synchronization.
    • Geographic properties should be parameters on ADR, and add a MAP parameter (map program URI).
    • Define IANA process for registering vcard properties.
    • Define an XML variant of vCard syntax.
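The text/vcard proposal is easy to demonstrate with Python's standard email library. The card below is a minimal vCard 3.0-style body; the revised syntax is exactly what the proposed working group would define, so treat it as illustrative.

```python
from email.mime.text import MIMEText

# A minimal vCard body; real cards carry many more properties.
card = "\r\n".join([
    "BEGIN:VCARD",
    "VERSION:3.0",
    "FN:José Example",              # non-ASCII name, hence UTF-8
    "TEL;TYPE=cell:+1-555-0100",
    "END:VCARD",
])

# Subtype "vcard" yields the proposed Content-Type: text/vcard,
# with the charset explicitly UTF-8 as the proposal defaults it.
part = MIMEText(card, "vcard", "utf-8")
print(part.get_content_type())  # text/vcard
```

With a registered text/vcard type defaulting to UTF-8, receivers no longer need to guess the charset of a bare .vcf attachment, which is the internationalization fix the first bullet above asks for.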

CardDAV defines vCard extensions to WebDAV, based on work done for CalDAV. Features:

  • Defines a well-structured data model for storage of vCard data.
  • Defines new reports for querying and retrieving vCard data efficiently.
  • Defines a new collection type to represent an address book.
  • Uses WebDAV ACLs for access control.

Cyrus is currently addressing mailing-list comments in the working CardDAV draft, then it's ready for last call (easily done, using CalDAV as a starting point). Next: work on synchronization and notifications (maybe reusing stuff from mail-store notifications work).

They covered considerations of using LDAP for vCard access, mainly looking at the differences between the vCard model and the LDAP model. They also noted that there's no standard LDAP schema for vCard information. Another issue is that vCards need write access (at least by the owner), and write access to LDAP databases by users is not common. Conclusion: LDAP is probably not good for vCards... but it's still probably worth defining a standard mapping. Should this be in scope for a working group formed here?

There seems to be enough demand for this work, and enough people to work on it. It seems to me that a working group is appropriate here.
