Friday, March 23, 2007


IETF 68, Prague

Municipal House, Prague

The IETF meeting in Prague is over, and here's my usual meeting report. My apologies to most of you, who will skip the bulk of this entry.

I can't even tell you much about the city yet. This afternoon I'll start seeing some of it, apart from the little I've seen on the way to and from dinner — the photo to the right (click to enlarge) is of the Municipal House, taken on the way to dinner at a restaurant there. I get four full days, after today, and I'll be taking lots of pictures. So expect some to be posted here over the next few days, if the hotel I'm moving to gives me Internet access. If not, please excuse the gap, and come back Wednesday, when I'm home and posting again. Either way, see you soon.

As before, I've kept the full meeting report off this summary page, so click below if you want to see it.

Read the rest of this post...

apparea — Applications Area general meeting

The Applications Area has a new area director beginning with this meeting: Chris Newman is taking over Ted Hardie's slot. Thanks Ted; welcome Chris!

We reviewed the working groups that have finished their work and closed (like OPES, yay!), and those that are finishing up. Eric Burger talked about the Applications Area review team, a set of volunteers who review documents when the authors ask for an apps-area view of them. Then we had brief presentations on some of the BOFs coming up during the week, as well as on some other work that needed presentation and discussion.

In that last category, Ted and Chris asked me to present some upcoming work on a notification framework, which we'd like to start as an effort to build an email notification system around a common notification architecture that could be extended to other protocols and uses. After some discussion, we took a poll of the room and found a sufficient group of people interested in working on it.

sieve — email filtering language

We reviewed document status and discussed open issues with the documents. We spent about half of the time on two documents about notification (which will be part of the apps-area notification work, if that happens). Discussion was minimal for MIME loops, Date, Environment/Notary, and Ihave. Discussion of Metadata was mostly about the “:create” argument it adds to the “fileinto” action — I don't think it'll be very usable, since a script that requires the extension will fail completely on a server that doesn't support it. In the end we decided to leave the option in.

Finally, there was significant discussion on Alexey Melnikov's proposal for externally stored lists. Alexey had planned to use a “list name” string, the meaning of which depends upon the implementation. I very strongly prefer using a URI, so that the script has a chance to be portable. There are certainly disadvantages to that, as well, so there was a good bit of discussion on it. We'll need to discuss this more on the mailing list.

We also had an after-hours session, where we talked about what we have to do to complete the interoperability reports needed to move the Sieve base spec and the more mature extensions (Relational, Subaddress, and Spamtest) to Draft Standard.

calsify — calendaring standards simplification

The calsify session covered status review and discussion of open issues with several of the working group's documents. Two significant issues, which between them took an hour of the meeting time, were how to handle leap seconds and what to do about local times that don't exist (such as 02:30 on 11 Mar 2007 in the US) or that occur twice (such as 01:30 on 4 Nov 2007).

It might seem that calendars don't need to worry about seconds, and that's mostly true. There's a small issue about meeting overlap, but, really, the decision to essentially ignore the missing or extra second should be quite obvious.

Similarly, the effort spent enumerating at least eight options for each of the “repeated” and “skipped” times, and the time spent discussing them on the mailing list and in the meeting, seems ill spent. Again, the obvious choices were made: a repeated time is taken to represent its first instance; a skipped time is normalized to the moment it would have been had the clock not changed.
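
To make the chosen semantics concrete, here's how they play out in a small sketch using Python's zoneinfo module (my illustration; the working group's output is about iCalendar data, not code, but Python's behavior happens to match the decisions described above):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+

    ny = ZoneInfo("America/New_York")

    # 02:30 on 11 Mar 2007 never happened in New York (02:00 jumped to 03:00).
    # It resolves to the moment the clock would have read 02:30 had it not
    # changed: 07:30 UTC, which is 03:30 local daylight time.
    skipped = datetime(2007, 3, 11, 2, 30, tzinfo=ny)
    print(skipped.astimezone(timezone.utc))   # 2007-03-11 07:30:00+00:00

    # 01:30 on 4 Nov 2007 happened twice; the default (fold=0) picks the
    # first instance, still on daylight time.
    repeated = datetime(2007, 11, 4, 1, 30, tzinfo=ny)
    print(repeated.astimezone(timezone.utc))  # 2007-11-04 05:30:00+00:00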

At least they've been decided, and that discussion is over. For now.

lemonade — enhancements to Internet email

Lemonade had two sessions to discuss document status and issues, plus an informal after-hours discussion of notification solutions — how we could satisfy the lemonade requirements while allowing for the framework that might be developed through the apps area notification work.

By far the most significant discussion was about notifications: there are at least five draft documents related to them, in addition to the model/framework issues. We went through document status and resolved a few issues, and in the after-hours session we came up with an initial, partially baked design proposal, which will be discussed further on the notifications mailing list (thanks to Randy Gellens for turning our scratchings on the flip charts into a digital version that can be posted to the list).

bliss — basic level of interoperability for SIP services BOF

The problem they're addressing is that SIP has many options, and doesn't specify how those options should be used to implement some “advanced” functions, such as call park, do-not-disturb, and some complex cases of transferring calls. The result is that different implementations have chosen different ways to implement them, and those different choices often don't interoperate.

They would like to propose what they're calling “minimum implementation requirements” for these functions, in the hope that at least one mechanism for each function will work in every implementation, so that implementations will interoperate. They're expecting their output to be a set of BCP (Best Current Practice) documents.

I'm particularly skeptical of this approach. I think the definition of “minimum requirements” is too fuzzy, and that they'll either wind up with something too minimal, which does not effect the interoperability they're hoping for, or something more fully specified, which will basically mean that they've picked one mechanism and specifically recommended it over the others.

In fact, the latter is what I think they should do: rather than trying to please everyone, the functions need to be hashed out and SIP needs to make strong recommendations for specifically how to implement the problematic functions. There may be legitimate reasons, sometimes, to recommend two — say, a simpler one and a more involved one that performs better but can't be implemented by everyone — giving a common way that always works and another that can be used if it's available.

Because different vendors currently implement different mechanisms, this work promises to fall into a rat hole, with each participant holding out for a different result and no consensus in sight. If this is chartered as a working group, it needs to be carefully monitored, and it needs two very strong chairs.

eai — email address internationalization

We spent the EAI session discussing the documents and the open issues with them. There was a good bit of discussion about the need to upgrade (or the wisdom of upgrading) back to Unicode after a downgrade. One particular question was whether there should be any attempt to upgrade in order to make digital signatures work. Consensus was against that — upgrade to “restore” the downgraded address for display and reply, but don't hope to recover a valid signature after that.

There was discussion about whether the UTF-8 extensions to IMAP commands should be done as a batch of separate commands or by setting an operational “mode” with a new “ENABLE” command. There was strong consensus for the latter, because of the number of commands affected and the complexity already in the protocol. We then had the same question for POP, and decided to use separate commands there, because not much is affected and the protocol is very simple.
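
For flavor, here's roughly what the session-mode approach looks like from a client, sketched with Python's imaplib; the server name is a placeholder, and the exact capability string was still open at this point (the one below is just along the lines being discussed):

    import imaplib

    # One ENABLE-style switch flips the whole session into UTF-8 mode,
    # instead of defining a UTF-8 variant of every affected command.
    # "imap.example.net" and "UTF8=ACCEPT" are illustrative placeholders.
    imap = imaplib.IMAP4_SSL("imap.example.net")
    imap.login("user", "password")
    if "ENABLE" in imap.capabilities:
        imap.enable("UTF8=ACCEPT")  # later commands may now use UTF-8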

We discussed how to determine whether a mailing list is “a UTF8SMTP mailing list”. We quickly ruled out the idea of considering the capabilities of the subscribers to determine that, and decided that it just has to be a fixed attribute of the mailing list — either the list supports UTF-8 or it doesn't.

fsm — formal state machines BOF

The instigator of the FSM BOF, Stéphane Bortzmeyer, noted that a lot of IETF documents define state machines, and that there's no suggested mechanism for describing them — much the way ABNF describes syntax. He wants a formal language to be documented, which spec-writers could optionally use. That would allow automated checking of state-machine validity and completeness, “pretty printing”, and automated code generation.

He proposes a language that he calls “Cosmogol”, and his examples show how to specify the current state, the transition trigger, the next state, and the action to be performed on the transition. One floor commenter pointed out that SDL can do the same thing, and that he had converted Stéphane's examples into a subset of SDL very easily. Another floor comment picked up on the word “subset”, pointing out that it means Stéphane's language is simpler.
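
I don't have Cosmogol syntax to show here, but the information content is easy to sketch. A state machine is essentially a table mapping (state, event) pairs to (next state, action) pairs; here's a toy table-driven version in Python, with invented states and events. It also shows why a formal form is attractive: the same table that drives the machine can be checked mechanically for missing transitions, or fed to a code generator.

    # Toy table-driven state machine: the kind of information a formal
    # FSM description would capture. States and events are invented.
    TRANSITIONS = {
        ("CLOSED",   "open"): ("LISTEN",      "allocate connection"),
        ("LISTEN",   "syn"):  ("SYN_RCVD",    "send SYN+ACK"),
        ("SYN_RCVD", "ack"):  ("ESTABLISHED", "notify application"),
    }

    def step(state, event):
        if (state, event) not in TRANSITIONS:
            raise ValueError(f"no transition from {state} on '{event}'")
        next_state, action = TRANSITIONS[(state, event)]
        print(f"{state} --{event}--> {next_state}  [{action}]")
        return next_state

    state = "CLOSED"
    for event in ("open", "syn", "ack"):
        state = step(state, event)   # ends in ESTABLISHED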

That led to a discussion about why no one is using the existing options, and whether people would use this one. The general rumble is that nothing's being used largely because nothing's being used — that it's kind of self-fulfilling, and that documenting a “suggested” mechanism would encourage people to use it (well, yes).

The sense of the room was that it'd be good to pursue this, but no one thought that a working group is needed at this time. I agree. I'd like to see Stéphane and any co-authors proceed with this, and see where they get with it.

dkim — domain keys identified mail

The base specification is in the RFC editor's queue.

The main goals of the meeting were to finish up the SSP requirements document, to move the SSP protocol work along, and to discuss the overview document's future.

On the SSP requirements we had a couple of issues to discuss and took one major one back to the mailing list for a straw poll. The issue is whether the protocol that's developed MUST, MUST NOT, or MAY have the feature of allowing the policy to specify what mechanism(s) the domain signs with. After a presentation and extended discussion on the question, the room was about evenly divided on it. We need more working-group comments/discussion on this.

In San Diego we had four different preliminary SSP (Sender Signing Policy) protocol proposals. With the agreement of the proponents of three of those, we decided to proceed based on draft-allman-ssp with a set of changes that were discussed at the meeting. We hope for a first draft-ietf-dkim-ssp early in April, and we'd like to get active involvement from some DNS people early on, to make sure we've adequately considered the effect on DNS.
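
As a rough illustration of why the DNS involvement matters: whatever SSP becomes, verifiers will be making extra DNS queries for mail that arrives unsigned or signed by a third party. Here's a sketch of such a lookup in Python using the dnspython library; the record name and the tag=value syntax below are pure invention, since the working group hasn't settled them.

    import dns.resolver  # dnspython

    def fetch_signing_policy(domain):
        """Fetch a hypothetical signing-policy TXT record. The record
        name and tag=value syntax are invented for illustration."""
        try:
            answers = dns.resolver.resolve(f"_policy._domainkey.{domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None  # domain publishes no policy
        record = b"".join(answers[0].strings).decode("ascii")
        return dict(field.strip().split("=", 1)
                    for field in record.split(";") if "=" in field)

    print(fetch_signing_policy("example.com"))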

We then discussed a proposal from the authors to split the overview document into three parts, and to publish them incrementally (either as three separate documents or as one document and two updates to it). That will allow it to be used now for early implementation advice on the base specification, while we work on SSP and other DKIM work.

There were a couple of concerns with that: that it will interfere with SSP work, and that it will cause extra work to put out the three iterations as RFCs. Author Tony Hansen noted in his presentation that our charter has the overview document and SSP work going on at the same time, so the “interference” argument isn't really a significant one. And I pointed out that the process overhead for the three documents is mostly an issue for the authors and the chairs, not for the working group as a whole. A poll of the room showed support for allowing the authors to decide whether and how to split the document.

saag — Security Area general meeting

The Security Area has a new area director beginning with this meeting: Tim Polk is taking over Russ Housley's slot, as Russ becomes the IETF chair. Thanks Russ; welcome Tim!

After the usual working-group and BOF reports, there were three invited presentations:

  1. Update: Security Work at W3C
  2. Issues of SAAG Interest in the USG IPv6 V1.0 Profile
  3. Extensions to the Internet Threat Model

The last presentation described attacks on the network, and suggested ways to design around them... and it generated quite extensive floor discussion.

An open-mic session followed, with discussion about EAP, IPsec, and vendors who ignore or mis-implement IETF security specs.

operations and administration plenary

At this session, outgoing IETF chair Brian Carpenter handed over to incoming chair Russ Housley. Thanks Brian, and welcome to Russ (and see the SAAG summary, above).

Apart from the usual administrative reports, the highlight was a presentation on scaling issues with routing and addressing, for which the O&A part of the session was shortened to make time, followed by extensive open-mic discussion. The “solution directions” slide from the presentation is a good summary:

  1. RIB/FIB scaling — engineering by microelectronics and router designers
  2. Update dynamics — BGP adjustments, better operational practices
  3. Traffic engineering, multihoming, e2e transparency, and mobility would benefit from architectural changes
    • identifier/locator separation and/or multilevel locators form a hopeful approach
  4. All these are orthogonal to both IPv6 deployment and application level namespace issues

technical plenary

As with the O&A plenary, the highlight of this plenary was a special presentation. There was the usual presentation by the IAB, and a turnover of the chair from Leslie Daigle (thanks!) to Olaf Kolkman (welcome!). Aaron Falk gave a report on the IRTF (Internet Research Task Force, managed by the IAB), and then Aiko Pras presented details of the Network Management Research Group in the IRTF.

Then we had excellent presentations and an open-mic discussion about internationalization (“i18n”), the issues and challenges for internationalizing Internet standards (summary: it's hard, and we have to be careful to get it as “right” as possible).

OK, well, maybe the highlight for me was after that, when the IAB went on stage and we introduced ourselves. Being on the IAB has made it a particularly busy IETF week, more so than usual, but it's been interesting so far. Happily, there was essentially no open-mic action for us — just a question about using the SOAP protocol in a particular context — and we ended the session 15 minutes or so early.

2 comments:

scouter573 said...

Re: Calsify. We all just went through the American revised Daylight Savings Time and now the Europeans appear to have changed, too. This was so painful for many of us that we plan to do it all manually the next time around. Or move to Greenwich. Anyway, my question is: why are we still solving this problem? What have all the time and calendar folks been doing for the last 20 years? I recognize that the IETF problem is larger than just DST, but why are y'all still debating time and date swizzling at the IETF in 2007?

Barry Leiba said...

Well, there's a "yes and no" situation here. What was monstrously silly was to spend the time we did debating this issue in order to come up with the answer that my 14-year-old niece would have figured out if we'd explained the problem to her.

But it's really not just a case of using UTC everywhere. I'm sure you know, but to fill other people in: there are two pieces (at least) to an Internet standard like this. One is the "bits on the wire" — exactly what the things that communicate send to each other, in what order, with what acknowledgements, etc. The other is the semantics — what does the data mean, and what do you do with it. In order for things to interoperate well, we have to be clear about the semantics.

Consider, for instance, a calendar program that was designed to be used in a small office, so it ignored the whole issue of time zones. Suppose they added calendaring standards to it, so the office next door, with different software, could schedule meetings with us. Then suppose someone in a different time zone scheduled such a meeting, and our software just ignored the time zone because it didn't know what to do with time zones at all. It complies with the bits on the wire, but it doesn't interoperate correctly because it doesn't do the right thing with the time zone. This is in that category.

The specific issue is this: Suppose I have a small group that interacts with companies all over east Asia, and we have to have weekly teleconferences on Sunday afternoons, Asian time. I schedule the meeting from 1 a.m. to 2 a.m. New York time, so it starts between 1 p.m. and 3 p.m. in the various parts of east Asia. All fine.

But just scheduling the meeting at 0600 UTC isn't right, because we want the meeting to be at the same local time in January and in July, and there'll be a one-hour difference in the UTC between those two months (for example). We can send UTC on the wire when we send the recurring meeting event, but we can't be sure what the implementations will do with it internally, nor can we tell them what to do with it. We can only tell them what the observable result has to be. So if we do send UTC, we also have to send the original time zone information, and tell the system whether to adjust for local time changes... because the people work in local time, not UTC.
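
To put numbers on that, here's an illustrative sketch in Python, using the dateutil and zoneinfo libraries (my own illustration, not the iCalendar wire format): expanding the recurrence in local time makes the UTC moments shift across the DST transition, which is exactly what the attendees want.

    from datetime import datetime
    from zoneinfo import ZoneInfo
    from dateutil.rrule import rrule, WEEKLY

    ny = ZoneInfo("America/New_York")

    # Weekly 1 a.m. meeting, recurring in New York wall-clock time.
    meetings = list(rrule(WEEKLY, count=27,
                          dtstart=datetime(2007, 1, 7, 1, 0, tzinfo=ny)))
    jan, jul = meetings[0], meetings[-1]           # 7 Jan and 8 Jul 2007
    print(jan.astimezone(ZoneInfo("UTC")).time())  # 06:00:00 -- New York at UTC-5
    print(jul.astimezone(ZoneInfo("UTC")).time())  # 05:00:00 -- New York at UTC-4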

Now, what happens to that meeting on 11 March 2007, when there is no 2 a.m. in New York? Or on 4 Nov 2007, when 1 a.m. occurs twice? Sending UTC, and even storing UTC in our systems doesn't resolve that. We have to say something in the standards specification about what the implementations should do.

I still think we spent waaaaaay too much time on the issue, but it is an issue that had to be dealt with in the spec.

(Side comment: My flight to Prague was an interesting case of time zone oddities. It was scheduled to leave JFK at 5:45 p.m. and arrive in Prague at 8 a.m. the next day. But since New York had moved its clocks ahead by the time the flight happened, that timing was now an hour off. So during the two weeks between 11 Mar and 24 Mar, that flight left JFK at 6:45 p.m. instead, keeping the arrival time the same. I do find it odd that my itinerary hadn't already reflected that (I didn't know until I checked the flight schedule on the web site on the day I left). I mean, it hadn't been a secret: they knew in advance that the time would be different on that date.)