The 76th IETF meeting was held in mid-November in Hiroshima, Japan. It was a long trip from the northeastern U.S., but it worked well, with good weather, mostly good meeting space, and plenty of reasonable food in the area.
This meeting included about 1100 attendees from 44 countries. Not surprisingly, nearly half were from Asia, and only about 30% were from the U.S. As is often the case with IETF meetings in Asia, many key participants did not attend, and a number of working groups did not meet. In contrast, though, there were an unusually large number of BOF sessions exploring or proposing new work.
As usual, I’m keeping the detailed meeting report off the front page (unless you’re reading the RSS/Atom feed). Click here to read the detailed report.
Neither of the working groups that I chair met this time, but neither of those decisions involved the Asian venue. DKIM is in a decision phase right now, having completed its chartered work and deciding whether to re-charter to take on new work. VWRAP is newly chartered, and was not yet ready for a face-to-face meeting this time. The document editors plan to have documents ready between now and IETF 77, and we should be meeting in Anaheim.
Of the other active working groups that I participate in, sieve, eai, idnabis, and httpbis did not meet. IDNAbis has joined calsify and lemonade in “soon to shut down” state. The others were missing key people (including chairs), and did not feel there would be value in meeting in Hiroshima. In addition, morg and vcarddav met, but probably should not have. Key participants “attended” remotely, using jabber and the audio feed, and no work was done that couldn’t have just been done on the mailing list.
Of particular value, though, was an informal session that a few of us set up to do some live document editing on an eai document, downgraded display. The document was sent to the IESG prematurely, and contains some very confusing steps for reconstructing original header fields from downgraded ones. Having had a chance to talk with the original author about the algorithm, I’m now prepared to re-write the problematic sections so that the document can proceed.
morg — Message ORGanization extensions working group
There were very few attendees at the morg meeting this time, with one participating remotely, via Jabber and the audio stream. We reviewed document status: “Status In List” is in IESG processing; “Sort Display” got two new issues during the brief discussion, both of which have since been posted to the mailing list. As working group secretary, I have to post a prod on the mailing list to try to get other documents, such as “Fuzzy Search”, moving. “Multi-Mailbox Search” and “Message Recall” are on my own queue for update and finalizing.
The other point of discussion was a suggestion that we take on some version of Google’s “XLIST” extension. We talked about how we might incorporate it into the regular LIST command, and we’re trying to contact someone who currently works on Gmail, to see if we can agree on an approach that will make everyone happy.
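For readers unfamiliar with XLIST: it returns ordinary LIST-style responses with extra attributes that identify special mailboxes (sent mail, trash, and so on), which is what makes folding it into the regular LIST command plausible. Here is a small illustrative sketch of that idea; the response lines and attribute names are invented examples in the style of XLIST, not captured from a real server.

```python
# Hedged sketch: XLIST-style responses carry extra attributes marking
# special-use mailboxes. These example lines are illustrative only.
import re

responses = [
    r'* XLIST (\HasNoChildren \Inbox) "/" "INBOX"',
    r'* XLIST (\HasNoChildren \Sent) "/" "[Gmail]/Sent Mail"',
    r'* XLIST (\HasNoChildren \Trash) "/" "[Gmail]/Trash"',
]

def special_use(line):
    """Return the special-use attribute of a response line, if any."""
    attrs = re.search(r'\((.*?)\)', line).group(1).split()
    known = {r'\Inbox', r'\Sent', r'\Trash', r'\Drafts', r'\Spam'}
    marks = [a for a in attrs if a in known]
    return marks[0] if marks else None

# Map each mailbox name to its special-use role.
roles = {line.rsplit('"', 2)[-2]: special_use(line) for line in responses}
```

A client can use such a map to file outgoing mail in the right place without guessing at mailbox names, which is exactly the interoperability gap the extension addresses.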
yam — Yet Another Mail working group
The big discussion in the yam working group this time was the IESG’s response to our submission of the first document template, and how the working group should respond to it. The IESG was initially confused by our submission, a problem that turned out to be due to the tools they use to manage their work, rather than anything else. But in the end, while we had decided to start with a very simple document to test the water, they couldn’t see how to do an approval of a simple document that depends on a much more complex one, and asked that we do the more complex one (RFC 5321) first.
After Alexey presented his summary of the IESG’s position, and after much discussion of it in the room, we took a straw vote on whether to close the working group, drop the two-step-process experiment, or go ahead and try the two-step process again with the bigger document. We also considered the idea of processing two documents, one in one step and the other in two, and then comparing the results. We decided to try the two-step process with RFC 5321, and that’s going ahead now.
vcarddav — vCard update and CardDAV protocol working group
vCardDAV was another lightly-attended, short meeting, also with one remote participant (but not using the audio stream). We briefly went over issues with the vCard update draft, and discussed a point about the KIND property and one about specifying the XML property. We followed with a brief discussion of the vcard-xml draft issues.
Finally, we looked at the IESG review of carddav, focusing on a DISCUSS point: the draft proposes to register a service without a port number, which is not currently allowed. There’s a document in the works in another area that will enable that, so we’ll have to wait for it. There was discussion of alternatives, but we decided it would be OK to wait a month or two, and then see where things are with the other document.
tls — Transport Layer Security working group
The big issue for the TLS working group at this meeting was understanding and dealing with the TLS renegotiation vulnerability that was made public in early November. A number of remote participants were on WebEx, including a couple of presenters, and the discussion was lively.
Possible mitigations discussed:
- Disable renegotiation entirely; dismissed as impractical.
- Mitigate in the applications; also impractical, since it would have to be done differently in every application.
- Introduce a TLS change that carries information from before the renegotiation, allowing the TLS stack to check continuity. This isn’t an ideal fix, but it will help for now.
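The third option works because the attacker who splices a victim's connection onto his own has no knowledge of the victim's earlier handshake. Here is a minimal sketch of that continuity check, assuming each endpoint remembers the verify data from its previous handshake; this is an illustration of the concept, not a real TLS stack.

```python
# Illustrative sketch (not a real TLS implementation) of the continuity
# check: a renegotiating peer must present data from the handshake it
# claims to be continuing.

class TLSEndpoint:
    def __init__(self):
        self.last_verify_data = b""   # empty before the first handshake

    def check_renegotiation(self, presented):
        # Accept a renegotiation only if the peer can echo the verify
        # data from the earlier handshake on this same connection.
        return presented == self.last_verify_data

    def finish_handshake(self, verify_data):
        self.last_verify_data = verify_data

server = TLSEndpoint()
server.finish_handshake(b"\x12\x34")   # original handshake completes

attacker_splice = server.check_renegotiation(b"")        # no prior data
honest_client = server.check_renegotiation(b"\x12\x34")  # continuation
```

An attacker splicing in a new session cannot produce the earlier verify data, so the renegotiation is rejected; the honest client's renegotiation succeeds.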
6lowapp — Application Protocols for Low-power V6 Networks BOF
The 6lowapp BOF aimed to look at protocol issues for low-power V6 applications, to identify potential work in that area. The organizers have a number of application domains in mind; the two presented in the BOF were home and building automation (having, for example, every lamp in the building as a separately addressable/controllable device, in addition to dealing with HVAC, fire control, building security, etc.) and power-system management (smart meters, remote-controllable thermostats, etc.).
Versions of much of this are working now, in limited environments. The goal here is to generalize the solutions and standardize protocols to be used. The key is the initial scope of the work. The BOF presented two protocol ideas, based on REST (representational state transfer) concepts: one for “Constrained Application Protocol” (CoAP), and one for “Constrained-to-General-Internet Intermediates” (CoGII). Service discovery, device discovery, and system security are key points, and there was discussion of the difference between channel security (where (D)TLS may be used) and object security (where CMS may be used).
- Questions about how much of this fits into the IETF, and how much is already being done in industry consortia and other SDOs.
- Concern about timeline, need for speedy development, normal IETF timeframes. Will the industry be willing to wait for IETF process? Counter: significant momentum behind this, might be able to move through the IETF quickly.
- Concern that this be general, not just restricted to constrained environments/devices.
- Concern about the CoGII portion, related to location of CoGII intermediary, interaction with REST, and other issues.
Consensus is to work on the CoAP side, but to drop CoGII. The group will move ahead with charter plans.
homegate — Broadband Home Gateway BOF
The goal of the homegate BOF was to consider standards for home gateways, to deal with the problems of the inconsistent ways that home gateways operate (and, therefore, don’t interoperate). There are many errors in DNS implementations, lack of IPv6 support, problems with congestion management, and many security issues.
I personally found the security issues to be the most interesting. Of course, the obvious point, which many of us have been railing about for years, is that most gateways are shipped with trivial default administrator access (account name/password), and wireless gateways are shipped with no encryption enabled by default... and a great many customers do not change those defaults. But there are other problems, such as gateways that actually block the use of some security protocols (such as DNSSEC and IPsec). Surprisingly, some gateways can actually become compromised, as “zombies”, in much the same way as a personal computer can. That can result in autonomous bad behaviour by the gateway, even when the PC is shut down. It can also result in having the gateway re-infect the PC after the latter is disinfected.
The targets of this work are manufacturers of home gateways, of course, but also service providers, who often write contracts with manufacturers. Clear documents about proper operation, and encouragement not to buy devices that don’t meet the requirements, would help a great deal.
The group does not aim to create any new protocols, but to produce BCPs to define how the existing protocols should be used by conforming gateways. The consensus is that the problem statement isn’t yet sufficiently clear and isn’t concisely scoped. There seems to be enough interest to continue scoping and defining, and to continue work on a charter, but a working group is not imminent.
hybi — BiDirectional or Server-Initiated HTTP BOF
The hybi BOF was set up to look at some solutions to the problem of HTTP use cases where the server needs to push something to the client. Currently, this is often done with some sort of polling — either “short polling”, which is the normal type of polling where the client keeps asking the server for any updates, or “long polling”, where the client asks the server something and the server doesn’t answer until it has an update to give, leaving the request open for a long time. Implementers would prefer a “bidirectional HTTP” mechanism; several approaches have been discussed, and hybi has narrowed the field to a few contenders. There is an Internet draft that looks at the design issues and some of the choices.
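The contrast between the two polling styles can be sketched in a few lines. This is a toy illustration, not HTTP: the “server” is simulated by a queue of pending updates, and the difference is simply whether the client's request returns immediately or waits for something to arrive.

```python
# Minimal sketch contrasting short and long polling. The "server" is a
# queue of pending updates; no real HTTP is involved.
import queue
import threading

updates = queue.Queue()

def short_poll():
    # Normal polling: ask, and return immediately even if empty-handed.
    try:
        return updates.get_nowait()
    except queue.Empty:
        return None

def long_poll(timeout=2.0):
    # Long polling: the "request" stays open until an update arrives
    # (or the timeout expires), so the client sees the push promptly.
    try:
        return updates.get(timeout=timeout)
    except queue.Empty:
        return None

first = short_poll()                       # nothing queued yet
threading.Timer(0.2, updates.put, ["new mail"]).start()
second = long_poll()                       # blocks until the update lands
```

Short polling wastes round trips when there is nothing to report; long polling ties up a request (and server resources) per waiting client. A true bidirectional mechanism is meant to avoid both costs.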
At the informal BOF in Stockholm, it was clear that the main impediment to getting started is the need to define the scope of the work, and this BOF tried to do that. We went over the design issues, requirements, and problems, and then had quite a bit of discussion of the WebSocket protocol. We followed that with a presentation of the problems that WebSocket doesn’t solve, and a look at alternatives. One possibility is Bidirectional Web Transfer Protocol (BWTP). Others include changes to HTTP and changes to WebSocket to resolve some of its problems/limitations.
There was a discussion of Blocks Extensible Exchange Protocol (BEEP), which the IETF spent time working on and which deals with some of these issues. One question is why we think we can do better here. BEEP is complicated, but it turns out that when you try to ignore many of those complications, you wind up coming back to them anyway — the features actually are needed.
We had a general discussion of the “port 80” issue, wherein everything wants to run on port 80 (or 443, for SSL/TLS), because that’s the port that everyone lets through their firewalls (that is, “because it works”). I think this is a bad idea, but it’s clear that fighting it is an uphill battle, at best, and may simply be an impossible one to win.
It’s clear that there will be both short-term and long-term solutions, and the charter allows for that. There was a strong sense that in the long term, we still may end up with more than one answer, and the charter needs to make it clear that that’s a valid outcome.
Consensus on scoping is for a working group to start with WebSockets, and then expand scope from there.
iri — Internationalized Resource Identifiers BOF
The iri BOF was meant to kick off work on fixing some problems with the current Internationalized Resource Identifier (IRI) specification. This is a follow-on from an informal discussion in Stockholm, where we thought the work could be done as an individual submission. The decision between meetings, though, was to go ahead with a formal BOF and look toward a working group.
The primary problems here are interoperability of existing implementations because of inconsistencies in the interpretations of the current specs, the extensive effect of any changes to the IRI spec on existing implementations, and the interaction between the IRI specification and other standards (including those by other organizations). The group needs to decide which protocol schemes to work on (http and mailto; what others?).
Discussion included questions about using IRIs as protocol elements, rather than simply as names that get translated at a different operational layer. For example, web browsers can use IRIs internally, but handle all the encodings necessary to create URIs on the protocol end. The problem here is that that’s not the way it works today, and there are non-browser applications of IRIs that do not have a presentation layer. IRIs are already being used as protocol elements, so that horse has left the barn.
Consensus is to continue work on the draft charter, and nail down the issues that have been raised. We might have a working group chartered before IETF 77. The consensus is that the working group will sort out some details, such as which schemes to work on.
aplusp — Address Plus Port BOF
The aplusp BOF presented a proposed mechanism for using “address plus port” to deal with limitations on IPv4 addresses. The concept is that one V4 address is assigned to multiple computers (or interfaces), the different computers are assigned different port ranges, and the address plus the port is used to route data to the computers. The translation is isolated, so the routing protocols and the consumer’s NAT boxes and applications are not affected by this.
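The routing idea can be sketched concisely: the carrier demultiplexes inbound traffic for the shared address by port range. The address and the range size below are invented for illustration, and real proposals allow more flexible range assignments than this even split.

```python
# Hedged sketch of address-plus-port routing: several subscribers share
# one public IPv4 address, each owning a slice of the port space, and
# inbound packets are routed by (address, port). Values are illustrative.

SHARED_ADDR = "192.0.2.1"
PORTS_PER_SUBSCRIBER = 4096        # 16 subscribers per shared address

def subscriber_for(addr, port):
    """Route an inbound packet to a subscriber index by its port range."""
    if addr != SHARED_ADDR:
        return None                # not a shared address; route normally
    return port // PORTS_PER_SUBSCRIBER

a = subscriber_for(SHARED_ADDR, 80)     # falls in the first range
b = subscriber_for(SHARED_ADDR, 5000)   # falls in the second range
```

Note what this implies: a subscriber can no longer use arbitrary ports, including well-known ones outside its range, which is the root of Dave Thaler's objection (below) that port-restricted IPs are a drastic change to the IP model.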
For the BOF, the scope is limited to the “dual stack lite” configuration. We saw a presentation that covered the key points of the idea, and which showed various scenarios that demonstrated how it would work. We saw a presentation about how this fits into mobile environments. We saw a presentation about the difficulties that AplusP introduces. From that last presentation, Dave Thaler’s summary is this:
- Port-restricted IPs are a drastic change to the IP model
- Lots of complexity
- Lots of problems known, and probably more
- People will get it wrong
- This architectural change is unnecessary
- Multiple layers of NAT are already bad enough; this is arguably worse
There was a long microphone queue of questions and comments (mostly comments). In the end, when it was time to take a consensus hum, the decision was overwhelmingly not to do this. I don’t think I’ve ever seen such a decisive hum against a proposal in a BOF before.
grobj — Generic Referral Object BOF
The grobj BOF looked at the idea of a “generic referral object” as a means to do protocol-level referrals without the limitations that current referral mechanisms have — specifically, limitations to IPv4 addresses. In a sense, this is looking to design a generic tool like Interactive Connectivity Establishment (ICE).
Essentially, the point is for the “object” to have as many definitions as possible of how to get to the resource it’s referring to, keeping in mind that both IPv4 and IPv6 addresses may be needed, that connectivity isn’t always symmetric, that there may be difficulties going through firewalls and network address translation, and so on. The object will say, “Here is a [prioritized?] set of ways to find what I’m trying to refer you to.” I asked whether they had considered the possibility of including a token given by the service that would allow someone to access it in a way they might normally not be allowed to — such as punching through a firewall. They had not considered it, but the idea sounded interesting.
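To make the shape of such an object concrete, here is a sketch of what it might carry: an ordered list of candidate ways to reach the resource. The field names and values are invented for illustration; nothing here comes from an actual draft.

```python
# Hypothetical sketch of a "generic referral object": a prioritized set
# of ways to reach a resource. All names and addresses are illustrative.

referral = {
    "resource": "example service",
    "candidates": [   # tried in priority order
        {"priority": 1, "type": "ipv6",  "addr": "2001:db8::1",       "port": 5060},
        {"priority": 2, "type": "ipv4",  "addr": "192.0.2.7",         "port": 5060},
        {"priority": 3, "type": "relay", "addr": "relay.example.net", "port": 3478},
    ],
}

def ordered_candidates(obj):
    """Candidates in the order a client should attempt them."""
    return sorted(obj["candidates"], key=lambda c: c["priority"])

first_try = ordered_candidates(referral)[0]["type"]
```

A client would try each candidate in turn until one works, which is also roughly how ICE approaches the same problem for media sessions.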
It was noted that this is a clear “layer violation”, to which the slightly joking response was, “That’s why we’re doing it in the Applications Area,” implying that lots of things have to leak up to the applications layer — too often true, but perhaps true more often than it needs to be.
There was a thought that just changing ICE would be best, and the response to that was that ICE is currently in the RFC Editor queue, and “Please don’t mess that up.” That seems a troubling approach: avoiding interference with another protocol’s progress is not the best reason to design a new protocol. I don’t know enough about ICE to tell whether this proposal would best be incorporated as a revision or extension to ICE.
The “generic referral object” isn’t intended to be incorporated into existing use cases, but to be adopted by new applications. It’s not clear yet what those applications will be, so it’s not clear who will use it.
In the end, there seems clear interest in working on this further, but the problem isn’t sufficiently well defined yet, and opinion was divided on whether it’s a solvable problem, in general. We need to have a defined problem statement document to take to a second BOF, and see where that leads.
decade — Decoupled Application Data Enroute BOF
The decade BOF aimed to start work to define a protocol used by peer-to-peer applications to access data stored “in the cloud”. The initial proposal was for a working group to produce four results:
- A problem statement (informational).
- A requirements document (informational).
- A survey of existing technology (informational).
- An online-data-access protocol (standards track).
The group is aiming the work at peer-to-peer applications, but specifically does not want to include service discovery in the work — it will leave that to the ALTO working group, which is already engaged in the issue of finding good/better peers.
The most significant point of discussion in the room was whether an existing protocol could/should be considered here, perhaps with extensions. WebDAV and NFS were the ones mentioned; the existing survey draft does talk about the latter, but not about the former. It strikes me that it’s very important, here, to reuse as much as possible from a well defined and well tested protocol. File access protocols can be difficult to get right, and if what’s already been done is close to what we need, it would be a bad idea to reinvent it.
One of the Area Directors feels uneasy about chartering a working group to do the preparatory work (problem statement, requirements, and survey), rather than having that done ahead, and chartering a working group to deal with the protocol issues. I don’t agree: I think if it’s done that way, it’s more likely that requirements will change with the scope of the working group, and will thus change the conclusions from the survey. Also, having a working group to focus the effort will make it less likely that the survey might miss something that’s important to consider.
The consensus in the room seemed to be that the problem statement was reasonably clear, but that without a completed survey and set of requirements, we can’t know what comes next: defining a new protocol, or using or adapting an existing one. The sense is that we should move toward chartering a working group, but only charter it for the first three items (the informational documents), and then re-charter when we know where we want to go from there.
Technical plenary presentation on internationalization
This meeting’s technical plenary presentation was about “Internationalization in Names and Other Identifiers”, presented by John Klensin and Stuart Cheshire. They gave an interesting presentation showing why the problem is as hard as it is, and looking at some of the ways we’re trying to solve it.
This presentation ties into the work of the idnabis and eai working groups, and the iri bof.
ISOC presentation: Internet Bandwidth Growth
Following successful previous press events, on IPv6 and DNSSEC, the Internet Society held a press event over lunch on Tuesday, titled “Internet Bandwidth Growth: Dealing with Reality”. There were no surprises here, as this really was meant for publicity, rather than technical work. Apart from the ISOC introduction, four speakers talked about their experience and predictions about the growth of data transmission on the Internet. I think this was a less interesting session than the two previous ones were.