Standards work update

In the past half year, there have been a number of developments regarding the standards drafts I have been working on.

TLS Multi Stapling draft has become a TLS Working Group item

At the March IETF meeting in Paris, the TLS Working Group decided, with the decision confirmed on the mailing list in late April, to accept my draft for fixing and expanding OCSP Stapling into "Multi Stapling" as a Working Group item.

Since then, the draft has been updated twice. The update released this summer included, as a trial, support for the Server-Based Certificate Validation Protocol (SCVP) mechanism. The text for that was mostly contributed by Sean Turner, one of the IETF Security Area Directors.

Unfortunately, not much interest was expressed in the Working Group, either for or against this expansion of the draft. I therefore decided to remove the SCVP text from the draft when I updated it a few weeks ago.

How to prevent version rollback attacks against TLS clients and servers?

The SSL and TLS protocols have a mechanism that is intended to allow clients and servers supporting different versions to negotiate the highest mutually supported version: the client sends its highest supported version, and the server picks the lower of that and its own highest version. The mechanism is also intended to prevent an attacker from forcing the parties to negotiate an older version of the protocol (a version rollback attack) that might be easier to break.
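
A minimal Python sketch of this negotiation rule, using the (major, minor) version numbers that appear on the wire; the function name is illustrative:

    # Version negotiation as described above: the server picks the lower of
    # its own highest version and the version offered by the client.
    SSL_3_0 = (3, 0)
    TLS_1_0 = (3, 1)
    TLS_1_1 = (3, 2)
    TLS_1_2 = (3, 3)

    def negotiate_version(client_highest, server_highest):
        return min(client_highest, server_highest)

    # Example: negotiate_version(TLS_1_2, TLS_1_0) -> (3, 1), i.e. TLS 1.0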

This is done in two different ways:

  • First, the integrity of the entire handshake is checked when the client and server exchange the first encrypted packets. As long as the method used to check the integrity (a digest method or hash function, e.g., SHA-1 or SHA-256) is secure, this will prevent a successful version rollback attack.
  • Second, the RSA-based method for agreeing on the TLS encryption key is defined in such a way that the client also sends a copy of the version number it offered to the server, and the server is then supposed to check that copy against the version number it actually received (see the sketch after this list). This would protect the protocol version selection even if the hash function security for a version were broken. Unfortunately, a number of clients and servers have implemented this incorrectly, so the method is not effective in practice. Additionally, it can only be used with RSA-like key exchange methods, not with those based on Diffie-Hellman (DH) or Elliptic Curve Cryptography (ECC).
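
The following Python sketch illustrates the RSA premaster secret version check described in the second point; the constants and helper names are illustrative, not taken from any particular TLS implementation:

    import os

    def build_premaster_secret(client_hello_version):
        # Client side: the first two bytes of the 48-byte premaster secret
        # repeat the version the client offered in its ClientHello; the
        # remaining 46 bytes are random.
        major, minor = client_hello_version
        return bytes([major, minor]) + os.urandom(46)

    def version_check_ok(premaster_secret, version_seen_in_client_hello):
        # Server side: the copy inside the RSA-encrypted premaster secret
        # must match the version the server actually received.  A mismatch
        # suggests the ClientHello version was tampered with (rollback).
        return tuple(premaster_secret[:2]) == version_seen_in_client_hello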

However, ever since TLS 1.0 support was introduced in clients, they have had to deal with legacy servers that did not respond well to being presented with a TLS 1.0 (or higher) version number. Such servers would simply shut down the connection, return error codes, or misbehave in other ways. Once clients started supporting TLS Extensions, the situation got even more complex, since a large number of older servers would not accept connection attempts from clients sending TLS Extensions, despite all versions of TLS requiring servers to tolerate them.

In order to let users actually access such sites, browser clients, when the initial connection failed, retried with successively older protocol versions (down to SSL v2 while that was still offered, then SSL v3) until one of them worked.
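
A minimal Python sketch of this fallback behaviour; attempt_handshake() is a hypothetical stand-in for a real TLS stack:

    FALLBACK_ORDER = ["TLS 1.2", "TLS 1.1", "TLS 1.0", "SSL 3.0"]  # illustrative

    def connect_with_fallback(host, attempt_handshake):
        for version in FALLBACK_ORDER:
            try:
                # Offer 'version' as the highest supported version.
                return attempt_handshake(host, max_version=version)
            except ConnectionError:
                # Legacy or intolerant server?  Retry with an older version.
                # An attacker who can force this failure forces the rollback.
                continue
        raise ConnectionError("no mutually acceptable protocol version")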

The result: All clients voluntarily subject themselves to a version rollback attack, and none of the built-in protection mechanisms work.

This has been an issue for at least 13 years, and it has never been fixed, since doing so would break "too many sites" (at present, 1.36% of all servers). Another factor is that the issue is still considered hypothetical, since there is no known attack able to break any version of SSL v3 or TLS 1.x.

In December 2009, during the work on the TLS Renego patch specification, I suggested to the TLS WG that clients could treat a server's use of the Renegotiation Indication extension (the Renego patch) as an indication that the server is TLS version and extension tolerant, and that they should not perform version rollback when connecting to such servers. The outcome of that discussion was a reminder in RFC 5746 (the Renego patch specification) that servers MUST accept TLS Extensions and higher version numbers than they support; unfortunately, the Working Group did not decide to add a requirement that clients must not permit version rollback recovery against Renego patched servers. I did, however, implement such a policy in our implementation of the patch in Opera 10.50.
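
The policy amounts to roughly the following; this is only an illustrative Python sketch under my reading of it, not Opera's actual code, and the names are made up:

    def accept_rolled_back_connection(server_hello, offered_version, highest_version):
        # Called after the client has already retried at a lower version and
        # the server has answered.  A server advertising the Renegotiation
        # Indication extension (RFC 5746) is required to be version and
        # extension tolerant, so the original full-version handshake should
        # not have failed; treat the rollback as suspicious and refuse it.
        renego_patched = "renegotiation_info" in server_hello.extensions
        if renego_patched and offered_version < highest_version:
            return False
        return True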

Recently (in the past year or so), there has been more discussion of this topic, starting with a suggestion by Eric Rescorla (aka EKR), one of the TLS WG co-chairs, to use special Cipher Suite values in the TLS handshake to indicate the version; Adam Langley from Google followed up with a slightly different version of the same concept.

I do not like the proposed solution, for several reasons:

  • It would require updates of all clients and all servers. Considering that, 3 years after the disclosure of the Renego problem, a serious protocol vulnerability, we have still not passed 75% patch coverage in my TLS Prober sample (it is currently 73.7%), and that this will likely be considered a less serious problem to patch, I believe we would be very lucky to see over 50% coverage after the same time period.
  • The logic around such a system would be complex, and needing a value for each defined TLS version would make it even more complex. There are also issues with how the server and client should behave in the event a mismatch is noted between the two version indication systems. Should the server upgrade the connection? Should it return an error code, and, if so, what should the client do? Something else?
  • Any server that would be updated with this system would, by definition, already be version and extension tolerant. It would also already be patched for the Renego problem, probably years earlier.
  • This solution would do nothing to protect connections with the servers that are already version and extension tolerant, 98.3+% of servers in my sample, until those servers are also updated.
  • This suggestion reuses a concept, the Special Cipher Suite Value or SCSV, that was introduced by the Renego patch as a hack to allow clients to signal their Renego patch status to the server when they are not sending TLS Extensions, such as when connecting to extension-intolerant servers (see the sketch after this list). That was done to fix a serious security vulnerability, for which there was no other channel to convey the necessary information. In this case, I believe there are other, better methods to convey or deduce the information.
  • The SCSV concept should only be used as a last resort, when nothing else will do the job. If its use is allowed too often, it becomes easier to select it, and it ends up becoming a substitute for the TLS Extension mechanism. In fact, there has been at least one other suggestion to use SCSVs as a signaling mechanism in the last year, which, in my opinion, did not have a really good rationale for using the concept (and the Working Group did not think it would produce the desired security result, either).
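
For reference, this is roughly how the existing Renego SCSV is conveyed: it is simply an extra value in the ClientHello cipher suite list that carries a signal rather than naming a real cipher suite. The value below is defined in RFC 5746; the helper name is illustrative. The version-signaling proposals would add further values of this kind.

    TLS_EMPTY_RENEGOTIATION_INFO_SCSV = 0x00FF  # defined by RFC 5746

    def client_hello_cipher_suites(real_suites, sending_extensions):
        # If the client cannot send the renegotiation_info extension (for
        # example, to stay compatible with extension-intolerant servers),
        # it appends the SCSV to signal Renego patch support instead.
        suites = list(real_suites)
        if not sending_extensions:
            suites.append(TLS_EMPTY_RENEGOTIATION_INFO_SCSV)
        return suites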

In my opinion, it is better to find a way to reliably determine that the server is version and extension tolerant and use that as a proxy indication, and I think the TLS Renegotiation Indication Extension (the Renego patch) is as close to a perfect proxy indicator as we are going to find.

  • The Renego patch RFC contains, as mentioned above, a reminder to implementers that version and extension tolerance is expected, and this recommendation has (mostly) been followed.
  • While this method would not protect all of the 98.3+% of tolerant servers, it would immediately protect connections with the 73.7% of all servers that already support the Renego patch (based on the 571,000 servers sampled by the TLS Prober).
  • Of the Renego patched servers, only 0.17% are version and/or extension-intolerant, compared to 4.7% of the unpatched servers.
  • Of the patched, but intolerant, servers, 33% are in a special category: they do not tolerate a specific TLS extension, either the Server Name Indication extension or the Certificate Status extension. I have yet to discover the reason for this intolerance among a small group of servers, but there are indications that some kind of TLS front-end is involved, and the largest collection of servers we have found is a group of small online banks hosted on a single 256 IP address subnet by the banking ISP Jack Henry & Associates.
  • When clients stop supporting Renego unpatched servers, they can immediately remove the code supporting version rollbacks, too, since the Renego patched servers will not require that functionality.

This means that it would be possible for clients to use the Renego patch to protect connections with 73.7% of the servers on the web, immediately, by not allowing version rollback when connecting to a patched server.

While the small number of problematic servers could cause some to be concerned, I believe that these servers (712 of the 421,000 patched servers, the 0.17% mentioned above) can be quickly fixed by their owners once they learn of the need to do so. I think it is a much bigger problem that most users will be connecting to these sites using the obsolete, 17-year-old SSL v3 protocol.

As there were other proposals on the table, I decided to write up my proposal as an Internet Draft and submit it to the IETF TLS Working Group for consideration. So far the topic has been discussed at one meeting, in Vancouver, but no decision about which approach to use has been made yet.

In my opinion, my proposal is the quickest and best way to remove the current system of automatic version rollback from use, while still maintaining compatibility with Renego unpatched servers that are version and/or extension-intolerant.

As mentioned, Opera has already implemented the proposed method, and thus has already protected its users against a version rollback attack when connecting to Renego patched servers. Unfortunately, this does create a couple of problems: sites such as www.glacierbank.com, www.banksafe.com, and www.bigskybank.com do not follow the TLS specification and require the client to perform a version rollback, even though they are Renego patched, because they do not tolerate the Certificate Status extension; as a result, Opera is not able to connect to them.

Public Suffix information moving into the DNS?

Over the past several years, I, among others, have been working to convince the IETF to look at the problems with domain-based limitations on cookies and other cross-site information sharing, as demonstrated by the difficulty of preventing cookies from being sent to all web servers in domains like co.uk, most commonly called Public Suffixes. This has been difficult, since many in the DNS community are strongly opposed to the concepts that Public Suffixes involve.
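
As a minimal Python illustration of the kind of check the Public Suffix concept enables (the tiny hard-coded suffix set and the helper name are purely illustrative; real lists contain thousands of entries):

    PUBLIC_SUFFIXES = {"com", "uk", "co.uk"}  # illustrative subset

    def cookie_domain_allowed(requested_domain):
        # Refuse Set-Cookie domains that are themselves public suffixes, so a
        # page at example.co.uk cannot set a cookie visible to every site
        # under co.uk.
        return requested_domain.lstrip(".").lower() not in PUBLIC_SUFFIXES

    # cookie_domain_allowed("co.uk")         -> False
    # cookie_domain_allowed("example.co.uk") -> True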

In the meantime, the world has moved on: Mozilla completed its work on creating the Public Suffix List, and, in other areas of internet technology, concepts requiring the information provided by the Public Suffix List kept showing up.

One of the main issues with a system like the Public Suffix List is the problem of maintaining it as an up-to-date resource. Currently, the information in the list has to be manually collected from many diverse sources, checked, and included in the list. As the number of Top Level Domains increases, not just with new internationalized country code TLDs (such as those for many non-western countries), but also with the hundreds of new generic TLDs that ICANN is currently working to introduce every year, the amount of work needed to maintain the list will increase rapidly, and the list would in any case never be truly up to date.

At the March IETF meeting in Paris, in a small meeting with several interested parties, Andrew Sullivan, co-chair of the IETF DNS Extensions Working Group, volunteered to investigate how to better implement the public suffix system in DNS. His suggestions are available in his document "Asserting DNS Administrative Boundaries Within DNS Zones".

As we have now started on a route to a system that I hope will be better than the one I have been proposing, I am suspending work on the SubTLD Structure draft. Opera will still generate the XML files we use to distribute Public Suffix information, but the work of developing the XML format into a suggested standard for distribution and aggregation of Public Suffix information will end.

Work on other drafts suspended

At present, the IETF HTTP Working Group is busy defining a new HTTP 2.0 specification, based on SPDY. Therefore, it is unlikely that my drafts for Cache Context and Cookie Origin will be taken up by the HTTP Working Group. It may be that this group, and the planned HTTP Authentication Working Group, will take a look at these areas and try to solve the issues those two drafts seek to address.

Given the low probability of an IETF Working Group taking up the drafts as Working Group items, and the general lack of interest expressed from other parties, I am suspending work on these drafts, as well. However, I may resume work on them if enough interest is expressed from relevant parties, particularly web developers and relevant websites. With enough interest and participation, it might be possible to refine either, or both, of these drafts and submit them for consideration as Individual Submission RFCs.

Archive:

draft-ietf-tls-multiple-cert-status-extension-00 (archive)

draft-ietf-tls-multiple-cert-status-extension-01 (archive)

draft-ietf-tls-multiple-cert-status-extension-02 (archive)

draft-pettersen-tls-version-rollback-removal-00 (archive)