Last weekend I noticed some reports about how everybody's favorite "bad guy", Kim Dot Com, had launched a new web site, called "Mega".
I do not have any public opinions about this site, or the various controversies surrounding the site and its owner, but given the pre-launch emphasis on security and privacy, I decided to take a look at the site using the TLS Prober, to see how it measured up on the points the Prober checks, along with a couple of additional checks. I did not test the site's other security features, but Lee Hutchinson at Ars Technica already tested some of them.
Based on the results from the two servers I tested (the main site and the EU static server), the servers seem to be mostly standards compliant, and the main site actually supports TLS 1.2 :up:.
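For anyone who wants to repeat the protocol-version part of that check, the sketch below shows roughly how such a probe can be written with Python's standard ssl module. It is not the TLS Prober's code; the host name is a placeholder, and a modern OpenSSL build may refuse to offer the oldest versions.

```python
# A minimal sketch of a protocol-version probe, loosely in the spirit of what
# the TLS Prober does (not its actual code). "example.com" is a placeholder
# host, and a modern OpenSSL build may refuse to offer the oldest versions.
import socket
import ssl

HOST, PORT = "example.com", 443

def probe_version(version):
    """Attempt a handshake that offers exactly one protocol version."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # only the handshake matters here
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                return tls.version()     # e.g. "TLSv1.2"
    except OSError:                      # ssl.SSLError is a subclass of OSError
        return None

for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
    print(v.name, "->", probe_version(v) or "not negotiated")
```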
Unfortunately, I still found some issues:
- The main site server is NOT patched for the TLS Renego issue! Almost three (3) years after the IETF TLS Working Group published the Renego patch (RFC 5746), installing a new server that supports TLS 1.2 but is not patched for the Renego issue does not look good, particularly when you are trumpeting the security of the site's services. The server does not accept client-initiated renegotiation, so it does not appear to be vulnerable to the full Renego attack, but there is no way to know from the outside whether there is a back door that is vulnerable. (The static server is patched, but it only supports TLS 1.0, not TLS 1.2.)
- The only encryption method (cipher suite) supported by the tested servers is RC4 with MD5. There are a number of people (and I am one of them) who are skeptical of using this particular cipher suite, as the MD5 method is severely weakened. At Opera, I did at times try to disable this cipher suite, but since 0.18% of servers, including some banks, might break, it has not been practical to do so. I am sure the server administrators chose this particular method because it is faster than the other available methods, but there is another cipher suite that is almost as fast (and also invulnerable to the BEAST attack): RC4 with SHA-1. (There is a sketch after this list for checking which cipher suite a server actually negotiates.)
- While the main site uses a 2048-bit RSA key, the static site uses a 1024-bit RSA key. Given that 1024-bit RSA is to be phased out completely from CA-issued certificates by the end of 2013, using that key size is not a good idea. (A sketch for reading a certificate's key size also follows the list.)
- The static server has SSL v2 enabled, and it accepts and answers connections using SSL v2 (rather than refusing with a "not allowed access" message, as some servers do). As I have said before, SSL v2 is old and insecure. Additionally, no SSL/TLS-capable browser client published in the past 16-17 years has supported only SSL v2.
- The static server's certificate is not properly configured, as it is missing an intermediate CA certificate and "assumes" that the client either already knows that certificate or can fetch it.
- The main server's certificate is also not properly configured. It contains 3 extra certificates, in addition to the site certificate and the above-mentioned intermediate. The extra certificates are probably unnecessary, since all modern clients ship with the Root certificate that issued the intermediate certificate (and the static server's configuration already assumes knowledge of the intermediate and, implicitly, of the issuing Root). The result is that the server sends a couple of kilobytes of unnecessary data in each full TLS handshake. The certificates are also not ordered properly.
- The static site apparently does not support TLS Session Resume, despite sending session IDs that indicate resumable sessions, which significantly increases the load on this server. I suspect the reason session resume apparently does not work is that the servers hosting the site do not share session information, because this server also supports the newer Session Ticket extension, which stores the session information on the client, and it does resume such sessions. The main site did resume normal sessions, but it did not support Session Tickets. (A sketch for testing session resumption follows the list as well.)
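For the cipher suite point above, the following sketch (again using Python's standard ssl module, with a placeholder host name) shows how to see which suite a server actually negotiates with your client; whether RC4 can even be offered depends on how your local OpenSSL was built.

```python
# A minimal sketch for seeing which cipher suite the server actually picks.
# "example.com" is a placeholder host; whether RC4 suites can be offered at
# all depends on how your local OpenSSL was built and configured.
import socket
import ssl

HOST, PORT = "example.com", 443

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
# Lower the security level so legacy suites are offered if available
# (OpenSSL 1.1.0+ syntax); this is for probing only, never for production.
ctx.set_ciphers("ALL:@SECLEVEL=0")

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        name, protocol, bits = tls.cipher()
        print(f"Negotiated {name} ({bits}-bit) over {protocol}")
```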
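For the key-size point, a similar sketch works, assuming the third-party cryptography package is installed (the host name is again a placeholder):

```python
# A minimal sketch for reading the key size out of a server's certificate.
# Assumes the third-party "cryptography" package; the host is a placeholder.
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
print("Subject:        ", cert.subject.rfc4514_string())
print("Public key size:", cert.public_key().key_size, "bits")
```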
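And for the session-resume point, a rough test is simply to connect twice and hand the first connection's session to the second one. Note that with TLS 1.3 the session ticket arrives after the handshake, so this naive check is mainly meaningful for TLS 1.2 and earlier.

```python
# A rough sketch of a session-resumption check: connect once, then hand the
# first connection's session to a second connection and see if it is reused.
# "example.com" is a placeholder; with TLS 1.3 the ticket arrives after the
# handshake, so this naive check is most meaningful for TLS 1.2 and earlier.
import socket
import ssl

HOST, PORT = "example.com", 443

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

def handshake(session=None):
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST, session=session) as tls:
            return tls.session, tls.session_reused

first_session, _ = handshake()
_, reused = handshake(session=first_session)
print("Session resumed on second connection:", reused)
```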
Do these issues, by themselves, make the sites insecure? The answer is "not really", since none of the problems are critical. However, from the outside there is no way to be sure that the TLS renegotiation functionality is completely locked down, which is a problem because the Renego attack allows the attacker to choose the URL; if renegotiation is enabled for some areas of the site, those URLs could be abused by an attacker.
I would still recommend that the administrators of Mr. Dot Com's servers fix the above issues as quickly as possible.
After all, Mega is paying people who find flaws.