Over the past few years browser vendors have been steadily raising the bar for what can be called secure connections and documents. SSL v2 has been phased out, as has 40- and 56-bit encryption. Extended Validation (EV) has been introduced to provide better identity assurance, and the W3C is about to release the Web Security Context UI specification to make the security indicators more consistent.
There is, however, (at least) one area where this development has not come as far as I think it should have: unsecured content is still allowed in secure documents. Some clients do give the user a warning (MSIE, for example; Opera and Firefox do not), but as far as I know none of them show a padlock for such pages.
Some may wonder why such mixing is a problem, so here is a short description of some issues:
- Unsecured content can be read by anyone listening in on the connection (not very polite of them).
- Even worse, somebody listening in is almost always in a position to change the content (but that's … criminal, right?).
- Some content, like images, can tell the eavesdropper what you are doing.
- Other content, like JavaScript and CSS, can directly change the content or appearance of the webpage. If modified by an attacker, the page could be made to do things neither you nor the designer would like, such as sending your password and credit card details off to the attacker.
Therefore, using unsecured content in an otherwise secure document can, in some cases, leak information, while in others it can compromise the security of the *entire* site.
A very "popular" excuse for using mixed security in a site is that it reduces load on the server, because of the cost of SSL connections. This excuse was debunked by Bob Lord (formerly of Mozilla) a few years ago. The most costly operation for a secure server is the private-key decryption step of the initial SSL handshake, and on a well-configured server that reuses SSL sessions, that is an operation done no more than once a day per client. The remainder of the extra work is the encryption itself, and modern encryption methods like AES have been designed to be as efficient as possible. Both of these operations can be performed using hardware accelerators, which should reduce your total hardware cost if you do have heavy traffic to your site.
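To make the cost argument concrete, here is a minimal Python sketch (the host name is just a placeholder; any HTTPS server will do) showing that a client which saves its SSL/TLS session can reconnect without triggering the server's expensive private-key operation again:

```python
import socket
import ssl

HOST = "example.com"  # placeholder; any HTTPS server will do

context = ssl.create_default_context()
# Pin the demo to TLS 1.2: with TLS 1.3 the session ticket may only
# arrive after the handshake, which would complicate the example.
context.maximum_version = ssl.TLSVersion.TLSv1_2

# First connection: a full handshake, including the server's
# costly private-key operation.
with socket.create_connection((HOST, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        saved_session = tls.session  # remember the negotiated session

# Second connection: offer the saved session, so the server can
# resume it and skip the expensive part of the handshake.
with socket.create_connection((HOST, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=HOST,
                             session=saved_session) as tls:
        print("Session resumed:", tls.session_reused)
```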
By ensuring that your server serves HTTP 1.1 content, and that it properly supports and specifies cache directives, persistent connections, and pipelining, you can maximize your server's efficiency. If you still think your site is under too heavy a load, perhaps you can reduce the complexity of the documents, simplify images and scripting, and rely on CSS to place the content correctly?
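As a sketch of what persistent connections buy you, the following Python snippet (host and paths are placeholders) sends two requests over a single HTTPS connection, so the handshake cost is paid only once, and prints each response's cache directive:

```python
import http.client

HOST = "example.com"  # placeholder host

# One HTTPS connection, reused for several HTTP/1.1 requests
# (keep-alive), so the TLS handshake happens only once.
conn = http.client.HTTPSConnection(HOST)
for path in ("/", "/style.css"):  # placeholder paths
    conn.request("GET", path)
    response = conn.getresponse()
    response.read()  # drain the body before sending the next request
    # Cache-Control tells the client how long it may reuse the
    # response without contacting the server again.
    print(path, response.status, response.getheader("Cache-Control"))
conn.close()
```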
In most cases where we see mixed-security content used these days, the reason is not cost reduction. It is usually the result of copying code, for example an analytics JavaScript tag, without examining it, forgetting to change the URL to https, and then not noticing during testing the security warnings that some browsers do show (unless the testing is done on an unsecured site rather than a secure one, which might explain some of the curious aspects of this).
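One way to catch this class of mistake before it reaches users is to scan the markup of your secure pages for unsecured subresource references. A minimal sketch, assuming a placeholder URL (a real checker would also look at stylesheet links and CSS url() references):

```python
import re
import urllib.request

PAGE = "https://example.com/"  # placeholder: a secure page to check

html = urllib.request.urlopen(PAGE).read().decode("utf-8", "replace")

# Flag src attributes (scripts, images, frames) that point at
# plain-http URLs; plain <a href> links are not mixed content.
pattern = re.compile(r'src\s*=\s*["\'](http://[^"\']+)', re.IGNORECASE)
for match in pattern.finditer(html):
    print("Unsecured reference:", match.group(1))
```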
In my opinion such mixing is long past its use-by date. It may have been (or seemed) necessary in 1998, but not in 2010, especially with the much more hostile environment that now exists, such as phishing, malware, and the possibility of compromised WiFi networks.
During beta testing of IE7 Microsoft tried to block mixed-security content by default, but apparently found that "too many" sites broke, and backed down.
I wish Microsoft had not backed down. The "problem" with secure sites not working "properly" when blocking unsecured content will not disappear, and will have to be handled if/when blocking is reactivated. In my opinion, any such problems with broken sites will be short-lived, and the sites will probably be fixed within weeks of a policy shift.
One might hope that fewer sites would use mixed-security content once browsers start to block that type of problem, but I would not hold my breath.
However, I continue to see hotel booking sites, bank sites, and other important sites loading unsecured content as part of their secure pages, and even new sites do it.
In recent weeks I have noticed two high-profile sites with such mixing of secure and unsecured content: Twitter and Citibank. In both cases the pages in question included external JavaScripts loaded over unsecured connections, which would allow an attacker to completely replace the content of the site and, for example, use it to steal passwords.
Twitter (contacted June 3rd) has recently been updating its site, including a secure site interface, but has not yet removed all the references to unsecured content on its main page. According to my information, they are working on it.
In Citibank's case (contacted June 17th) there are two problems: their main landing page, <http://www.citi.com/>, is not encrypted, and the "Unknown page" (404) error page (e.g. the page for https://www.citibank.com/foo ) on one of their secure servers includes unsecured scripts. This means that a MITM attacker can replace their main page (by editing the network traffic, for example in a WiFi router) and redirect the user to a server where, by replacing the unsecured script, he is in complete control over the content. The only way for a user to discover that he is under attack is to notice that there is no padlock and/or that the URL is not the one Citibank actually uses for the service he is supposedly using, even though the URL belongs to Citibank.
I think that this is a big chink in the bank's security armor, but apparently Citibank does not think so. They point to the fact that the vulnerable server is no longer their main site (which does not matter as long as the server is online), that their services are secured with encryption (which does not matter if the attacker can trick the user into believing they are on the right site), and that they have other security policies in place to help users in case their account is compromised. My impression is that they are not planning to fix the problem.
The problem with mixed-security sites will not disappear while they are allowed to work, so we do not gain anything by waiting, rather the opposite.
While I think blocking unsecured content in secure pages is the best solution, there are others who think that one should warn the user with a dialog, or display the page without any indication that it is even partly loaded from a secure server, for example by removing the "https" in the URL displayed in the address bar, or showing it crossed out, because we "should allow sites to shoot themselves in the foot if they absolutely want to".
The problem with the warning dialog is that most users will usually just click through it to continue their business. Crossing out https is not much different from what clients do today, by not showing a padlock, and the problem with allowing sites to "shoot themselves in the foot" is that the foot actually belongs to the user.
In my opinion, blocking the mixing of secure and unsecured content is the best option, perhaps with an option for the user to manually trigger loading of the unsecured elements. It may break sites during a transition period, but such sites will be quickly fixed if the issue is critical for the site. The end result will be a more secure web.
In standards work, there is a suggestion being circulated which might help in this area: HTTP Strict Transport Security (a video of a presentation by the author is available here). HSTS is designed to let sites specify HTTPS as the only allowed connection method for their site or domain, and it contains a suggestion to browser implementations that they block mixed-security content. However, it would only work for sites that ask for such a policy.
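For illustration, opting in might look roughly like this on the server side; a minimal Python sketch using the Strict-Transport-Security header from the proposal (the max-age value, one year in seconds, is an arbitrary example):

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        # Ask clients to use HTTPS only, for this domain and its
        # subdomains, for the next year (31536000 seconds).
        ("Strict-Transport-Security",
         "max-age=31536000; includeSubDomains"),
    ])
    return [b"Hello over HTTPS\n"]

# Note: a real deployment must serve this over HTTPS; clients are
# supposed to ignore HSTS headers received over plain HTTP.
make_server("", 8443, app).serve_forever()
```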
This is an area where few browser vendors, if any, can make a move on their own, but by joining forces we might be able to do something about the problem.
I therefore urge Adobe, Google, KHTML, Microsoft, Mozilla, Opera, and Safari, as well as other browser and framework vendors, to discuss a common solution to this issue, and agree on a timeline for when we will ship browsers and frameworks that implement that solution.
Great article
Why did Opera never offer optional blocking of unsecured content? If not in the Security Preferences, there’s always room in opera:config for an unchecked-by-default "View only securely delivered content" checkbox.
Dan: We are actually considering that now, and given the definition of HSTS, we would need the same kind of functionality there, if we implement it.
Originally posted by yngve: “We are actually considering that now.”

It would be great if the outcome were positive – there are still too many pages that deliver mixed content, and some of them I can’t avoid, even if I would like to. When I looked into the source of some of these pages, I saw that most of them just link to irrelevant content like images or – as you wrote in your blog post – analytics or other not really necessary stuff, so they would work just fine and only look a little bit different. Maybe a really ugly placeholder instead of the insecurely delivered content, saying: “This content was blocked because it was delivered from an insecure server”, would help to sensitize the users of that page, and with a little luck lead one or the other user to inform the site owner about that issue …
I like QuHno’s idea: although it would be best to convince the site administrator to change the code, they are unfortunately sometimes unrelenting. A placeholder on the offending page would identify the problem to users in a way that would be non-threatening. If Opera could convince the other browser makers to adopt the same strategy, it could help to improve the Web as a whole.