Every month, on the second Tuesday, Microsoft releases security updates to fix security problems in installations of the Windows operating system.
January 9th was no exception, but this time there was a problem. One of the updates (KB5034441) failed with error code 0x80070643, and kept failing on several machines. I paused updates, and when they resumed a week later the patch was gone.
Fast forward to last week, February 13th, and the problematic update is back, and still failing.
According to the Microsoft support article, this patch updates the Windows Recovery system to address a problem related to BitLocker disk encryption. It updates a special partition on the disk (500 MB on the systems I have checked), and the patch claims it needs 250 MB of free space there (Windows’ Computer Management disk view gives no information about how much is available).
Following the failures on many systems, Microsoft posted another article with instructions for manually resizing the partition so that the patch would apply.
I have several problems with those instructions:
The instructions are very advanced, requiring the user to resize an existing disk partition containing data in order to free up space. This is an operation I only undertake when installing a system, before I actually store data on it; in the worst case, the data in the partition could be lost.
Further, once the partition has been shrunk, the user has to run a highly specialized command as administrator in a terminal window to resize/recreate the recovery partition.
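To give an impression of how involved the manual procedure is, the sketch below generates the kind of diskpart script the steps boil down to on a GPT disk. This is only an illustration, not something to run blindly: the disk and partition numbers are made up and must be read from diskpart’s own `list disk`/`list partition` output on the affected machine, and `reagentc /disable` and `reagentc /enable` must bracket the whole operation.

```python
# Illustrative sketch: build the diskpart script that the manual-resize
# procedure amounts to on a GPT disk. Disk/partition numbers are
# placeholders; the real values come from "list disk"/"list partition".

def build_diskpart_script(disk: int, os_partition: int,
                          recovery_partition: int,
                          shrink_mb: int = 250) -> str:
    """Return a diskpart script that shrinks the OS partition and
    recreates the recovery partition with extra space (GPT layout)."""
    lines = [
        f"select disk {disk}",
        f"select partition {os_partition}",
        f"shrink desired={shrink_mb} minimum={shrink_mb}",
        f"select partition {recovery_partition}",
        "delete partition override",
        # The GUID and attribute below mark a Windows Recovery
        # partition on GPT disks.
        "create partition primary id=de94bba4-06d1-4d40-a16a-bfd50179d6ac",
        "gpt attributes=0x8000000000000001",
        'format quick fs=ntfs label="Windows RE tools"',
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # On the machine itself this would be saved to a file and run from
    # an elevated prompt with: diskpart /s resize_re.txt
    print(build_diskpart_script(disk=0, os_partition=3, recovery_partition=4))
```

A single wrong partition number in such a script destroys data irrecoverably, which is exactly why expecting ordinary users to perform these steps is unreasonable.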
Neither of these actions is something I (a fairly advanced user) would like to undertake on my own production systems, much less in combination, and I suspect that a normal user would refuse to even consider them.
Actually, I further suspect that most normal users wouldn’t even be aware that the update was failing. There is no indication or notification in Windows that a patch failed to apply; a normal user will just find their PC rebooted the morning after it applied patches, and conclude that the PC is fully up to date and secure. I only know the patch is failing because for the last two months I have wanted to control when and how my PC rebooted to apply patches, so that it didn’t disrupt what I was working on.
What this means is that, assuming this is a patch for a severe issue (and as it has something to do with bypassing BitLocker disk encryption, it is severe), most users for whom this patch is failing are probably blissfully unaware that they have an unpatched security problem on their machine.
Where is the Press?
What I have noticed about this issue is that, as far as I can tell, few of the online news services I monitor seem to have reported on the problem. I have noticed at least one Twitter thread and some Microsoft forum threads, but no news media articles.
The Register (which bills itself as “Biting the hand that feeds IT”) did post an article in January, but has not yet followed up after the February repeat. Others, like Ars Technica and The Verge, seem not to have noticed.
What needs to be done?
What Microsoft needs to do is fix this patch so that it can safely complete its operation without disturbing the user, or requiring the user to manually change their system.
I also think that the patch should be made to complete successfully without changing partition sizes. To paraphrase what Bill Gates reputedly said: “500 MB should be enough for any recovery partition.”
Ars Technica is one of the major technology news sites I follow, as it carries a lot of interesting stories about computer, general technology, and science news.
Last week, however, reading the site became much more difficult.
In relation to the California Privacy law going into effect January 1st, the owner of Ars Technica (and Wired), Condé Nast, put up a pop-up dialog over the front pages of these sites (and maybe others), and required visitors to click through the dialog to access the sites.
However, the click through did not take. On the next visit to the front page, the dialog showed up again. And on the next visit, and the next…
A couple of tweets to Ars Technica’s Twitter account have so far not resulted in any response. It seems like Ars Technica are not monitoring their mentions, and based on a previous case a few months ago (related to their new GDPR dialog popping up several times a week), they are not monitoring their DM channel, either.
I had an early suspicion about what was causing the problem. I have been browsing with third-party cookies disabled for the past couple of years, after I got tired of ads about products I had no intention to buy following me around the net for weeks on end.
Considering Ars Technica and Wired’s target audience, I would guess that a lot of their readers are disabling third-party cookies, too.
Except for one banking site, that has mostly worked fine. Until now.
A bit of testing determined that my initial guess about the cause was correct: the Condé Nast dialog requires that third-party cookies are enabled.
That isn’t a privacy improvement, it is a privacy invasion!
Condé Nast: Please fix this.
Update Jan 10: The Ars Technica site has now been fixed AFAICT. According to info I got yesterday, the problem also affected users of Privacy Badger.
Two weeks ago I posted an article about the occasional problems of getting false positives in security software fixed, and specifically about our recent problems when trying to solve a problem related to a Sophos security product. A user had reported being prevented from using Vivaldi to browse the net by their company’s firewall.
Some commenters thought we were either too hard on Sophos, or hadn’t properly checked the issue before contacting Sophos.
These comments ignore a few of the issues we mentioned:
We had a report about the users being blocked.
We also had information from the same report about Sophos customer support claiming we did not support an API, the implication being that we were being blocked because of this.
Either of these would be good enough reason to contact Sophos to learn why this was happening, especially given that we should have the same API support as other Chromium-based browsers.
We then spent 5 weeks not getting answers to our questions.
Part of our goal with the article was to inform Sophos publicly (just as we had done privately on at least one occasion) that we were not satisfied with how the process was going, and to try to get it escalated.
The next day it got escalated to a support manager, and we started getting real answers to our questions.
First of all, there was no central block by Sophos regarding Vivaldi; the block had been configured by the administrator of the customer installation. We are not yet clear on why the administrator did this, although our not being on the filtering feature support list has been mentioned as a possibility. This particular piece of information was never forwarded to us, and as far as we can tell was not provided to the original reporter, either.
The second part was that the API support was NOT something required to be supported by the browsers. The APIs in question concerned Windows API functionality used by Sophos to configure firewall and network filtering for specific applications.
This functionality is not presently enabled for Vivaldi, because those features had not been tested with Vivaldi. Sophos is now moving to get this functionality enabled and tested with Vivaldi, probably to be released in early Q1 2020.
Part of the confusion regarding Vivaldi and Sophos concerned this functionality, and some of it may have been caused by differing understandings of phrases like “Product X is supported”. In many cases a vendor will write this and mean “We only answer support questions about X, not Y”, while most users will read it as “Since Y is not listed, Y does not work with this vendor’s product”.
In Sophos’ case, the page about their filtering functionality listed a number of browsers for which this feature was enabled (thus “supported”); it said nothing about whether or not other browsers worked on a system using Sophos.
Much of the rest of the confusion that developed in this case was likely caused by misunderstandings of the information provided to the people at the reporter’s company, and more details may then have been lost as it passed through several steps before it reached us. A possible way to reduce such confusion is to always use email for questions and answers; any chat logs should be archived.
One of the things we realized in the aftermath of this is that our Bug reporting form and help pages did not ask for details about any third-party software that might be involved in the problem, and we have now updated the bug reporting help page to specify what we need in such cases: Product name and version, relevant error messages, and if available information about any support contacts, such as support case numbers.
The lack of product and version info about the installation was part of the problems we had when contacting Sophos support, since it made it difficult to get in touch with the right people.
We are quite satisfied with the responses from Sophos in the past two weeks.
False positives causing a legitimate application to be blocked is a common problem with security software, and if not handled properly and quickly, it is one that could hurt, or even destroy a security product’s credibility, or in the worst case, the credibility of the entire sector.
It is therefore very important that whenever a security vendor’s product incorrectly flags a legitimate product, the vendor resolves the issue within hours, or at most a couple of days, of being notified about the problem. Such problems should really be handled with a priority just barely short of problems threatening the customer’s system (like security vulnerabilities).
If a user cannot use their chosen, legitimate product because a security product blocks it, they are far more likely to disable, or uninstall, the security product than to change their chosen product.
If the problem is caused by some actual problem with the flagged product, the security vendor should immediately contact the application vendor with detailed information about what the problem is, and how to solve it.
Easier said than done
As an example of how not to handle such cases, consider this recent one.
About a month ago, in early September, the Vivaldi users at a small German company discovered that they were no longer able to use Vivaldi, since their Sophos firewall was blocking it.
They contacted Sophos customer support and were effectively told that “The block was a management decision”, “Vivaldi does not support content filtering”, “Vivaldi does not support a required API”, “Submit a feature request, we can’t do anything before we receive that” (the latter had been filed over a month before this case started).
No information was provided about which API support was “missing”, or why “management” had decided to block Vivaldi.
Since Vivaldi is based on Chromium, just like Google Chrome, if the blocking was really due to missing support for an API, then Sophos should be blocking Google Chrome as well. We have the same feature support as other Chromium-based browsers. The only real difference is that (e.g. on Windows) our executable is named “vivaldi.exe”, not “chrome.exe” and our UI is implemented differently.
After receiving the replies from Sophos, one of the users in the company reported the problem in a post to our German language forum, and it was then forwarded to those of us in the security group.
I decided to look into the Sophos support site, and did find their chat support, but after two hours of back and forth, being passed from one person to another, their response was effectively “We need a support ticket number, file it from the upload site”.
There were several problems with that upload site, mainly that there was no option to upload a file as “Affected vendor”. You had to be either a “registered user” or “evaluating before purchase”. It was also difficult to choose the right product or product category, and the upload size limit was 30 MB (Vivaldi’s installer is ~55MB), although an FTP option existed.
Since I could not upload Vivaldi’s installers, I uploaded an empty text file, and told them in the message where to get the installers. Their Labs people explained that they were not allowed to download installers from the Web.
After an FTP upload, and a few days wait, they reported that the “problem has been fixed”.
The users said “No, it hasn’t been fixed”.
55+ emails back and forth later (to Sophos and the user), direct involvement with the customer, and 5 weeks after this all started, the problem still hasn’t been resolved. Effectively, they have acted like a brick wall.
In my opinion Sophos has not handled the case well. They never told us, or the customer, what is causing the problem, and they have so far spent at least 5 weeks not fixing the problem, so they definitely did not drop “everything else” to solve it.
I recommend that all security software vendors check their processes to make sure they can handle false positives quickly and efficiently.
Problems I have seen during the process with Sophos
The support people kept assuming I was the customer using their product, and repeatedly asked for information I could not possibly provide. My suggestion is that they create a separate support ticket category for application vendors.
They were unwilling to contact the reporter via the forum thread, saying they were not allowed to do support except through their issue system. My suggestion is that they communicate with reporters through the reporters’ chosen channels, and then invite them to use the vendor’s own channels. This will improve the impression of their customer service.
As mentioned, the upload system is not suited to normal-sized applications, or to affected vendors. The upload size limit should be increased significantly, and I think they should offer SSH-based upload via SCP instead of FTP.
An unsophosticated test
While working on this article, I started thinking about the question of exactly how Sophos blocks Vivaldi. My conclusion based on what I know about other firewalls, was that the most likely method is to just check the process name which, as mentioned above, in our case is “vivaldi.exe” on Windows, not “chrome.exe”. It could be that they are doing something more sophosticated, but I doubted it.
So yesterday I created a special version of Vivaldi 2.8 where I undid the changes that rename our Windows executable to “vivaldi.exe”. Even if this experimental build would not be able to get through the firewall, we would learn something about just how sophosticated Sophos’ implementation is.
This morning we sent this special build to the reporter and asked him to run a quick test for us. He has just reported back that the special build was able to access the internet through the firewall.
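The test result is consistent with the name-matching hypothesis. As a purely illustrative sketch (not Sophos’ actual code; the allowlist and names are invented), a per-application rule that keys only on the executable’s name behaves like this:

```python
# Sketch of the kind of name-based filtering hypothesized above.
# The allowlist is hypothetical; the point is that matching only on
# the executable name treats identical engines differently.

ALLOWED_BROWSERS = {"chrome.exe", "firefox.exe", "msedge.exe"}

def is_browser_allowed(process_name: str) -> bool:
    """A naive per-process rule: match only on the executable's name."""
    return process_name.lower() in ALLOWED_BROWSERS

# Same Chromium engine, different executable name, different outcome:
print(is_browser_allowed("chrome.exe"))   # allowed by the rule
print(is_browser_allowed("vivaldi.exe"))  # blocked, despite identical capabilities
```

Which is exactly why a build whose executable is named “chrome.exe” sails straight through such a filter.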
For other affected Sophos users, the special build (which works as a Snapshot channel, so you might want to disable updates for this particular installation) is available for download here. It should be installed as a standalone version using the advanced installation dialog, NOT over the main Vivaldi installation.
Similar cases from the past
This is not the first time we have had similar problems, either in Vivaldi or back when many of us worked at Opera, and they are usually resolved quickly, without much publicity. For the most part, an exchange of a couple of emails was enough to get the problem solved.
There were two cases that didn’t get resolved quickly, and which required a bit more work: the old 2003 Opera Bork edition targeting Microsoft and MSN, and the 2016 Vivaldi case when some AV software decided it did not like “Vivaldi Technologies AS” as a text string in our installer, while “Vivaldi Technlogies AS” (without the first “o”) worked fine. In both cases our public response caused the issues to be resolved very quickly.
In a more recent example, Eric Lawrence from Microsoft’s Chromium Edge team was trying to chase down why recent versions of a Chromium support executable were triggering warnings from a significant number of anti-virus scanners. Although he never actually found the problem (it disappeared in newer builds), as he closed in on what triggered it, it started to remind me of our 2016 case, which is why I sent him a link to our 2016 snapshot announcement; it subsequently made a short appearance on Twitter.
Encryption usage by Norwegian online shopping sites (2016 edition)
Over the past several years I have performed occasional surveys of Norwegian shopping sites and their use of encryption. I decided to limit my surveys to Norway, because my limited knowledge of foreign markets would make it difficult to collect a representative international list of shopping sites, and such a list would probably only contain large stores, not small ones.
The last survey I wrote about was performed in early 2015, and while I did not publish an article about it, I did perform a second survey a few months later, in order to get an impression of the effects of some actions initiated after my article. The changes at the time were not significant enough to change what I presented in the previous article, so I did not publish an article discussing those updated results. Continue reading “Secure online X-mas shopping? Big stores encrypt, the corner-store doesn’t”
In December it was announced that several TLS server implementations were affected by a problem similar to an SSL v3 issue called POODLE, disclosed by Google researchers in October. The attack works by modifying the padding bytes of encrypted SSL/TLS records (the bytes used to make records an even multiple of the cipher’s 8- or 16-byte block size), checking how the server responds, and using the responses to deduce the plain text of the transmitted data, one byte at a time, with just a few tries.
In October last year, researchers from Google published details about an attack on SSL v3, called POODLE. The attack works by modifying the padding bytes of encrypted SSL records (the bytes used to make records an even multiple of the 8- or 16-byte block size used by 3DES and AES encryption in “CBC” mode), checking how the server responds, and using the responses to deduce the plain text of the transmitted data, one byte at a time, with just a few tries. Continue reading “The POODLE has friends”
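The padding trick these posts describe can be sketched end to end with a toy example. Everything below is illustrative: a tiny sha256-based Feistel “cipher” stands in for 3DES/AES, and a simple all-bytes-equal padding check stands in for TLS padding, so this shows the CBC padding-oracle principle rather than the exact SSL v3 record format. The attacker never touches the key; it only asks the “server” whether padding was valid.

```python
# Toy CBC padding-oracle demonstration. The 8-byte Feistel cipher and
# padding scheme are stand-ins; the byte-at-a-time recovery logic is
# the real technique.
import hashlib
import os

BLOCK = 8

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def _round(half: bytes, key: bytes, rnd: int) -> bytes:
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:BLOCK // 2]

def encrypt_block(block: bytes, key: bytes) -> bytes:
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for rnd in range(4):                      # 4-round Feistel network
        l, r = r, _xor(l, _round(r, key, rnd))
    return l + r

def decrypt_block(block: bytes, key: bytes) -> bytes:
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for rnd in reversed(range(4)):            # undo the rounds
        l, r = _xor(r, _round(l, key, rnd)), l
    return l + r

def pad(data: bytes) -> bytes:
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return data[:-n]

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        prev = encrypt_block(_xor(plaintext[i:i + BLOCK], prev), key)
        out += prev
    return out

def cbc_decrypt_raw(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(ciphertext), BLOCK):
        blk = ciphertext[i:i + BLOCK]
        out += _xor(decrypt_block(blk, key), prev)
        prev = blk
    return out

def make_oracle(key: bytes):
    """The 'server': reveals only whether the padding was valid."""
    def oracle(iv: bytes, ciphertext: bytes) -> bool:
        try:
            unpad(cbc_decrypt_raw(key, iv, ciphertext))
            return True
        except ValueError:
            return False
    return oracle

def attack_block(oracle, prev: bytes, target: bytes) -> bytes:
    """Recover one plaintext block, last byte first, using only the
    oracle's valid/invalid answers."""
    inter = bytearray(BLOCK)                  # decrypt_block(target), unknown to us
    for pad_len in range(1, BLOCK + 1):
        pos = BLOCK - pad_len
        for guess in range(256):
            fake = bytearray(BLOCK)
            fake[pos] = guess
            for j in range(pos + 1, BLOCK):   # force known tail bytes to pad_len
                fake[j] = inter[j] ^ pad_len
            if not oracle(bytes(fake), target):
                continue
            if pad_len == 1:                  # rule out accidental longer padding
                fake[pos - 1] ^= 1
                if not oracle(bytes(fake), target):
                    continue
            inter[pos] = guess ^ pad_len
            break
    return _xor(bytes(inter), prev)

if __name__ == "__main__":
    key, iv = os.urandom(16), os.urandom(BLOCK)
    ct = cbc_encrypt(key, iv, pad(b"Cookie: SID=12345678"))
    oracle = make_oracle(key)
    chain = [iv] + [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
    recovered = b"".join(attack_block(oracle, chain[i], chain[i + 1])
                         for i in range(len(chain) - 1))
    print(unpad(recovered))                   # recovered without ever seeing the key
```

At most 256 oracle queries per byte (a couple more to disambiguate the last byte) are enough, which is why “a few tries” per byte is all such attacks need once a server leaks padding validity.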
As I wrote in my previous article about this, in October a group of Google security researchers had discovered a problem, called POODLE, in SSL v3 that in combination with another issue, browsers’ automatic fallback to older TLS and SSL versions, allowed an attacker to quickly break the encryption of sensitive content, like cookies.
The main mitigating methods for this problem are disabling SSL v3 support, both server side (now down to 66.2%, but slowing down) and in the client, and to limit the automatic fallback, either by not falling back to SSL v3 (which is now implemented by several browsers), or by a new method called TLS_FALLBACK_SCSV (introduced by Google Chrome and others). Continue reading “Not out of the woods yet: There are more POODLEs”
Three weeks ago a group of researchers from Google announced an attack against the SSL v3 protocol (the ancestor of the TLS 1.x protocol) called POODLE (a stylish abbreviation of “Padding Oracle On Downgraded Legacy Encryption”). This attack is similar to the BEAST attack that was revealed a few years ago, and one of the researchers that found the POODLE attack was part of the team that found BEAST.
POODLE is able to quickly discover the content of an HTTPS request, such as a session cookie, but only if the connection is using the SSL v3 protocol, a version of SSL/TLS that became obsolete with the introduction of TLS 1.0 in 1999. As almost all (>99%) secure web servers now support at least TLS 1.0 (which is not vulnerable to the attack, provided the server is correctly implemented), it might sound like this attack is not very useful. Unfortunately, that is not so. Continue reading “Attack of the POODLEs”