Soooo … you say you want to maintain a Chromium fork?

The branches of a tree at Innovation House Magnolia

Photo by Ari Greve

(Note: this article assumes you have some familiarity with Git terminology, building Chromium, and related topics)

Building your own Chromium-based browser is a lot of work, unless you want to just ship the basic Chromium version without any changes.

If you are going to work on and release a Chromium-derived browser, you will need a few things on the technical side when you start the serious work:

  • A Git source code repository for your changes
  • One or more developer machines, configured for each OS you want to release on
  • Test machines and devices to test your builds
  • Build machines for each platform. These should be connected to a system that automatically builds new test builds for each source update and for your work branches, as well as production (official) builds. These machines should be much more powerful than your developer machines: official builds will take several hours even on a powerful machine, and require a lot of memory and disk space. There are various cloud solutions available, but you should weigh time and (especially) cost carefully. Frankly, having your own on-premises build server rack may cost “a bit” up front, but it lets you have better control of the system.
  • A web site where you can post your Official builds so that your users can download and install them

Now you are good to go, and you can start developing and releasing your browser.

Then … the Chromium team releases a new major version (which they do every 4 or 8 weeks, depending on the track) with lots of security fixes. Now your browser is buggy and insecure. How do you get your fixes to the new version?

This process can get very involved and messy, especially if you have a lot of patches on the Chromium code. These patches will frequently cause merge conflicts when updating the source code to a newer Chromium version, because the upstream project has updated the code you patched, or code just nearby, but there are a few things you can do to reduce the problems.

There are at least two major ways to maintain your changes to a code base: a Git branch, or diff patches applied to a clean checkout. Both have benefits and challenges, and both have to be updated regularly to match the upstream code. The process described below is for a Git branch.

The major rule is to put all (or as much as practical) of your additional independent code that consists of whole classes and functions (even extra functions in Chromium classes) in a separate repository module that has the Chromium code as a submodule. Vivaldi uses special extensions to the GN project language to update the relevant targets with the new files and dependencies.
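
As an illustration, setting up that kind of layout might look roughly like this (a sketch; the repository name, URL, and directory are examples, not Vivaldi’s actual setup):

git init my-browser && cd my-browser
git submodule add https://chromium.googlesource.com/chromium/src.git chromium
git commit -m "Add Chromium as a submodule of the top-level module"
# Your own classes, functions, and build additions live in sibling directories of chromium/ in this repository.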

Other rules for patches are:

  • Put all added includes/imports *after* the upstream include/import declarations.
  • Similarly, group all new functions and members in classes at the end of the section. Do the same for other declarations.
  • Any functions you have to add in a source file should always be put at the end of the file, or at the end of an internal namespace.
  • Generally, try to put an empty line above and below your patch.
  • Clearly mark the start and end of each of your patches.
  • Don’t change indentation of unmodified original code lines, unless you have to (e.g. in Python files).
  • If you repeatedly patch the same lines, combine the later patches with the original using fixup or squash commits. Such repetitions have the potential to trigger multiple merge conflicts during the update, which could easily cause errors and bugs to be introduced.
  • NEVER (repeat: NEVER!!!) modify the Chromium string and translation files (GRD and XTB). You will be in for a world of hurt when strings change (and some tools can mess up these files under certain conditions). If you need to override strings, add the overrides via scripts, e.g. in the grit system, merging your own changes with the upstream ones (Vivaldi is using such a modified system; if there is enough interest from embedders we may upstream it; you can find the scripts in the Vivaldi source bundle if you want to investigate).

Vivaldi mostly uses Git submodules to manage its additional source modules, rather than the DEPS file system used by Chromium (some parts of Vivaldi’s upstream source code and tools are still downloaded using that system, though). Our process for updating Chromium will work with either system, with some modifications.

The first step of the process is identifying which upstream commit (U) you are going to move the code to, and which are the first (F) and last (L) commits you are going to move on top of that commit; L is the commit you create the work branch W for. If you have updated submodules you do this for those as well.
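
In Git terms, picking these commits might look roughly like this (a sketch; the remote, tag, and branch names are examples):

git fetch upstream --tags          # "upstream" is the Chromium remote
U=104.0.5112.81                    # the upstream release tag/commit to move to
W=work                             # the work branch; its tip is L
F=$(git rev-list --reverse old-base..$W | head -n 1)   # first of your own commits; "old-base" is the previous upstream base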

(There are different ways to organize the work branch. We use a branch that is rebased for each update. A different way is to merge the upstream updates into the branch you are using; however, this quickly gets even messier than rebasing, especially when doing major updates, and after two years of that we started rebasing branches instead.)

The second step is to check out the upstream U commit, including submodules. If you are using Git submodules you configure these at this stage. This commit should be handled as a separate commit, and not included in the F to L commits.

Then you update the submodules with any patches, and update the commit references.

The resulting Chromium checkout can be called W_0.
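
In practice, steps two and three might look roughly like this (a sketch; the tag name is an example, and the submodule handling is simplified):

git checkout -b chromium-update 104.0.5112.81   # check out the upstream U commit
git submodule update --init --recursive         # if you are using Git submodules
# ...apply your submodule patches and commit the updated submodule references...
# The resulting commit is W_0.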

Now we can start moving patches on top of W_0. The git command for this is deceptively simple:

git rebase --onto W_0 F~1 W

This applies each commit from F through L (inclusive) in sequence onto the W_0 commit, and the resulting branch becomes the new W.

A number of these commits (about 10% of patched files in Vivaldi’s source base) will encounter merge conflicts when they are applied, and the process will pause while you repair the conflicts.

It is important to carefully consider the conflicts and whether they may cause functionality to break, and register such possibilities in your bug tracking system.
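
When the rebase pauses on a conflict, the loop for each conflicting commit is the standard Git one (the file name below is just an example):

git status                              # lists the conflicting files
$EDITOR chrome/browser/some_file.cc     # resolve the <<<<<<< / >>>>>>> markers
git add chrome/browser/some_file.cc
git rebase --continue                   # or --skip / --abort if necessary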

Once the rebase has completed (a process that can take several workdays) it is time for the next step: Get the code to build again.

This is done the same way as you normally build your browser, fixing compile errors as they are encountered, and yet again registering any that could potentially break the product. This is also a step that can take several work days. Frequent sources of build problems are API changes and retired/renamed header files.

Once you have it built and running on your machine, it is time to (finally) commit all your changes, update the work branch in the top module, and push everything to your repository. My suggestion is that patches in Chromium are mostly committed as “fixups” of the original patch; this reduces the potential for merge conflicts and keeps each patch in one piece.
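
One way to do that, as a sketch, is with Git’s fixup mechanism: record each conflict or build fix against the patch commit it belongs to, and let the next update fold them back in.

git add chrome/browser/some_file.cc
git commit --fixup <hash-of-the-original-patch-commit>
# Later, e.g. when preparing the next Chromium update:
git rebase -i --autosquash <base-commit>    # folds the fixups into their original patches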

Then you should try compiling it on your other delivery platforms, and fix any compile errors there.

Once you have it built, and preferably running, on the other platforms, you can have your autobuilders build the product for each platform and start more detailed testing, fixing the outstanding issues and regressions that might have been introduced by the update. Depending on your project’s complexity, this can take several weeks to complete.

This entire sequence can be partially automated; you still have to manually fix merge conflicts and compile errors, as well as test and fix the resulting executable.

At the time of writing, Vivaldi has just integrated Chromium 104 into our code base, a process that took just over two weeks (the process may take longer at times). Vivaldi is only using the 8-week-cycle Extended Stable releases of Chromium due to the time needed to update the code base and stabilize the product afterwards. In our opinion, if you have a significant number of patches, the only way you can follow the 4-week cycle is to have at least two full teams for upgrades and development, and very likely the upgrade process will have to update weekly to the most recent Dev or Canary release.

Once you have your browser in production, every couple of weeks you are going to encounter a slightly different problem: keeping the browser up to date with the (security) patches applied to the upstream version you are basing your fork on. This means, again, that you have to update the code base, but these changes are usually not as major as for a major version upgrade. A slightly modified, less complicated variant of the above process can be used to perform such minor version updates, and in our case this smaller process usually takes just a few hours.
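
The minor update can be sketched with the same rebase mechanics; only the upstream target changes (the tag names below are examples):

git fetch upstream --tags
git checkout -b chromium-minor-update 104.0.5112.101   # the new upstream minor release
# ...update submodule references as before; the resulting commit is the new W_0...
git rebase --onto <new-W_0> <old-W_0> work              # replay the same patch series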

Good luck with your brand new browser fork!

Microsoft! You broke my backup system!

Backing up the data on your computer is one of the most frequently given pieces of advice to computer owners, and there are a number of ways to accomplish it.

The oldest way is to copy the data to external media. Originally this meant tapes; today it is frequently one or more external hard drives or SSDs. Swapping between at least two complete backups is recommended, with the inactive drives stored off-site to avoid destruction or loss in case of fire, theft, or other disasters (and if your area is prone to major disasters, it might be an idea to occasionally store a backup copy in a safe location hundreds of kilometers away; storage over a network connection could be an option for this).

More recently, online backup storage has become more common. Personally, I am slightly skeptical of these services, mostly due to the loss of access control, but also because cloud services occasionally have service disruptions, and in some cases lose the data entrusted to them. If you use such a service, my recommendation is to make sure the data are encrypted locally, with a key not known to the service, before they are uploaded; this prevents the service from accidentally or intentionally accessing your data, as well as preventing other unauthorized access. Another problem with such services is that they occasionally shut down their business with little or no warning, so even if you use such a service, a local backup is recommended anyway. Backing up locally is also recommended when using online application services; these services are useful for working with others, but you might lose access when you most need it.

There are various ways to perform a backup, from just using a simple copy command, to using the more advanced backup applications in the OS, to purchasing commercial backup tools. Trial or freeware versions of many such tools are frequently included with external hard drives.

My backup system

In my system at home I swap between two external SSDs, and use the backup software in Windows to manage the backup. Previously, I used a similar system with a commercial tool, but once I moved to Windows 10, I found that the backup software in Windows seemed to work better for my purposes, and I switched to it.

Better does not mean “perfect”, though. There are a few issues, but they are reasonably minor: 1) Swapping drives destroys the backup configuration, so I have to re-enter it when connecting the second drive. 2) The software does not resume backing up from where it left off on the reconnected drive, causing it to use a lot more disk space and requiring occasional cleanup to remove old backups.

All this was manageable. At least until last week.

Microsoft breaks the backup

Recently, I finally caved in and allowed Windows 10 on my home computer to be updated to Feature Update 2004. Considering the problems that had been reported about loss of data in Chromium-based browsers, maybe I shouldn’t have, but Windows was now insisting on updating.

A couple of days after the update I switched backup disks, cleaned up some very old backups that were no longer needed, set up the backup configuration again, and started a backup. A backup that failed! No data was copied to the drive.

I found no errors reported in the normal Event Viewer logs, until I dug down into the application specific logs for “File History backup”, where I found this meaningless warning: “Unusual condition was encountered during scanning user libraries for changes and performing backup of modified files for configuration <name of configuration file>”, with no information about what the “unusual condition” was.

As I usually do when having a problem like this, in order to find out what caused it, I started testing with the default configuration and then added more source drives to the backup to see which one broke the system.

The default configuration did copy the expected files, but it also copied a directory from one of my other drives (the main data drive); the copied directory is where I store all my photos. This directory was not part of the configuration; it may have been included because it is the configured default destination folder for the Windows photo import software.

However, when I added the rest of that drive to the list of folders to copy, no further files were copied (although a couple of days later some of the upper level folders did get backed up, none of the important folders were copied).

After removing that drive from the list and adding the other drives I have for various tasks, projects, and software, those drives did get copied properly.

Going back to the problematic drive, further experiments did not succeed in backing up more of that drive than the mentioned top-level folders. Even experimentally adding some sibling folders of the photo folder did not work; they weren’t even added to the list of folders to back up.

Eventually, I was forced to do a manual copy of that drive to a separate area of the backup drive, to make sure I did have a copy of it.

At present my conclusion is that in Feature Update 2004 Microsoft did something to the Backup/File History software, and it broke the backup of that drive on my system.

My initial guess at the cause of this problem is that the addition of the photo folder conflicts with adding the rest of the same drive to the list of files and folders to back up. Such overlapping lists should be merged, not cause a fatal error.

A backup problem like this may not be a Security Vulnerability(TM), but it is definitely a Security Problem.

I have reported this via the Windows Feedback App, as well as to the @MicrosoftHelps Twitter account, but have so far not received any information about how to fix this problem (so, no help, so far).

Microsoft, there are some systems that should never break in production. The file system is one, account storage is another, and the backup software is a third. In this release it looks like you broke two such systems. And at least one is still broken 5-6 months after the public release!

Please fix this. Immediately!

Photo by Markus Spiske on Unsplash

Ars Technica’s privacy-invading Privacy Policy update

Ars Technica is one of the major technology news sites I follow, as it carries a lot of interesting stories about computing, general technology, and science.

Last week, however, reading the site became much more difficult.

In relation to the California privacy law going into effect on January 1st, the owner of Ars Technica (and Wired), Condé Nast, put up a pop-up dialog over the front pages of these sites (and maybe others), and required visitors to click through the dialog to access the sites.

Condé Nast dialog
The dialog displayed over Ars Technica’s front page

However, the click through did not take. On the next visit to the front page, the dialog showed up again. And on the next visit, and the next…

A couple of tweets to Ars Technica’s Twitter account have so far not resulted in any response. It seems like Ars Technica are not monitoring their mentions, and based on a previous case a few months ago (related to their new GDPR dialog popping up several times a week), they are not monitoring their DM channel, either.

I had an early suspicion about what was causing the problem. I have been browsing with third-party cookies disabled for the past couple of years, after I got tired of ads about products I had no intention to buy following me around the net for weeks on end.

Considering Ars Technica and Wired’s target audience, I would guess that a lot of their readers are disabling third-party cookies, too.

Except for one banking site, that has mostly worked fine. Until now.

A bit of testing determined that my initial guess about the cause was correct: the Condé Nast dialog requires that third-party cookies are enabled.

This means that in order to register acceptance of an update to Condé Nast’s privacy policy, users have to permanently enable third-party cookies, allowing all web sites (not just Condé Nast’s) and all advertisers to track them across all sites on the net that are linked into their various tracking systems.

That isn’t a privacy improvement, it is a privacy invasion!

Condé Nast: Please fix this.

Update Jan 10: The Ars Technica site has now been fixed AFAICT. According to info I got yesterday, the problem also affected users of Privacy Badger.

Sophos: An update

Wide open sky
Photo by Agustinus Nathaniel on Unsplash

Two weeks ago I posted an article about the occasional problems of getting false positives in security software fixed, and specifically about our recent problems when trying to solve a problem related to a Sophos security product. A user had reported being prevented from using Vivaldi to browse the net by their company’s firewall.

Some commenters thought we were either too hard on Sophos, or hadn’t properly checked the issue before contacting Sophos.

These comments ignore a few of the issues we mentioned:

  • We had a report about the users being blocked.
  • We also had information from the same report about Sophos customer support claiming we did not support an API, the implication being that we were being blocked because of this.
  • Either of these would be good enough reason to contact Sophos to learn why this was happening, especially given that we should have the same API support as other Chromium-based browsers.
  • We then spent 5 weeks not getting answers to our questions.

Part of our goal with the article was to inform Sophos publicly (just as we had done privately on at least one occasion) that we were not satisfied with how the process was going, and to try to get it escalated.

The next day it got escalated to a support manager, and we started getting real answers to our questions.

First of all, there was no central block by Sophos regarding Vivaldi; the block had been configured by the administrator of the customer installation. We are not yet clear on why the administrator did this, although our not being on the filtering feature support list has been mentioned as a possibility. This particular piece of information was never forwarded to us, and as far as we can tell was not provided to the original reporter, either.

The second part was that the API support was NOT something the browsers were required to implement. The APIs in question concerned Windows API functionality used by Sophos to configure firewall and network filtering for specific applications.

This functionality is not presently enabled for Vivaldi, because those features had not been tested with Vivaldi. Sophos is now moving to get this functionality enabled and tested with Vivaldi, probably to be released in early Q1 2020.

Part of the confusion regarding Vivaldi and Sophos concerned this functionality, and some of it may have been caused by differing understandings of phrases like “Product X is supported”. In many cases a vendor will write this and mean “We only answer support questions about X, not Y”, while most users will read it as “Since Y is not listed, Y does not work with this vendor’s product”.

In Sophos’ case, the page about their filtering functionality listed a number of browsers for which this feature was enabled (and thus “supported”); it said nothing about whether or not other browsers worked on a system using Sophos.

Much of the rest of the confusion that developed in this case was likely caused by misunderstanding of information provided to the people at the reporter’s company, and more details may then have been lost as they were passed on through the several steps before they got to us. A possible way to reduce such confusion is to always use email for questions and answers, and any chat logs should be archived.

One of the things we realized in the aftermath of this is that our bug reporting form and help pages did not ask for details about any third-party software that might be involved in the problem, and we have now updated the bug reporting help page to specify what we need in such cases: product name and version, relevant error messages, and, if available, information about any support contacts, such as support case numbers.

The lack of product and version info about the installation was part of the problems we had when contacting Sophos support, since it made it difficult to get in touch with the right people.

We are quite satisfied with the responses from Sophos in the past two weeks.

The problem with unsophosticated customer support

Do not enter internet
Photo by: Joshua Hoehne @mrthetrain

False positives causing a legitimate application to be blocked is a common problem with security software, and if not handled properly and quickly, it is one that could hurt, or even destroy a security product’s credibility, or in the worst case, the credibility of the entire sector.

It is therefore very important that, whenever a security vendor’s product incorrectly flags a legitimate product, the vendor resolves the issue within hours, or at most a couple of days, of being notified about the problem. Such problems should really be handled with a priority just barely short of problems threatening the customer’s system (like security vulnerabilities).

If a user cannot use their chosen, legitimate products because a security product blocks it, they are far more likely to disable, or uninstall, the security product, than to change their chosen product.

If the problem is caused by some actual problem with the flagged product, the security vendor should immediately contact the application vendor with detailed information about what the problem is, and how to solve it.

Easier said than done

As an example of how to not go about handling such cases, consider this recent case.

About a month ago, in early September, the Vivaldi users at a small German company discovered that they were no longer able to use Vivaldi, since their Sophos firewall was blocking it.

They contacted Sophos customer support and were effectively told that “The block was a management decision”, “Vivaldi does not support content filtering”, “Vivaldi does not support a required API”, “Submit a feature request, we can’t do anything before we receive that” (the latter had been filed over a month before this case started).

No information was provided about which API support was “missing”, or why “management” had decided to block Vivaldi.

Since Vivaldi is based on Chromium, just like Google Chrome, if the blocking was really due to missing support for an API, then Sophos should be blocking Google Chrome as well. We have the same feature support as other Chromium-based browsers. The only real difference is that (e.g. on Windows) our executable is named “vivaldi.exe”, not “chrome.exe” and our UI is implemented differently.

After receiving the replies from Sophos, one of the users in the company reported the problem in a post to our German language forum, and it was then forwarded to those of us in the security group.

I decided to look into the Sophos support site, and did find their chat support, but after two hours of back and forth, being passed from one person to another, their response was effectively “We need a support ticket number, file it from the upload site”.

There were several problems with that upload site, mainly that there was no option to upload a file as “Affected vendor”. You had to be either a “registered user” or “evaluating before purchase”. It was also difficult to choose the right product or product category, and the upload size limit was 30 MB (Vivaldi’s installer is ~55MB), although an FTP option existed.

Since I could not upload Vivaldi’s installers, I uploaded an empty text file, and told them in the message where to get the installers. Their Labs people explained that they were not allowed to download installers from the Web.

After an FTP upload, and a few days wait, they reported that the “problem has been fixed”.

The users said “No, it hasn’t been fixed”.

55+ emails back and forth later (to Sophos and the user), direct involvement with the customer, and 5 weeks after this all started, the problem still hasn’t been resolved. Effectively, they have acted like a brick wall.

In my opinion Sophos has not handled the case well. They never told us, or the customer, what is causing the problem, and they have so far spent at least 5 weeks not fixing the problem, so they definitely did not drop “everything else” to solve it.

I recommend that all security software vendors check their processes to make sure they can handle false positives quickly and efficiently.

Problems I have seen during the process with Sophos

  • The support people kept assuming I was the customer using their product, and repeatedly asked for information I could not possibly provide. My suggestion is that they create a separate support ticket category for application vendors.
  • They were unwilling to contact the reporter via the forum thread, saying they were not allowed to do support except through their issue system. My suggestion is that they communicate with reporters through the reporters’ chosen channels, and then invite them to use the vendor’s own channels. This will improve the impression of their customer service.
  • As mentioned, the upload system is not suited to normal-sized applications, or to affected vendors. The size limit should be increased significantly, and I think they should offer SSH upload via SCP instead of FTP.

An unsophosticated test

While working on this article, I started thinking about the question of exactly how Sophos blocks Vivaldi. My conclusion based on what I know about other firewalls, was that the most likely method is to just check the process name which, as mentioned above, in our case is “vivaldi.exe” on Windows, not “chrome.exe”. It could be that they are doing something more sophosticated, but I doubted it.

So yesterday I created a special version of Vivaldi 2.8 where I undid the changes that rename our Windows executable to “vivaldi.exe”. Even if this experimental build would not be able to get through the firewall, we would learn something about just how sophosticated Sophos’ implementation is.

This morning we sent this special build to the reporter and asked him to run a quick test for us. He has just reported back that the special build was able to access the internet through the firewall.

For other affected Sophos users, the special build (which works as a Snapshot channel, so you might want to disable updates for this particular installation) is available for download here. It should be installed as a standalone version using the advanced installation dialog, NOT over the main Vivaldi installation.

Similar cases from the past

This is not the first time we have had similar problems, either at Vivaldi or back when many of us worked at Opera, and they are usually resolved quickly, without much publicity. For the most part an exchange of a couple of emails was enough to get the problem solved.

There were two cases that didn’t get resolved quickly, and which required a bit more work. One was the old 2003 Opera Bork edition targeting Microsoft and MSN; the other was the 2016 Vivaldi case when some AV software decided they did not like “Vivaldi Technologies AS” as a text string in our installer, while “Vivaldi Technlogies AS” (without the first “o”) worked fine. In both cases our public response caused the issues to be resolved very quickly.

In a more recent example, Eric Lawrence from Microsoft’s Chromium Edge team was trying to chase down why recent versions of a Chromium support executable were triggering warnings from a significant number of Anti-Virus scanners. Although he never actually found the cause (it disappeared in newer builds), as he closed in on what triggered the problem, it started to remind me of our 2016 case, which is why I sent him a link to our 2016 snapshot announcement, and it subsequently made a short appearance on Twitter.

Where did all the nice things go? SmartGit project dropdown

For modern software developers, there are a number of must-have tools: An editor, a compiler (called a web browser by HTML/JS devs), and a debugger. Further, if you are developing a non-trivial project, especially as part of a team, you will need a version control system.

A version control system is a very important tool when developing software, as it maintains the history of your project, chronicling every change, whether small or major, and it allows you to share your code easily with others. Using the information stored by this system it is possible to specify the source code version to use for a public version of the product or to discover which modification introduced a bug (or fixed it).

There are many different version control systems available. Among the more common ones are CVS, SVN, Hg, and currently the most popular, Git.

All of these systems are generally implemented as command-line tools, and this lets the user perform many advanced actions, especially using scripts to repeat the operations. However, maintaining source code updates only via command-line operations becomes difficult very quickly; it does not scale well. A graphical UI (GUI) for the version control system is needed for any major project.

Some of the systems, like Git, also have some basic GUI applications, but their usability is limited. This has opened up a market for more advanced version control GUI applications, such as the one I have used for 10 years, or so, SmartGit.

Syntevo’s SmartGit, like most such tools, has an advanced display of projects, updated files, conflicted files, differences between the currently stored version, and the currently edited version, graphical representation of the project history, etc.

All of this is very useful to developers, especially when they are working on big projects.

Five years ago I wanted to update to the then newest version of SmartGit, v6.5 (the most recent is v19), from my then current version, v4.6.

Unfortunately, I discovered that the SmartGit developers had made some UI changes that broke the tool for me.

One of the UI features I use frequently is a dropdown menu on top of the directory explorer panel listing all the projects I am working on, and which allows me to easily open a second project in a new application window.

In the new version, they had removed this dropdown, and moved all the projects into the explorer panel, alongside the directory views, and the default operation was to open all of these in the same application window (it _is_ possible to open a second window). When you, like me, are working on 20+ various project checkouts, half of them having more than 500 000 files each, having more than one of them opened in the same application window is, in a word, unworkable. Even having the list of all the projects in that panel is in my opinion unworkable with a setup like mine.

I reported this issue to the developers at Syntevo, informing them that this was an upgrade blocker for me. Their answer boiled down to “Won’t fix it”. I responded with “Won’t upgrade”.

I have stayed with SmartGit 4.6 ever since, despite its other issues, such as being slow and leaking memory, some of which could be due to it being implemented as a Java application.

I have explored other similar tools, but the ones that look most useful have one major problem: They all require that before you start, you must create an account with their code repository and log the application into it.

That is unacceptable to me, because I am not generally going to be using an external repository. We have our own local servers holding the repositories for our projects.

I do not mind buying a license for a useful product, but I do mind having to run a product logged into an external service, especially one I don’t need.

So, if Syntevo could just fix the GUI in this area (and hasn’t broken anything else important), they would sell at least one license, and since this would make the tool work better again for Chromium-sized projects, they would likely sell even more licenses.

Syntevo: Make it an option!

Microsoft, keep your hands off my keyboard!

The keyboards connected to our computers are essential to controlling every aspect of our computer experience, and to our communications with everybody we communicate with. A very basic aspect of the keyboard, and of our personal choice (it is really a major aspect of our national identity), is the layout of the keys. In my case, I am using a keyboard with a Norwegian layout, which is essential when writing text in my native Norwegian language.

What happens when someone, or something, changes how the keyboard is working?

About a year and a half ago I started working on a Windows 10 machine at work (having used Windows 7 until then), but after a while I started running into a particularly obnoxious problem: the keyboard layout would occasionally be changed automatically to the US layout, instead of my Norwegian layout.

For somebody who is reasonably competent at typing on a Norwegian keyboard without looking (aka touch typing), that is rather irritating, because keys like “<“, “:”, “-“, “æ”, “ø”, and “å” suddenly produce completely different characters. The result is a disruption of my current activities.

After some searching I discovered this thread about it, started in 2016 (and still active), and there are indications in the thread’s references that the problem first appeared in Windows 8, at least as early as 2012, maybe 2011.

Based on information in the thread and its references, what seems to be happening is that Windows 10, being “concerned” that the user’s configuration might not be correct in the context of his or her environment, scans the other Windows 10 machines on the network, or obtains information from computers it connects to, and possibly other information, such as the machine’s geographical location, and automatically reconfigures the enabled keyboard layouts based on this information.

I do not know if this is correct, but the name of a registry value mentioned in this information, “IgnoreRemoteKeyboardLayout”, indicates that there may be something to it.

This problem seems to have been affecting many users from non-English
speaking countries, especially those working in multilingual, global companies, or those having moved to a different country.

At Vivaldi, I work with colleagues from many countries, and we all use different keyboard layouts, including German, Icelandic, and US layouts.

The thread I found discusses various workarounds, some of them requiring
you to edit the registry (one of which I used to fix my problems), which is something the average user should never be required to do.
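
For reference, the workaround usually cited in those discussions sets the registry value mentioned above; I believe it looks roughly like the command below, but verify the exact key against the thread before applying it, and only edit the registry if you are comfortable doing so (run from an elevated command prompt):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Keyboard Layout" /v IgnoreRemoteKeyboardLayout /t REG_DWORD /d 1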

Recently, though, I have run into this again with my personal laptop, and as far as I can tell the workarounds are not just failing to work anymore; it seems that the workarounds I applied earlier were removed somehow, possibly by the recent major Windows 10 update.

The keyboard layout of my laptop keeps changing to the US layout several times a day, even several times an hour. In fact, I have had it happen in the middle of writing emails!

And what is happening to my laptop is not an isolated case: One of my colleagues has reported the same thing has started happening to his laptop, too.

So, I think Microsoft is being too “helpful” in this case.

I have configured my PCs the way I want them configured, with the UI language I want, and the keyboard layout I want to use, and I did so when I installed Windows on the PC, and I have no plans to change them.

Microsoft, keep your hands off my keyboard!


Update June 24: The jury is still out on this, but a couple of days ago I decided to try two changes: I removed all the extra languages and keyboard layout combinations (again), and also disabled the keyboard shortcuts for switching between these settings.

If this continues to work, it may have “solved” my problem.

However, it is still a “solution” for a problem that should never have existed, the automagic addition of languages and keyboard layouts, and it may be that the workaround only hides the issue.

It also points to what I think is a bad design choice by Microsoft: the choices for the keyboard shortcuts are Ctrl+Shift and Left Alt+Shift (never mind that Norwegian keyboards only have one Alt key, the left one; the other is the AltGr key, an alias for Alt+Ctrl, used to type various characters like “@”, “{“, and “€”). Both of these shortcuts are used as part of various keyboard shortcuts, and the Alt+Shift combination is part of the “Switch to previous Application” shortcut Alt+Shift+Tab. What happens if you start to press this shortcut, and decide not to change application after the first two keys are pressed? That’s right: the keyboard layout changes!

And even if these two actions “solved” the problem, it should never have been an issue for my systems, since I never added extra languages or keyboards. Microsoft added them without asking, then a bad choice of keyboard shortcuts exacerbated the problem.

And users that, for various reasons, do have multiple languages and/or layouts enabled, may still be having problems.

Update June 27: After rebooting the laptop, the US layout returned, despite having been manually removed, and the keyboard shortcuts being disabled.

What is

Executive summary: The TLS Prober is a tool that gathers information and statistics about the state of the SSL/TLS protocol security features and vulnerabilities across the internet. It does nothing that will harm your server.

The TLS Prober is a tool I developed while I worked at Opera Software, originally to track the progress of the TLS Renego problem, and which I was allowed to take with me when I left Opera in early 2013. It is primarily used to scan a set of 23 million hostnames, most of the names derived from the Alexa top million domain names, resulting in tests of about 500,000 unique servers, for their support of SSL and TLS features, as well as checking for various interoperability issues and vulnerabilities.
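
The prober itself is a larger system, but the same kind of per-server information can be checked manually with the stock OpenSSL client (the hostname below is just an example):

openssl s_client -connect example.com:443 -tls1_2 </dev/null 2>/dev/null | grep -E "Protocol|Cipher"
# Check whether the server supports secure renegotiation (the "Renego" issue mentioned above):
openssl s_client -connect example.com:443 </dev/null 2>/dev/null | grep Renegotiation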

Similar tools are also in use by others, such as the Qualys SSL Labs prober.

vivaldi.net now showing EV-green in browsers

Friday evening (20 December) those who keep an eye on the browser UI would have observed a small but significant change take effect at vivaldi.net. The browser turned on the Extended Validation “Green Bar” for us, indicating that the identity of our website was now better assured than it had been, though the encryption is just as good as before.

Previously, while we were developing the site and during the first days of it being live, we used a Domain Validated SSL/TLS certificate for our sites that indicated that we had control over the domain, but not who we are. This is a useful level of web site identity verification for smaller sites that only need to present information securely and without any major collection of personal information. 

For users of a web site that collects or manages personal and payment information, it is not just important to know that the people managing the web site are in control of the domain. It is even more important to know, or be able to find out, who they are, legally speaking, in case there is a problem.

This need for verifiable identity information was why a group of Certificate Authorities, such as Verisign and Entrust, and browser vendors, such as Microsoft, Mozilla, and Opera (including yours truly), gathered to found the CA/Browser Forum, so that we could define what eventually became the Extended Validation (EV) Guidelines for CAs, and the associated “Green Bar” in browsers.

When Jon decided to start the social web site project, one of my suggestions was to have an encrypted site. Given recent revelations (e.g., the NSA) it is now, or should be, unthinkable to have a social web site that is unencrypted. While many sites have been using a hybrid approach where the login, account management, and sometimes authoring, are encrypted, there are just too many ways to sniff information that way, so the whole site needs to be encrypted. Another of my suggestions was to use EV certificates on the sites, to provide better identity information and assurances to our users.

While I would have wished to have unveiled vivaldi.net on Wednesday with an EV certificate, the process of obtaining one was intentionally designed to include a lot of paperwork that has to be completed before the certificate can be issued, and that paperwork was not completed by our CA, GlobalSign, until early evening Friday.

So, go ahead and enjoy vivaldi.net, assured that it is Jon’s company, Vivaldi Technologies AS, that is operating it.