Browser selection

A lot of my work is manual testing of websites on real devices, and a big part of that is cross-browser testing. One of the conundrums around this is deciding which browsers to test on - it's impractical to test on everything, so we have to target what we do towards those that will bring us the biggest return in terms of defects found. (N.B. I'm using "browsers" here to also include choice of device, as the two are often interlinked).

Now, the required browser set will vary from project to project, particularly if the site is aimed at a particular national market - some countries have browsers peculiar to them (e.g. Yandex has a 10% share in Russia, and in Vietnam there's about a 15% share for the "Coc Coc" browser, which I had never heard of). The type of client will also have an effect: customers for premium goods may have a preference for Safari (as used on the latest iPad/iPhone); sectors where IT is not a priority (or is underfunded) may be on older versions (e.g. a healthcare project I worked on a few years ago required support for older versions of IE because many doctors' surgeries couldn't afford an upgrade); and websites aimed at the technology or youth sectors may need to support cutting-edge technologies.

The client's infrastructure is also important - I've worked on a couple of sites destined for bespoke in-store equipment where the OS/browser would never be updated, so testing had to target one specific version of the chosen browser. For an intranet site, it's worth looking at the standard equipment issued to company staff - there's no point tailoring the site to Windows if everyone is going to be using Macs. Some clients already have a very clear view of the browser coverage they expect; failing that, most can provide some statistics on the current usage profile (from Google Analytics or other sources). But taking that information at face value may not produce the most efficient list for testing.

An example. I've used the stats from http://gs.statcounter.com/ for UK browser market share, as they're publicly available and downloadable in CSV format, which has let me dig into the numbers a bit deeper. The first thing I did was consolidate the figures for different versions of the same browser, since a drop in one version's share is usually just the same users updating to a newer version (especially since the introduction of automatic updates). The figures seem to bear this out: for Chrome, the individual browser versions show large peaks, but the total usage stays roughly constant, implying it's the same user base updating:
[Chart: UK market share over time for individual Chrome versions, alongside the consolidated Chrome total]
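For anyone wanting to reproduce this, here's a minimal sketch of the consolidation step in Python/pandas. The filename and column layout are assumptions - I'm treating the StatCounter export as wide format, with a Date column plus one column per browser version (e.g. "Chrome 63.0") - so adjust to match the actual download:

    import pandas as pd

    # Assumed layout: one row per month, one column per browser version.
    df = pd.read_csv("browser-version-GB-monthly.csv")  # hypothetical filename
    df["Date"] = pd.to_datetime(df["Date"])
    df = df.set_index("Date")

    # Strip trailing version numbers so "Chrome 63.0" and "Chrome 64.0"
    # both become "Chrome", then sum the shares within each family.
    families = df.columns.str.replace(r"\s[\d.]+$", "", regex=True)
    consolidated = df.T.groupby(list(families)).sum().T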
Given that we're probably most interested in recent trends (a year is a long time in browser updates!), after consolidating the versions I took the average of the market share percentages for each browser over the first three months of 2018, sorted by size, and kept the top 20 entries (excluding the rogue category "Other"), giving the following:
[Table: top 20 browsers by average UK market share, January-March 2018]
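In code, that averaging and ranking step might look like this (continuing from the sketch above):

    # Average each browser's share over Q1 2018, then rank.
    q1 = consolidated.loc["2018-01":"2018-03"]
    top20 = (q1.mean()
               .drop("Other", errors="ignore")  # exclude the catch-all bucket
               .sort_values(ascending=False)
               .head(20))
    print(top20)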
Assuming we don't have the time/budget to test on all of these, it makes sense to start testing from the top of the list. But where do we stop? This graph plots cumulative browser coverage against the number of browsers included:
[Graph: cumulative market share covered vs. number of browsers tested]
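The coverage curve is just a cumulative sum over the ranked shares - something like:

    import matplotlib.pyplot as plt

    # Cumulative share covered as browsers are added from the top of the ranking.
    coverage = top20.cumsum()
    plt.plot(range(1, len(coverage) + 1), coverage.values, marker="o")
    plt.xlabel("Number of browsers tested")
    plt.ylabel("Cumulative market share (%)")
    plt.show()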
To me that seems to break into three separate trend lines:
[Graph: the same coverage curve broken into three separate trend lines]
So I would definitely test on the browsers on the first (orange) line (Chrome, iPhone, Chrome on Android), then as many as possible on the second (grey) line (iPad, IE11, Firefox, Safari, Edge), leaving those on the third line unless the project was high-budget or mission-critical (e.g. where there is a legal requirement for the site to be available on all platforms). That list looks pretty similar to what I would have expected, so it's good to have some stats to back me up!
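To make those tiers concrete, here's a hypothetical split of the ranked list along those lines - the tier boundaries (3 and 8) are simply read off the chart, not computed:

    # Split the ranking into the three tiers suggested by the trend lines.
    tiers = {
        "Always test": top20.index[:3],
        "Test if possible": top20.index[3:8],
        "High-budget projects only": top20.index[8:],
    }
    for name, browsers in tiers.items():
        print(f"{name}: {top20[browsers].sum():.1f}% ({', '.join(browsers)})")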

Of course, browser share shouldn't be the sole criterion when choosing your candidate list - other factors like screen size and operating system should be taken into consideration, but it's a good place to start.
