
The main selling point of the five second test method, and of using online tools such as fivesecondtest.com, is that you can get specific feedback about a design quickly and fairly effortlessly. It is therefore very dispiriting to receive the results of a test and see multiple instances of empty or “I don’t know” responses. (Indeed, experience has shown that in crowdsourced tests, respondents are more than willing to communicate “I don’t know” in more creative ways.) Design and user experience research can be difficult to justify from a time and resource standpoint – results like these undercut the research effort and make the job that much more difficult. It is therefore critical to take precautionary actions that minimize the likelihood of “empty data,” so that the researcher’s time is not wasted.

For our purposes, the “pass” and “I don’t know” answers are considered to be forms of non-response – that is, instances where the required information is not obtained from a potential respondent. This differs somewhat from the definition commonly used in survey research, which specifies the absence of representation in a sample due to factors beyond the control of the researcher – e.g., the inability to contact potential respondents; the inability to persuade respondents to cooperate in a survey; and/or “not-able” factors, such as illness or language barriers. Regardless of any differences in definition, the two share the same major negative consequences: unusable responses that reduce sample size, introduce the possibility of bias into the results, and waste time, effort and resources.

At first glance, five second tests would appear to be insulated from many of the “factors beyond the control of the researcher” noted in the definition. For example, they presume a captive audience motivated in some way to provide feedback: in lab tests, participants have been specifically recruited (and likely compensated) for their participation, while participants in crowdsourced tests have likely come to an online testing site with the possibility of compensation, or to “do a good deed” for a fellow researcher in need of data. In either case, by the time the test is administered, contact has been made successfully, some level of cooperation has been secured, and “not able” factors have been overcome. (In fact, the far more likely possibility in five second tests is a lazy, frustrated and/or uninterested participant using a non-response as an easy means of moving on to something else.)

Five second tests are also very likely to be free from any emotional factors that could contribute to non-response. Research has shown that when non-response options are offered in online surveys (“I don’t know,” “prefer not to answer,” etc.), people tend to use them. There is of course some validity in their inclusion in surveys — they allow respondents to indicate that they have not given any substantive thought to a particular issue, do not wish to give their opinions about a controversial subject, or do not have enough factual knowledge about a given topic to answer in an informed way. Conversely, participants in five second tests are focused on an impersonal experience: their reactions are based exclusively on a common visual stimulus, and thus are unlikely to activate the emotional triggers that might cause hesitation.

The main issues seen in five second tests lie not in the uncontrollable factors, but rather in research approaches and test structures that encourage non-responses. As an example, one of the tests I analyzed involved the home page of a website for a retail outlet selling quinceañera dresses. Included amongst the questions was: “What is the quality of the dresses sold here?” At a macro level, the problem is that the nature of the question almost guarantees a non-response. Conceptually, commenting on the quality of a dress would require some level of interaction with the garment itself — it needs to be held, examined, worn, and “experienced.” It is simply not reasonable to ask a person to render an opinion on item quality based on a visual exposure to the home page of a website — and certainly not when that exposure is limited to a mere five seconds. By asking such a question, the researcher has wasted much of what limited opportunity exists for getting meaningful feedback.

The quinceañera test is a somewhat egregious example of ensuring wasted time and effort by not giving enough thought to the test. In most cases, the risk of getting non-responses can be tied to the type of test you choose to run. For instance, in a memory dump test (respondents simply listing as many things as they can remember about what they saw), it’s virtually impossible to get an “I don’t know” answer – everyone is likely to remember and report something about what they saw. (The number of items remembered will vary from person to person, and the likelihood of a non-response will increase with each response requested, but you’ll rarely, if ever, get a completely “empty” set of answers.) Likewise, people are very rarely without an opinion or attitude about how a design makes them feel or react, so attitudinal tests inherently discourage the non-response. Tests that ask for factual data about a design are trickier – given the limitations of a five-second exposure, a respondent might legitimately not know where on a screen a specific button was located, or remember the words of a website tagline or marketing slogan.

To a certain degree – especially in online unmoderated tests – the data delivery mechanism can also impact the likelihood of non-responses. In online surveys, the risk can be mitigated somewhat by using response constructs that employ radio buttons or checkboxes, by not offering a non-response option, and by requiring that a question be answered before proceeding. In the current crop of five-second test tools, the only option for submitting an answer is to enter text in a box, meaning that the possibility of the “I don’t know” or “I can’t recall” response cannot be eliminated.
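Since the text box cannot prevent these answers, the practical fallback is to flag and count them when analyzing results. The sketch below shows one way this might be done; the patterns and function names are illustrative assumptions, not part of any testing tool’s API, and a real screen would need patterns tuned to the actual answers received.

```python
import re

# Hypothetical patterns a researcher might treat as non-responses when
# screening raw five second test answers (assumes English-language tests).
NON_RESPONSE_PATTERNS = [
    r"^\s*$",                       # empty answer
    r"\b(i\s+)?don'?t\s+know\b",    # "I don't know", "dont know"
    r"\bno\s+idea\b",
    r"\bcan'?t\s+(recall|remember)\b",
    r"^\s*(pass|n/?a|idk|\?+)\s*$", # one-word dodges, only as the whole answer
]

def is_non_response(answer: str) -> bool:
    """Return True if a free-text answer looks like a non-response."""
    return any(re.search(p, answer, re.IGNORECASE) for p in NON_RESPONSE_PATTERNS)

def non_response_rate(answers: list[str]) -> float:
    """Fraction of answers flagged as non-responses (0.0 for an empty list)."""
    if not answers:
        return 0.0
    return sum(is_non_response(a) for a in answers) / len(answers)
```

Run against a pilot batch, a high rate is an early warning that a question needs rewording before the formal launch – for example, `non_response_rate(["A dress shop", "idk", "", "Quinceañera dresses"])` flags two of the four answers.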

The easiest way to minimize the likelihood of the non-response is to test, test, and test some more before formally launching the test. Pilot testing with friends, colleagues and associates will help indicate the relative risk of non-response, and will help identify any corrective actions to take before the formal launch. Keeping this rule firmly in mind from the start can not only help ensure useful data, but can also help stretch the limits of what can be done with the method.

Paul Doncaster

Manager, User Experience at Thomson Reuters
I have spent the past 7 years working on highly complex UX projects within the domains of course technology, legal and intellectual property. I am a 2007 graduate of Bentley University’s HFID master’s program, and have written and spoken on many UX topics, including designing for emotional response, online readability, and designing for tablet users in the legal domain.