Paul Doncaster

January 14, 2014

Five Second Tests: Avoiding the Non-Response when Using Online Tools

The main selling point of the five-second test method, and of the online tools used to administer it, is that you can get specific feedback about a design quickly and fairly effortlessly. It is therefore very dispiriting to receive the results of a test and see multiple instances of empty or “I don’t know” responses. (Indeed, experience has shown that in crowdsourced tests, respondents are more than willing to communicate the “I don’t know” response in more creative ways.) Design and user experience research can be difficult to justify from a time and resource standpoint – results like these undercut the research effort and make the job that much more difficult. It is therefore critical to take precautionary actions to minimize the likelihood of “empty data,” so that the researcher’s time is not wasted. – Read More –

October 10, 2013

The Five-Second Test: A Wealth of UX Data


The five-second test — also known as the “timeout test,” “exposure test,” or “memory test” — is one of the easiest and most convenient rapid testing methods available. Displaying a visual or informational design for five seconds and asking what aspect(s) were recalled most easily or vividly can help pinpoint (a) what stands out most about a design or product, and (b) how the viewer’s perception of the overall design is affected.
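The protocol described above — show the stimulus for a fixed window, hide it, then collect recall answers — can be sketched in a few lines. This is a minimal illustration only, not the API of any particular testing tool; the function and parameter names are hypothetical.

```python
import time

EXPOSURE_SECONDS = 5  # the method's fixed exposure window


def run_five_second_test(show_design, ask, questions, exposure=EXPOSURE_SECONDS):
    """Display a design for a fixed interval, then collect recall responses.

    show_design: callable that renders the stimulus (e.g. opens an image)
    ask:         callable(question) -> the participant's answer
    questions:   recall questions posed only after the design is hidden
    """
    show_design()
    time.sleep(exposure)  # the design stays visible only for this window
    # In a real tool the design would be hidden here, before any questions.
    return {q: ask(q) for q in questions}


# Example run with stubbed callables (no real UI; exposure shortened for demo)
answers = run_five_second_test(
    show_design=lambda: print("[design displayed]"),
    ask=lambda q: "the logo",  # stand-in for a participant's answer
    questions=["What stands out most?"],
    exposure=0,
)
```

The key design point the sketch captures is ordering: questions are asked only after the exposure window ends, so responses reflect memory of the design rather than live inspection.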

However, the method’s value can be compromised by ignoring its restrictions and by designing tests in ways that invite empty or unhelpful responses. After participating in dozens of such tests using widely available unmoderated testing tools, I found myself giving far too many responses like “I have no way of knowing this” or “I cannot answer this after only 5 seconds of exposure” — and getting far too many similar responses to my own tests.

Convinced there was a better way, I set out to examine the method more closely — how it became an established UX method, how it has evolved in light of new technologies, and whether users are applying the tools effectively. – Read More –