If you know how and what to ask
The Five-Second test — also known as “timeout test,” “exposure test” and/or “memory test” — is one of the easiest and most convenient rapid testing methods available. Displaying a visual or informational design for five seconds and asking what aspect(s) were recalled most easily or vividly can help pinpoint (a) what stands out most about a design or product, and (b) how the viewer’s perception of the overall design is impacted.
However, the method’s value can be compromised by ignoring its restrictions, and by designing the tests to encourage empty or unhelpful responses. After participating in dozens of such tests using widely available unmoderated testing tools, I found myself giving far too many responses like “I have no way of knowing this” or “I cannot answer this after only 5 seconds of exposure” — and getting far too many similar responses to my own tests.
Convinced there was a better way, I set out to examine the method more closely — how it became an established UX method, how it has evolved in light of new technologies, and whether users are using the tools effectively.
Origins of the five-second test in the lab
The earliest instances of the test as a UX method can be traced back to the collaborative efforts of Christine Perfetti, Tom Tullis and Jared Spool (see this article from 2005). They devised a very simple set of steps to measure whether the purpose of a specific content page was obvious or not:
- Establish the test context. Set the test participant at ease and communicate the reasons why the research is being conducted.
- Present the instructions. Participants are told that a page will be displayed for 5 seconds, and are asked to remember as much as possible of what they see in this short period.
- Display the entire page for 5 seconds, then remove it.
- Recollection. The participant writes down everything they remember about the page.
- Success Verification. The participant answers two specific questions, to assess whether users can determine the page’s purpose.
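The five steps above can be sketched as a small session script. This is only an illustration of the protocol's flow, not any tool's actual implementation; the helper names (`show_page`, `hide_page`, `ask`) are hypothetical stand-ins for however a moderator displays the stimulus and records answers.

```python
import time

def run_five_second_test(show_page, hide_page, ask, exposure=5.0):
    """Run one five-second test session (hypothetical helper names).

    show_page / hide_page: callables that display and remove the stimulus.
    ask: callable(prompt) -> str, recording a participant response.
    Returns the responses gathered in steps 4 and 5.
    """
    # Steps 1-2: the moderator sets the context, then states the rules.
    ask("The page will appear for 5 seconds. "
        "Remember as much as you can. Press Enter to begin.")

    # Step 3: display the entire page for the exposure window, then remove it.
    show_page()
    time.sleep(exposure)
    hide_page()

    # Step 4: free recall of everything the participant remembers.
    recall = ask("Write down everything you remember about the page.")

    # Step 5: success verification -- two questions probing the page's purpose.
    purpose = ask("What is the purpose of this page?")
    audience = ask("Who do you think this page is intended for?")

    return {"recall": recall, "purpose": purpose, "audience": audience}
```

Passing the display and prompt steps in as callables keeps the five-second timing logic separate from whatever UI (or paper form) a given study actually uses.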
Under the method’s original intent, its use was restricted in the following ways:
- It was always used true to the original intent: measuring the obviousness of a content page's purpose (as opposed to homepages, landing pages, etc.)
- It was always used as part of a larger study — never as a stand-alone test for making design decisions.
- It was always administered with a moderator present — never in an unmoderated/uncontrolled environment.
Fast forward 10 or so years, and we see a number of companies — UsabilityHub, Verify, and Userzoom being the most prominent examples — offering free or fee-based online tools for designing and executing some variation of the five-second test. The availability and proliferation of these tools have led to increased usage of the method. UsabilityHub alone reports that more than 100k unique five-second tests were completed on its site in the last year.
But take a close look at how the tools are marketed, and you’ll notice a distinct movement away from Perfetti et al.’s original intent of focusing on content pages. UsabilityHub, for example, promotes its tool as ideal for testing whether landing pages are easy to understand. Userzoom likewise describes the method as useful for optimizing landing page conversion.
In truth, my activity on sites like these shows them being used for a wide variety of page types, as well as for testing individual logos and icons, comparing design elements, and other “non-standard” uses. This may be due to a misunderstanding of (or disregard for) the original intent of the method. More likely, it reflects the method’s perceived strengths: by limiting exposure to a design and eliciting gut-level reactions to it, the method seems ideally suited to move beyond the original focus on content pages into measuring emotional response, identifying specific visual elements, and so on, across all types of pages and designs.
Lack of standards/guidelines = lots and lots of bad tests
After taking part in a few online tests, I found precious little information available about getting the most out of the method using these remote unmoderated tools, or about how to structure tests to yield the best possible data. In one test, I reluctantly answered “I don’t know” to every question asked, because each was impossible to answer after only five seconds’ worth of exposure to a page design.
To see if there were any identifiable trends, I spent a few months participating in online tests and documenting what I saw. In the end, I had a collection of more than 300 five-second tests and, after some analysis, discovered that more than 70% contained at least two contaminating “bugs”, including (but certainly not limited to):
- Instructions that confuse, misguide or do not set the proper expectation(s)
- Forced scrolling of large images, which limits the tester’s ability to recall specifics
- Content-dense images or pages
- Inefficient ordering of response questions
- Questions that ensure the “I don’t know/I can’t remember” response
- Questions that encourage overly lengthy responses
- Questions that cover too many different design aspects within a single test
In upcoming posts, I will discuss the analysis of my 300+ test sample (with plenty of examples), explore which types of tests are (and are not) conducive to the method, present specific strategies for designing tests that provide useful and usable data, and offer some “outside the box” uses of the five-second test for solving different types of design problems.