Both.org

News, Opinion, Tutorials, and Community for Linux Users and SysAdmins

How many usability testers do you need?

It doesn't take many testers to get results that are "good enough" to improve your design.
Jim Hall · June 28, 2024 · 5 minute read

When you start a usability test, the first question you may ask is “how many testers do I need?” The standard go-to article on this is Nielsen’s “Why You Only Need to Test with 5 Users,” which gives the answer right there in the title: you need five testers.

But it’s important to understand why Nielsen picks five as the magic number. MeasuringU has a good explanation, but I think I can provide my own.

Overlap in finding issues

The core assumption is that each tester will uncover a certain number of issues in a usability test, assuming good test design and well-crafted scenario tasks. The next tester will uncover about the same number of usability issues, but not exactly the same issues. So there’s some overlap, and some new issues too.

If you’ve done usability testing before, you’ve observed this yourself. Some testers will find certain issues, other testers will find different issues. There’s overlap, but each tester is on their own journey of discovery.

How many usability issues one person can find is up for some debate. Nielsen uses his own research and asserts that a single tester can uncover about 31% of the usability issues. Again, that assumes good test design and scenario tasks. So one tester finds 31% of the issues, the next tester finds 31% but not the same 31%, and so on. With each tester, there’s some overlap, but you discover some new issues too.

The math behind the number

In his article, Nielsen describes a function that estimates the proportion of usability issues found for a given number of testers in your test. For a traditional formal usability test, this function is:

1 - (1 - L)^n

…where L is the proportion of issues a single tester can uncover (Nielsen assumes L = 31%) and n is the number of testers.

I encourage you to run the numbers here. A simple spreadsheet will help you see how the value changes as you increase the number of testers. What you’ll find is a curve that grows quickly at first, then slowly approaches 100%.
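If you’d rather stay at the command line than open a spreadsheet, a short script can print the same curve. This is a sketch that assumes Nielsen’s L = 31% (so 1 − L = 0.69) and uses awk for the floating-point math, since Bash arithmetic is integer-only:

```shell
#!/bin/bash

# print the percentage of usability issues found for 1 to 15 testers,
# using Nielsen's formula 1 - (1-L)^n with L = 0.31

for n in $( seq 1 15 ) ; do
  awk -v n="$n" 'BEGIN { printf "%2d testers: %5.1f%%\n", n, 100 * (1 - 0.69 ^ n) }'
done
```

At n = 5 this prints about 84.4%, which is where the often-quoted “85% with five testers” figure comes from.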

Note that at five testers, you have uncovered about 85% of the issues. Nielsen’s curve shows diminishing returns at higher numbers of testers. As you add testers, you’ll certainly discover more usability issues, but the increment gets smaller each time. Hence Nielsen’s recommendation of five testers.

Again, the reason that five is a good number is because of overlap of results. Each tester will help you identify a certain number of usability issues, given a good test design and high quality scenario tasks. The next tester will identify some of the same issues, plus a few others. And as you add testers, you’ll continue to have some overlap, and continue to expand into new territory.

Visualizing the overlap

Let me help you visualize this. We can create a simple program to show this overlap. I wrote a Bash script to generate SVG files with varying numbers of overlapping red squares. Each red square covers about 31% of the gray background.

#!/bin/bash

# generate an SVG with overlapping red squares; the number of
# squares is given on the command line (default is 1)

max=1
if [ $# -eq 1 ] ; then
  max="$1"
fi

cat <<EOF
<svg viewBox="0 0 819 819" xmlns="http://www.w3.org/2000/svg">
<rect x="0" y="0" width="819" height="819" style="fill:lightgray"/>
<!-- $max overlapping red squares -->
EOF

# iterate n-many red squares

# this assumes an 819x819 gray square and 456x456 red squares.
# the gray square has area 670761 and each red square has
# area 207936. That's 31.00001342% .. so basically L=31%.

# pick a random x and y start value from 0-363 (that's 819-456)
# for each red square, so every square stays on the background.

for n in $( seq 1 "$max" ) ; do
  xrand=$(( RANDOM % 364 ))
  yrand=$(( RANDOM % 364 ))

  cat <<EOF
<rect x="$xrand" y="$yrand" width="456" height="456" style="fill:red;opacity:0.5"/>
EOF
done

cat <<EOF
</svg>
EOF

If you run this script, you should see output that looks something like this, for different values of n. Each image starts over; the iterations are not additive:
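To reproduce a set of images like these yourself, run the script once per tester count and redirect each run to its own file. Here is a self-contained sketch that inlines the same square-drawing logic; the overlap-N.svg filenames are my own choice, not from the original script:

```shell
#!/bin/bash

# generate one SVG per tester count, using the same geometry as
# the script above: 819x819 gray background, 456x456 red squares.
# writes overlap-1.svg, overlap-3.svg, ... overlap-11.svg
for max in 1 3 5 7 9 11 ; do
  {
    echo '<svg viewBox="0 0 819 819" xmlns="http://www.w3.org/2000/svg">'
    echo '<rect x="0" y="0" width="819" height="819" style="fill:lightgray"/>'
    for n in $( seq 1 "$max" ) ; do
      x=$(( RANDOM % 364 ))
      y=$(( RANDOM % 364 ))
      echo "<rect x=\"$x\" y=\"$y\" width=\"456\" height=\"456\" style=\"fill:red;opacity:0.5\"/>"
    done
    echo '</svg>'
  } > "overlap-$max.svg"
done
```

Any web browser or SVG viewer will display the resulting files.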

As you increase the number of testers, you cover more of the gray background, and you also get more overlap. The increase in coverage is quite dramatic from 1 to 3 to 5 squares (top row), but compare 7, 9, and 11 (bottom row). Certainly there’s more coverage (and more overlap) at 9 than at 5, but not significantly more. And the same goes from 9 to 11.

These visuals aren’t meant to be an exact representation of the Nielsen iteration curve, but they do help show how adding more testers gives significant return up to a point, and then adding more testers doesn’t really get you much more.

“Good enough” results

The core takeaway is that it doesn’t take many testers to get results that are “good enough” to improve your design. The key idea is that you should do usability testing iteratively, as part of your design process. I think every usability researcher would agree. Ellen Francik, writing for Human Factors (2015), refers to this process as the Rapid Iterative Testing and Evaluation (RITE) method, arguing that “small tests are intended to deliver design guidance in a timely way throughout development.”

Don’t wait until the end to do your usability tests. By then, it’s probably too late to make substantive changes to your design, anyway. Instead, test your design as you go: create (or update) your design, do a usability test, tweak the design based on the results, test it again, tweak it again, and so on. After a few iterations, you will have a design that works well for most users.

Tags: usability
