PG's Super Practical Guide to Running User Studies (and Giving Live Demos)

I summarize practical tips for running software-based user studies for your research. Some of these tips are also relevant for giving live software demos.

This article was originally about user studies, but after writing it I realized that most of it is also relevant to giving live software demos. That's because a live demo is basically doing a user study on stage where you're the user!

Check out this related article for more details: Tips for giving a live software demo

I've been taking a break from online writing lately as I've shifted my focus to videos and podcasts, but I came back just for this!

Over the past ~15 years, I've run a bunch of user studies for research projects in human-computer interaction, computing education, and software engineering. Many of my systems-related papers feature user studies as part of their evaluations.

Nowadays my students run most of our lab's user studies, and I keep repeating the same cautionary tales to them, which in retrospect make me sound super paranoid. But I've seen all sorts of things go wrong in user studies over the past ~15 years, so I feel that it's best to be prepared for the worst.

Here's what underlies my paranoia: If you're building prototype software for your research, then you're pushing the limits of your computer's software ecosystem. The chances of something going wrong for you during a user study are far greater than something breaking for the average user playing within the safe confines of well-tested, production-grade software.

As researchers who are hacking on prototype software, we don't work with our computers in the same ways as normal users do; we stress them to the max by installing all sorts of weird libraries, compilers, plug-ins, and toolchains, and then fiddling with all manner of obscure settings in our web browsers, text editors, IDEs, and even operating systems to get our prototypes to work just the way we need them to for our next demo or paper.

All of this ad-hoc twiddling makes our software extremely brittle so that even a slight jiggle to our computer's internal setup will cause something to break in a hard-to-debug way. And user studies are usually conducted during the frantic rush right before a stressful paper submission deadline, so we're making tons of last-minute changes and forgetting to document anything with any level of discipline. Plus when we get (gasp!) REAL USERS in front of our prototype software, they're going to stress it in ways that we couldn't ever have predicted.

Rule #1: Never Upgrade. Never upgrade!!!

All of this hyperbole brings me to my first rule of user studies:

Never upgrade any of your software and especially don't ever ever ever upgrade your operating system right before running user studies.

Not Now!


Wait until after your user studies are all done before upgrading. Even if a particular software update seems innocuous enough, or is in some app that doesn't seem at all related to what your research prototype depends on, still just say No! You can't see all the hidden dependencies behind the scenes.

I tell everyone in my lab not to upgrade important software, and especially not their operating system, before user studies. Inevitably, during every deadline, somebody upgrades anyway and something breaks in a weird way; then they burn a ton of valuable time trying to fix it, and we all flip out.

Frustratingly, not upgrading software is getting harder and harder to do, as more and more software on our computers auto-updates itself without our knowledge. When possible, turn off auto-updates on the most relevant software during the week before user studies. In case you missed something, test your prototype the morning before each study session, and keep all of those apps open. (But one problem with keeping apps open is that your code might have a memory leak that causes it to freeze up later when your study participants arrive; tradeoffs!)

For instance, one of my students was building an Atom IDE add-on for his research. Just last week, unbeknownst to him, Atom auto-upgraded itself right before his first-ever user study, causing his prototype to break in a non-obvious way.
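One cheap way to catch this kind of silent upgrade during your morning test is to pin the version strings of the tools your prototype depends on and diff them before each session. Here's a minimal Python sketch of that idea; the `tool_versions.json` manifest name and the set of tools in `TOOLS` are assumptions you'd adapt to your own setup:

```python
import json
import subprocess
import sys
from pathlib import Path

MANIFEST = Path("tool_versions.json")  # hypothetical pin file

# Hypothetical: list each tool your prototype depends on and the
# command that prints its version. (Python itself shown as an example.)
TOOLS = {
    "python": [sys.executable, "--version"],
}

def current_versions():
    """Capture each tool's reported version string right now."""
    return {
        name: subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
        for name, cmd in TOOLS.items()
    }

def check(manifest=MANIFEST):
    """Return the names of tools whose versions drifted from the pinned
    manifest; on the very first run, pin the current state instead."""
    if not manifest.exists():
        manifest.write_text(json.dumps(current_versions(), indent=2))
        return []  # nothing pinned yet, so nothing can have drifted
    pinned = json.loads(manifest.read_text())
    now = current_versions()
    return [name for name in pinned if now.get(name) != pinned[name]]
```

Run `check()` during your morning test; a non-empty result means something auto-updated behind your back and deserves a closer look before the participant arrives.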

(Yes, I know security updates are important, but those are some of the riskiest for breaking the stability of your research setup since they usually need privileged permissions to apply their patches. It's ultimately a risk-reward tradeoff that you have to make.)

Rule #2: Don't Install New Software

A related rule is that you shouldn't install new software, plug-ins, browser extensions, or anything that might “rock the boat” right before running user studies.

In 2018, the reality is that most students mix personal and work activities on their computers, so it's tempting to install a bunch of seemingly-unrelated software for personal use on the same computer as the research prototype that's currently under development. Unfortunately, every new piece of software threatens to interact with your prototype in some quirky way that you didn't plan for. Be patient; wait until after the studies are done, then install whatever you want.

In the “old days” (when we walked to lab uphill in the snow both ways) research software prototypes were built on specialized lab computers meant for work use. Back then, it was less tempting to install personal software on those computers, mostly because they were viewed as work machines and not one's personal laptop. But now that most students work from their own laptops, this work-life separation isn't nearly as clear.

In sum, don't install new software ... and especially don't install anything that requires privileged or root access, because that further increases the chances of something going wrong.

“But, but, but this software has nothing to do with my research prototype! It's totally harmless!” No, just say no! Wait until after your user studies are done. Trust me.

Your computer should basically be on full lockdown mode the week prior to your user studies.

Back up off-site and prepare to reinstall

It's the day before your first scheduled user study and someone steals your laptop. What do you do?

You should already have all of the code and setup documentation for your project backed up somewhere not on your computer so that you can survive such a scenario. Sure, it may be a huge pain to reinstall and configure your prototype on another computer so that it works the same way, but it sure beats not being able to.
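Off-site backup doesn't have to be fancy. As a minimal sketch, you could zip the whole project into a timestamped archive on an external drive or synced folder (the directory names here are hypothetical):

```python
import shutil
import time
from pathlib import Path

def snapshot(project_dir, backup_root):
    """Zip the project into a timestamped archive under backup_root.
    backup_root should live somewhere NOT on your main computer, e.g.
    an external drive or a cloud-synced folder."""
    project_dir, backup_root = Path(project_dir), Path(backup_root)
    backup_root.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive_base = backup_root / f"{project_dir.name}-{stamp}"
    # make_archive appends the .zip extension and returns the final path
    return Path(shutil.make_archive(str(archive_base), "zip",
                                    root_dir=project_dir))
```

Timestamped archives also double as crude version snapshots: if a last-minute change breaks something, you can diff against yesterday's zip. (If you already push to a remote version-control host, that covers the code; this kind of snapshot additionally captures config files and data that never get committed.)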

Prepare to borrow/hijack your labmate's computer on short notice in case yours gets broken, lost, or stolen.

If you're feeling smug, you may want to now evangelize the latest and greatest in tech for making reproducible development environments so that you can spin up new copies of software on other computers without breaking a sweat. If you know how to do this, great! If you don't, then don't worry; just make sure to back up your code and data so that you have the basics covered.

Even with the latest and greatest of reproducible tools, again because your research prototypes are likely pushing the boundaries of software environments, chances are that it will be harder for you to reproduce your environment on another machine than you expected. For instance, last year one of my students wanted to install his prototype on a new iMac that I had just ordered for the lab, but he couldn't figure out why it wouldn't work right before a scheduled user study (and he's a great hacker, too!). It turned out that the iMac, which we all thought came fresh from Apple, was actually first intercepted by our university's I.T. department, where they proceeded to install some hidden crapware on it for some unknown reason. That crapware interfered with his prototype in a bad way, which took forever for us to discover. We didn't even know it was on the machine since it ran behind the scenes with no other noticeable effects. Ugh.

Recruiting participants

I'm assuming you're doing in-person user studies here; I don't have as much experience with online studies. Some notes:

  • It's a lot easier to get participants if you pay people, even a small token amount; e.g., a $10 Amazon or Starbucks gift card.
  • Overbook: schedule more people than you ideally need for your study, since some will inevitably cancel, sometimes at the last minute. The chances of everyone showing up are near-zero.
  • Also, some people will show up but not meet your recruitment criteria, so you won't be able to use them for your study. It's frustrating, but you should politely turn them away rather than trying to proceed with someone who doesn't meet the criteria. This is another reason why it's critical to overbook, because not everyone who signs up will meet your actual requirements.
  • If it's more convenient for you to go to where the participant is (e.g., in their office, lab, or dorm) to do the study there, then that can also increase success rates. This assumes that you don't need any special lab equipment for your study.

Preparing for your study

  • Print out plenty of copies of any required IRB or consent forms.
  • Print out a checklist of steps for your study's protocol along with the estimated amount of time that each step takes. Stick to this checklist so that you don't forget anything. Checklist, checklist, checklist!
    • You should print it rather than keep it on your computer, since you'll probably need your computer for the study itself or for taking notes. Print print print!
  • Test your computer software setup the morning before each study. Otherwise there might have been some nasty hidden software auto-update that ran overnight, and you'll be in for a rude awakening when your participant arrives.
  • Make sure you have plenty of hard disk space free, since screen recording software (see next section) eats up lots of disk space.
  • Create a fresh user account for your user studies, and make sure that no unnecessary apps or widgets are running in the background.
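To make the disk-space check from the list above concrete, here's a small Python sketch you could run as part of your morning test. The 50 GB cushion is an assumed threshold, not a universal rule; size it to however much your screen recorder actually consumes per session:

```python
import shutil

def free_gb(path="/"):
    """Gigabytes free on the volume that holds `path`."""
    return shutil.disk_usage(path).free / 1e9

# Hypothetical threshold: screen recordings chew through disk space
# quickly, so demand a healthy cushion before every session.
MIN_FREE_GB = 50

def disk_ok(path="/", minimum=MIN_FREE_GB):
    """True if the volume has at least `minimum` GB free."""
    return free_gb(path) >= minimum
```

A failing check the morning of a study gives you hours, instead of minutes, to clear space before the participant shows up.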

Finally, make sure you can get into your user study room in the first place :) This year the university facilities folks decided at the last minute to schedule some construction work in our lab, so we weren't allowed in for an entire week.

Fortunately my students finished their studies the week before!

During your study

Assuming you have the proper consent from your participants:

  • Use Camtasia, OBS, or other screen recording software to record the video and audio of the test computer's screen.
  • Use your phone's audio recorder to record a backup audio stream in case the screen recorder fails.
  • Take handwritten observational notes in your notepad or on a laptop. Timestamp your notes so that you can more easily find those points in the video if you need to review them later.
  • If you have custom data logging needs, my hunch is that you should track as much information on the test computer as is feasible without slowing down its performance for your participant (and without eating up all the hard disk space ... see next section). You can always filter the data later.
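If you do roll your own logging, appending timestamped JSON lines is one simple format that's cheap to write during the session and easy to filter afterwards. A sketch of that approach, with `session_log.jsonl` as an assumed file name:

```python
import json
import time
from pathlib import Path

LOG = Path("session_log.jsonl")  # hypothetical per-session log file

def log_event(kind, **details):
    """Append one timestamped event as a single JSON line.
    Appending a short line per event is cheap, so it shouldn't slow
    the prototype down for your participant; you can always filter
    the data later."""
    event = {"t": time.time(), "kind": kind, **details}
    with LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return event

# e.g. log_event("click", target="run-button", participant="P3")
```

Because each line carries a wall-clock timestamp, these events can also be lined up against your handwritten notes and the screen recording after the fact.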

After each study session

Immediately after each session, save the audio/video recordings somewhere not on the test computer. Again, if the computer explodes for some reason, you want to be able to keep that data. An external hard drive works, and cloud storage also works. I don't care what you use; just back up your data! If you lose that data, you'll have lost most of the evidence from that study session.
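When copying recordings off the test computer, it's worth verifying the copy actually matches the original before you trust it (or delete anything). A minimal Python sketch, using a checksum comparison; the directory layout is hypothetical:

```python
import hashlib
import shutil
from pathlib import Path

def copy_verified(src, dest_dir):
    """Copy a recording to dest_dir (e.g. an external drive), then
    confirm the copy's SHA-256 checksum matches the original before
    reporting success."""
    src, dest_dir = Path(src), Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # copy2 also preserves timestamps

    def digest(p):
        return hashlib.sha256(p.read_bytes()).hexdigest()

    if digest(src) != digest(dest):
        raise IOError(f"copy of {src.name} is corrupt; retry the transfer")
    return dest
```

This reads each file fully (fine for per-session recordings); the point is that a flaky external drive or dropped connection fails loudly now, not silently when you go looking for the footage weeks later.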

Check that there is enough hard disk space remaining on the test computer. It would be a disaster if the next study failed due to lack of disk space. When disk space runs low, all sorts of weird stuff might happen on the test computer.

Resist the temptation to fiddle with your prototype software to fix minor bugs that you noticed during the study. You'll inevitably notice something that annoys you, and you'll want to fix it right away. Resist! Sure, if something is a real killer, you should invest in fixing it. But be very very very cautious about changing your software while you're in the midst of user studies, since you will likely be in a massive rush and introduce new bugs quickly. If you really want to change your code, make sure you have good backups of your code and also make a note of whatever environmental state you fiddled with. Don't rock the boat and break what was just working a minute ago!

Finally, be a friendly human being :) If your participant shows an interest in your project, tell them a bit about it. Offer to answer any questions they might have about either your research or your glamorous life as a grad student, etc.; after all, they took valuable time out of their day to participate in your study. End on a positive note!

Created: 2018-03-24
Last modified: 2018-03-24