Few people except my wife know how lazy I am. If I imagine the dread of a chore for more than a minute, I’ll avoid it completely. I’ll only be moved to act if I discover some shortcut, tool, or one-time investment that will save me a ton of time down the road.
Fixing Hudl’s bloated, outdated information architecture (IA) has felt like a chore to me for over a year. The problem is that our app has two pages, “Library” and “Manage,” that have become dumping grounds for every new feature we release. Five years ago when we first built Hudl, these two pages made sense. But over time, our product has outgrown its original structure.
Instead of just guessing at new ways to organize our content and features, we wanted to do it right. We discovered a user-centered research tool that could help us: online card sorting.
What Is “Card Sorting” and What’s It Like to Do Online?
Card sorting is the process of giving a user a set of cards labelled with all the different content and features of your app and letting them organize the cards for you. Participants are then asked to label their groups as well. It’s an effective method for learning a user’s vocabulary and uncovering their mental model. It’s an antidote to the typical “inside-out” navigation pattern you find on lots of corporate websites.
An online card sort is similar except that the research participant can complete the task at home. This method is very common for e-commerce websites, but I’d never considered using it for an application’s IA.
Recently, I met the founder of OptimalSort at a UX conference. He convinced me our team should be using it whenever we were unsure about how our users thought about the relationships between content, features, and navigation.
Why Do You Call This a “Shortcut”?
If you’re a product owner or designer, you must realize that there are efficient research tools out there to augment your team’s intuitions. They’re fast, inexpensive, and they’ll save you tons of time. You just need an afternoon to set it up and it’ll run itself for a few days.
Typical card sorts are rigorous. You have to recruit several people, schedule office visits, sit them down for an hour, conduct the test, compile the results, and run the numbers yourself.
Online card sorts are pretty hands-off. After I set up the test and emailed my participants a link, the work was mostly done. All that’s left is the fun part: reading the charts.
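Those charts boil down to one simple idea: a similarity matrix that counts how often each pair of cards landed in the same group across participants. Tools like OptimalSort compute this for you, but here’s a minimal sketch of the underlying math (the card labels and groupings below are invented for illustration):

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical results: each participant's sort is a list of groups,
# and each group is a set of card labels.
sorts = [
    [{"Video settings", "Video uploads"}, {"Team roster", "Schedules"}],
    [{"Video settings", "Video uploads", "Watch activity"}, {"Team roster"}],
    [{"Video uploads"}, {"Video settings", "Watch activity"}],
]

# Count, for every pair of cards, how many participants grouped them together.
pair_counts = defaultdict(int)
for sort in sorts:
    for group in sort:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Similarity = fraction of participants who put the pair in the same group.
similarity = {pair: n / len(sorts) for pair, n in pair_counts.items()}

# Pairs with high scores are strong candidates to live together in the IA.
for pair, score in sorted(similarity.items(), key=lambda kv: -kv[1]):
    print(f"{pair[0]} + {pair[1]}: {score:.0%}")
```

High-scoring pairs are the clusters your new navigation should probably honor; the dendrograms these tools draw are just this matrix fed into hierarchical clustering.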
I sent an invitation to 1,000 coaches and had 250 complete the study in less than 24 hours. Admittedly, 250 responses is overkill, but could you imagine the cost of organizing even 50 users in person? It would take months.
Here’s how we did it:
Step 1 - Decide What You Want to Test
The hardest part about card sorting is deciding what you want to test, scoping it, and avoiding bias when you label your cards.
In our case, we wanted to know how coaches would group the top 25 high-level functions of the app. Some of those things are nouns, others are verbs. That’s OK.
You have to be very careful not to make it too easy for your participants by accidentally “giving them the answer.” It’s tricky to avoid a biased grouping in your labels. For example, if you labelled 3 cards:
- Video Activity
- Video Settings
- Video Uploads & Downloads
You can bet you’d find those three things together in most participants’ groupings (probably called “Video”). This is the trickiest part of setting up your test, so it’s worth your time to make sure you don’t make this mistake.
Remember, you’re trying to unearth the user’s mental model, not complete a matching puzzle. It’s better to use synonyms and intentionally muddle your terms so you get a true sense of how users group items. Using our previous example:
- See your team’s film watching activity
- Video settings
- Listing of uploaded or downloaded movie files
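If your card list is long, a quick script can flag labels that share a leading word, a rough heuristic for the accidental grouping described above (the labels here are just the examples from this post):

```python
from collections import defaultdict

# Hypothetical card labels; any list of strings works.
cards = [
    "See your team's film watching activity",
    "Video settings",
    "Video uploads & downloads",
    "Listing of uploaded or downloaded movie files",
]

# Bucket cards by their first word; repeats hint at labels that
# hand participants a ready-made category name.
by_first_word = defaultdict(list)
for card in cards:
    by_first_word[card.split()[0].lower()].append(card)

flagged = {word: group for word, group in by_first_word.items() if len(group) > 1}
print(flagged)  # {'video': ['Video settings', 'Video uploads & downloads']}
```

It won’t catch every bias (synonyms and shared suffixes slip through), but it’s a cheap sanity check before you launch the test.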
Step 2 - Recruit Users to Participate
I knew I wanted a hefty batch of users to test with so I went to our UserVoice feedback forums and grabbed a small subset of 1,000 emails.
Then, I created a quick MailChimp campaign to ask them for 10 minutes of their time. It’s important to set a realistic expectation of the commitment you’re asking for.
I wrote a simple, personal, plain-text email to persuade them. That’s about all it took.
Step 3 - Wait a Bit, Then Dig for Insights
As results began to flow in, I instantly jumped in to review them. In hindsight, that was a mistake: I let a few early results start to color my perception of everything that followed. I was just too anxious to wait.
Insight: The “Shit I never use” group
This one stung a little. Coaches effectively told us very plainly which things they used on a daily basis and which they wouldn’t miss. As product-builders we are optimistic about every feature we release but this kind of research tempered that with a needed dose of reality.
Insight: The “Coaching Tools” bucket
This was a great example of research validating a hypothesis the team had. A couple of our features layer in nicely with the video tools, but they’re not video-centric. What would coaches call those tools, and how would they cluster? In this case, the obvious grouping came through loud and clear in the results.
I felt like an idiot for waiting so long to try this out. Don’t make the same mistake I did.
It’s a monstrous task to re-think an application’s information architecture. With the help of online card-sorting tools, I’m confident that an exercise like ours will give your team a dose of customer-centered reality and will improve your application’s IA.
Do you have any experience with card sorting? Leave me a comment below.
Photo credit: Rosenfeld Media