Usability Testing SharePoint Sites: A little testing can make a big difference

Improving Usability and User Success in SharePoint Sites

Even as an experienced information architect, I know that I will never get the design precisely "right" the first time. I always use paper prototypes to walk through a design. But even a design that has been reviewed and accepted by the "client" on paper can benefit from actual usability testing. This week, we ran a very simple usability test that proved once again that a little bit of effort can yield major improvements in "findability" in SharePoint deployments. Here is what we did ...

The situation: we had configured a top-level portal site designed to provide navigational guidance to a variety of key functional areas that are the responsibility of an operational support business unit at a major company. Our objective: ensure that the information users most commonly need from this business unit is easy to get to on the site. Our design includes a visual navigator with icons and words that describe the key functions (topics) supported by the business unit; users can click on either the icon or the text to get to more information on a topic. The icons were originally grouped into three "collections" based on our initial concept of how they were related. In addition to navigating with the icons, users can search for key terms for which we have defined search "best bets."

We designed the site with five user "personas" in mind. Each "persona" represents a typical user of our site: a senior staff member, an associate staff member, a member of the management team, a member of the administrative support staff, and a "friend of the business unit" (a user who has a parallel or related role in a different business unit).  For the usability test, we identified a set of typical activities or questions that each representative user might expect to be able to answer on our site.  For example, one of the activities was to "find out how many active contracts we have with Partner ABC."  We also prepared index cards with each function/topic name so that we could ask users to organize the topics into groupings of their choice.

Our original plan for the test was the following:

  • We identified 8 people representing our 5 personas and invited them to participate in one-on-one tests of 30-45 minutes each.  (Some personas got more than one tester.)
  • The first testing task was a card sorting exercise: we handed the users our stack of topic cards and asked them to place them in groups that made sense to them.  We then asked them to "name" each group.  (The grouping task was no problem.  The naming task proved to be a lot harder - "I know these belong together, I just don't know what I would call them.")
  • We then asked users to put the cards into our pre-defined groups.  (As you read on, you'll see that we eventually scrapped this task since we quickly learned that our groups weren't as good as the ones our testers came up with.)
  • Finally, we gave users a list of 16 activities/questions that we knew the support group gets asked a lot - things users expect to be able to find from this business unit. We asked the testers to read each question out loud and confirm that they understood it. Once they acknowledged understanding the task, we started a stopwatch to time how long it took them to get to the information. Because we were still early in deploying the site and not all pages had content yet, we defined "success" as the tester reaching the page on which the information would live. We also counted the number of "clicks" the tester made to find the information. If a tester clicked through a page where we hadn't expected the information to reside, we told them to keep going and kept the timer and click count running until they reached the page where the information actually existed. (We did, however, keep track of where testers thought information would be, and we plan to use that to encourage content owners to make sure their pages reference related information users might expect to find there, even if that information is technically provided by another department.) A rough sketch of how these measurements could be captured appears after this list.
  • We scheduled the tests over 2 days so that if we learned something important on Day 1, we could make changes to the site in the evening of Day 1 and see if we saw improvements on Day 2.
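
If you want to adapt this approach, here is a minimal sketch of one way to record and summarize the stopwatch times and click counts. We actually used a stopwatch and a paper log, so this is only an illustration of the bookkeeping; the testers, tasks, and numbers below are hypothetical.

    # Illustration only (Python): one way to log the "scavenger hunt" measurements.
    # Testers, tasks, and numbers are made up; we really used a stopwatch and paper.
    from collections import defaultdict

    # Each observation: (tester, task, seconds to reach the right page, clicks taken)
    observations = [
        ("Tester 1", "Active contracts with Partner ABC", 95, 7),
        ("Tester 1", "Find the on-call support schedule", 60, 4),
        ("Tester 2", "Active contracts with Partner ABC", 80, 6),
        ("Tester 2", "Find the on-call support schedule", 45, 3),
    ]

    totals = defaultdict(lambda: {"seconds": 0, "clicks": 0, "testers": 0})
    for tester, task, seconds, clicks in observations:
        totals[task]["seconds"] += seconds
        totals[task]["clicks"] += clicks
        totals[task]["testers"] += 1

    for task, t in totals.items():
        print(f"{task}: avg {t['seconds'] / t['testers']:.0f} sec, "
              f"avg {t['clicks'] / t['testers']:.1f} clicks ({t['testers']} testers)")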

I'll admit up front: our testing process was completely unscientific, violating just about every rule of a good scientific experiment. For example, we changed the order of the testing tasks after the first few testers. Instead of starting with the card sorting exercise, we started with the "scavenger hunt" after the first group of about 4 testers. In the end, I don't think the order of activities really made a difference in our tests. In addition, we immediately saw that the first three testers consistently grouped topics into categories that we had not considered. Our original model grouped topics more or less by whether they were "supporting" topics or "operational" topics (supporting the business versus doing the business). Our testers consistently grouped topics based on who "cared" about them - essentially "managers" versus "workers." In other words, their mental model was consistently role-based. So, between Day 1 and Day 2, we re-organized and re-named our groups based on tester feedback. Our original group names were nouns; our ending group names were verbs. One really interesting insight: among the activities supported by our business unit are Legal, Compliance, and Quality Assurance. One of our testers grouped these topics into a category that he called "Doing the Right Thing." This is the name that stuck, and virtually every other tester said - without prompting - "I really like calling these activities 'doing the right thing.'"

We made some major changes between Day 1 and Day 2 and additional minor changes in between testers on Day 2.  We decided to measure the differences between the first 3 testers on Day 1 and the last 3 testers on Day 2, essentially representing our "before" and "after."  The results were significant.  After we made all the changes based on our initial user feedback, we reduced the average task completion time by 40%.
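
The "before vs. after" comparison itself is simple arithmetic: average the task completion times for the first three testers, average them for the last three, and compute the relative reduction. A tiny sketch with illustrative numbers only (our real per-task times aren't reproduced here):

    # Illustration of the "before vs. after" calculation with made-up numbers.
    before = [120, 95, 150]   # average seconds per task, first 3 testers (Day 1)
    after = [70, 60, 85]      # average seconds per task, last 3 testers (Day 2)

    avg_before = sum(before) / len(before)
    avg_after = sum(after) / len(after)
    reduction = (avg_before - avg_after) / avg_before
    print(f"Average task completion time reduced by {reduction:.0%}")  # ~41% here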

Here are a few of the specific lessons that we learned:

  • What testers said they did and what they actually did weren't always the same thing. We had three test observers - one person timed, one recorded, and one facilitated and observed. I was the facilitator/observer for all the tests. We asked people to talk out loud as they were trying to figure out what to do. At the end of the tests, we asked the users whether they used the icons or the words to guide them to the right content page. While everyone told us they used the words, not the pictures, I could tell that was absolutely not the case - virtually everyone looked at the pictures, and when we asked them directly whether the pictures related to the topic names, they all said yes - so they clearly noticed them! Take a look at this article by usability guru Jakob Nielsen if you want more justification for his "first rule of usability: don't listen to users": http://www.useit.com/alertbox/20010805.html
  • Only one of our testers used Search - everyone else used our "navigator."  Search is one of the SharePoint features that takes a while to get right. Every tester said that they don't use search in SharePoint because the results are either overwhelming or not what they expected. Even on our small site collection, we quickly realized that we needed to do some search "tweaking." Here are a few things we did (or will do) to reduce the number of irrelevant results (a rough sketch of the general idea follows this list):
    • Exclude all "reference" tables, libraries, and lists from search results. This very easy fix can dramatically improve search relevancy. For example, we learned that we needed to exclude the image libraries where site images were stored, as well as "lookup" lists.
    • Implement keyword best bets in the site collection so that the sites that we want users to find "bubble up" to the top of search results.
    • Demonstrate search to users.  Every tester we asked clearly used search as a primary information finding strategy on the public internet.  No one, however, indicated that it was a primary strategy in SharePoint.  We will definitely need to do some training and promotion so that users become confident that search will actually yield meaningful results.
  • Sometimes, we needed to add white space and larger fonts for content that we really wanted users to find. Perhaps the most unexpected observation was that not a single user found a web part containing both an image and a table of specific information when it sat in the upper right-hand corner of the page, surrounded by other web parts in the "out of the box" page layout. But almost every user saw it when, for Day 2, we added white space around the web part to separate it from other content. (We did this quickly by inserting a "blank" Content Editor Web Part with its chrome set to None on top of our "key" web part. We are still technically following the "required" site branding - we just made one of our web parts "stand out" from the crowd.)
  • Almost no testers noticed the out-of-the-box "I need to ..." web part in the upper right-hand corner of our page. While this might change as more sites adopt the feature, more testers found the information when we used a regular Links list to "expose" the contents of the list (rather than expecting the user to "open" the drop-down).
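
SharePoint manages search exclusions and keyword best bets through its own administration pages, so the snippet below is not SharePoint code. It is just a minimal, plain-Python illustration of what those two tweaks do to a result list; the source names, keywords, and URLs are invented.

    # Not SharePoint code: a plain-Python illustration of the two search tweaks
    # (excluding reference sources, pinning keyword "best bets"). All names are made up.
    EXCLUDED_SOURCES = {"Site Images", "Lookup Lists"}   # never show hits from these
    BEST_BETS = {
        "contracts": "http://portal/sites/ops/contracts",
        "quality": "http://portal/sites/ops/quality-assurance",
    }

    def tweak_results(query, raw_results):
        """raw_results is a list of (title, url, source) tuples from the search engine."""
        # Drop hits that come from reference libraries and lookup lists.
        results = [r for r in raw_results if r[2] not in EXCLUDED_SOURCES]
        # If the query matches a best-bet keyword, pin the matching page to the top.
        for keyword, url in BEST_BETS.items():
            if keyword in query.lower():
                results.insert(0, ("Best bet: " + keyword, url, "Best Bets"))
        return results

    sample = [
        ("logo.png", "http://portal/sites/ops/images/logo.png", "Site Images"),
        ("Contract tracker", "http://portal/sites/ops/contracts/tracker", "Contracts"),
    ]
    for title, url, source in tweak_results("active contracts with Partner ABC", sample):
        print(title, "->", url)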

Our full list of observations is longer and more specific to our site, but here's the important takeaway: even a small investment in usability testing can yield significant improvements to your site. For more information about usability testing, check out these other great references (and a gazillion more) from Jakob Nielsen's bi-weekly Alertbox:

  • Top 10 Application Design Mistakes
  • Why You Only Need to Test with 5 Users
  • How Users Read on the Web
