Feedback as part of evaluation

Our discussion leaders this week asked about options beyond precision and recall for measuring the effectiveness of a digital library. A digital library can be a far more interactive experience than those two measurements imply. The give and take of online research flows back and forth, with multiple questions asked and multiple answers provided, not necessarily in a straight, linear fashion.
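For readers less familiar with those two baseline measures, here is a minimal sketch of how precision and recall are computed for a single query. The document sets are hypothetical examples I made up for illustration, not real search data:

```python
# Minimal sketch: precision and recall for one search query.
# "retrieved" and "relevant" are hypothetical document ID sets.
def precision_recall(retrieved, relevant):
    hits = retrieved & relevant  # documents both retrieved and relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = {"doc1", "doc2", "doc3", "doc4"}  # what the search returned
relevant = {"doc2", "doc3", "doc7"}           # what the user actually needed
print(precision_recall(retrieved, relevant))  # (0.5, 0.666...)
```

Notice that both numbers treat the search as a single, one-shot transaction, which is exactly why they say so little about the back-and-forth research process described above.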

Chapter four of the Reeves reading opened with five areas of usability: 1) ease of learning, 2) high speed of user task performance, 3) low user error rate, 4) user satisfaction, and 5) user retention. Without a system that is easy to understand and that provides a satisfactory experience, users will not stick around long enough for precision and recall to matter. One of the simplest ways to know whether a DL is performing well is to look at user retention: Is the number of users growing? Are they staying on the site longer? Which areas of the site are being used the most?
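As a rough illustration of how those retention questions could be answered from ordinary web analytics data, here is a hypothetical sketch. The page-view record format and the sample data are assumptions for the example, not any particular analytics platform's output:

```python
# Hypothetical sketch: simple retention signals from page-view records.
# Each record is (user_id, page, ISO timestamp) -- an assumed format.
from collections import defaultdict
from datetime import datetime

views = [
    ("u1", "/search",  "2013-03-01T10:00:00"),
    ("u1", "/item/42", "2013-03-01T10:05:00"),
    ("u2", "/search",  "2013-03-02T09:00:00"),
    ("u1", "/search",  "2013-03-08T11:00:00"),
]

visit_days = defaultdict(set)  # distinct days each user visited
page_hits = defaultdict(int)   # which areas of the site are used most
for user, page, ts in views:
    visit_days[user].add(datetime.fromisoformat(ts).date())
    page_hits[page] += 1

# A user who shows up on more than one day is "returning".
returning = sum(1 for days in visit_days.values() if len(days) > 1)
print(f"returning users: {returning} of {len(visit_days)}")
print("most-used pages:", sorted(page_hits.items(), key=lambda kv: -kv[1]))
```

Counts like these are easy to collect, which is part of their appeal; the limits of what they can tell us are taken up below.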

While measurements of user interaction can be derived from data collected on the website, I think E&A for digital libraries needs to include user feedback in users' own words. This is not just about surveys but about observing users in real time. The discussions of usability labs and videotaping users as they searched the digital library were very interesting. This goes back to the idea of the flowing nature of online research. In real time, users can express why they are using the site in particular ways, whether the layout was intuitive, whether they needed services that did not exist, etc.

While I doubt a majority of libraries can afford to hire a lab to test their digital libraries, I still believe this type of evaluation can be performed on a smaller scale. Even spending half an hour watching an individual interact with a site could provide useful feedback, with more depth than a plain survey can always offer.


6 responses to “Feedback as part of evaluation”

  1. You bring up an excellent point about the affordability of testing – it can often be out of reach of a small or even medium-sized library. And the data collection is the easy part; the hard part is analyzing the data. So I was thinking that it might be easier and cheaper to use evaluation tools created for businesses and creatively adapt them to the digital library’s needs. I came across an old article on Mashable that listed just such tools:
    http://mashable.com/2011/09/30/website-usability-tools/

    While admittedly these may not be able to give a full E&A analysis, they might be a good place to start.

  2. I like your premise about the non-linearity of information seeking in DLs. The measurement of user retention is also helpful, and should be more feasible than in traditional libraries, as you suggested.

  3. The testing you outlined here really isn’t difficult to replicate. When ALA redid its information architecture, we had very little budget. So our IT staff asked each division to recruit 3-5 members during Annual Conference to do exactly what you describe — they were given a URL and a sheet of tasks, then everyone was recorded using video cameras as they worked on the sheet of tasks. I would think, in the age of Jing and smartphones, this would be even easier to implement and create some useful feedback for others.

  4. I also found the discussions of usability labs and user observation very interesting. To me, it sounds like a much richer way to assess functionality of design and user experience than any survey OR more quantitative site stat/system log analysis method. I can imagine many cases where the site stats or system logs miss important information. So many issues of usability – and especially those user experience aspects we discussed in the previous unit – are hard to analyze quantitatively. Even quantitative-seeming criteria can be skewed by a numbers-only approach. Maybe the reason a certain page is getting more traffic than others is simply that its button or link sits at the top of the menu or is in an easier-to-read font, etc.

  5. Excellent post, and rich comments from all. I greatly appreciate the call for more qualitative testing mechanisms, as well as the underscoring of the importance of the user perspective. I also appreciate Stevie and Tammy’s demonstration of how these things can be done without a massive budgetary outlay. Great!
