Our discussion leaders this week asked about options beyond precision and recall for measuring the effectiveness of a digital library. A digital library can be a more interactive experience than those simple measurements imply. The give and take of online research flows back and forth, with multiple questions asked and multiple answers provided, not necessarily in a straight, linear fashion.
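For reference, precision and recall themselves are simple set calculations. A minimal sketch, assuming hypothetical lists of retrieved and relevant document IDs for a single search:

```python
# Sketch: precision and recall for one query.
# The document IDs below are made up for illustration.
def precision_recall(retrieved, relevant):
    """Return (precision, recall) for one search result set."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant  # documents both retrieved and relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 4 documents returned, 5 truly relevant, 3 overlap.
p, r = precision_recall(["d1", "d2", "d3", "d4"],
                        ["d1", "d2", "d3", "d5", "d6"])
# p == 0.75, r == 0.6
```

Both numbers depend on knowing which documents are "relevant," which is exactly the judgment an interactive research session complicates.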
Chapter four of the Reeves reading opened with five areas of usability: 1) ease of learning, 2) high speed of user task performance, 3) low user error rate, 4) user satisfaction & 5) user retention. Without a system that is easy to understand and that provides a satisfactory experience, users will not stick around long enough for precision and recall to matter. One of the simplest ways to know whether a DL is performing well is to look at user retention: Is the number of users growing? Are they staying on the site longer? Which areas of the site are being used the most?
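The retention questions above could be answered with basic log analysis. A minimal sketch, assuming a hypothetical log format of (user_id, date, minutes_on_site) tuples:

```python
# Sketch: simple retention metrics from hypothetical session logs.
from collections import Counter

def retention_stats(sessions):
    """Return (returning_users, total_users, avg_minutes_per_session)."""
    visits = Counter(user for user, _date, _mins in sessions)
    returning = sum(1 for n in visits.values() if n > 1)  # users with 2+ visits
    avg_minutes = sum(mins for _user, _date, mins in sessions) / len(sessions)
    return returning, len(visits), avg_minutes

# Made-up example log: one returning user, one single visit.
logs = [
    ("alice", "2024-01-02", 12),
    ("bob",   "2024-01-02", 5),
    ("alice", "2024-01-09", 20),
]
returning, total, avg = retention_stats(logs)
# returning == 1, total == 2
```

Real analytics tools do this at scale, but the underlying counts are this straightforward.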
While measurements of user interaction can be drawn from data collected on the website, I think E&A for digital libraries needs to include user feedback in users' own words. This is not just about surveys but about observing users in real time. The discussions of usability labs and videotaping users as they searched the digital library were very interesting. This goes back to the idea of the flowing manner of online research. In real time, users can express why they are using the site in particular ways, whether the layout was intuitive, whether they needed services that did not exist, etc.
While I doubt a majority of libraries can afford to hire out a lab to test their digital libraries, I still believe this type of evaluation can be performed on a smaller scale. Even spending half an hour watching an individual interact with a site could provide useful feedback. It offers a chance for feedback with more depth than a plain survey alone can provide.