My experience with the usability evaluation process is limited to a single attempt in my Digital Tools, Trends, and Debates class, where we needed to conduct usability testing on the digital library we had created. To do this, my group conducted a pluralistic walkthrough. We created five questions that asked users to complete specific tasks related to our digital library, then handed them out to friends and family members. The downfall of our method was that each group member distributed only one survey, and only to friends and family. As a result, we lacked substantial feedback, and our audience was not a representative sample of our user base. Despite this, we did receive information about things we had overlooked, along with suggestions for improvement. This demonstrated to me that even limited feedback is still better than none at all.
Additionally, what I found interesting in this week's readings was the large array of methods available for evaluating a digital library. I have to admit that I am one of those people in the Reeves article who, when faced with an evaluation or usability test, say, “Well, let’s do a survey.” Even for my DL project in Digital Tools, Trends, and Debates, our group went straight to conducting a survey. This week I learned the importance of other forms of evaluation and how they may contribute more to the overall success of the evaluation process.