
D-Lib Magazine
July/August 2003

Volume 9 Number 7/8

ISSN 1082-9873

User Evaluation of the Montana Natural Resource Information System (NRIS)

In-Depth Evaluation of Digital Collections Using Snowball Sampling and Interviews

 

Elaine Peterson
Associate Professor/Catalog Librarian
Montana State University-Bozeman
<elainep@montana.edu>

Vicky York
Associate Professor/Distance Education Coordinator
Montana State University-Bozeman
<vyork@montana.edu>


Abstract

This article describes an evaluation technique used to assess the services of the Montana Natural Resource Information System (NRIS). With the goal of improving NRIS services, and after considering several evaluation options, the authors chose to conduct a user survey based on a combination of non-random stratified and snowball sampling in order to discern patterns of use. The authors conclude that this type of evaluation may be valid for other digital collections that serve as clearinghouses for specific types of digital information.

Introduction

The Montana Natural Resource Information System (NRIS) was created in 1985 as a division of the State Library of Montana. Currently, NRIS is an integral part of the state library's digital library initiative. Its role is to serve as a clearinghouse and access point for state natural resource information. There are three primary components in the NRIS Web site and digital collection:

  1. The Natural Heritage Program for information on Montana's biological diversity, emphasizing declining and vulnerable species and high quality natural habitats (rare and endangered plants and animals);
  2. The Water Information System for locating water resources information such as data on surface water, groundwater, water quality, riparian areas, water rights, climate data and more; and
  3. The Geographic Information System (GIS) for maps, map data, analytical services, and technical assistance for the growing number of users of computerized mapping programs.

In its unique position as a program of the state library, NRIS is a neutral, unbiased source for gathering, organizing, and disseminating natural resource information.

With the growth of GIS and digital technology, NRIS envisions itself as a "one stop shopping" access point for users wanting to map, manipulate, and interpret natural resource data. Currently, the NRIS Web site averages over 1,000,000 hits per month. In the first three months of 2003, the site averaged 1,972 visitor "sessions" per day, with the average session lasting 13 minutes. In addition, requests for mediated searches and staff assistance have remained steady; in the first three months of 2003, NRIS staff received nearly 550 requests for assistance [1].

In the summer of 2001, the authors and a Montana State University (MSU) department of political science colleague were asked by NRIS to undertake an evaluation of the NRIS Web site, its linked digital collections, interactive features and user services, in order to assist NRIS in its goal of developing a Web-based, user-friendly information distribution system accessible to a broad community of users.

Evaluation Techniques Considered

There is growing agreement within the digital library community that digital libraries and services need evaluation [2]. Web sites can easily be monitored by domain name, yielding both the number of hits to a site and users' originating affiliations [3]. However, many developers of digital collections want more information about the users of their collections than log files provide.
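Such log-based monitoring is straightforward to implement. As a minimal sketch (assuming a Web server access log in Common Log Format, not the actual NRIS configuration), the following Python fragment tallies hits by each client's top-level domain, a rough proxy for the user's originating affiliation:

    # Tally hits by each client's top-level domain from a Web server
    # access log in Common Log Format (the file name and layout are
    # illustrative, not a description of the NRIS server).
    from collections import Counter

    def affiliation(host):
        """Map a resolved host name to a coarse affiliation by its
        top-level domain (e.g., gov, edu, com)."""
        tld = host.rstrip(".").rsplit(".", 1)[-1].lower()
        return "unresolved-ip" if tld.isdigit() else tld

    hits = Counter()
    with open("access.log") as log:
        for line in log:
            if not line.strip():
                continue
            # The first whitespace-delimited field is the client host/IP.
            hits[affiliation(line.split()[0])] += 1

    for domain, count in hits.most_common():
        print(f"{domain}: {count} hits")

A tally of this kind shows how many visits came from .gov, .edu, or .com hosts, but it says nothing about why those users visited or whether they found what they needed.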

Embedded Web surveys are also a useful measurement tool, even though their participants are usually self-selected and thus do not represent all users. (Examples of such surveys permeate the Web, especially on commercial sites.)

Another viable option would be to conduct a very large, statistically valid sampling of thousands of users, not unlike the approach taken by the Nielsen Corporation for television viewers. Although desirable, such a large-scale approach is no doubt too expensive for most digital library developers to undertake.

Various researchers have discussed the idea of in-person sampling of individual users [4]. For example, Brown and Sellen employed this methodology to give a picture of general Web usage by a very small group of people, but not as an exploration of users' reactions to digital library collections on the Web.

The survey undertaken for the Montana Natural Resource Information System (NRIS) described in this article takes the in-person survey methodology and applies it to a particular Web-based digital collection. This type of user survey, based on non-random stratified and snowball sampling, might provide a first step for other digital library developers seeking an evaluation tool appropriate to their collections. The method offers a more in-depth look at users than merely counting them, and it also yields valuable qualitative information. In the context of evaluation theory, the NRIS survey was designed as user-centered rather than system-centered [5].

The Methodology

The purpose of the NRIS survey was to discern patterns of use and to collect qualitative statements regarding the use and improvement of the various NRIS components. The evaluation process began with a meeting between the authors and the NRIS staff. Discussion at the meeting centered mainly on the makeup of the user groups as well as what the questionnaire should evaluate. Five broad user groups were identified:

  1. Federal agencies,
  2. State agencies,
  3. Academic users,
  4. Local government agencies (including libraries), and
  5. Private/commercial users and non-profit groups.

For each of these groups, the evaluation team began to identify individual names of contacts. In addition to this initial list of names, further suggestions of possible survey respondents came from the technique of "snowballing" (users recommending others to survey) and from agency Web sites. Each appropriate person was then contacted to arrange an in-person interview. In many cases, up to an hour was spent interviewing the respondent. Interviews took place throughout the state. In a few cases, personal visits with particular respondents could not be arranged; in those instances, the survey was administered by telephone or email.

The survey administered to users of the NRIS site was based on a combination of non-random stratified and snowball sampling. (Snowball sampling has usually been employed to access hard-to-reach populations [6].) Stratifying the sample assured that members of each user group and their interests would be represented. The evaluation team used this sampling method in order to solicit a richness and depth from respondents not possible with traditional sampling and survey techniques. The purpose of the NRIS survey was to discern patterns of use and obtain qualitative statements from selected users, not to represent the global population of those who access the site. In all, nearly 50 people were interviewed, representing 37 organizations or agencies.
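For readers who want a concrete picture of this sampling procedure, the Python sketch below queues seed contacts for each of the five user groups and feeds referrals gathered during interviews back into the same group's queue. The group labels, contact names, and the ask_for_referrals() helper are hypothetical placeholders, not data from the NRIS study:

    # Sketch of stratified snowball sampling: seed contacts are queued
    # for each user group (stratum), and referrals gathered during an
    # interview are queued within the same stratum. All names and the
    # ask_for_referrals() helper are hypothetical placeholders.
    from collections import deque

    STRATA_SEEDS = {
        "federal": ["contact_f1", "contact_f2"],
        "state": ["contact_s1", "contact_s2"],
        "academic": ["contact_a1"],
        "local_government": ["contact_l1"],
        "private_nonprofit": ["contact_p1"],
    }

    def ask_for_referrals(contact):
        """Placeholder: in practice, referrals emerge from the interview."""
        return []

    def snowball(seeds, max_total=50):
        interviewed = {stratum: [] for stratum in seeds}
        seen, total = set(), 0
        # Round-robin across strata so every user group stays represented.
        queues = {stratum: deque(names) for stratum, names in seeds.items()}
        while total < max_total and any(queues.values()):
            for stratum, queue in queues.items():
                if not queue or total >= max_total:
                    continue
                contact = queue.popleft()
                if contact in seen:
                    continue
                seen.add(contact)
                interviewed[stratum].append(contact)  # conduct the interview
                total += 1
                queue.extend(ask_for_referrals(contact))  # the snowball step
        return interviewed

    print(snowball(STRATA_SEEDS))

The cap of 50 interviews mirrors the scale of the NRIS study; in practice the stopping rule and the strata themselves would be tuned to the collection being evaluated.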

The survey was designed to mirror the relevant layers of the NRIS site and included a mix of quantitative and qualitative questions. The first draft of the survey was pre-tested with several users, and after comments from NRIS staff, a final draft was printed (see Appendix). The consultants (the authors and their MSU colleague) administered the survey directly. Emphasis was placed on scoring the quantitative portions objectively, without leading respondents to preconceived answers. A script governing the use of examples and similar prompts was developed for several survey questions, and care was taken to conduct the interviews as uniformly as possible. Additional emphasis was placed on probing answers for the qualitative portions of the survey; three questions in particular requested an open-ended response (see Appendix).

Outcomes of the Survey

After an analysis of the completed survey, the evaluation team produced a written report, which was shared with the NRIS staff. For the questions using a 4- or 6-point scale, a Likert scale [7] was used to score the data. For all bar charts, an interpretive narrative was supplied. For the open-ended questions (e.g., "What would you like to see NRIS do to most improve the site or their service?"), the responses were directly compiled in an appendix to the written report.
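To make the scoring step concrete, the short Python sketch below scores individual Likert items, reverse-coding negatively phrased items so that higher scores consistently indicate a more favorable response (see note [7]). The responses shown are invented examples, not NRIS survey results:

    # Score 1..scale_max Likert responses, reverse-coding negatively
    # phrased items so that higher always means more favorable. The
    # responses below are invented examples, not NRIS survey data.

    def score(response, scale_max, negatively_phrased=False):
        if not 1 <= response <= scale_max:
            raise ValueError(f"response {response} outside 1..{scale_max}")
        return (scale_max + 1 - response) if negatively_phrased else response

    # One respondent's answers on a 4-point scale: (response, negatively phrased)
    answers = [(4, False), (3, False), (2, True)]
    scores = [score(r, 4, neg) for r, neg in answers]
    print(sum(scores) / len(scores))  # mean item score, here (4 + 3 + 3) / 3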

A discussion of the survey report between the evaluation team and NRIS staff provided several interesting insights. It was encouraging to staff that the areas of the Web site they were already developing, or knew had problems, were the same areas regular users of the NRIS site mentioned as needing improvement. In addition, the report provided some surprising and helpful information, such as respondents' prevalent use of raw data rather than data from an NRIS-supplied application. Many users were sophisticated, using their own applications; they wanted only accurate and timely data to plug into their models. Being able to trust NRIS to supply neutral and reliable data was a high priority for such users.

The appendices to the written report, which contained statements from surveyed users, were perhaps the most revealing outcome of the survey. Those surveyed were regular users of the site, and their statements were specific and insightful. Unlike a random, general survey of all users, these comments were easily understood, and NRIS could act on many of them.

Conclusions

Overall, there was a high rate of satisfaction with the NRIS Web site and related data. The majority of survey respondents agreed that the various NRIS collections saved them time and money. Interestingly, most of the substantive suggestions emerged as users described what they did with NRIS data and what they would like to do with it using their own applications. For example, most respondents indicated that they add to the basic NRIS data in various ways. In some cases they are collecting new data to add to the NRIS site; in other cases they are manipulating data already in NRIS to produce other data sets that could be useful to a larger audience. In other words, users are adding value to the collection. Another user-suggested improvement was to build more links from NRIS to external resources; users would like to see links to their agency or organization within the NRIS site.

A crucial question for undertaking any type of evaluation is the cost and the benefit of conducting it. Costs for this type of survey can be roughly estimated at one hour per user interviewed, not including travel time; for the nearly 50 respondents in this study, that amounts to roughly 50 interview hours. Apart from the benefit of user feedback about the Web site, another by-product turned out to be user education. With every contact made, positive steps were taken in a variety of ways:

  • Users realized that NRIS Web developers cared about them and about making their information gathering as efficient and productive as possible.
  • Users were asked about parts of the Web site of which they were previously unaware, thus providing a positive educational opportunity.
  • Users frequently commented that they knew they should be using a certain portion of the Web site, but that they had not taken time to explore on their own. Participating in the survey helped initiate such use.

The in-depth, person-to-person user survey may not be valid for the evaluation of all types of digital collections, but it does seem to have validity for those sites where the users may be part of the collection building process and where the overall goal is to serve as a comprehensive clearinghouse for a specific kind of information.

Notes and References

[1] Natural Resource Information System (NRIS). Report, January 25, 2002-April 1, 2003. Retrieved April 30, 2003 from <http://msl.state.mt.us/admin/Commission/April03/NRIS.pdf>.

[2] T. A. Peters, ed. "Assessing digital library services," Library Trends (Fall 2000).

[3] P. Shepherd, "Keeping count," Library Journal (1 February 2003), p. 46-48.

[4] B. Brown and A. Sellen, "Exploring users' experiences of the Web," First Monday (September 2001), <http://www.firstmonday.org/issues/issue6_9/brown/index.html>.

[5] T. Saracevic, "Digital library evaluation: toward an evolution of concepts," in Library Trends (Fall 2000), p. 363-364.

[6] R. Atkinson and J. Flint, "Accessing hidden and hard-to-reach populations: snowball research strategies," Social Research Update (Summer 2001). <http://www.soc.surrey.ac.uk/sru/SRU33.html>.

[7] "A widely used technique for scaling attitudes, Respondents are presented with a number of items, some positively phrased and some negatively phrased, which have been found to discriminate most clearly between extreme views on the subject of study." Concise Oxford Dictionary of Sociology, New York, 1994.


Copyright © Elaine Peterson and Vicky York

DOI: 10.1045/july2003-peterson