
D-Lib Magazine
September 2006

Volume 12 Number 9

ISSN 1082-9873

Computational Science Educational Reference Desk

A Digital Library for Students, Educators, and Scientists

 

Diana Tanase
Shodor Education Foundation
60 Forty Avenue
London, UK, HA9 8LQ
44-7960-221124
<diana@shodor.org>

David A. Joiner
Kean University
1000 Morris Avenue
Union, NJ, 07083
<djoiner@kean.edu>

Jonathan Stuart-Moore
Shodor Education Foundation
300 W. Morgan Street Suite 1150
Durham, NC, 27707
919-530-1911
<jwsm@shodor.org>


1. Introduction

The Computational Science Education Reference Desk (CSERD) is a portal to the National Science Digital Library (NSDL) that opens its virtual shelves to those interested in educational resources for computational science. If one wants to find trustworthy resources on the behavior of tsunami waves, on building computer models of bridges, on simulations of molecular interaction, or on many other related topics, the CSERD portal is designed to help in that search. And trust is a key issue. While the lack of quality control on the web is not unique to computational models, the many computational models on the web keep their assumptions, their logic, their very science hidden behind the graphical user interface, and users often have no accompanying text from which to judge whether they should believe the information they are receiving. CSERD addresses this issue through both professional and community reviewing built into the portal infrastructure [1].

Over recent decades, computational science educators, students, and scientists have, through disparate efforts, created numerical models and computational tools for many disciplines (e.g., science, mathematics, engineering). When a user approaches one of these models, determining whether the material is useful fundamentally breaks down into three questions: is the simulation well crafted (verification), is the science accurate (validation), and is the audience appropriate (accreditation)? Moreover, the unfiltered web being as broad and inclusive as it is, searching with standard search engines can fail to produce acceptable results. A Google search for "learn probabilities interactively" returns a list of 188,000 links, and sifting the genuinely useful resources out of that list is a daunting task.

CSERD addresses this problem by (a) starting with a smaller, targeted set of resources, (b) extending traditional keyword searching with a multi-level browsing option, and (c) augmenting item records with both professional and amateur reviews.

The browsing feature allows the user to build a smarter search by combining predefined keywords. Selecting "Browse: By Subject" on the home page presents a very straightforward way of creating a query by choosing a subject, a keyword, an audience, and an educational level. This multi-faceted browsing takes the user directly to the relevant resources, eliminating the "sort on your own" step.
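To make the idea concrete, a faceted browse of this kind can be thought of as assembling a catalog query from selections in fixed vocabularies. The sketch below is purely illustrative; the field names, values, and URL pattern are assumptions, not CSERD's actual schema.

    from urllib.parse import urlencode

    def build_browse_query(subject, keyword, audience, level):
        """Combine the four facet selections into a single catalog query string.

        Each argument comes from a controlled vocabulary presented on the
        browse page (e.g., subject "Mathematics", audience "Learners").
        """
        facets = {"subject": subject, "keyword": keyword,
                  "audience": audience, "level": level}
        # Drop any facet the user left unselected so it does not constrain the search.
        facets = {k: v for k, v in facets.items() if v}
        return "browse.php?" + urlencode(facets)  # hypothetical endpoint

    print(build_browse_query("Mathematics", "probability", "Learners", "Middle School"))
    # browse.php?subject=Mathematics&keyword=probability&audience=Learners&level=Middle+School

Because every facet maps to tagged metadata rather than free text, the resulting query returns only items already classified under those headings, which is what removes the manual sorting step.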

In addition to the above use cases, CSERD allows users to read and enter reviews of the indexed resources (learning objects). Beyond the reviewing mechanism, CSERD also encourages the submission of new resources, which the library then incorporates by annotating and tagging them (see Figure 1).

[Figure 1: Chart showing the interaction between the CSERD system and its primary types of users]

Figure 1 summarizes the interaction between the system and its primary types of users: anonymous user, reviewer, editor, and content contributor.

2. Assessment of Digital Library Objects

2.1 Assessment Steps

As items are submitted to CSERD, they are first tagged in extended Dublin Core using fields typical for educational objects. Additional assessment steps then augment this metadata in three mandatory stages, verification, validation, and accreditation (VV&A), carried out by experts in the corresponding domain as well as by students and teachers. Put simply, the VV&A process addresses questions such as: does the software run as advertised, is the science right, and what is the appropriate target audience?
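As a rough illustration of the tagging step, the record below sketches what an extended Dublin Core entry for a learning object might look like. The example values, the "cserd:" extension, and the exact field set are assumptions for illustration, not CSERD's actual schema.

    # Illustrative extended Dublin Core record for a submitted learning object.
    # "dc:" fields are standard Dublin Core elements; "dcterms:" fields are
    # common education-oriented refinements; "cserd:" fields are hypothetical.
    record = {
        "dc:title": "Tsunami Wave Simulator",
        "dc:creator": "Example Author",
        "dc:subject": ["Physics", "Computational Science"],
        "dc:description": "Interactive model of tsunami wave propagation.",
        "dc:type": "InteractiveResource",
        "dc:identifier": "http://example.org/models/tsunami",
        "dcterms:audience": "Learners",
        "dcterms:educationLevel": "High School",
        # Filled in as each VV&A review stage is completed and approved:
        "cserd:vva": {"verification": None, "validation": None, "accreditation": None},
    }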

To satisfy these functional requirements, the CSERD developers decided to build a system with a transparent web interface that brings together online resources, a searchable metadata catalog, and a web-based evaluation process. Two open source projects were joined and customized: a content management system (Plone [2]) and a metadata repository system (the Collection Workflow Integration System, or CWIS [3]). Plone was used to implement the VV&A tool, while CWIS became the metadata repository for CSERD.

Verification involves testing whether the model runs properly. Traditionally this includes an evaluation of the logic that went into the model, but since that logic is often hidden from the user, verification primarily involves bug testing on many platforms and in many browsers.

Validation is an evaluation of the validity of the science learned by the end user: essentially, is the science being presented correct, and what are its limitations? Many models are perfectly valid over a reasonable range of input but fail if unrealistic inputs are used.
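As a hypothetical example of the kind of input-range check a validation review might perform, the sketch below compares a simulation built on the small-angle pendulum approximation against the exact period. The pendulum model itself is invented for illustration; the point is that it is accurate for small swings and drifts badly at extreme amplitudes, exactly the limitation a validation review should document.

    import math
    from scipy.special import ellipk  # complete elliptic integral of the first kind

    G, L = 9.81, 1.0  # gravitational acceleration (m/s^2) and pendulum length (m)

    def period_small_angle():
        """Period under the small-angle approximation (independent of amplitude)."""
        return 2 * math.pi * math.sqrt(L / G)

    def period_exact(theta0):
        """Exact period for amplitude theta0 (radians), via the elliptic integral."""
        m = math.sin(theta0 / 2) ** 2
        return 4 * math.sqrt(L / G) * ellipk(m)

    for deg in (5, 30, 90, 170):
        theta0 = math.radians(deg)
        err = abs(period_small_angle() - period_exact(theta0)) / period_exact(theta0)
        print(f"amplitude {deg:3d} deg: relative error {err:.1%}")
    # Near 5 degrees the error is well under 1%; near 170 degrees the
    # approximation breaks down, so a review should state the valid input range.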

Accreditation is a measure of the educational usefulness of the learning object. Clearly, if the object has bugs or if the science is incorrect or inappropriate to the user's learning goals, the learning object will not help the user learn; but even a bug-free, scientifically accurate model can fail in education if it is not made accessible to users of the target grade level and made to fit in with the standard curriculum.

2.2 The VV&A tool

The VV&A tool is a storage space for reviews, as well as a management tool for the workflow of the reviews associated with each CSERD material. It has a logical, intuitive structure, with search and browse tools for exploring the different reviews, an easy way of submitting electronic reviews, and a simple way for editors to track them.

The VV&A tool allows users to submit each level of review individually, because the research scientists who develop and evaluate scientific models are not always best placed to determine whether a resource is properly correlated to national educational standards, while classroom teachers may not have enough content expertise to evaluate the specifics of a science module, particularly one based on recent discoveries. Each level of review allows either a free-form entry format (advanced review) or a bulleted list of target questions (guided review).
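A minimal sketch of how the two review formats might be represented in such a tool; the field names and the guided questions are assumptions for illustration, not the tool's actual schema.

    from dataclasses import dataclass, field
    from typing import Optional

    # Hypothetical guided-review prompts for the three assessment levels.
    GUIDED_QUESTIONS = {
        "verification": ["Does the model run without errors on common platforms and browsers?",
                         "Do all controls and inputs behave as documented?"],
        "validation": ["Is the underlying science presented correctly?",
                       "Over what range of inputs is the model valid?"],
        "accreditation": ["Is the resource appropriate for the target grade level?",
                          "Does it align with relevant curriculum standards?"],
    }

    @dataclass
    class Review:
        resource_id: str                 # catalog item id of the reviewed resource
        level: str                       # "verification", "validation", or "accreditation"
        reviewer: str
        free_text: Optional[str] = None  # advanced review: free-form entry
        answers: dict = field(default_factory=dict)  # guided review: question -> answer
        approved: bool = False           # set by CSERD staff before publication

Keeping each review tied to a single assessment level is what lets a scientist submit only a validation review while a teacher submits only an accreditation review of the same resource.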

All reviews are approved, and if necessary edited, by CSERD staff before being made visible online. For items that have been reviewed in all three assessment areas, CSERD staff create a summary review and rating, which are made accessible to other digital libraries through CSERD's metadata repository.

3. The metadata repository

One of the goals of the CSERD project is to assemble, organize, and share its collection of metadata. Among existing metadata management systems, a package that was found to meet our requirements while also integrating well with other systems is the Collection Workflow Integration System (CWIS). In addition to providing a customizable front end to the portal, this software generates Open Archives Initiative (OAI) files that can be harvested by NSDL. Customizing CWIS required importing existing metadata, changing its look and feel, and linking it to the VV&A tool.
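For context, records exposed this way follow the standard OAI-PMH protocol, so a downstream harvester such as NSDL's can collect them with an ordinary ListRecords request. The sketch below assumes a placeholder endpoint URL and fetches only the first page of results, ignoring resumption tokens.

    import urllib.request
    import xml.etree.ElementTree as ET

    # Placeholder endpoint; a real harvester would use CSERD's actual OAI base URL.
    BASE_URL = "http://example.org/cserd/oai"

    def harvest_titles(base_url=BASE_URL):
        """Fetch one page of Dublin Core records via OAI-PMH and yield their titles."""
        url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        # Dublin Core titles live in the http://purl.org/dc/elements/1.1/ namespace.
        for title in tree.iter("{http://purl.org/dc/elements/1.1/}title"):
            yield title.text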

Once the customized Plone and CWIS instances were deployed, the next step was to create a seamless communication channel between the two. Specifically, hyperlinks that include the catalog item id of a resource send the user back and forth between the two platforms as needed: to Plone for editing or submitting reviews, and to CWIS for browsing or searching the metadata.
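A minimal sketch of this kind of cross-linking, with hypothetical base URLs and path patterns; the essential point is that both directions carry the same catalog item id.

    # Hypothetical base URLs and paths; the real servers' URL patterns may differ.
    CWIS_BASE = "http://example.org/cserd"           # metadata catalog (CWIS)
    PLONE_BASE = "http://example.org/cserd-reviews"  # VV&A tool (Plone)

    def link_to_reviews(item_id: int) -> str:
        """From a CWIS record page, link into the Plone VV&A tool for the same item."""
        return f"{PLONE_BASE}/reviews?itemid={item_id}"

    def link_to_record(item_id: int) -> str:
        """From a Plone review page, link back to the CWIS catalog record."""
        return f"{CWIS_BASE}/viewrecord.php?id={item_id}"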

The integration between the two systems was finalized by creating a unique look and feel. This was possible because both packages support a set of page templates, called "faces" or "skins," that determine exactly how a document is displayed to the user, including the images and styles surrounding the content. After a common interface was developed for the entire project, it was translated into a CWIS face and a Plone skin. With the skin and face in place, the user experiences one consistent, cohesive system. Figure 2 presents the main components of the system. Note that the metadata flowing into CSERD originates both from internal Shodor repositories and from external ones. The metadata editor interface is currently under development and will be integrated with CSERD later.

[Figure 2: Chart showing the components of the CSERD architecture]

Figure 2 describes the components of the CSERD architecture.

4. Community Involvement

Not only reviewing the materials in the library but also determining how providing such reviews affects teachers' actual use of the items requires substantial community involvement.

4.1 Student participation in review creation

Because the bulk of the material in CSERD is geared toward the undergraduate level or lower, reviewing the validity of material need not be performed only by content professionals, provided qualified students with appropriate supervision can be found to manage the task. At Kean University, students of the Center for Science, Technology, and Mathematics Education (CSTME) are performing VV&A reviews of learning objects under the direction of faculty. Students at the CSTME are enrolled in a content-rich, five-year combined bachelor's/master's degree program in science or math education. Validity testing can be an excellent student project, with a typical validation review requiring four hours of student time and less than an hour of faculty time, and this could be extended to student projects for science majors in other programs. Accreditation testing by education students typically requires an additional four hours and can be a good way of introducing students to their state standards as well as national standards in their field. Verification testing of most online models requires even less expertise and is often performed, using a guided checklist, by a large number of high school students at the Shodor Education Foundation, Inc.

4.2 Teacher participation in portal evaluation

As part of our effort to include teacher feedback in CSERD evaluation, several workshops were conducted during Summer 2006 to introduce teachers to the site. In particular, teachers were asked to help with the accreditation and tagging process. Some of the workshops required participants to present a lesson incorporating computational tools at the NCSTA. The first workshop was held June 19-20 with the North Carolina Department of Public Instruction (DPI) and the North Carolina School of Science and Mathematics. Future workshops are planned with the New Schools of North Carolina, the Asia Society's International Studies Schools Network (ISSN) Summer Institute, the Kenan Institute and Kenan Fellows, and the UNC School of Education.

5. Conclusions

CSERD continues to strive to provide high-quality resources for teaching science and math with computing, as well as for teaching computational science itself. The tools are in place to make getting information to and from math and science educators easier, but much work remains in continually checking the material being created on the web to make sure it is ready for classroom use. Our initial efforts show that partnering students with faculty on review projects can generate high-quality reviews while demanding minimal faculty time, and it gives undergraduates, both science students and education students, excellent experience in doing and writing about math and science. This suggests an opportunity to spread the practice beyond our initial tests, and we welcome the opportunity to work with other faculty in publishing their students' reviews on CSERD.

We also hope to expand our outreach to classroom teachers and gather their feedback on CSERD's resources. We hope that teachers at all levels who are looking for interactive exercises that bring not just technology but modern scientific computing into their classrooms will browse the site and join the community of CSERD users and reviewers.

Notes

[1] Joiner, D., Gordon, S., Lathrop, S., McClelland, M., and Stevenson, D. E. "Applying verification, validation, and accreditation processes to digital libraries." In Proceedings of the 5th ACM/IEEE-CS Joint Conference on Digital Libraries, Denver, CO, 2005.

[2] McKay, A. The Definitive Guide to Plone. Apress, Berkeley, CA. 2004.

[3] The Internet Scout Project. Collection Workflow Integration System webpage. <http://scout.wisc.edu/Projects/CWIS/>.

Copyright © 2006 Diana Tanase, David A. Joiner, and Jonathan Stuart-Moore

doi:10.1045/september2006-tanase