


D-Lib Magazine
December 2002

Volume 8 Number 12

ISSN 1082-9873

Who is reading on-line education journals? Why? And what are they reading?


Lawrence M. Rudner
ERIC/University of Maryland

Marie Miller-Whitehead

Jennifer S. Gellmann
ERIC/University of Maryland



One thoughtful examination of the literature estimates that a typical article published in a scientific journal in the U.S. is read about 900 times [Tenopir, 2000]. In contrast, some of the electronic journals in education appear to be having a far greater impact. It is not uncommon for an article in Education Policy Analysis Archives (EPAA) to be read more than 10,000 times; several articles have been viewed more than 30,000 times. The 100 articles in Practical Assessment, Research & Evaluation (PARE), a more specialized electronic journal, had averaged more than 7,000 views per article as of February 2002. In September 2002, PARE readership reached the one million mark.

This large difference between print and e-journals indicates that the readership of electronic journals is clearly not the same as the readership of traditional print journals. The questions addressed by this article, then, are the three posed in the title: Who is reading selected electronic education journals? Why? And what are they reading? The answers to these questions should prove useful to authors as well as future editors of on-line journals, as both authors and editors usually want to be responsive to the needs and desires of their readers.

The present study sought to answer these questions by compiling and analyzing the following data sources: (a) the results of an on-line survey of readers of the journals, (b) access statistics for the two journals, and (c) a content analysis of the most popular articles of each of the journals.

Related Literature

Much of the research into the use of electronic scholarly journals has examined the publication process and various technological aspects (for example, [Edwards, 1997; Peters, 2000]). There have been fewer studies of the actual users of electronic journals.

One major study [Eason et al., 2000] examined user logs to determine readership and use. Based on the frequency, breadth, and depth of visits to particular journals and to specific articles, Eason and his colleagues defined a variety of user categories:

  • "Enthusiastic users", who viewed many journals and articles. This group was small (0.9%), composed primarily of social scientists and post-graduate students.
  • "Focused regular users" (4.9%), who viewed few journal titles, but accessed them frequently. This group consisted mostly of research scientists in the "hard" sciences and post-graduate students.
  • "Specialized, occasional users" (11.6%), who infrequently accessed a few specific journal titles. They were divided between scientists and social scientists, and were again typically post-graduates or academicians.
  • "Restricted users" (23.1%), who were similar to the Specialized, Occasional users, but accessed journals even less frequently. They were primarily biological scientists.
  • "Lost", "Exploratory", "Tourists" or "Searchers", who were non-repeat users. The users grouped under these terms either used the system only once, or simply registered for the SuperJournal project but did not return to explore the system again. They crossed all academic disciplines. Some liked the service but did not have time to use it thoroughly, while others simply did not like the project.

One of the major findings of this study was that enthusiastic users represented a very small percentage of total users. Most were "Restricted users" or "Tourists".

Another major user study [Liew, et al., 2000] presented an in-depth survey to a purposive sample of 83 graduate students to study their use and perceptions of e-journals. The results of the study revealed that a vast majority of graduate students (73%) preferred e-journals over print journals. Commonly cited reasons were links to additional resources, searching capability, currency, availability, and access ease.

Citation analysis is commonly used to evaluate the impact of an article, author, or journal. Examining citation data for 39 scholarly, peer-reviewed e-journals, Harter [Harter, 1996] found that the great majority had essentially no impact on scholarly communication in their respective fields. Few articles were cited, and those that were cited were not cited frequently. Citation analysis, however, may not be fully appropriate here. The citation rate in education is very low. Rudner, Burke, and Rudner [Rudner et al., 2001] found that articles in four educational measurement journals were cited an average of only 1.2 to 2.5 times within three years by authors of other articles in the same journal. Further, the two journals under investigation here, Education Policy Analysis Archives (EPAA) and Practical Assessment, Research & Evaluation (PARE), are relatively new and are not fully included in citation databases.

An alternative to citation analysis is link analysis. As described by Ng, Zheng, and Jordan [Ng et al., 2001], link analysis has been successfully applied to infer "authoritative" web pages. In the academic citation setting, it can also be used to identify "influential" articles; for e-journals, counts of inbound links may be more revealing than citation counts. As of March 2002, there were 2,360 reported links to EPAA and 686 links to PARE. These values are quite respectable for academic journals.
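The family of algorithms Ng, Zheng, and Jordan analyze (HITS, PageRank) infers authority from link structure: a page linked to by many pages, especially by other well-linked pages, is scored as more "authoritative." A minimal PageRank-style sketch of that idea follows; the toy graph, page names, and damping factor are illustrative assumptions, not data from the article.

```python
# Minimal PageRank-style power iteration over a toy link graph.
# Graph, names, and parameters are hypothetical, for illustration only.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new[target] += share
        rank = new
    return rank

# A site linked to by many others ends up with the highest score.
toy_graph = {
    "epaa": ["pare"],
    "pare": ["epaa"],
    "blog1": ["epaa"],
    "blog2": ["epaa"],
}
ranks = pagerank(toy_graph)
print(max(ranks, key=ranks.get))  # "epaa" scores as most authoritative
```

In this sketch the rank vector converges to a fixed point where the heavily linked-to page dominates, which is the sense in which inbound-link counts can stand in for citations.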

Logically, it follows that the more links there are to a journal or a journal article, the more likely it is that a potential reader will find it on the web and access it. The potential effect on readership is similar to that of making a print journal available in 10,000 libraries rather than in only 10. The "link effect" may, in fact, help to offset the rather poor search strategies of many of ERIC's patrons, which can depress the number of times a journal article is accessed. Hertzberg and Rudner [Hertzberg & Rudner, 1999] and Rudner [Rudner, 2000a] examined the quality of on-line ERIC database searches and found that 95% of search strategies were relatively unsophisticated (or even "horrible"), and that even the most diligent searchers examined only about five or six of the citations returned by a query. The results of their survey of all ERIC users indicated that about half used the resource for report preparation.


Methodology
To study the readership of two on-line education journals, we used two methods: 1) a brief readership questionnaire, and 2) an in-depth content analysis of the more popular articles in EPAA and PARE, including an analysis of key words and retrieval counts.

The readership questionnaire built on a brief questionnaire used by Rudner [Rudner, 2000a]. For a few days, a short questionnaire popped up in a small window the first time a user accessed the home page of either EPAA or PARE. Users could readily see that they were being asked only two short questions—one about their position and another concerning the purpose of their visit. To minimize the obnoxiousness factor, a cookie was left with the user's browser, regardless of whether he or she responded, to prevent the survey from being launched a second time from that computer. In the past, we have had extremely high response rates (> 80%) using this technique.
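The one-time pop-up mechanism described above can be sketched as request-handling logic: show the survey only when no marker cookie is present, and set the marker regardless of whether the visitor responds. This is a minimal stdlib sketch under assumptions of our own; the cookie name and handler shape are hypothetical, not the journals' actual code.

```python
# Sketch of one-time survey throttling via a browser cookie.
# Cookie name and function shape are illustrative assumptions.
from http.cookies import SimpleCookie

SURVEY_COOKIE = "pare_survey_seen"  # hypothetical cookie name

def handle_home_page(cookie_header: str) -> tuple[bool, str]:
    """Return (show_survey, set_cookie_header) for one page request."""
    cookies = SimpleCookie(cookie_header or "")
    if SURVEY_COOKIE in cookies:
        return False, ""  # this browser was already prompted: skip survey
    out = SimpleCookie()
    out[SURVEY_COOKIE] = "1"
    out[SURVEY_COOKIE]["max-age"] = 60 * 60 * 24 * 365  # remember ~1 year
    # The cookie is set whether or not the user answers, so the survey
    # never launches a second time from the same browser.
    return True, out.output(header="").strip()

show, set_cookie = handle_home_page("")
print(show)  # True on a first visit
show_again, _ = handle_home_page(set_cookie.split(";")[0])
print(show_again)  # False once the cookie is present
```

Because the marker is per-browser rather than per-person, the technique slightly undercounts multi-machine users, but it keeps the "obnoxiousness factor" low, which is what the high response rate depends on.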

The first author of this article conducted a content analysis of the publications to identify underlying constructs of the most-often-accessed articles of each journal. The research question of interest was, "What were common themes, topics, methodologies, or perspectives of the most-often-accessed PARE and EPAA articles?" The number of times the articles were accessed from the journal web site was taken as an indicator of readership. The content analysis also considered the following features:

  • ERIC descriptors assigned to each article (when available)
  • article titles
  • words used frequently by the authors within the full-text version of the articles.
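The third feature, words used frequently within the full text, amounts to a word-frequency count per article. A minimal sketch of such a count follows; the tokenization, stop-word list, and sample text are illustrative assumptions, not the authors' actual procedure.

```python
# Simple word-frequency count over an article's full text.
# Stop words and tokenization here are illustrative choices.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "that", "for"}

def word_frequencies(full_text: str, top_n: int = 10):
    """Return the top_n (word, count) pairs, ignoring stop words."""
    words = re.findall(r"[a-z]+", full_text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

sample = ("Scoring rubrics describe what is expected. A rubric gives the "
          "teacher criteria for scoring student work against standards.")
print(word_frequencies(sample, top_n=3))  # ("scoring", 2) ranks first here
```

Applied across articles, such counts make it possible to compare, say, the prominence of "rubric," "standard," and "criteria" in most- and least-accessed titles.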

Using a count of the number of times an article has been accessed electronically as an outcome variable has several limitations, not least of which is that for each time an article is accessed or downloaded electronically, it may be printed, disseminated, and read numerous times, almost ad infinitum. However, the use of access statistics is a commonly accepted method of analyzing web site traffic and computer usage and so was included as an outcome variable or category of interest in this analysis, under the assumption that over a given period of time, the articles that are accessed most often are most reflective of the interests of a journal's primary readership.

A preliminary step in conducting a content analysis is to become familiar with the data, in this case the journal articles. The next step is to identify and code "independent, mutually exclusive, and exhaustive" categories [Stemler, 2001]. For this article, the analysis sought to identify themes, methodological approaches, policy perspectives, or topics in the most-often-accessed PARE and EPAA articles.

In conducting a content analysis, the categories of interest may be specified a priori, such as those provided by the Thesaurus of ERIC Descriptors, or they may be identified by the researcher as categories specific to the research questions and texts of interest in the analysis. The current analysis of recent PARE and EPAA articles included ERIC descriptor categories as well as researcher-defined categories that emerged during the course of textual analysis and that were identified by (a) close readings and (b) computer-assisted parsing of the articles. Although ERIC descriptors provided useful information in a content analysis, they did not provide the same information as a formal textual analysis. Authors may not define their discussions in terms of ERIC descriptors; thus the connotative implications of word choice offer insight into an author's perspective on issues. According to Hertzberg and Rudner [Hertzberg & Rudner, 1999], overlap in descriptors often presents a challenge in the "social sciences in general as terms are less well defined, more fluid and less strictly hierarchical than in the physical sciences." Because article titles and words and phrases from the ERIC abstracts provide on-line database users with information to assist them in selecting those articles most relevant to their criteria (and serve as motivators to access the articles), these data were also analyzed. In fact, journal titles and ERIC descriptors, identifiers, and abstracts provide a potential journal reader with a "first impression" that results in either motivating him or her to read the article or to continue the search for more relevant sources.

The present study analyzed articles published on-line since September of 1999 because this was the date the e-journal PARE came on-line. This decision was based partly on correcting for "time in publication" because EPAA full-text articles have been available on-line since 1993. The outcome variable that emerged from this process was defined as the average number of accesses per month of PARE and EPAA articles published from September 1999 through December 2001, thus providing data for articles available electronically for a period of time ranging from several months to two years. While we recognize that readership often peaks shortly after publication, we felt this was a convenient and useful metric for this study.
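The outcome variable just defined is simple arithmetic: total accesses divided by months since publication. A small sketch, with illustrative numbers rather than actual journal data:

```python
# The outcome metric described above: average accesses per month
# since publication. Dates and counts below are illustrative.
from datetime import date

def avg_accesses_per_month(total_accesses: int,
                           published: date, as_of: date) -> float:
    months = (as_of.year - published.year) * 12 + (as_of.month - published.month)
    return total_accesses / max(months, 1)  # guard against zero months

# A hypothetical article accessed 21,000 times over 28 months.
print(avg_accesses_per_month(21_000, date(1999, 9, 1), date(2002, 1, 1)))  # 750.0
```

Dividing by time in publication is what makes a 2001 article comparable to a 1999 one, at the cost noted above of flattening the post-publication readership peak.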

The following section presents results based on both highest number of accesses since publication as well as the highest average accesses per month. A little more than half of the most-accessed articles also had the highest number of accesses per month since publication. With 261 titles published in EPAA and 100 in PARE, the complete data set consisted of 361 titles.



Results
The first question on the pop-up survey asked the users to identify their primary role. As shown in Table 1, there was a great deal of commonality across journals and across time. Graduate students comprised the largest group of users, followed by teachers, researchers, and undergraduate students.

Table 1. Self-reported Primary Roles of PARE and EPAA Visitors

[Table values not recovered. Columns: February 2002 and November 2000. Role categories: College Professor; K-12 Teacher/Admin; K-12 Teacher; K-12 Librarian; K-12 Staff; K-12 Administrator; College Students; Undergraduate Student; Graduate Student; College Librarian.]
The researcher category is somewhat anomalous because there are simply not that many non-academic positions in education bearing the title "Researcher." Perhaps this was a bias introduced by the fact that it was the first response option.

There are some 3,000,000 K-12 teachers [McDowell, 2001], 986,000 undergraduate students in colleges of education [USDE, 2000], 604,000 graduate students in colleges of education [USDE, 2000], and 89,000 professors in colleges of education [Market Data Retrieval, 2002]. If readership were proportionate to audience type, one would expect approximately 2% of the readers of electronic journals to be college professors, 64% to be teachers, 21% to be undergraduate students, and 13% to be graduate students. Thus, electronic journals appear to be disproportionately attractive to graduate students and college professors. Based on the numbers, teachers would appear to be under-represented. However, the journal reading behavior of teachers in the classroom cannot be expected to be the same as the reading behavior of professors and students. Thus, the fact that approximately 20% of readers self-identify as K-12 teachers and administrators may be viewed as a relatively high percentage. In terms of the Eason, Richardson, and Yu [Eason et al., 2000] study, it appears that PARE and EPAA readers can be categorized as "Enthusiastic users", "Focused regular users", or "Specialized, occasional users".


The second survey question asked the users to identify the purpose of their visit. As shown in Table 2, there was again a great deal of commonality across journals. The two most common reasons for reading PARE were research report preparation and class assignment. Given the scopes of the two journals, it is not surprising that relatively more EPAA readers are looking for information to inform policy and relatively more PARE readers are looking for teaching resources. It is encouraging to note that approximately 18% of the readers are visiting the sites for personal or professional interest.

Table 2. Self-reported Reasons for Visiting the PARE and EPAA Web Sites[1]

[Table values not recovered. Reason categories: Background for policy; Class assignment; Research report; Professional interest; Personal interest; Find teaching resources.]

PARE and EPAA require that authors adhere to various technical specifications, including article length. PARE articles in the content analysis ranged in length from approximately 1,700 to 4,100 words and averaged 2,500 words. In comparison, the most-accessed EPAA title was nearly 6,500 words long and several were more than 9,000 words in length. Therefore, the journals were considered separately for several of the following analyses.

In studying the PARE titles most often accessed, several dominant themes emerged. First, a "rubric-standard-criteria" triad dominated the word count of articles that were most often accessed. Second, although the categories were not mutually exclusive, the most often accessed titles were discussions of teachers/teaching, scores/scoring, grades/grading, students, evaluations, and assessment. In a sample of 18 PARE articles selected from those most and least accessed since publication, the following word counts were noted:

Table 3. Word Counts of PARE Sample

N = 18

[Table values not recovered. Word groups counted: Assess, Assessment(s); Evaluate, Evaluation(s); Grade(s), Grading.]

Table 4. Comparison of Average Frequency of Word Usage in PARE Articles

[Table values not recovered. Rows (Word or Phrase): Assess, Assessment; Evaluate, Evaluation.]

Although they were not included in Table 4, there was no meaningful difference between most- and least-accessed articles in the average number of uses of the words/phrases "test/tests/testing" and "score/scores/scoring."

There was a positive correlation between the average number of times a title was accessed each month and the number of times "rubric(s)," "standard(s)," or "criteria" appeared in the article (r = .67, p < .01, n = 18). Table 5 provides the bivariate correlations of article retrievals and frequently-used words in PARE's most- and least-often-accessed titles.

"Evaluation Methods" and "Student Evaluation" were the most frequent ERIC descriptors of the PARE titles that had been retrieved the most times on average since publication, followed by "Elementary Secondary Education." ERIC descriptors for titles accessed least often included "Adaptive Testing," "Computer Assisted Testing," "Multiple Regression," "Item Response Theory," "Item Banking," "Difficulty Level," and "Limited English Proficient." ERIC descriptors that appeared in both most-accessed and least-accessed titles were "Test Construction" and "Test Scores." "Assessment(s)" was a title word in 6 of 13 of the most-accessed PARE articles. Words that appeared in more than one of the most-accessed PARE titles included "classroom," "evaluate," "implement," "portfolio," "rubrics," "scoring," "teachers," "tests," and "when." The average number of words in the titles of most-accessed articles was 5.5.

Table 5. Correlations Between PARE Word Counts and Retrievals

N = 18

[Correlation values not recovered.]

Note. ** p < .01, * p < .05. The sample included most- and least-accessed articles to minimize restriction of range in the correlation.


PARE articles addressing issues about teachers and students in the classroom were accessed more often than articles focusing on statistical procedures and measurement, i.e., psychometrics. As noted by the author of another widely read PARE title on assessment fundamentals, "It is important to understand the difference between measurement evidence (differentiating degrees of a trait by description or by assigning scores) and evaluation (interpretation of the description or scores)" [McMillan, 2000]. There are, after all, more K-12 classroom teachers, administrators, and students of education who are concerned about teaching to criteria, standards, and rubrics than there are statisticians and measurement professionals who hold the same concerns. With one exception, the PARE articles did not emphasize educational policy or politics; however, one of the most often accessed titles addressed accountability, content standards, reform, and policymaking relative to high-stakes testing and the assessment of all students [Linn, 2001]. This article offered recommendations for safeguards in the system and concluded that the unintended negative effects of high-stakes accountability uses often outweigh the intended positive uses. The number of times this article has been accessed provides compelling testimony to educators' concern about student achievement and high-stakes testing in an environment of increasing federal, state, and district accountability.

Table 5 provides simple correlations of frequently used words in PARE articles. These words were taken out of context; a thorough analysis should give some consideration to the context within which they were most often used. "Characteristics of performance standards" and "strengths and weaknesses of content standards" [Linn, 2001] provide richer contextual clues to the various meanings of standards than is possible with a single identifier. "Rubrics are descriptive scoring schemes," "scoring rubrics describe what is expected" [Moskal, 2000], and "a rubric is a rating system by which teachers can determine at what level of proficiency a student is able to perform a task or display knowledge of a concept," [Brualdi, 1998] are more explanatory of the authors' intention when using the term "rubrics" than can be indicated by simply counting the number of times these words appeared in PARE articles. Another PARE author restricted a discussion on "assessment" to "authentic assessment" and then contextualized the somewhat ambiguous phrase, "authentic assessments require students to be effective performers with acquired knowledge" [Wiggins, 1990]. Although the occurrence of the words "test(s), testing" did not differentiate between articles that were or were not likely to be accessed, an article on criterion and norm-referenced testing that concluded, "As long as the content of the test matches the content that is considered important to learn, the CRT gives the student, the teacher, and the parent more information about how much of the valued content has been learned than an NRT" [Bond, 1996] had been accessed more than 21,000 times.

Portfolio assessment was another topic popular with PARE readers. Portfolios may assess teacher work or student work: thus, "a teacher portfolio is designed to demonstrate ...talents...knowledge and skill in teaching" [Doolittle, 1994] or a teacher may encourage "students to organize their work and compare various items within their portfolios using rubrics ... checklists, and award stickers." [Forgette-Giroux & Simon, 2000]. Another popular PARE article explored teacher motivation from the perspective of two behavioral psychology models widely used in organizational management [Gawel, 1997].

The EPAA article most frequently accessed since the journal's inception in 1993 was published in March 1999. As of March 2002, this one article had been accessed more than 52,000 times and had elicited several published responses on EPAA that had together been accessed a total of nearly 20,000 times. These three titles and responses were detailed examinations of home-schooling, student achievement, and the interaction effect between home-schooling, church affiliation, and student achievement on standardized tests [Arai, 1999; Rudner, 1999; Welner & Welner, 1999].

For titles in publication since September 1999, the most-often-accessed EPAA title was about teacher quality and student achievement [Darling-Hammond, 2000]. That article, prepared by a widely recognized leader in the field and addressing a hot topic, concludes "that improving the quality of teachers in the classroom will do more for students than other strategies designed to raise student achievement."

ERIC descriptors of most-often-accessed PARE and EPAA articles were equally likely to include:

  • "Student Achievement,"
  • "Elementary Secondary Education,"
  • "Standardized Tests" and
  • "Accountability"

while most-accessed EPAA titles were more likely to have ERIC descriptors such as:

  • "State Programs,"
  • "Politics of Education," and
  • "Educational Policy"

and to include discussions of education equity litigation at the state level. While PARE articles were more likely to provide guidelines and standards for the use of evaluation methodologies or for conducting various types of assessments, EPAA articles were more likely to present evidence (including case studies) of the efficacy of state testing and accountability programs on academic standards or of state policies on student and teacher testing on academic standards. Most-read EPAA articles were more likely to address such policy-related issues as the effect of block scheduling on student achievement, disparities by ethnicity and poverty in access to technology and the use of technology in the classroom, and the necessity for designing curricula specifically for the technology of the Internet. Interestingly enough, although articles about statistics were less often accessed than articles on other topics, the most popular EPAA articles [Darling-Hammond, 2000; Rudner, 1999] made very effective use of statistics and included statistical analysis summary tables and graphs, as well as an explanation about the choice of statistical methods used for the analyses.


Discussion
A short readership survey, combined with a content analysis of the most frequently accessed articles from two education journals, provides insight into on-line journal readers and their needs and interests. Survey response information can be used by editors to encourage submissions in high-interest areas and to assure potential contributors of the high visibility of their contributions.

We found that these journals appear to be reaching a larger and wider audience than many print journals. Articles are downloaded thousands of times, compared with estimated readerships of under 1,000 for typical scholarly print articles. A large number of readers are teachers and others from the K-12 community. Most readers reported their primary role as college student, researcher, or college professor. Readers most often reported that the purpose of their visit was to assist with class assignments and report preparation; large numbers also visited out of personal or professional interest. Even though certain topics drew a substantially larger readership than others, some diversity in titles gives journal audiences the opportunity to acquire information on unfamiliar topics or to expand their perspectives on an issue of interest to the professional education community. Thus, it is important to provide the educational community not only with titles of general interest, but also with well-written articles that address more specialized evaluation, assessment, and policy analysis topics.

Our examination of the most-read topics—home-schooling, rubrics, standards, politics—revealed a keen interest in currency. These topics are not well covered in traditional print journals. Print publication lag time significantly limits the usefulness of print articles on current topics. Electronic journals, however, are able to publish on current topics, and readers apparently readily consume articles on current topics. The inherent ability of electronic media to provide immediacy of response is important, particularly for those who must be cognizant of changes not only in educational policy, but in the evolution of terminology that often accompanies such discussions.

The Association of Learned and Professional Society Publishers (ALPSP) recently conducted a large-scale survey to discover what motivates researchers to publish in journals, how they decide where to publish, their concerns about the current system, and what they want or expect in the future [Swan & Brown, 1999]. Questionnaires were sent to 11,500 contributors to journals published in the U.K., the U.S., and elsewhere. Swan and Brown found that the main aim of contributors was to reach the widest possible audience, with the quality of peer review and the impact factor of the journal the main factors in achieving their overall publishing objectives. In deciding where to submit their work, the perceived reputation of the journal, its impact factor, subject area, international reach, and coverage by abstracting and indexing services were extremely important.

With their wide and diverse readership, PARE and EPAA clearly achieve the first goal: the widest possible audience. Both journals provide live usage statistics to document their impact.

For aspiring faculty, journal reputation is extremely important. There is a fear that tenure committees may under-value on-line journals relative to more established print journals. In a May 1999 article in The Chronicle of Higher Education, Kiernan wrote "Scholars are worried...that electronic publication will not carry much credit toward tenure, or that electronic journals might fail, carrying prized papers with them into oblivion...that electronic journals are likely to be less permanent than printed journals."

The authors obviously feel that tenure committees that undervalue electronic journals are misguided. These journals are often as rigorous as print journals and can have a much greater impact in terms of educating readers.

Based on this study, we offer the following suggestions to editors and publishers of on-line journals:

  • Emphasize and solicit papers on current topics.
  • Select materials that recognize the diverse nature of the audience.
  • Provide for permanently archiving published articles.
  • Get the journal indexed by Education Index and the Current Index to Journals in Education.
  • Gather and publish usage statistics.
  • Educate potential authors and tenure committees on the impact and value of your on-line journal.

A question that arises from the K-12 readership figures in Rudner's survey is the extent to which the vast majority of educators, i.e., classroom teachers, have ready access to computers with Internet connections. The goal of having at least one Internet-connected computer in each school is close to becoming a reality, but for many teachers, having an Internet-connected computer in the classroom or at home is still a dream [Hoffman, Novak, & Schlosser, 2000]. We close with the following quote from an EPAA article on the use of technology:

The dream scenario is that the information age will help bring about the kinds of educational change that reformers have pushed for all century, with schools becoming sites of critical collaborative inquiry and autonomous constructivist learning as individuals and groups work with new technologies to solve authentic problems under the guidance of a facilitative teacher [Warschauer, 2000].


References
[Arai] Arai, A. (1999). Homeschooling and the redefinition of citizenship. Education Policy Analysis Archives, 7(27). Available on-line: <>.

[Bond] Bond, L. A. (1996). Norm- and criterion-referenced testing. Practical Assessment, Research & Evaluation, 5(2). Available on-line: <>.

[Brualdi] Brualdi, A. (1998). Implementing performance assessment in the classroom. Practical Assessment, Research & Evaluation, 6(2). Available on-line: <>.

[Darling-Hammond] Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of state policy evidence. Education Policy Analysis Archives, 8 (1). Available on-line: <>.

[Doolittle] Doolittle, P. (1994). Teacher portfolio assessment. Practical Assessment, Research & Evaluation, 4 (1). Available on-line: <>.

[Eason et al.] Eason, K., Richardson, S., & Yu, L. (2000). Patterns of use of electronic journals. Journal of Documentation, 56 (5), 477-504.

[Edwards] Edwards, J. (1997). Electronic journals: problem or panacea? Ariadne, 10. Available on-line: <>.

[EPAA] EPAA log (2002). Usage statistics for Education Policy Analysis Archives. Available on-line: <>. Last checked: 13-Mar-2002.

[Forgette-Giroux] Forgette-Giroux, R., & Simon, M. (2000). Organizational issues related to portfolio assessment implementation in the classroom. Practical Assessment, Research & Evaluation, 7(4). Available on-line: <>.

[Gawel] Gawel, J. E. (1997). Herzberg's theory of motivation and Maslow's hierarchy of needs. Practical Assessment, Research & Evaluation, 5(11). Available on-line: <>.

[Harter] Harter, S. P. (1996). The impact of electronic journals on scholarly communication: A citation analysis. The Public-Access Computer Systems Review, 7 (5). Available on-line: <>.

[Hertzberg] Hertzberg, S., & Rudner, L. (1999). The quality of researchers' searches of the ERIC database. Education Policy Analysis Archives, 7(25). Available on-line: <>.

[Hoffman et al.] Hoffman, D., Novak, T., & Schlosser, A. (2000). The evolution of the Digital Divide: How gaps in Internet access may impact electronic commerce. Journal of Computer-Mediated Communication, 5(3).

[Kiernan] Kiernan, V. (1999). Why do some electronic-only journals struggle, while others flourish? The Chronicle of Higher Education, 45(37), p. A25.

[Liew et al.] Liew, C.L., Foo, S., & Chennupati, K.R. (2000). A study of graduate student end-users' use and perception of electronic journals. Online Information Review, 24(4), 302-315.

[Linn] Linn, R. L. (2001). Assessments and accountability (condensed version). Practical Assessment, Research & Evaluation, 7(11). Available on-line: <>.

[Market Data] Market Data Retrieval (2002). Education Mailing Lists: Interactive Catalog, Faculty, Department Chairs, and Deans by Discipline. Available on-line: <>. Accessed Oct 16, 2001. Last checked: 13-Mar-2002.

[McDowell] McDowell, L. (2001). Early estimates of public elementary/secondary education survey, 2000-01. Education Statistics Quarterly, 3(1). U.S. Department of Education. Available on-line: <>.

[McMillan] McMillan, J. H. (2000). Fundamental assessment principles for teachers and school administrators. Practical Assessment, Research & Evaluation, 7(8). Available on-line: <>.

[Moskal] Moskal, B. M. (2000). Scoring rubrics: what, when and how? Practical Assessment, Research & Evaluation, 7(3). Available on-line: <>.

[Ng et al.] Ng, A. Y., Zheng, A. X., & Jordan, M. I. (2001, August). Link analysis, eigenvectors, and stability. Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, Seattle, WA. Available on-line: <>.

[PARE] PARE log (2002). User statistics for Practical Assessment, Research & Evaluation.

[Peters] Peters, S. (2000). EPRESS: Scaling up electronic journal production. Ariadne, (23). Available on-line: <>.

[Rudner, 1999] Rudner, L. (1999). Scholastic achievement and demographic characteristics of home school students in 1998. Education Policy Analysis Archives, 7(8). Available on-line: <>.

[Rudner, 2000a] Rudner, L. (2000a). Who is going to mine digital library resources? And how? D-Lib Magazine, 6(5). Available on-line: <>.

[Rudner, 2000b] Rudner, L. (2000b). Who is using some web resources. Available on-line: <>.

[Rudner, 2001] Rudner, L.M. (2001). How many people search the ERIC database each day? Available on-line at: <>.

[Rudner, et al.] Rudner, L. M., Burke, J., & Rudner, L. (2001). Is something happening to measurement scholarship? Newsletter of the National Council on Measurement in Education, 9(2), p. 1. Available on-line: <>.

[Stemler] Stemler, S. (2001). An overview of content analysis. Practical Assessment, Research & Evaluation, 7(17). Available on-line: <>.

[Swan & Brown] Swan, A., & Brown, S. (1999). What Authors Want. West Sussex, UK: The Association of Learned and Professional Society Publishers. Available on-line: <>.

[Tenopir] Tenopir, C. (2000). Towards electronic journals. Psycoloquy, 11(084). Available on-line: <>.

[U.S. Department of Education] U.S. Department of Education (2000). "Table 214, Enrollment in postsecondary education, by major field of study, age, and level of student: 1995-96." Table source: Digest of Education Statistics, Chapter 3, Postsecondary Education. Data source: U.S. Department of Education, National Center for Education Statistics, "The 1995-96 National Postsecondary Student Aid Study," unpublished data.

[Warschauer] Warschauer, M. (2000). Technology and school reform: A view from both sides of the tracks. Education Policy Analysis Archives, 8(4). Available on-line: <>.

[Welner] Welner, K. M., & Welner, K. G. (1999). Contextualizing homeschooling data: A response to Rudner. Education Policy Analysis Archives, 7(13). Available on-line: <>.

[Wiggins] Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research & Evaluation, 2(2). Available on-line: <>.

Copyright © Lawrence M. Rudner, Marie Miller-Whitehead, and Jennifer S. Gellmann




DOI: 10.1045/december2002-rudner