D-Lib Magazine
April 1999

Volume 5 Number 4

ISSN 1082-9873

The State of the Dublin Core Metadata Initiative
April 1999


Stuart Weibel
OCLC Online Computer Library Center, Inc.



One hundred and one experts in resource description convened in Washington, D.C., November 2 through November 4, 1998, for the sixth Dublin Core Metadata Workshop. The registrants represented 16 countries on 4 continents, and many disciplines. As with previous workshops, many new issues were opened, and vigorous debate was a hallmark of the event.

Unlike previous workshops, the focus of DC-6 was not to resolve questions in plenary meetings, but rather to identify unresolved issues and assign them to formal working groups for resolution. The result of this process was an ambitious workplan for 1999. This report summarizes that workplan, highlights the progress that has been made on it, and identifies a few significant projects that exemplify this progress.

The Dublin Core Metadata Initiative in 1998

Prior to DC-6, the Dublin Core could be characterized as 15 unstructured elements with text-string values, described in RFC 2413 (what has become known informally as DC 1.0). The only widely deployed syntax option for encoding these elements was the <META> tag dot syntax that has been in use since 1996 [WEIBEL 1996]. Implementations in many countries and languages, and in many disciplines testify to the widely perceived need for such a metadata element set, and the Dublin Core is the leading candidate for achieving the goal of simple resource description for Internet resources.
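As a concrete illustration, the dot syntax expresses each element as an HTML &lt;META&gt; tag whose name is the element name prefixed with "DC.". The fragment below is a hypothetical sketch of an unqualified DC 1.0 description embedded in a document head; the values are illustrative only.

```html
<!-- Unqualified Dublin Core embedded in an HTML document head,  -->
<!-- using the "DC." dot-syntax convention; values are examples. -->
<meta name="DC.title"   content="The State of the Dublin Core Metadata Initiative">
<meta name="DC.creator" content="Stuart Weibel">
<meta name="DC.date"    content="1999-04">
```

Because each element is an ordinary META tag, such records require no changes to HTML itself and are ignored by browsers that do not understand them.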

The basic definitions of the 15 elements of Dublin Core 1.0 have been stable since December 1996, reflecting confidence in the consensus that has been developed about core description elements over the previous four years. However, few applications have found that the 15 elements satisfy all their needs. This is unsurprising: the Dublin Core is intended to be just as its name implies -- a core element set, augmented, on one hand, for local purposes by extension with added elements of local importance, and, on the other hand, by refinement through the use of qualifiers. There are many possible approaches to qualifying or refining the elements to meet such needs. Standardization of the semantics and methods for qualification of the basic elements is necessary if such qualification is to be widely interoperable.

DC-6 and the 1999 Workplan

Several important issues emerged from the DC-6 Workshop. These issues reflect a cross section of concerns, from process to pragmatics, syntax to theory. Each has a place in the agenda of the community. The following is a summary of the major areas that emerged from discussions just before and during DC-6.

A Maintenance Agency for the Dublin Core Initiative

The Dublin Core Metadata Initiative began informally as an interdisciplinary workshop on resource description. As it attracted broader international and interdisciplinary interest, it has been necessary to develop greater formality around the process. Providing explicit process and structure for decision making is critical for sustaining community confidence. Steps toward this goal were initiated in 1998 with the formation of the Dublin Core Directorate, a Policy Advisory Committee (PAC), and a Technical Advisory Committee (TAC).

The Directorate is hosted by the OCLC Office of Research, and maintains the Dublin Core Home Page [DC-HOME], the repository of official documents and other information about the Dublin Core Metadata Initiative. The Directorate also administers the activities of Dublin Core working groups and plans DC Workshops.

The Policy Advisory Committee is comprised of representatives of major stakeholder communities and serves a liaison role between these communities and the Dublin Core Directorate.

The Technical Advisory Committee is comprised largely of working group chairs and provides a forum for the discussion and ratification of proposals concerning the Dublin Core.

A sub-committee of the two groups has taken on the task of preparing a document to codify the process and provide for stable transition of membership on the advisory committees.

There are a number of governance examples that provide relevant models for a Dublin Core maintenance agency -- the World Wide Web Consortium (W3C), the Internet Engineering Task Force (IETF), and various standards agencies all have procedures worthy of emulation. The goal is to achieve a stable procedural foundation for the Dublin Core that retains the interdisciplinary, international consensus-building culture that has grown up around the Dublin Core initiative.

Dublin Core Working Groups

Working Groups are formed to address particular problems or clusters of problems. Working groups have charters and scheduled deliverables, and are expected to go out of business officially at the end of each workshop cycle (approximately one year). Each Working Group has a mailing list to support electronic discussion among its members, and all Working Groups are open to enrollment by any interested parties. All DC mailing lists use the Mailbase system, an electronic discussion service that supports higher education in the UK. Special thanks are due to Paul Miller of UKOLN, who has willingly assumed the challenging responsibility of maintaining the numerous DC mailing lists.

Ratification Process

While administrative details are not entirely resolved at this time, the ratification process will work approximately as follows: proposals will be developed in working groups and brought before the Technical Advisory Committee for discussion and ratification, with the Policy Advisory Committee serving as liaison to the major stakeholder communities.

This structured procedure is intended to meet the requirements for stability for a standard such as the Dublin Core while providing broad representation of stakeholder communities and supporting the need for measured evolution.

Standardization of the Dublin Core

IETF RFCs: Initial steps towards standardization

Standardization is taking place along several parallel pathways. The first is the Internet Engineering Task Force (IETF), which is appealing because it has the least formal structure, and useful because it establishes a publicly accessible repository of informational documents that are widely recognized in the Internet world as having formal standing.

RFC 2413 is the first formal expression of Dublin Core semantics. This RFC describes what has become known as DC 1.0: the semantics of the 15 elements of the Dublin Core. RFC stands for Request for Comments, but in this case, it is more in the nature of a Request for Cooperation: If your resource description needs are met with this core set of elements, the use of these will improve your chances for semantic interoperability with other communities that use the Dublin Core elements.

The next stage of formalization of Dublin Core standardization will involve a polishing and slight restructuring of RFC 2413 and submission to NISO (the National Information Standards Organization) and CEN (the European Committee for Standardization, Comité Européen de Normalisation). These organizations play roughly similar roles in North America and Europe. The Dublin Core is already a work item for both; however, it is judged appropriate to modify the existing RFC prior to standardization.

The planned modifications fall into two categories. The first is a review of the element definitions to improve clarity and thereby promote more consistent deployment. Working groups have been established to review the definitions and propose changes where necessary, with the proviso that such changes are limited to the purposes of clarification.

The second proposed change is simply to format the Dublin Core specification according to a standard description template for metadata elements known as ISO 11179 [ISO11179]. ISO 11179 is an international standard for formally expressing the semantics of data elements in a consistent manner. This consistency promotes clarity of expression both within the definitions of Dublin Core elements themselves, and also with other communities that utilize ISO 11179 for their semantic representation.

At this writing, element definition reviews are nearing completion, and formal proposals will be available for public comment and submitted to the Dublin Core Technical Advisory Committee for review and validation. It is expected that formal documents will be submitted to NISO and CEN in 1999.

Encoding Dublin Core in HTML

An Internet Draft authored by John Kunze [KUNZE 1999] has recently been released that articulates the specification of how Dublin Core can be encoded in HTML. An early convention for this has been in place since 1996 [WEIBEL 1996], but changes in HTML and a general need for greater formalization make this Internet Draft an important step forward for the community.

Internet Drafts are short-lived discussion proposals (they expire six months after publication). This one is currently undergoing public review and comment on the Dublin Core mailing list [DC-GENERAL], after which it will be revised and submitted as an RFC, which has long-term persistence.
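In broad outline, the convention pairs DC.-prefixed &lt;META&gt; tags with a &lt;LINK&gt; element that points at the defining schema, so that agents can resolve the element semantics. The fragment below is a sketch of that general shape, not a normative excerpt; consult [KUNZE 1999] for the authoritative conventions.

```html
<!-- Sketch only: a schema link plus DC.-prefixed META tags.     -->
<!-- See the Kunze Internet Draft for the normative conventions. -->
<link rel="schema.DC" href="http://purl.org/dc">
<meta name="DC.title"   lang="en" content="The Library as Literacy Classroom">
<meta name="DC.creator" content="Weibel, Marguerite C.">
```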

Qualification of Dublin Core Metadata

It has been recognized from the outset that most applications require mechanisms to refine or qualify metadata elements or their values. There are several reasons to do so:

  1. Increased semantic specificity. Use of domain-specific controlled vocabularies or classification schemes helps to add descriptive precision. The Dewey Decimal Classification (DDC), Medical Subject Headings (MeSH), and the Library of Congress Subject Headings (LCSH) are common examples, but there are many others. Indicating that a subject descriptor comes from a controlled vocabulary makes it possible to take advantage of a formal browsing structure or knowledge structure.
  2. Specification of encoding rules. Identifying a formal encoding standard can make an otherwise ambiguous value useful. Date values are a good example: only by specifying a set of encoding rules can a string specifying a date be parsed reliably.
  3. Defining formal substructure. It is often desirable to assign a compound value to an element. Specifying that a metadata value is in fact a compound structure allows the inclusion of richer structure. For example, the value of a Creator element is in its simplest form a name. Many applications have a need to associate additional information with such a value, such as affiliation, email address, and title. Specifying the value of a Creator element as a compound value that includes this information as structured sub-elements is useful, but requires a mechanism for specifying the substructure: a scheme qualifier.
  4. Authority Control. Authority records, used by many communities, are examples of structured records that provide authoritative values that help to uniquely identify a person, corporation, or place name.
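The first two motivations can be made concrete with hypothetical qualified elements in the embedded-HTML style. The "scheme" attribute shown below is illustrative of the qualification mechanisms under discussion, not a settled convention.

```html
<!-- Hypothetical qualified elements; the "scheme" attribute is -->
<!-- illustrative of the proposals under discussion.            -->
<meta name="DC.subject" scheme="LCSH"    content="Metadata--Standards">
<meta name="DC.date"    scheme="ISO8601" content="1999-04-15">
```

Substructure and authority control (the third and fourth motivations) require richer structure than a single attribute can carry, which is part of the motivation for the data-model work discussed below.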

Implications of metadata qualification for interoperability

The range of possible qualifiers for Dublin Core metadata is limitless. If applications are to interoperate, it is desirable to constrain these possibilities. When possible, it is recommended that applications use externally maintained schemes (e.g., the Dewey Decimal Classification, Library of Congress Subject Headings, Medical Subject Headings, and the ISO 8601 date profile [8601 DATES]). Doing so leverages the substantial investments that such schemes represent and improves the chances for interoperability.

There is currently underway a review of existing DC community practice, the goal of which will be to identify qualifiers now in use and propose a set of qualifier values that may be adopted to promote interoperability. This review is being conducted by element-specific working groups and is scheduled to be completed in time for the next Dublin Core Workshop in October 1999, at Die Deutsche Bibliothek in Frankfurt [DC-7].

It is clear that there nonetheless will be many variants on qualifiers and qualifier schemes. What implications does this have for interoperability? One can distinguish at least two levels of interoperability: interchange and search.

Interchange interoperability is the most stringent case. For two applications to exchange metadata effectively, their metadata must have the same semantics and must share a common structure and syntax. The MARC standard, in conjunction with the second edition of the Anglo-American Cataloguing Rules (AACR2), establishes this level of interoperability for library cataloging, and it allows compliant applications to exchange and use metadata created according to these standards.

Search interoperability demands less coherence, though it requires, at a minimum, common semantics and a protocol for asking and responding to queries. If a client application is searching a Dublin Core repository application for a keyword, for example, it is sufficient that the two applications share the semantics of DC:Subject, and have a means to carry out the transaction in a suitable protocol (Z39.50, for example).

If the repository uses the DDC system to classify documents, and the client application "understands" DDC, then the probability of a useful transaction is substantially increased. The application may be able to take advantage of the DDC as a browsing structure as well, thereby increasing the effectiveness of the search. If the client application is ignorant of DDC, a keyword search can still succeed, even though the user has no specific knowledge of or expertise in the intricacies of DDC.

Thus, applications that use different schemes for the encoding of DC:Subject may still interoperate at the search level, though they are unlikely to be able to exchange their metadata effectively.

Qualification and the Dublin Core Data Model

The Data Model working group has been engaged in the task of identifying a common structural expression of qualifiers such that qualification objectives may be accomplished; a formal report of their efforts is scheduled for release in May of 1999. If one thinks of a metadata description as a grammatical statement, the data model specifies the allowable parts of speech and syntax of the metadata assertion.

The simple, unqualified version of a metadata assertion relates attributes, or named properties, with a resource. The following declarative sentence can be captured in an unqualified metadata declaration:

A book entitled "The Library as Literacy Classroom" was authored by Marguerite C. Weibel and published by the American Library Association in 1992.

A more elaborate version of the same statement might include several qualifiers that make the statement more precise, but which do so at the cost of added complexity:

A book uniquely identified by the ISBN 0-8389-0596-x and having the title "The Library as Literacy Classroom" was published by the American Library Association in the year 1992. Marguerite C. Weibel, uniquely identified in the Library of Congress Name Authority Record #84204399, is the creator of the resource, is of class person, and has a creation role of author.

These examples illustrate different degrees of specificity in a metadata assertion, from the simple version to a more elaborated one. The simplest version has been straightforward to encode as embedded HTML for some time now. The work of the DC Data Model Working Group is directed towards standardizing the more expressive, elaborated versions so that they can be deployed consistently across applications, thereby promoting broad, interdisciplinary interoperability for both search and interchange of metadata.
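Using RDF (discussed further below) as the carrier, the simple, unqualified statement about the book might be sketched as follows. The namespace URIs and the identifying URI are illustrative, not normative.

```xml
<!-- A sketch of the simple statement in RDF/XML; namespace URIs -->
<!-- and the identifying URI are illustrative.                   -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.0/">
  <rdf:Description about="urn:isbn:0-8389-0596-x">
    <dc:title>The Library as Literacy Classroom</dc:title>
    <dc:creator>Marguerite C. Weibel</dc:creator>
    <dc:publisher>American Library Association</dc:publisher>
    <dc:date>1992</dc:date>
  </rdf:Description>
</rdf:RDF>
```

The elaborated version of the statement would replace the simple string values with structured sub-descriptions (for example, a creator description carrying the authority record number, class, and role).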

The additional specificity made possible through qualification comes at additional costs: the costs of system complexity to support qualification, and the cost of the human resources necessary to create and maintain the additional information. These costs can only be sustained if the resulting benefit is demonstrably useful.

A number of communities maintain authority control systems. Libraries do so for personal, corporate, and geographical place names; the music industry does so for unique identification of intellectual property rights. The ability to uniquely and authoritatively identify people, places, and organizations is critical to maintaining effective knowledge structures such as those represented by library catalogs. For electronic commerce, the ability to assign authoritative identities and attributes is essential for metadata that will enable compensation for goods or services, or the licensing of intellectual property. In each case, the mechanisms for accomplishing this are similar.

Many disciplines support classification systems that promote better access to their literature. Prior to the Web, the use of classification systems was justified on the basis of making a given body of literature more coherent and, hence, more useful for scholarship. These benefits continue and are augmented by the possibility of using classification systems as browsing structures that can enhance the immediacy and utility of online discovery. The machine-processing of classification data has the potential to amplify the intellectual reach of the searcher and sharpen the ability to discriminate among resources in the vast store of knowledge that is increasingly digital.

Relationship of the Dublin Core to other metadata efforts

Among the significant events of the DC-6 workshop was the participation of representatives of parallel metadata efforts, including the Digital Object Identifier (DOI) Metadata Workgroup, the INDECS project [INDECS], Government Information Locator Service [GILS], and the Instructional Management System [IMS]. Each of these efforts has similarities and differences with Dublin Core, and each has important constituencies, all of whom will benefit from convergence.

Among the principles of the Warwick Framework is the notion that different varieties of metadata will be elaborated by stakeholder communities, and that the metadata architecture should support snapping metadata modules together just as Lego™ blocks are snapped together to form compound structures. Enabling the superficially simple child's play of building with Lego™ blocks requires surprisingly precise and complex specifications and manufacturing processes, with tolerances approaching those of internal combustion engines. The reward for that unifying architecture has been interoperability across a span of six decades or more, in spite of broad "semantic" variation in the blocks: from undersea exploration to outer-space vehicles, from knights and ghosts to mummies and cowboys. They all snap together, unfettered by "semantic" diversity.

The goal of a metadata architecture should be similar: to support a broad diversity of metadata semantics within a common syntactic and structural framework. The Resource Description Framework, or RDF, was developed specifically with this objective in mind. Developed under the auspices of the W3C, RDF became a W3C recommendation in February 1999, with the enthusiastic support of Tim Berners-Lee, Director of the W3C and inventor of the Web.

As RDF tools and systems become more common, an important part of the structural and syntactic conventions necessary for metadata interoperability will become an integral part of the Web infrastructure. This means that utilities designed to support creation and management of metadata will be integrated into common application software: text editors, image manipulation software, and browsers, for example. Applications will be able to use metadata, and by downloading the schemas for various varieties of metadata, the possibility of modular, plug-and-play metadata will come within reach.

RDF makes the task of harmonizing various metadata schemas easier, but by no means assures that they will be usable with Lego™-like modularity. It is still necessary to identify the common aspects of the data models that underlie various metadata sets and to work towards harmonizing them.

The first example of harmonization of different varieties of metadata has been undertaken by representatives of the Dublin Core Data Model working group and the INDECS project. INDECS is a project to explore the common functional metadata requirements necessary to support electronic commerce for a number of content industries (publishing, music, and visual arts). The functional requirements of managing intellectual property rights include the ability to encode descriptive data at a high level of precision. The description requirements for resource discovery are generally less precise, and production environments often cannot bear the costs necessary to achieve such precision. Nonetheless, harmonizing the underlying data models will have long-term benefits.

This does not imply that the Dublin Core should be optimized to support the management of intellectual property rights, nor that the INDECS metadata effort will change its focus to resource discovery. The underlying structural models of each, however, may be harmonized so as to support interoperability, and potentially, the reuse of some common components and services.

An early report on the expected benefits and problems of this effort has been published [BEARMAN 1999], and the work continues as an exploratory effort to bring together the underlying models of two systems with different goals and approaches. A discussion paper edited by Carl Lagoze [DC-SCHEMA 1999] illustrates how this harmonization can accommodate the existing version of Dublin Core and support graceful evolution of the DC as well. As this work develops, it is hoped that the congruencies identified will extend to other metadata sets as well.

What’s this I hear about DC 2.0?

Discussions during and after DC-6 raised issues of changes in the underlying structure of Dublin Core metadata. Are Creator, Contributor, and Publisher just specific (and sometimes misleading) ways of expressing the more general notion of an agent that plays a role in the life cycle of a resource? Is it the case that the Source element is simply a particular variety of Relation? Is it helpful to view elements such as Date as a facet of events that occur in the lifecycle of an information resource (for example, a resource is published by a particular agent on a particular date)? Discussions revolving around these questions suggest that the 15 Dublin Core elements might be more coherently expressed if they are related to an underlying logical model such as that expressed in the Functional Requirements for Bibliographic Records (FRBR) of the International Federation of Library Associations. This model treats information resources as having logical states (an abstract work or a physical item, for example) that have relationships to each other and to other resources.

What implications do these discussions have for Dublin Core in its present state?

As economist Edgar Fiedler said in Fiedler's Law of Prediction, "Forecasting is very difficult, especially if it's about the future." Exploration of the issues is ongoing, and if they prove fruitful, then the results will be embodied in a proposal for a version of the Dublin Core that is being referred to as Dublin Core 2.0.

It is unclear how these issues will be played out, but the following can be asserted with confidence:

In the middle of the 19th century there were 7 distinct track gauges for railroads operating in North America. This infrastructural impediment to the flow of goods had demonstrable effects on economic development, to say nothing of the additional costs of supporting such a rail network. It is important to note that nobody stopped the trains to wait for the tracks to become the same width. Over a period of 20 years a variety of work-arounds were deployed to ease the transition to a single track gauge. Meanwhile, the mail got delivered, the milk went to market, and people rode trains to populate the American West.

We are currently participating in the laying of track for the transport of metadata on the Web. We will not have gotten every aspect of the job just right the first time. That does not mean we should stop the metadata trains, nor that the cars will all be carrying the same things. But we should be looking for ways to improve interoperability, to reconcile semantics, structure, and syntax where it makes sense to do so. Failure to do so will force parallel evolution of non-interoperable systems that share many functional requirements.

RDF is a W3C Recommendation

The Resource Description Framework, or RDF, became a W3C Recommendation at the end of February [RDF]. The DC community has made important contributions to this milestone, Eric Miller having co-chaired the working group, and a number of other Dublin Core folk having been active participants. The formalization of RDF as a standard will promote the spread of supporting tools that should make many of our implementation challenges easier.

RDF is a set of conventions for expressing metadata that uses eXtensible Markup Language, or XML, as an encoding standard and provides a framework for exchanging metadata of many varieties. RDF constrains the expression of metadata, allowing assertions to be made only according to a standard set of constructs, thereby making it easier for any given application to make use of them.

A given set of metadata elements can be registered as an RDF schema on the Web, thereby specifying the semantics and structure of the metadata set. XML provides the encoding syntax, and the XML-namespace facility makes it straightforward to mix element sets in a given metadata description without the danger of element names colliding. That is, an element established as a component of one namespace, such as the Dublin Core, is in no danger of being confused with an element of the same name from another namespace. Element sets are thus modular in the Warwick Framework sense [LAGOZE 1996].
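For example, a description might mix Dublin Core elements with elements from a local extension namespace. In the hypothetical sketch below, the local: namespace and its approvalStatus element are invented for illustration; the namespace declarations keep the two element sets distinct even if their names were to coincide.

```xml
<!-- Mixing element sets via XML namespaces; the local:          -->
<!-- namespace and its elements are hypothetical.                -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.0/"
         xmlns:local="http://example.org/schemas/local/">
  <rdf:Description about="http://example.org/reports/annual">
    <!-- Dublin Core elements drawn from the dc: namespace -->
    <dc:title>Annual Report</dc:title>
    <dc:creator>Example Organization</dc:creator>
    <!-- A locally defined element; no collision with dc: -->
    <local:approvalStatus>final</local:approvalStatus>
  </rdf:Description>
</rdf:RDF>
```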

Putting aside the issue of software, the underlying ideas of RDF provide a conceptual foundation for the efforts of the Data Model Working Group, and hence have influenced much of the work on qualification of Dublin Core. The deployment of Dublin Core metadata is not, however, dependent on the deployment of RDF. Useful systems have been, and will continue to be, developed using simpler syntactical expressions (HTML or raw XML, for example).

In the design of any system, balancing the constraints of existing technology, the functional requirements of the application, and the expected benefits of increased complexity all enter into the choice of a deployment strategy. For stand-alone applications, the simplicity of embedded HTML is appealing and sufficient. For systems that are designed to interoperate in heterogeneous application domains, it is expected that RDF will pay dividends.

Additional benefits of RDF

Why add the additional complexity of RDF? The answer has to do primarily with the additional constraints that RDF imposes on the expression of metadata (the grammar of metadata assertions). Without these conventions, metadata grammars would be so varied and complex as to preclude the development of general tools for managing and interchanging metadata sets (much as the overwhelming diversity of possible document type definitions for SGML documents made the generalized use of SGML intractable in the Web environment and led to the development of XML).

The ability to specify metadata schemas in RDF will make it possible for applications to access a particular schema from a publicly accessible registry on the Web and retrieve the parsing structure and semantics of the element set. This does not ensure either searching or interchange interoperability among metadata sets, but it makes the job of achieving it easier.


Dublin Core in Multiple Languages

Among the most important indicators of the impact of the Dublin Core is the continuing propagation of the element set in multiple languages.

To date, the Dublin Core Element Set has been translated into eighteen languages: Arabic, Chinese, Czech, Danish, Dutch, Finnish, French, German, Greek, Bahasa Indonesia, Italian, Japanese, Korean, Norwegian, Portuguese, Spanish, Thai, and Turkish. Translations are currently under development for Burmese, Hungarian, Khmer, and Ukrainian. See [DC-LANGUAGES] for a continually updated list.

The realization of international metadata that will globalize resource discovery is far more complex than simply translating element definitions. Interested readers should see the article "Languages for Dublin Core" by Thomas Baker in D-Lib Magazine, December 1998, for further background and discussion [BAKER 1998].

Projects of Note

The Dublin Core has progressed not so much on its foundations in ontology as on the pragmatics of making useful systems to solve real needs of information seekers. Highlighting a handful of important Dublin Core applications gives a flavor of the directions and progress of the Dublin Corps -- the pioneers who are laying down the tracks on the frontiers. The following are a few of the many projects that exemplify the commitment of a community of people with a passion for making information more accessible and the fortitude to act in the face of uncertainty.

The CIMI Interoperability Testbed begins a new phase

The CIMI Interoperability Testbed Project [CIMI] has been the most ambitious interoperability project to date, 14 distinct museums having created records for more than 200,000 resources in only a few months. The project has just entered Phase II, with the objective of broadening the scope and adding qualification to the basic Dublin Core schema used for Phase I. The first meeting of Phase II just took place in New Orleans, where RDF was a focus of attention. The CIMI work has done much to test the assumptions and discover problems in the Dublin Core, and these implementers are continuing to forge the leading edge of Dublin Core deployment. The Guide to Best Practice: Dublin Core from the CIMI Institute should be an important document to guide not only museums, but other institutions as well.

The CORC project: Dublin Core and MARC in the same system

CORC (Cooperative Online Resource Catalog) is a research project at OCLC exploring the cooperative creation and use of metadata, primarily for online resources [CORC]. Currently the system provides for creation and editing of metadata records in MARC and Dublin Core. All records are available (and can be exported) in either view.

Another aspect of CORC is the creation of 'pathfinders,' collections of links. These can be created link by link, or imported from existing pages and edited. Pathfinders can also include embedded searches of the CORC catalog.

One of the major goals of CORC is to give libraries the tools they need to produce an integrated view of their collections and the Web, taking advantage of the cooperation that has distinguished library resource sharing for decades.

Finland adopts Dublin Core for government information

The Government of Finland is joining the governments of Australia and Denmark in adopting the Dublin Core as the basis for description of official government documents at state and regional levels. The Finnish metadata format will be a superset of Dublin Core, with additional elements and qualifiers.

The Helsinki University Library is officially the maintenance organization for the Finnish translation of the DC, an effort taken as seriously as the maintenance of the FINMARC format. The scope of the work also covers a user guide, metadata templates, and some related tools.

The Helsinki Library and the Finnish National Archives also continue to play an active role in the Nordic Web Index/Nordic Web Archive work, developing tools for providing national web indices and archives.


Conclusion

The fourth year of the Dublin Core has been as tumultuous as the first, marked with controversy and vigorous debate. Broadened interest in metadata, in general, and the Dublin Core, in particular, combined with closer interaction with other metadata communities, has sharpened debate and made cooperation both more difficult and more urgent.

Nonetheless, the year has witnessed important strides forward on many fronts, including standardization, the formalization of syntax alternatives, a deeper understanding of data modeling issues, and a refinement of the semantics of the elements and their qualifiers. The Dublin Core continues to attract broad international interest, continues to see new projects in many disciplines and sectors, and has begun to formalize a process that will ensure stability and representation across the broad spectrum of its constituency.

This progress, and expectations for further growth, all hinge on the hard work and good will of a diverse, often contentious, always dedicated cadre from around the world, who have found in the Web an unprecedented opportunity for improving information access, and have found in themselves the commitment to realize this opportunity through cooperative action.


[8601 DATES] Date and Time Formats. Misha Wolf and Charles Wicksteed. Submitted to W3C 15 September 1997.

[BAKER 1998] Languages for Dublin Core. Thomas Baker. D-Lib Magazine, December 1998.

[BEARMAN 1999] A Common Model to Support Interoperable Metadata: Progress report on reconciling metadata requirements the Dublin Core and INDECS/DOI Communities. David Bearman, Eric Miller, Godfrey Rust, Jennifer Trant, Stuart Weibel. D-Lib Magazine, January 1999. Volume 5, Number 1.

[CIMI] Consortium for the Computer Interchange of Museum Information. Organizational Website (April 1999):

[CORC] CORC--Cooperative Online Resource Catalog. Project Website (April 1999):

[DC-6] DC-6: The Sixth Dublin Core Metadata Workshop. November 2-4, 1998. Library of Congress, Washington, D.C., USA. Workshop Website.

[DC-7] The 7th Dublin Core Metadata Workshop. October 25-27, 1999. Die Deutsche Bibliothek Frankfurt am Main, Germany. Link from:

[DC-GENERAL] The Dublin Core Mailing List.

[DC-HOME] The Dublin Core Metadata Initiative. Organizational Website. (April 1999)

[DC-LANGUAGES] Dublin Core Multiple Languages Working Group Webpage. (April 1999)

[DC-SCHEMA 1999] DC Schema Discussion Paper: Dublin Core views of an underlying data model. Edited by Carl Lagoze.

[DOI] International DOI Foundation Organizational Website. (April 1999)

[GILS] Government Information Locator Service. Organizational Website. (April 1999)

[IMS] Instructional Management System. Project Website. (April 1999)

[INDECS] INDECS: Interoperability of Data in E-Commerce Systems. Project Website. (April, 1999).

[ISO11179] ISO 11179, Parts 1-6: Specification and Standardization of Data Elements.

[KUNZE 1999] Encoding Dublin Core Metadata in HTML. John Kunze. Informational Internet Draft. (March 18, 1999)

[LAGOZE 1996] The Warwick Framework: A Container Architecture for Aggregating Sets of Metadata. Cornell Computer Science. Carl Lagoze, Clifford Lynch, Ron Daniel, Jr. (June, 1996) Technical Report TR96-1593.

[RDF] Resource Description Framework. Working Group Webpage. (April 1999)

[RFC2413] Dublin Core Metadata for Resource Discovery. Stuart Weibel, John Kunze, Carl Lagoze, and Misha Wolf. IETF Informational RFC. (September 1998)

[W3C] World Wide Web Consortium. Organizational Website. (April 1999)

[WEIBEL 1996] A proposed convention for embedding metadata in HTML. W3C Distributed Indexing and Searching Workshop, May 28-29, 1996.

Copyright © 1999 Stuart Weibel

The URL for the Government Information Locator Service (GILS) was corrected at the request of the author. The Editor, 4/22/99 10:49 am.


DOI: 10.1045/april99-weibel