At the end of November 2013, a conference was held at the Library of Congress giving an update on the development of Bibframe. The recording and transcript can be found here. Eric Miller of Zepheira presented a fascinating first glimpse of an input tool, but what astonished me most was hearing him use the word “constrain” so many times. One example: “What we’ve done in BIBFRAME, however, is actually constrained the problem. Now we’re not talking about the entire semantic web authoring tools. We’re talking about just the authoring tools that follow a particular pattern that we in this community basically care about.”
I can see a move away from the hype of the Semantic Web / Big Data and the admiration of the enormous Linked Data cloud. The Semantic Web has exceeded the human scale. Zooming in on a local focus within a larger space of compatibility, “profiles [in the Bibframe editing tool] provides a means for us to sort of constrain the different aspects of BIBFRAME that we are interested in and project our local meaning into a common framework based on these global standards.” This ties in with what economist E. F. Schumacher wrote in his 1973 book Small is beautiful: “Today we suffer from an almost universal idolatry of gigantism. It is therefore necessary to insist on the virtues of smallness – where this applies. [...] For every activity there is a certain appropriate scale [...].”
Another of Schumacher’s concepts, that of “intermediate technology”, can be used as a metaphor for one of Vinod Chachra’s (VTLS) statements: “Because you’re moving at such a complex world, which is far more complex than your local library, you have to have very simple tools, like Albert Einstein said, and be able to be used by the users, with zero, and I mean absolute big solid zero training. And that’s the kind of system we’re trying to build, so then it becomes everybody’s tool, not just the tools of specialized librarians.” Schumacher defines intermediate technology as follows: “The equipment would be fairly simple and therefore understandable [...] Men are more easily trained: supervision, control, and organisation are simpler; and there is far less vulnerability to unforeseen difficulties.”
We may be on our way to an (at least temporary) understanding of what is “enough” in terms of complexity, tailoring Bibframe as a model and the underlying technology to an appropriate (human) scale.
Field books (primary source documents that are created during field research and that are of great importance for natural history) are unique because they come in a variety of formats and material types. The recent issue of D-Lib Magazine features an article by Sonoe Nakasone and Carolyn Sheffield, “Descriptive metadata for field books: methods and practices of the Field Book Project”. In an earlier post I talked about the project, but this article now goes into more detail regarding the descriptive metadata used for the Field Book Registry. It was decided that these items would be described at both the collection and the item level. Metadata schemas from the museum, archives and library communities were chosen for this task: Natural Collections Description (NCD) is used for collection level records and Metadata Object Description Schema (MODS) for item level records, with Encoded Archival Context (EAC) being used for authority records of collectors, organizations and expeditions. These schemas are combined into one database, the Field Book Registry. Explicit connections are established between collection, item and authority records via IDs, and controlled vocabularies like the Thesaurus of Geographic Names (TGN) or LCSH enrich the records. The article closes with screenshots of the cataloging interface and a mention of some challenges and future developments.
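The linking pattern the article describes can be pictured with a toy data model. To be clear, everything below (the IDs, names and field labels) is invented for illustration and is not taken from the Registry itself:

```python
# Toy sketch of the Field Book Registry's linking pattern:
# collection-level (NCD), item-level (MODS) and authority (EAC)
# records connected by explicit IDs. All IDs and values are invented.

records = {
    # EAC-style authority record for a collector
    "eac-001": {"type": "authority", "name": "Jane Doe (collector)"},
    # NCD-style collection-level record
    "ncd-001": {"type": "collection", "title": "Doe expedition field books",
                "creator_ids": ["eac-001"]},
    # MODS-style item-level record pointing to its parent collection
    # and enriched with a controlled-vocabulary subject term
    "mods-001": {"type": "item", "title": "Field notebook, 1921",
                 "collection_id": "ncd-001", "creator_ids": ["eac-001"],
                 "subjects": ["Galápagos Islands (TGN)"]},
}

def items_by_creator(authority_id):
    """Follow the ID links: all item records naming this authority."""
    return [rid for rid, rec in records.items()
            if rec["type"] == "item" and authority_id in rec.get("creator_ids", [])]

print(items_by_creator("eac-001"))  # → ['mods-001']
```

The point of the explicit IDs is exactly this kind of traversal: one authority record for a collector can be followed to every collection and item they are connected with, regardless of which schema describes it.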
I would like to share Karen Coyle’s comment from the Bibframe list for us all to contemplate:
… And as Shlomo [Sanders of ExLibris] has pointed out in another forum, for ILS vendors who have discovery layers, it is essential that any bibliographic space today be able to seamlessly include both library holdings as well as a variety of research materials that libraries would not normally catalog. This tells me that the goal should be compatibility with the whole bibliographic universe, and not a focus on the library catalog as we know it today.
OK, I admit to this: I feel that anything we do that replicates what we know of as “library cataloging” – from FRBR to RDA to BIBFRAME – is taking us in the wrong direction. I’d rather see us designing for general bibliographic compatibility and interoperability, and then seeing how we can continue to have (for internal purposes) a decent inventory of library holdings – which should get much less of our attention and energy because it serves our users less. Bluntly, bibliographic description as we have known it is passé. FRBR, RDA and BIBFRAME (as a new serialization for MARC) are not terribly relevant.
The BBC World Service Archive Prototype is a website that provides access to the huge digital archive of radio programs of the BBC World Service. Yves Raimond and Tristan Ferne describe in a concise article (PDF, 8 pages) how Semantic Web technologies, automation and crowdsourcing are used to annotate, correct and add metadata for search and navigation. Ed Summers has a blog post about this project, making a comment I wholeheartedly agree with: “… [I]t is the (implied) role of the archivist, as the professional responsible for working with developers to tune these algorithms, evaluating/gauging user contributions, and helping describe the content themselves that excites me the most about this work.” I think this is not only a possible future role for archivists but also for librarians, especially catalogers and metadata specialists working with digital collections.
The OLAC Movie & Video Credit Annotation Experiment is part of a larger project to make it easier to find film and video in libraries and archives. This experiment breaks current movie records down to pull out all the cast and crew information so that it may be re-ordered and manipulated. We also want to make explicit connections between cast and crew names and their roles or functions in the movie production. Adding these formal connections to movie records will allow us to provide a better user experience. For example, library patrons would be able to search just for directors or just for cast members or only for movies where Clint Eastwood is actually in the cast rather than all the movies that he is connected with. […]
We therefore want to convert our existing records into more structured sets of data. Eventually, we intend to automate most of this conversion. For now, we need help from human volunteers, who can train our software to recognize the many ways names and roles have been listed in library records for movies.
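To illustrate the kind of conversion the volunteers’ annotations would help train, here is a minimal sketch of turning a credit statement into structured name/role pairs. The pattern and the example statement are invented, not the project’s actual software; real credit statements are far messier, which is precisely why human training data is needed:

```python
import re

# Minimal sketch: extract name/role pairs from a statement of
# responsibility. Only three fixed phrasings are recognized here;
# the annotation experiment exists because real records use many more.

CREDIT = re.compile(r"(?P<role>directed|produced|written) by (?P<name>[^;]+)")
ROLES = {"directed": "director", "produced": "producer", "written": "writer"}

def parse_credits(statement):
    """Return (name, role) pairs found in a credit statement."""
    return [(m.group("name").strip(), ROLES[m.group("role")])
            for m in CREDIT.finditer(statement)]

print(parse_credits("directed by Clint Eastwood ; produced by Robert Lorenz"))
# → [('Clint Eastwood', 'director'), ('Robert Lorenz', 'producer')]
```

Once names and roles are explicit like this, the searches mentioned above (only directors, only cast members) become simple filters on the role field.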
I already submitted a number of annotations, and it’s so much fun and so easy that I could hardly stop! So join in if you have a few minutes to spare and contribute to this crowdsourcing project.
I was just watching OCLC’s recent presentation on their Next Generation Metadata Management, which includes an interesting overview (YouTube) by VIVA, the Virtual Library of Virginia, which coordinates the collection management and resource sharing of online resources in a consortial environment.
Managing the e-book metadata for the Austrian library consortium and also serving one library with DDA, I wish I had such a (relatively) unified system of record delivery that still allows you to make individual local settings for each library. Let me briefly describe my current workflow: For certain publishers or packages, we have agreements with German library networks that pre-process the metadata and offer it to other consortia who want to use it. Springer would be an example. But not all Springer packages are covered, so I also need to go to their portal, download records from there and customize them myself. In addition I have to set myself reminders each month for these tasks. The fact that we have these different sources of metadata means that different processing methods are involved for each of these sources – some elements (e.g. some shell scripts) are the same but on the whole there is no identical workflow for all the e-book metadata in our consortium.
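The split between sources in my workflow could be sketched roughly like this. All package names, functions and the “customization” step are invented placeholders; in reality some steps are shell scripts and some are manual monthly downloads:

```python
# Rough sketch of the per-source e-book record workflow described above.
# Names and steps are placeholders, not our actual scripts.

def preprocessed_by_network(package):
    """Records already prepared by a German library network."""
    return [f"{package}-rec-{n}" for n in range(2)]

def downloaded_from_portal(package):
    """Records fetched (and later customized) from the publisher portal."""
    return [f"{package}-raw-{n}" for n in range(2)]

def customize(record, library):
    """Shared step: apply one library's local settings to a record."""
    return f"{record}+{library}"

def monthly_run(library="lib-A"):
    records = []
    # Source 1: packages covered by a network agreement.
    for rec in preprocessed_by_network("springer-covered"):
        records.append(customize(rec, library))
    # Source 2: uncovered packages go through the manual portal path.
    for rec in downloaded_from_portal("springer-uncovered"):
        records.append(customize(rec, library))
    return records
```

The shared `customize` step stands in for the elements that are identical across sources (like those shell scripts), while the two loops stand in for the diverging per-source processing that keeps this from being one unified workflow.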
A few years ago, there was talk that the German National Library would offer a central metadata pool for e-books for the German-speaking library community, but unfortunately that never panned out. What I find very attractive about OCLC’s system is that you get automatically notified of new, updated or deleted records and can distribute them widely while at the same time allowing local customizations for each library.
In its recent installment, entitled “Curating the Analog, Curating the Digital”, Archives remixed, part of the Archive Journal, features two articles that might be of interest to librarians and especially catalogers:
- “All in the Family: a dinner table conversation about libraries, archives, data, and science” by sisters Kristen A. Yarmey (Digital Services Librarian) and Lynn A. Yarmey (Lead Data Curator) explores the relationships between libraries, archives, and data curation, covering topics like containers, content and context, metadata, and creators and users.
- “Disrespect des Fonds: Rethinking Arrangement and Description in Born-Digital Archives” by Jefferson Bailey looks at the question: “How will traditional principles of archival arrangement and description be challenged or modified to account for born-digital materials?”, outlining the shift from the linear narrative of a traditional finding aid to a dynamic system of multiple interrelationships of born-digital archival material.