I was just watching OCLC’s recent presentation on their Next Generation Metadata Management which includes an interesting overview (YouTube) by VIVA, the Virtual Library of Virginia, that coordinates the collection management and resource sharing of online resources in a consortial environment.
Managing the e-book metadata for the Austrian library consortium and also serving one library with DDA, I wish I had such a (relatively) unified system of record delivery that still allows you to make individual local settings for each library. Let me briefly describe my current workflow: For certain publishers or packages, we have agreements with German library networks that pre-process the metadata and offer it to other consortia who want to use it. Springer would be an example. But not all Springer packages are covered, so I also need to go to their portal, download records from there and customize them myself. In addition, I have to set myself reminders each month for these tasks. The fact that we have these different sources of metadata means that different processing methods are involved for each of these sources – some elements (e.g. some shell scripts) are the same, but on the whole there is no single, identical workflow for all the e-book metadata in our consortium.
A few years ago, there was talk that the German National Library would offer a central metadata pool for e-books for the German-speaking library community, but unfortunately that never panned out. What I find very attractive about OCLC’s system is that you get automatically notified of new, updated or deleted records and can distribute them widely while at the same time have local customizations.
Library practices of bibliographic description have so far taken for granted the stability of the book. In the future, we might have to deal with describing versioning, forking and remixing. The article “Forking the book” argues that dynamic content will become possible. As an example, it highlights a tool that lets you edit EPUB with Git as a backend. “[W]ith this demo we are using GIT with a book so you can clone, edit, fork and merge the book into infinite versions.” There is already a platform for remixing books, BookRiff, which has not yet gained wide acceptance but which is slated to enable the kind of forking the article talks about.
Data modeling has to be aware of developments in the creation of the objects it primarily describes and makes discoverable. Borrowing expressions from the print paradigm, the forked book is comparable to a kind of “bound with”, multi-work constellation, but more complicated since only parts of works might be used, different versions might be created and licensing information would have to be noted. I guess Bibframe will be able to accommodate these versions and remixes, but that would mean that the statement in the November Bibframe report, “Each BIBFRAME Instance is an instance of one and only one BIBFRAME Work”, will not hold, because, as I see it, the instance (the remixed/forked book) would be in a relationship with two or more works.
Slides of the NISO Forum “The e-book renaissance, part II” (held in October 2012) have been made available at http://www.niso.org/news/events/2012/ebooks/agenda/.
I found Suzanne M. Ward’s presentation, “The Ideal E-Book World: An Academic Librarian’s Dream”, particularly interesting.
Library patrons clamor for e-books. Librarians are ready and willing to provide their patrons with access to e-books whenever possible, but publishers don’t always make it easy. […] A collection manager from a large research library will discuss the current e-book landscape from a librarian’s perspective and suggest feature and service improvements to enable libraries and publishers to benefit and meet evolving user demands while remaining flexible in the new era of publishing, acquisitions and scholarly collection development. […]
From the limited experience I’ve gained so far managing e-book metadata, these two items are on top of my own personal wishlist:
1 – More stability of access (i.e., not revoking the rights for over a hundred titles and making libraries delete their records and holdings, only to announce about two weeks later that access has been restored…)
2 – Better management of deletes, i.e. a separate, reliable file containing only the records that have to be deleted, rather than mixing them into one big file together with changed and new records.
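When deletes do arrive mixed into one big file, the only recourse is to split them out yourself. A minimal sketch of that step, assuming raw binary MARC21 input with the standard 0x1D record terminator, using Leader position 05 (record status, where “d” marks a deleted record) as the signal:

```python
# Split a mixed MARC21 file into deletes vs. new/changed records,
# based on Leader/05 (record status; 'd' = deleted).
# Assumes raw binary MARC21 with the standard 0x1D record terminator.

RECORD_TERMINATOR = b"\x1d"

def split_by_status(marc_data: bytes):
    """Return (deletes, others) as two lists of raw MARC records."""
    deletes, others = [], []
    for chunk in marc_data.split(RECORD_TERMINATOR):
        if not chunk:
            continue  # trailing empty chunk after the last terminator
        record = chunk + RECORD_TERMINATOR  # restore the terminator
        status = record[5:6]  # Leader/05: record status byte
        (deletes if status == b"d" else others).append(record)
    return deletes, others
```

This only works, of course, if the vendor actually sets Leader/05 correctly in its delete records – which is exactly why a separate, reliable delete file would be preferable.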
The article “Cataloging Then, Now, and Tomorrow” cites three trends in cataloging: “the increasing reliance on vendor-supplied records and services, the explosion of electronic resources, and the growing interrelatedness of local library catalogs with systems outside the library.”
Well, I’m excited about getting to address the first two at work. I was able to slightly shift the focus of my role and am now one of two people responsible for managing the automated cataloging of vendor/publisher-supplied ebook data. After retrieving the data packages via FTP, we run shell scripts to modify them according to our needs, to load them into the ILS and to create holdings. There are some plans to support our consortium members with patron-driven acquisitions, and I’ll be involved in that project, too.
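The retrieval step of a workflow like this can be sketched in a few lines. This is a hypothetical illustration, not our actual scripts: host, credentials, directory names and file names are all placeholders, and it uses Python's stdlib ftplib rather than the shell tooling mentioned above.

```python
# Sketch: fetch vendor-supplied record files via FTP, skipping files
# already downloaded, before handing them to local processing scripts.
from ftplib import FTP
from pathlib import Path

def new_files(remote_names, local_dir):
    """Return the remote file names not yet present in local_dir."""
    existing = {p.name for p in Path(local_dir).glob("*")}
    return [n for n in remote_names if n not in existing]

def fetch_vendor_files(host, user, password, remote_dir, local_dir):
    """Download new files from remote_dir to local_dir; return local paths."""
    local = Path(local_dir)
    local.mkdir(parents=True, exist_ok=True)
    downloaded = []
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        for name in new_files(ftp.nlst(), local):
            target = local / name
            with open(target, "wb") as fh:
                ftp.retrbinary(f"RETR {name}", fh.write)
            downloaded.append(target)
    return downloaded

# Each downloaded file would then go through the local modification
# scripts before being loaded into the ILS, e.g. via
# subprocess.run(["./prepare_records.sh", str(path)], check=True)
```

Tracking which files have already been fetched is what makes the monthly reminder routine somewhat less error-prone.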
So I now have a foot in both worlds – the traditional cataloging world of one item (or one card) at a time, and the world of using “power tools” to manipulate large quantities of metadata without having to touch each record.
What can be said about the importance and impact of metadata for Patron-Driven Acquisitions (PDA) of ebooks in higher education institutions? The 22-page report (PDF) for UK’s JISC, “Patron Driven Acquisitions (PDA) and the role of metadata in the discovery, selection and acquisition of ebooks” (Ken Chad Consulting, Dec. 2011), addresses this question. “This project investigated end user’s motivation for selecting an e-book and the role that metadata plays in the discovery, selection and eventual acquisition process.” Discovery services, unique identifiers, vendor metadata quality and social metadata are some of the buzzwords the report elaborates on. The project wiki has more details about the case studies, stakeholder interviews and other methods that were used to gather information. It also offers a number of additional resources and a very useful synthesis that brings together many of the points of the report in a concise way.
Some people wonder why, with full-text search available, an ebook might still need an index. If you happen to be one of them, go read “Missing Entry: Whither the eBook Index?” ;). This article is a great summary of the value of indexes (even or especially for books in electronic form) and gives examples (with nice illustrations!) of what enhanced indexes might look like. Indexes with enhanced functionality can be much more interactive and appealing to the user than pure lists of words with a page indication.
Just like subject cataloging, indexes offer a value that cannot be replaced by full-text search. They chart a structured map of the content, show paths into the information, expose relationships and go beyond pure search (which just pulls up instances of terms) in that content is analyzed and arranged meaningfully.
Experienced indexer Jan Wright points out in a fascinating podcast on ebook indexing that an index is a discovery feature just like other metadata. She says: “The more tools for getting into information readers are given, the happier they will be.”
The potential of what ebooks can be (beyond static representations of regular print books) has not been tapped yet – indexes are only one example. We’ll just have to wait for the EPUB specification to recognize the importance of indexes and address them explicitly, and for publishers to incorporate smarter indexes into their products.
Imagine a user wants to read a public-domain book in electronic form. She’d be faced with the same situation as users before the advent of unified resource discovery systems – she has to go to various places on the web and do separate searches. Wouldn’t it be nice if there were a meta catalog for digitized works that brings together data from the likes of the Internet Archive, HathiTrust, Project Gutenberg, Europeana or Google Books? It could show what books were digitized by whom, whether they are downloadable, in what format, on what devices they can be read, etc. Such a directory could also enable users to compare quality when the same work is available in different versions. Another benefit would be the reduction of duplicated effort. Having duplicate electronic versions is not necessarily bad, but are time and money not better spent on unique materials not digitized elsewhere? Local priorities could be determined on a more informed basis.
All of this occurred to me while reading an article about the eBooks-on-Demand (EOD) service discovery platform (from p. 229 here, in German). EOD is a joint initiative of over 30 libraries from 12 European countries that each run their own digitization activities. Together they offer the (paid) service that lets users order a public-domain book to be digitized and delivered as an ebook. Instead of relying on users discovering EOD books “by chance” in the respective libraries’ catalogs, a VuFind search interface was built that allows finding books for digitization from all participating libraries in one central place and gives direct access to already digitized items. Records are ingested via OAI or FTP batch upload. For the future the project team plans to enhance the search platform to include links (via API queries of players like those I mentioned above) to works already digitized elsewhere. And this is where the idea of a central overarching catalog for digitized public-domain works popped up. Existing portals such as the Zentrales Verzeichnis digitalisierter Drucke (ZVDD, central catalog of digitized printed works, which covers digital versions created in Germany) go in the right direction, but we definitely have to think more globally and on a larger scale.
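To give a flavor of what such an API lookup might look like: the sketch below targets the HathiTrust Bib API as one example of the players mentioned above. The OCLC number is a made-up placeholder, and the response shape shown in the helper is an assumption about the API's brief JSON format, not a verified contract.

```python
# Sketch: ask an external catalog API whether a work has already been
# digitized, before offering it for (paid) digitization via EOD.
# Example target: the HathiTrust Bib API; response shape is assumed.

BIB_API = "https://catalog.hathitrust.org/api/volumes/brief/{idtype}/{value}.json"

def bib_url(idtype, value):
    """Build the lookup URL for an identifier type ('oclc', 'isbn', ...)."""
    return BIB_API.format(idtype=idtype, value=value)

def already_digitized(response: dict) -> bool:
    """True if the brief response lists at least one digitized item."""
    return bool(response.get("items"))

# Live use (requires network access) would look roughly like:
#   import json
#   from urllib.request import urlopen
#   with urlopen(bib_url("oclc", "424023")) as r:   # hypothetical OCLC no.
#       print(already_digitized(json.load(r)))
```

A central meta catalog would make per-source lookups like this unnecessary – one query instead of one per digitizing institution.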