The Getty Search Gateway

The J. Paul Getty Trust, consisting of the J. Paul Getty Museum, the Getty Research Institute, the Getty Conservation Institute and the Getty Foundation, has recently launched an exciting new portal, the Getty Search Gateway (see also the press release). It lets you search and browse the collection database, the library catalog, collection inventories and archival finding aids, as well as the digital collections, all at once, and filter the results using facets. It caught my attention especially because of its similarity to library discovery layers: it provides a convenient way to search across collections holding a variety of resource formats. Mike Clardy, Assistant Director, Information Systems / Information Technology Services at the Getty, who wrote a blog post introducing the new research tool, and Joe Shubitowski, Head, Library Information Systems, were kind enough to answer my questions and share some details about the development and the underlying structure, which I’ll paraphrase here.

As you may have guessed, the search gateway was built using the Solr / Lucene search engine. The objective was to bring together a number of sources and formats under one umbrella, which is why the schema definition had to be flexible enough to support the wide variety of contributing sources. Generally, fields in Solr are strongly typed, i.e. every field in the schema is defined to be of a certain type, with specifications about its intended use. But as I learned while reading up on the Solr schema, Solr also offers ways to create fields dynamically, without their being pre-defined or explicitly named: with <dynamicField> declarations, you can write rules that tell the application what to do with any field whose name matches a certain pattern, what data type to use, and so on.
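
To make that a bit more concrete, here is a minimal sketch of what such declarations look like in a Solr schema.xml. The field names and suffix conventions are my own illustration (along the lines of Solr's example schema), not the Getty's actual configuration:

    <!-- An explicitly declared, strongly typed field -->
    <field name="id" type="string" indexed="true" stored="true" required="true"/>

    <!-- Dynamic field rules: any field whose name matches a pattern is
         accepted without being declared individually -->
    <dynamicField name="*_t"     type="text_general" indexed="true" stored="true"/>
    <dynamicField name="*_facet" type="string"       indexed="true" stored="true" multiValued="true"/>

A contributor can then simply send, say, creator_t or material_facet, and Solr knows from the matching rule how to handle each of them.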

In the case of the Getty Search Gateway, this makes it possible for every source contributor to decide which fields to include in the index, which fields to display (and in what order) and how to label them. More specifically, the Solr schema developed by the Getty staff contains very few required fields, very few mapped fields that all data sources have to map to, and dynamic fields that any source can use to index and display its holdings. A single incoming field may get copied into several different Solr fields, each with different options for searching, sorting, faceting or display. This approach to aggregating museum and library data provides some major facets to pivot on, but also gives each data contributor the freedom to export, index and display the data elements they deem most important. Custom XSL transformations were written for every data source.
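
As a rough sketch of how this might be wired up (again with hypothetical field names, not the Getty's actual schema): a source-specific XSLT emits field names that match the dynamic patterns, and <copyField> rules duplicate a value into fields with different options, for instance a searchable text field, a string field used for faceting, and a catch-all field for keyword queries:

    <!-- Catch-all field for general keyword searching -->
    <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>

    <!-- One incoming value, indexed in several ways -->
    <copyField source="creator_t" dest="creator_facet"/>
    <copyField source="*_t"       dest="text"/>

Each data source then only has to decide which of its elements to expose under which names; the shared schema itself stays small and generic.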

The possibility for each source to specify its own options is very powerful and has great potential for other applications. The Solr schema is cleverly exploited in the design of this implementation. I wasn’t previously aware of these possibilities in Solr and really appreciate the chance to understand its inner workings a bit better.
