Guest post authored by Josh Lynn, Digital Media Specialist, Minneapolis Institute of Art
Original post published on LinkedIn Pulse
"... digital asset metadata cannot be represented by a
single, unchanging metadata model and schema. Data architects need to
embrace flexible models that allow metadata to vary widely across asset
types and that accommodate constant change."
– Demian Hess, “Managing Digital Asset Metadata,” Journal of Digital Media Management, Vol. 3, No. 2 (November 2014).

An inflection point occurred at the Minneapolis Institute of Art (Mia) when we were awarded an IMLS grant for our Enterprise Content Management (ECM) project. The goal was straightforward: create an interface for federated search across museum data systems, including the Digital Asset Management System (DAMS), the Collections Management System (the database for all works of art), and the Web Content Management System. In other words, “one-stop shopping” for all the great stuff being created across Mia’s various divisions and departments. As a digital asset management professional, I knew the task would be daunting. On the other hand, I was equally inspired by the opportunity to re-imagine a metadata model that had become bloated and inflexible over the years. I’m happy to report that Mia now has a shiny new ECM search interface in place, aptly named MetaMia (thanks, IMLS!). What’s more, Mia now has a flexible metadata model for feeding MetaMia healthy, lean metadata. Now that we’ve spoiled the suspense, here’s the back story.
Mia’s enterprise DAMS was implemented over a decade ago. As you can imagine, the system has seen a number of upgrades and the addition of a few metadata fields over the years. About five years ago, we established an API process to push metadata from our Collections Management System into the DAMS, specifically for images of collections objects (that’s the “art,” my friend). This process also feeds DAMS images to the museum’s website, artsmia.org. The API sync was a start, but it left 60% of our digital assets in relative seclusion within the DAMS. Meanwhile, the DAMS has become home to over 180,000 assets, growing by 27,287 assets year-to-date. Our metadata model had also grown... to over 250 unique fields, many of which had become deprecated, duplicated, or disused. Additionally, swaths of our fields were like desert islands: even where ISO metadata schemas existed out in the world, XMP mapping had not been consistently configured within our DAMS.
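For readers who like to see the plumbing, here is a minimal sketch of what a sync routine like ours might look like. The endpoints, field names, and mapping below are illustrative placeholders, not Mia’s actual systems or specification:

```python
# A simplified sync sketch: pull an object record from a collections
# management system (CMS) API and push a minimal field set to a DAMS.
# The URLs and field names are hypothetical, for illustration only.
import requests

CMS_API = "https://cms.example.org/api/objects"    # placeholder CMS endpoint
DAMS_API = "https://dams.example.org/api/assets"   # placeholder DAMS endpoint

def sync_object(object_id: str) -> None:
    # Fetch the authoritative record for a collection object.
    record = requests.get(f"{CMS_API}/{object_id}", timeout=30).json()

    # Map CMS fields onto DAMS metadata fields (illustrative crosswalk).
    payload = {
        "dc:title": record.get("title"),
        "dc:creator": record.get("artist"),
        "dc:date": record.get("dated"),
        "dc:identifier": record.get("accession_number"),
    }

    # Update the matching DAMS asset so downstream systems (like the
    # website) pick up the current data.
    resp = requests.patch(f"{DAMS_API}/{object_id}", json=payload, timeout=30)
    resp.raise_for_status()
```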
But that’s not all! The search index was bogged down with all of this deprecated, duplicated, and disused metadata, ensuring less-than-relevant search results for our users. You could call it metadata bloat. That bloat was making our metadata model increasingly inflexible, degrading system performance, and confounding the user experience. Indeed, the tangled underbrush of metadata was threatening the very foundation of our DAMS. So, with a generously funded IMLS grant in place (along with a ticking clock, limited staff, and a lot of work on our hands), we set out to create a new metadata specification named Mia Core.
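Spotting that bloat doesn’t require anything fancy. A back-of-the-envelope audit, assuming you can export a simple field-usage report from your DAMS, might look like this (the CSV format and threshold are assumptions for illustration):

```python
# Flag rarely populated metadata fields as candidates for deprecation.
# Expects a CSV export with columns: field, populated_count.
import csv

TOTAL_ASSETS = 180_000   # approximate asset count in the DAMS
THRESHOLD = 0.01         # flag fields populated on fewer than 1% of assets

def audit(usage_csv: str) -> list[str]:
    candidates = []
    with open(usage_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            fill_rate = int(row["populated_count"]) / TOTAL_ASSETS
            if fill_rate < THRESHOLD:
                candidates.append(row["field"])
    return candidates
```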
The Mia Core metadata makeover began with some serious reflection. First, we didn’t want to get into the same situation again. Ever. Therefore, a flexible metadata model was a necessity. The ability to painlessly add and remove schemas future-proofs Mia Core for the changes we’re certain will occur, but cannot predict. Next, we resolved to keep it simple: more Library Science, less Rocket Science.
Our new, flexible metadata model makes it easier for system administrators to maintain and develop the DAMS; it also makes it easier for catalogers, creatives, and general staff to contribute to, search, and retrieve from the DAMS. We even assigned easy-to-understand display labels for our users, for example “Keywords” rather than “DC:Subject”.
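One way to picture that flexibility is to treat the field list as data rather than hard-coding it: each entry names a standard namespace, the underlying property, and the friendly label users actually see. The entries below are illustrative examples, not the full Mia Core specification:

```python
# A sketch of a field registry: adding or removing a schema element
# means editing this table, not rebuilding the system. Entries are
# illustrative, not the actual Mia Core field list.
MIA_CORE_FIELDS = [
    {"namespace": "dc",        "property": "subject",    "label": "Keywords"},
    {"namespace": "dc",        "property": "title",      "label": "Title"},
    {"namespace": "dc",        "property": "creator",    "label": "Creator"},
    {"namespace": "xmpRights", "property": "UsageTerms", "label": "Rights"},
    {"namespace": "mia",       "property": "embargo",    "label": "Embargo"},  # hypothetical local field
]

def label_for(namespace: str, prop: str) -> str:
    """Look up the display label shown to catalogers and general staff."""
    for field in MIA_CORE_FIELDS:
        if field["namespace"] == namespace and field["property"] == prop:
            return field["label"]
    return f"{namespace}:{prop}"   # fall back to the technical name
```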
We also resolved to keep Mia Core (mostly) standards-based. By embracing recommendations from the Federal Agencies Digitization Guidelines Initiative, we quickly established a simplified set of minimum embedded metadata (based on Dublin Core) for all asset types, including audio, design, photo, and video. Because no single metadata standard can do it all, we then mapped in select schemas from various standards and specifications, including IPTC, Creative Commons, PLUS, VRA Core, XMP, and more. This standards-based approach ensures the interoperability of our metadata, from desktop to DAMS and back again. Naturally, we have also created a small set of local fields for workflow processes, including approvals, embargo, and quality control. Furthermore, we’ve added fields for expanded API data sync and publication routines, including ArtStories. After all, there isn’t a standard for everything, and we’ve got creative work to do. And, because XMP is a part of Mia Core, metadata flows seamlessly throughout workflows, from desktop to DAMS to the wider ECM system.
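As a concrete example of that desktop-to-DAMS flow, embedded XMP can be written with an off-the-shelf tool such as ExifTool; the sketch below shows one way to stamp a Dublin Core minimum set into a file before ingest. This is not our production workflow, and the field values are placeholders:

```python
# Embed a Dublin Core minimum set as XMP so it travels with the file.
# Requires the ExifTool command-line utility to be installed.
import subprocess

def embed_minimum_metadata(path: str, title: str, creator: str, keywords: list[str]) -> None:
    cmd = [
        "exiftool",
        "-overwrite_original",
        f"-XMP-dc:Title={title}",
        f"-XMP-dc:Creator={creator}",
    ]
    # dc:subject is a bag of keywords; += appends one value per argument.
    cmd += [f"-XMP-dc:Subject+={kw}" for kw in keywords]
    subprocess.run(cmd + [path], check=True)

# Example (placeholder values):
# embed_minimum_metadata("scan_001.tif", "Gallery install view",
#                        "Media and Technology", ["installation", "2015"])
```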
The work we’ve done over the past few years has reestablished a strong foundation for our DAMS and ECM system alike. Freed from an outdated metadata model, we are now positioned for future growth. In the meantime, we’ll keep working to ensure Mia Core remains our Goldilocks metadata model: not too big, not too small, but just right. And because Mia Core is system agnostic, it can be applied to enterprise and open-source DAMS alike. After all, it’s good to have options. It is also good to share, and we’re keen to share what we’ve learned with the LAM (Libraries, Archives, Museums) community and beyond.
If you would like to hear more about our project, my colleague Frances Lloyd-Baynes and I will be presenting at the upcoming Henry Stewart DAM Chicago 2015 event in September, where we’ll be sharing our work and its results. Note: As a reader of this post, you can register for that conference at a discount by using the code MINNEAPOLIS100. We hope to see you there.