
Exploring Computational Description

Woman at Main Reading Room card catalog in the Library of Congress. Delano, Jack, photographer. Washington, DC. 1930-1950.

Investigating how machine learning can help with cataloging.

The Library of Congress has recently concluded the first two phases (ECD1 and ECD2) of the Exploring Computational Description (ECD) experiment, which investigates how AI tools might help catalogers create metadata records. Catalog records are key to the discovery of and access to digital materials, and AI may provide an opportunity to support catalogers’ workflows. ECD1 and ECD2 employed user-centered research to help the Library examine which technologies, models, and workflow approaches provide the most promising support for cataloging. In these phases we tested promising AI models and systems while prototyping tools and workflows to assist catalogers. The primary audiences for the experiment are internal Library stakeholders and decision-makers. The lessons, evidence, and recommendations gathered in the experiment will inform the design and requirements of future infrastructure systems and programs of engagement.

Process

In ECD1 (2022-2023), approximately 120,000 ebook files in EPUB and PDF formats, mostly in English, were processed through five open-source AI models using several approaches to determine how well each model performed at predicting the required metadata. Existing MARC records for the ebooks served as ground truth: the models’ performance was assessed by comparing the ML-predicted metadata with the fields in the original MARC records. Library of Congress catalogers also manually reviewed the ML-generated subjects and author names, providing quality-assessment feedback to help refine the models. In ECD2 (2023-2024), several human-in-the-loop (HITL) prototypes were developed to suggest machine-generated terms to catalogers as they create records. Catalogers were asked to provide feedback about the utility of the prototypes and to identify needed refinements and enhancements in the data generation, authority term matching, and workflow interface.
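
To make the ECD1-style evaluation concrete, here is a minimal sketch, in Python, of comparing model-predicted metadata against fields taken from an existing MARC record and reporting per-field agreement. The field names, record values, and normalization rules are illustrative assumptions, not the experiment’s actual code or data.

```python
# A minimal sketch (not the ECD code) of comparing model-predicted metadata
# against MARC-derived ground truth and reporting per-field agreement.
# Field names and record values below are hypothetical placeholders.

predicted = {
    "title": "Exploring Computational Description",
    "author": "Doe, Jane",
    "isbn": "9780000000000",
    "subjects": ["Machine learning", "Cataloging"],
}
ground_truth = {
    "title": "Exploring computational description",
    "author": "Doe, Jane",
    "isbn": "9780000000000",
    "subjects": ["Cataloging", "Metadata"],
}

def normalize(value):
    """Light normalization so trivial differences don't count as mismatches."""
    if isinstance(value, list):
        return {normalize(v) for v in value}
    return " ".join(str(value).lower().split())

def field_agreement(pred, truth):
    """Per-field agreement between predicted metadata and the ground truth."""
    scores = {}
    for name, true_value in truth.items():
        default = [] if isinstance(true_value, list) else ""
        p, t = normalize(pred.get(name, default)), normalize(true_value)
        if isinstance(t, set):
            # Multi-valued fields (e.g. subjects): score the overlap.
            scores[name] = len(p & t) / len(t) if t else 0.0
        else:
            scores[name] = 1.0 if p == t else 0.0
    return scores

for name, score in field_agreement(predicted, ground_truth).items():
    print(f"{name}: {score:.0%}")
```

In the experiment itself such comparisons were run across the full ebook set and paired with the catalogers’ manual reviews described above.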

Lessons Learned

The first two phases of this experiment have helped us test machine learning models and approaches and consider a range of questions about the benefits, risks, and costs of each. They have also supported our progress toward quality benchmarks for automated records, which would underpin any potential adoption of this technology with large sets of Library of Congress digital materials. Additional lessons are listed below.

  • Because library catalog records must contain highly reliable and accurate metadata, no current AI tools are good enough to run fully automatically for this task. But this is the right time to prototype HITL processes to understand where in the cataloger’s workflow these tools could most usefully be implemented (a minimal sketch of such a review step follows this list).
  • The success of this experiment is highly dependent on the quality and robustness of the training data. Developing techniques and practices for creating balanced datasets for training and tuning models will be an ongoing effort.
  • AI tools are changing and updating at a rapid pace. It is therefore important to test new tools against benchmark datasets as they are released, to inform decisions about when to implement new models or approaches.
  • Certain fields, such as identifiers (LCCN, ISBN), author, and title, can be extracted at around 85% accuracy, while others, such as subject headings, genre, and dates, were generated accurately 50% of the time or less.
  • It is important to evaluate AI models and tools not just on performance but on practical and strategic considerations. These considerations include licensing, privacy, long-term reliability of models, and compute cost, in addition to ensuring that organizational regulations and principles are upheld while making use of automation.
  • The catalogers who volunteered to test the HITL prototypes were enthusiastic about the possibility of using AI tools in their workflows because they got a sense of how ML could augment and support their work.
  • Experimenting with the goal of understanding the pros and cons of new technologies like AI helps position the Library to adapt the policies, practices, and infrastructures that best support effective, practical, and responsible innovation for the benefit of our users and stakeholders.
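
As a concrete illustration of the HITL approach mentioned above, the sketch below shows a hypothetical review step in which machine-generated subject terms are presented to a cataloger, who accepts or rejects each one; accepted terms go into the record, and the decisions are retained as feedback for refining the models. The class and function names are assumptions for illustration, not the ECD prototypes themselves.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Suggestion:
    term: str                        # machine-generated subject term
    confidence: float                # model confidence score
    accepted: Optional[bool] = None  # cataloger decision, unset until reviewed

@dataclass
class ReviewSession:
    suggestions: list
    feedback: list = field(default_factory=list)

    def review(self, decide: Callable[[Suggestion], bool]):
        """Ask the cataloger (via `decide`) about each suggestion and log the decision."""
        for s in self.suggestions:
            s.accepted = decide(s)
            self.feedback.append(
                {"term": s.term, "confidence": s.confidence, "accepted": s.accepted}
            )
        return [s.term for s in self.suggestions if s.accepted]

# Illustrative machine-generated suggestions for a single record.
session = ReviewSession([
    Suggestion("Machine learning", 0.91),
    Suggestion("Card catalogs", 0.44),
])

# Stand-in for the cataloger's judgment; in a real prototype this would be
# an interactive choice in the workflow interface, not a confidence threshold.
accepted = session.review(lambda s: s.confidence >= 0.5)
print("Accepted subject terms:", accepted)
print("Feedback for model refinement:", session.feedback)
```

Keeping the accept/reject decisions alongside the model’s confidence scores is what allows this kind of feedback to flow back into model refinement and benchmarking.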

Next Steps

The next phase of the experiment (ECD3) will test the most promising HITL models and prototypes in workflows that more closely match the daily work of a cataloger, with the data output in BIBFRAME rather than MARC. More catalogers will be involved in testing and reviewing the data and prototypes, and the overall experiment outputs will be evaluated with the goal of developing requirements for an implementable HITL cataloging workflow.
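
For readers less familiar with the output formats, the sketch below illustrates roughly what describing a record as BIBFRAME RDF (rather than as a MARC record) can look like, using the rdflib library and a deliberately simplified slice of the BIBFRAME vocabulary; it is an assumption-laden illustration, not the ECD3 pipeline.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import RDFS

# Simplified BIBFRAME modeling: one Work with a title and a subject.
BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
EX = Namespace("http://example.org/")  # placeholder namespace for local identifiers

g = Graph()
g.bind("bf", BF)

work = EX["work/1"]
title = EX["work/1/title"]

g.add((work, RDF.type, BF.Work))
g.add((work, BF.title, title))
g.add((title, RDF.type, BF.Title))
g.add((title, BF.mainTitle, Literal("Exploring Computational Description")))

subject = EX["subject/machine-learning"]
g.add((work, BF.subject, subject))
g.add((subject, RDF.type, BF.Topic))
g.add((subject, RDFS.label, Literal("Machine learning")))

print(g.serialize(format="turtle"))
```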

Presentations and Reports

Thank you!

A big thank you to all the staff members who collaborated with us on ECD1 and ECD2!

Project Co-Leads

  • Abigail Potter, Office of the Chief Information Officer
  • Caroline Saccucci, Library Collections and Services Group

Questions

For more information or questions about ECD, please email LC-Labs@loc.gov.
