Journal Cover
Code4Lib Journal

  This is an Open Access journal
ISSN (Online) 1940-5758
Published by Code4lib
  • Editorial

    • Authors: Sara Amato
      Abstract: If you build it, I'll probably come.
      PubDate: Wed, 06 Nov 2019 00:21:59 +000
  • MatchMarc: A Google Sheets Add-on that uses the WorldCat Search API

    • Authors: Michelle Suranofsky; Lisa McColl
      Abstract: Lehigh University Libraries has developed a new tool for querying WorldCat using the WorldCat Search API. The tool is a Google Sheets Add-on and is available now via the Google Sheets Add-ons menu under the name “MatchMarc.” The add-on is easily customizable, with no knowledge of coding needed. The tool will return a single “best” OCLC record number and its bibliographic information for a given ISBN or LCCN, allowing the user to set up and define “best.” Because all of the information (the input, the criteria, and the results) exists in the Google Sheets environment, efficient workflows can be developed from this flexible starting point. This article will discuss the development of the add-on, how it works, and future plans for development.
      PubDate: Wed, 06 Nov 2019 00:21:58 +000
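The add-on's lookup step can be pictured as a single REST call. The Python sketch below is illustrative only (the add-on itself runs in Apps Script, and the API key here is a placeholder); it builds a WorldCat Search API ISBN-lookup URL following the API's published URL pattern:

```python
from urllib.parse import quote

# Base endpoint for single-record content lookup by ISBN in the
# WorldCat Search API; the wskey value below is a placeholder.
BASE = "http://www.worldcat.org/webservices/catalog/content/isbn"

def worldcat_isbn_url(isbn: str, wskey: str, service_level: str = "full") -> str:
    """Build a WorldCat Search API lookup URL for a single ISBN."""
    return f"{BASE}/{quote(isbn)}?servicelevel={service_level}&wskey={quote(wskey)}"
```

The response is a MARCXML record, which the add-on would then score against the user's definition of "best."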
  • Designing Shareable Tags: Using Google Tag Manager to Share Code

    • Authors: Tabatha Farney
      Abstract: Sharing code between libraries is not a new phenomenon, and neither is Google Tag Manager (GTM). GTM launched in 2012 as a JavaScript and HTML manager intended to ease the implementation of analytics trackers and marketing scripts on a website. However, its tag system can also be used to load other code onto a website. Exporting and importing tags is a simple process, facilitating code sharing without requiring a high degree of coding experience. The entire process involves creating the script tag in GTM, exporting the GTM content into a shareable export file for someone else to import into their library’s GTM container, and finally publishing that imported file to push the code to the website it was designed for. This case study provides an example of designing and sharing a GTM container loaded with advanced Google Analytics configurations, such as event tracking and custom dimensions, for other libraries using the Summon discovery service. It also discusses processes for designing GTM tags for export and best practices for importing and testing GTM content created by other libraries, and concludes by evaluating the pros and cons of encouraging GTM use.
      PubDate: Wed, 06 Nov 2019 00:21:57 +000
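The export/import cycle operates on the container's JSON export file. As a rough sketch (the `containerVersion`/`tag` keys mirror the real export format, but a real importer must also carry over triggers, variables, and handle ID collisions), merging a shared tag into an export could look like this:

```python
import json

def add_shared_tag(container_json: str, shared_tag: dict) -> str:
    """Append a shared tag to a GTM container export's tag list."""
    container = json.loads(container_json)
    tags = container.setdefault("containerVersion", {}).setdefault("tag", [])
    tags.append(shared_tag)
    return json.dumps(container)

# Minimal invented export with an empty tag list.
export = '{"containerVersion": {"tag": []}}'
merged = add_shared_tag(export, {"name": "Summon event tracking", "type": "html"})
```

In practice GTM's own import UI performs this merge; the point is that a container export is plain JSON, which is what makes tags shareable between libraries.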
  • Reporting from the Archives: Better Archival Migration Outcomes with
           Python and the Google Sheets API

    • Authors: David W. Hodges; Kevin Schlottmann
      Abstract: Columbia University Libraries recently embarked on a multi-phase project to migrate nearly 4,000 records describing over 70,000 linear feet of archival material from disparate sources and formats into ArchivesSpace. This paper discusses tools and methods brought to bear in Phase 2 of this project, which required us to look closely at how to integrate a large number of legacy finding aids into the new system and merge descriptive data that had diverged in myriad ways. Using Python, XSLT, and a widely available if underappreciated resource—the Google Sheets API—archival and technical library staff devised ways to efficiently report data from different sources and present it in an accessible, user-friendly way. Responses were then fed back into automated data remediation processes to keep the migration project on track and minimize manual intervention. The scripts and processes developed proved very effective and, moreover, show promise well beyond the ArchivesSpace migration. This paper describes the Python/XSLT/Sheets API processes developed and how they opened a path to move beyond CSV-based reporting with flexible, ad-hoc data interfaces easily adaptable to a variety of purposes.
      PubDate: Wed, 06 Nov 2019 00:21:56 +000
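One deterministic piece of such a reporting workflow is shaping rows for the Sheets API. The hypothetical helper below (the field names and record data are invented, not from the paper) builds the request body consumed by the API's `values.update` method:

```python
def rows_to_sheets_body(records, columns):
    """Build a Sheets API values payload: header row plus one row per record."""
    values = [list(columns)]  # first row is the column headers
    for rec in records:
        values.append([rec.get(col, "") for col in columns])
    return {"values": values}

# Example: two-column report of archival records.
body = rows_to_sheets_body(
    [{"id": "ldpd_001", "title": "Papers, 1900-1950"}],
    ["id", "title"],
)
```

The resulting dict is what a `spreadsheets().values().update(...)` call would send, replacing a CSV hand-off with a live, shareable sheet.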
  • Natural Language Processing in the Humanities: A Case Study in Automated
           Metadata Enhancement

    • Authors: Erin Wolfe
      Abstract: The Black Book Interactive Project at the University of Kansas (KU) is developing an expanded corpus of novels by African American authors, with an emphasis on lesser known writers and a goal of expanding research in this field. Using a custom metadata schema with an emphasis on race-related elements, each novel is analyzed for a variety of elements such as literary style, targeted content analysis, historical context, and other areas. Librarians at KU have worked to develop a variety of computational text analysis processes designed to assist with specific aspects of this metadata collection, including text mining and natural language processing, automated subject extraction based on word sense disambiguation, harvesting data from Wikidata, and other actions.
      PubDate: Wed, 06 Nov 2019 00:21:55 +000
  • “With One Heart”: Agile approaches for developing Concordia and
           crowdsourcing at the Library of Congress

    • Authors: Meghan Ferriter; Kate Zwaard, Elaine Kamlley, Rosie Storey, Chris Adams, Lauren Algee, Victoria Van Hyning, Jamie Bresner, Abigail Potter, Eileen Jakeway, David Brunton
      Abstract: In October 2018, the Library of Congress launched its crowdsourcing program By the People. The program is built on Concordia, a transcription and tagging tool developed to power crowdsourced transcription projects. Concordia is open source software designed and developed iteratively at the Library of Congress using Agile methodology and user-centered design. Applying Agile principles allowed us to create a viable product while simultaneously pushing at the boundaries of capability, capacity, and customer satisfaction. In this article, we share more about the process of designing and developing Concordia, including our goals, constraints, successes, and next steps.
      PubDate: Wed, 06 Nov 2019 00:21:54 +000
  • Talking Portraits in the Library: Building Interactive Exhibits with an
           Augmented Reality App

    • Authors: Brandon Patterson
      Abstract: With funding from multiple sources, an augmented reality application was developed and tested by researchers to increase interactivity in a library exhibit. The study found that integrating augmented reality into the exhibit resulted in increased engagement and improved levels of self-reported enjoyment. The article details the project from start to finish, describing the methodology used, the creation of the application, the user experience methods employed, and future considerations for development. It highlights the software used to develop 3D objects, how to overlay them onto existing exhibit images, and how interactivity was added through movement and audio/video syncing.
      PubDate: Wed, 06 Nov 2019 00:21:53 +000
  • Factor Analysis For Librarians in R

    • Authors: Michael Carlozzi
      Abstract: This paper offers a primer in the programming language R for library staff members to perform factor analysis. It presents a brief overview of factor analysis and walks users through the process, from downloading the software (RStudio) to performing the actual analysis. It includes limitations and cautions against improper use.
      PubDate: Wed, 06 Nov 2019 00:21:52 +000
  • Editorial: A modest proposal for the mitigation of impostor syndrome

    • Authors: Eric Hanson
      Abstract: Thoughts on impostor syndrome and participation in the Code4Lib community
      PubDate: Fri, 09 Aug 2019 20:11:59 +000
  • Digitization Selection Criteria as Anti-Racist Action

    • Authors: S. L. Ziegler
      Abstract: By deciding what to digitize in special collections and archives, we choose what narratives to promote, what history to highlight, and what legacies to further. This paper details a new initiative at LSU Libraries to integrate diversity and inclusion goals into digitization policies. After reviewing examples of how digitization can be either beneficial or harmful to individuals represented in the historical record, the author uses Ibram Kendi's definition of racist policy -- that which leads to racial inequalities -- as a starting point for exploring how digitization selection can help counteract histories of exclusion.
      PubDate: Fri, 09 Aug 2019 20:11:58 +000
  • Create Efficient, Platform-neutral, Web-Based Augmented Reality Content in
           the Library

    • Authors: Dan Lou
      Abstract: Augmented reality (AR) is an interactive experience in which computer-generated objects are overlaid on a view of the real world. Since the Pokemon Go craze in 2016, many libraries have tested the waters with AR programs. Some went on to the next step of developing their own AR content to enhance library services and marketing. While there are many AR applications libraries can use for this purpose, users are often frustrated that they must install various AR mobile apps in order to enjoy these experiences on their own devices. This is a major hurdle to making AR more enjoyable and accessible at the library. What's more, libraries cannot easily share home-grown AR content across platforms because of the technical barriers between various AR systems. In this article, I introduce a completely open-source AR development tool that allows library staff to create fast and efficient AR content with pure web solutions. It is standards-based and works on mobile devices with no installation required. I created a basic AR experience with the tool for a regional Pacific Library Partnership conference, and it proved to be a success in improving the accessibility and shareability of AR content.
      PubDate: Fri, 09 Aug 2019 20:11:57 +000
  • Programming Poetry: Using a Poem Printer and Web Programming to Build
           Vandal Poem of the Day

    • Authors: Evan Williamson; Devin Becker
      Abstract: Vandal Poem of the Day (VPOD) is a public poetry initiative led by the Center for Digital Inquiry and Learning (CDIL) at the University of Idaho Library. For four academic years VPOD has published contemporary poems daily in collaboration with award-winning poetry presses and journals. This article details the project's genesis and history, focusing on two aspects of the project: 1) the customized WordPress site, CSS, and plugins that enable the layout, publication, and social media promotion of the poetry and 2) the innovative means we have developed for promoting the site using receipt printers. The latter portion includes details and code related to two different physical computing projects that use receipt printers--one using a Raspberry Pi and the other using a recycled library circulation printer-- to print individual VPOD poems on demand.
      PubDate: Fri, 09 Aug 2019 20:11:56 +000
  • Developing Sinopia’s Linked-Data Editor with React and Redux

    • Authors: Jeremy Nelson
      Abstract: An important software product for phase 2 of the Linked Data for Production grant from the Mellon Foundation was the creation of a linked-data editor that professional cataloging staff would use to create original RDF descriptions of their collections. Using the BIBFRAME Editor from the Library of Congress as inspiration, the Stanford University Library-based software development team is actively building a React/Redux linked-data editor for use by a cohort of national, academic, and special libraries. React and Redux are a very popular combination for front-end JavaScript applications, and this article explains how they are used in the editor's implementation of a domain-specific language (DSL) called Profiles, containing one or more resource templates that specify an HTML form-based user interface for cataloging with RDF.
      PubDate: Fri, 09 Aug 2019 20:11:55 +000
  • Building an institutional author search tool

    • Authors: David Forero; Nick Peterson, Andrew Hamilton
      Abstract: The ability to collect time-specific lists of faculty publications has become increasingly important for academic departments. At OHSU, publication lists had been retrieved manually by a librarian who conducted literature searches in bibliographic databases. These searches were complicated and time consuming, and the results were large and difficult to assess for accuracy. The OHSU Library has built an open web page that allows novices to make very sophisticated institution-specific queries. The tool frees up library staff, provides users with an easy way of retrieving reliable local publication information from PubMed, and gives more sophisticated users a strong jumping-off point from which to modify the algorithm or dive into the data to better understand its nuances.
      PubDate: Fri, 09 Aug 2019 20:11:53 +000
  • Design reusable SHACL shapes and implement a linked data validation

    • Authors: Emidio Stani
      Abstract: In July 2017, the W3C published SHACL as the standard for validating RDF. Since then, data modellers have been able to provide validation services based on SHACL shapes together with their models; however, there are considerations to be taken into account when creating them. This paper aims to list such considerations and shows an example of a validation pipeline that addresses them.
      PubDate: Fri, 09 Aug 2019 20:11:52 +000
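A concrete sense of what a reusable shape looks like may help. The Turtle fragment below is a minimal, invented example (the `ex:` namespace and `Document` class are placeholders, not from the paper): a node shape requiring exactly one string-valued `dct:title` on each target.

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix ex:  <http://example.org/shapes#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# A reusable node shape: every ex:Document must carry exactly one dct:title.
ex:DocumentShape
    a sh:NodeShape ;
    sh:targetClass ex:Document ;
    sh:property [
        sh:path dct:title ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] .
```

A SHACL validator run against RDF data with this shape produces a validation report listing every node that violates the constraint.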
  • Consortial Implementation and Faceted Search for
           Reuse Rights in Digital Library Materials

    • Authors: Wilhelmina Randtke; Randy Fischer, Gail Lewis
      Abstract: The Florida Academic Library Services Cooperative (FALSC) makes digital library hosting available free of charge to all institutions of Florida public higher education. Twenty-one institutions participate in the Islandora digital library platform hosted through FALSC. Centralized digital library hosting through FALSC, or its predecessor consortium, has been available since 1994. Meanwhile, the RightsStatements.org standard, which provides a controlled vocabulary for indicating the copyright status of digital library material, was released in 2016. After the standard was released, participating libraries expressed interest in implementing it for existing digital content. During Fall 2018 and Spring 2019, FALSC implemented RightsStatements.org values on Islandora sites. This article describes the process undertaken by FALSC, the lessons learned, and recommendations for libraries looking to implement these values.
      PubDate: Fri, 09 Aug 2019 20:11:51 +000
  • Generating Geographic Terms for Streaming Videos Using Python: A
           Comparative Analysis

    • Authors: Patrick Harrington
      Abstract: In libraries, the relationship between textual descriptions of audiovisual material and access to that material is a primary concern, as users expect to have access to all the library’s resources—which increasingly include audiovisual content—through a simple and effective web interface. At UW-Oshkosh, library staff developed a unique site for its streaming video collection that would allow users to search for videos and browse collections on particular topics across each of the three vendors. In order to create more meaningful and topical collections, various programming tools and techniques were employed to identify geographical locations in vendor-supplied MARC records. This article describes three different methods for generating geographic terms for streaming videos using different Python libraries and evaluates them based on the number of terms generated, overlap in terms generated between the three methods, and the amount of cleanup needed to generate useful geographic terms.
      PubDate: Wed, 31 Jul 2019 15:39:50 +000
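As a simplified illustration of the MARC-based approach (records are modeled here as plain dicts rather than pymarc objects, and the sample data is invented), geographic headings can be pulled from 651 $a and 650 $z subfields:

```python
def geographic_terms(record):
    """Collect geographic terms from 651 $a and 650 $z fields of one record."""
    terms = set()
    for field in record.get("fields", []):
        if field["tag"] == "651":          # geographic name as main subject
            terms.update(sf["a"] for sf in field["subfields"] if "a" in sf)
        elif field["tag"] == "650":        # geographic subdivision of a topic
            terms.update(sf["z"] for sf in field["subfields"] if "z" in sf)
    return sorted(terms)

# Invented sample record with one 651 field and one subdivided 650 field.
record = {"fields": [
    {"tag": "651", "subfields": [{"a": "Japan"}]},
    {"tag": "650", "subfields": [{"a": "Economic history"}, {"z": "Japan"}]},
]}
```

Deduplicating via a set mirrors one of the cleanup problems the article evaluates: the same place name often appears in several fields of a vendor record.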
  • Editorial: New Editors, Diversity, and Representation

    • Authors: Junior Tidal
      Abstract: Welcoming new editors, new surveys, and thinking about diversity at Code4Lib Journal.
      PubDate: Tue, 07 May 2019 01:46:59 +000
  • BC Digitized Collections: Towards a Microservices-based Solution to an
           Intractable Repository Problem

    • Authors: Chris Mayo; Adam Jazairi, Paige Walker, Luke Gaudreau
      Abstract: Our Digital Repository Services department faced a crisis point in late 2017. Our vendor discontinued support for our digital repository software, and an intensive, multi-department, six-month field survey had not turned up any potential replacements that fully met our needs. We began to experiment with a model that, rather than migrating to a new monolithic system, would more closely integrate multiple systems that we had already implemented—ArchivesSpace, Alma, Primo, and MetaArchive—and introduce only one new component, namely Mirador. We determined that this was the quickest way to meet our needs, and began a full migration in spring of 2018. The primary benefit of a microservices-based solution for our collections was the potential for customization; we therefore present our experiences in building and migrating to this system not as a blueprint but as a case study with lessons learned. Our hope is that in sharing our experience, we can help institutions in similar situations determine 1) whether a microservices-based solution is a feasible approach to their problem, 2) which services could and should be integrated and how, and 3) whether the trade-offs inherent in this architectural approach are worth the flexibility it offers.
      PubDate: Tue, 07 May 2019 01:46:58 +000
  • Building a better book widget: Using Alma Analytics to automate new book

    • Authors: Travis Clamon
      Abstract: Are we doing enough to market newly acquired book titles? Libraries purchase and subscribe to many new book titles each year, both print and electronic. However, we rely on the expectation that users will periodically search our systems to discover newly acquired titles. Static lists and displays have been traditional marketing methods for libraries, but they require tedious effort to maintain. Lacking a practical off-the-shelf solution, East Tennessee State University developed an automated process to generate book widgets using data from Alma Analytics. These widgets are now deployed in our subject guides, on our website, and on our digital displays. This article outlines the development and implementation of these widgets. We also discuss the challenges we encountered, such as finding cover images and custom subject tagging.
      PubDate: Tue, 07 May 2019 01:46:57 +000
  • Managing Discovery Problems with User Experience in Mind

    • Authors: Emery Shriver
      Abstract: Williams Libraries recently developed a system for users to report problems they encountered while using the library catalog/discovery layer (Primo). Building on a method created by the Orbis Cascade Alliance, we built a Google Form that allows users to report problems connecting to full text (or any other issue) and automatically includes the permalink in their response. We soon realized that we could improve the user experience by automatically forwarding these reports to our Ask a Librarian email service (LibAnswers) so we could offer alternative solutions while we worked on fixing the initial issue. The article includes an explanation of the process, reactions from public service staff, methods for managing the problems once submitted, and code shared on GitHub for those interested in implementing the tool at their own library.
      PubDate: Tue, 07 May 2019 01:46:56 +000
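The permalink auto-inclusion likely relies on Google Forms prefilled URLs, which follow the pattern `.../viewform?usp=pp_url&entry.NNNN=value`. In this hedged sketch the form ID and the `entry.123456` field ID are placeholders; every real form field has its own numeric ID:

```python
from urllib.parse import urlencode

FORM_URL = "https://docs.google.com/forms/d/e/FORM_ID/viewform"
PERMALINK_ENTRY = "entry.123456"  # hypothetical field ID for the permalink

def problem_report_url(permalink: str) -> str:
    """Build a report-a-problem link that carries the record permalink."""
    query = urlencode({"usp": "pp_url", PERMALINK_ENTRY: permalink})
    return f"{FORM_URL}?{query}"
```

A link built this way can be injected into each catalog record page, so the user never has to copy the permalink by hand.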
  • Responsive vs. Native Mobile Search: A Comparative Study of Transaction

    • Authors: Jim Hahn
      Abstract: The Consortium of Academic and Research Libraries in Illinois (or CARLI) is comprised of 130 libraries, a majority of which participate in the union catalog I-Share for resource sharing. The consortium implemented VuFind 4, a responsive web interface, as their shared union catalog in December 2017. This study compared search transaction logs from a native mobile app that serves the consortium with search transactions in the responsive mobile browser. Library professionals in the consortium sought to understand the nature of mobile search features by evaluating the relative popularity of mobile devices used, search terms, and search facets within the two mobile search options. The significance of this research is that it provides comparative data on mobile search features to the library UX community.
      PubDate: Tue, 07 May 2019 01:46:55 +000
  • Large-Scale Date Normalization in ArchivesSpace with Python, MySQL, and

    • Authors: Alicia Detelich
      Abstract: Normalization of legacy date metadata can be challenging, as standards and local practices for formulating dates have varied widely over time. With the advent of archival management systems such as ArchivesSpace, structured, machine-actionable date metadata is becoming increasingly important for search and discovery of archival materials. This article describes a recent effort by a group of Yale University archivists to add ISO 8601-compliant dates to nearly 1 million unstructured date records in ArchivesSpace, using a combination of Python, MySQL, and Timetwister, a Ruby gem developed at the New York Public Library (NYPL).
      PubDate: Tue, 07 May 2019 01:46:54 +000
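Not Timetwister itself, but a minimal sketch of the kind of rule-based normalization involved: mapping a single "Month D, YYYY" expression to an ISO 8601 date. Real legacy data needs many more patterns (ranges, "circa", decades), which is why a dedicated parser was used.

```python
import re

# Month-name lookup: "January" -> 1, ..., "December" -> 12.
MONTHS = {m: i for i, m in enumerate(
    ["January", "February", "March", "April", "May", "June", "July",
     "August", "September", "October", "November", "December"], start=1)}

def normalize_date(expression: str):
    """Return an ISO 8601 date for 'Month D, YYYY', or None if unmatched."""
    match = re.fullmatch(r"(\w+) (\d{1,2}), (\d{4})", expression.strip())
    if not match or match.group(1) not in MONTHS:
        return None
    month, day, year = MONTHS[match.group(1)], int(match.group(2)), match.group(3)
    return f"{year}-{month:02d}-{day:02d}"
```

Returning `None` for unmatched expressions lets unparsed records be routed to a report for manual review rather than silently mangled.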
  • Visualizing Fedora-managed TEI and MEI documents within Islandora

    • Authors: Raffaele Viglianti; Marcus Emmanuel Barnes, Natkeeran Ledchumykanthan, Kirsta Stapelfeldt
      Abstract: The Early Modern Songscapes (EMS) project [1] represents a development partnership between the University of Toronto Scarborough’s Digital Scholarship Unit (DSU), the University of Maryland, and the University of South Carolina. Developers, librarians, and faculty from these institutions have collaborated on an intermedia online platform designed to support the scholarly investigation of early modern English song. The first iteration of the platform, launched at the Early Modern Songscapes Conference, held February 8-9, 2019 at the University of Toronto’s Centre for Reformation and Renaissance Studies, serves Fedora-held Text Encoding Initiative (TEI) and Music Encoding Initiative (MEI) documents through a JavaScript viewer capable of being embedded within the Islandora digital asset management framework. The viewer presents versions of a song’s musical notation and textual underlay, followed by the entire song text. This article reviews the status of this technology and the process of developing an XML framework for TEI and MEI editions that would serve the requirements of all stakeholder technologies. Beyond the applicability of this technology in other digital scholarship contexts, the approach may serve others seeking methods for integrating technologies into Islandora or working across institutional development environments.
      PubDate: Tue, 07 May 2019 01:46:53 +000
  • Creating a Low-cost, DIY Multimedia Studio in the Library

    • Authors: Bryan Elias Hull; Brandon Patterson
      Abstract: This case study will explain steps in creating a multimedia studio inside a health sciences library with existing software and a minimal budget. From ideation to creation to assessment, the process will be outlined in development phases and include examples of documentation, user feedback, lessons learned, and future considerations. We’ll explore multimedia software like One Button Studio, GameCapture, Kaltura, Adobe Creative Cloud, Garage Band, and others and compare their effectiveness when working on audio and visual projects in the library.
      PubDate: Tue, 07 May 2019 01:46:52 +000
  • Editorial: Just Enough of a Shared Vision

    • Authors: Peter Murray
      Abstract: What makes a vibrant community? A shared vision! When we live into a shared vision, we can accomplish big goals even when our motivations are not completely aligned.
      PubDate: Fri, 15 Feb 2019 02:00:59 +000
  • Developing Weeding Protocols for Born Digital Collections

    • Authors: Athina Livanos-Propst
      Abstract: As collections continue to be digitized and even born digital, the way we handle collection development needs to shift towards a digital mindset. Digital collection development is not so much concerned with shelf or storage space, as expansion can be as simple as procuring a new hard drive. Digital collections, when not archival, need to focus on issues of access and accessibility. For a born-digital library, quality and usefulness must be the primary factors in the collection development policy. This article will walk through the steps taken by one digital library to assess its collections with an eye to quality and user experience, as well as a multi-phase deaccessioning project that occurred and is ongoing. The process included the multi-iteration drafting of subject-specific rubrics targeted to the needs of the site’s core audience, the quantitative assessment of thousands of items in the collection, and the distribution of qualitative and quantitative data to stakeholders across the country. Special attention was paid to setting minimal required standards and communicating those standards. Finally, as this process is now an ongoing review schema for LearningMedia, the article will discuss the issues faced in this project, recommendations for other organizations attempting their own digital weeding/deaccessioning projects, and plans for the future of the project.
      PubDate: Fri, 15 Feb 2019 02:00:58 +000
  • Content Dissemination from Small-scale Museum and Archival Collections:
           Community Reusable Semantic Metadata Content Models for Digital Humanities

    • Authors: Avgoustinos Avgousti; Georgios Papaioannou, Feliz Ribeiro Gouveia
      Abstract: This paper highlights the challenges of content dissemination faced by digital humanities scholars and small museums and archival collections in Cultural Heritage (CH) institutions. It showcases a solution based on Community Reusable Semantic Metadata Content Models (RMs) available for download from our community website. Installing the RMs will extend the functionality of a state-of-the-art Content Management Framework (CMF) towards numismatic collections. Furthermore, it encapsulates metadata using the Resource Description Framework in Attributes (RDFa) and an accompanying vocabulary. Establishing a community around RMs will help the development, upgrading, and sharing of RM models and packages for the benefit of the Cultural Heritage community. A distributed model for Community Reusable Semantic Metadata Content Models will allow the community to grow and improve, serving the needs and enabling the infrastructure to scale for the next generation of humanities scholars.
      PubDate: Fri, 15 Feb 2019 02:00:57 +000
  • Never Best Practices: Born-Digital Audiovisual Preservation

    • Authors: Julia Kim; Rebecca Fraimow, Erica Titkemeyer
      Abstract: Archivists specializing in time-based born-digital workflows walk through the technical realities of developing workflows for born-digital video. Through a series of use cases, they will highlight situations wherein video quality, subject matter, file size and stakeholder expectations decisively impact preservation decisions and considerations of "best practice" often need to be reframed as "good enough."
      PubDate: Fri, 15 Feb 2019 02:00:56 +000
  • SCOPE: A digital archives access interface

    • Authors: Kelly Stewart; Stefana Breitwieser
      Abstract: The Canadian Centre for Architecture (CCA) identified certain technological issues, namely extensive reference workflows and under-utilization of existing metadata, as significant barriers to access for its born-digital archives. In collaboration with Artefactual Systems, the CCA built SCOPE, a digital archives access interface. SCOPE allows for granular file- and item-level searching within and across digital archives, and lets users download access copies of collection material directly to a local machine. SCOPE is a free, open-source tool. The beta version is available to the public, and a second phase is under development as of Spring 2019.
      PubDate: Fri, 15 Feb 2019 02:00:55 +000