Code4Lib Journal

  This is an Open Access journal
ISSN (Online) 1940-5758
Published by Code4lib
  • Editorial: Beyond Posters: On Hospitality in Libtech

    • Authors: Ruth Kitchin Tillman
      Abstract: In this editorial, I will be using the word hospitality to mean the intentional welcome of others into a space which one currently occupies, possibly as a member of a dominant group. I do not wish to encourage the idea that one should cultivate or maintain a role of benevolent host in a way that forces others to remain forever guest or outsider, although there will always be newcomers. Hospitality may be a first step to ceding one's position as host in a space. It may be expanding that space to become a place with many potential hosts, each respected for their varied contributions and skillsets. It may also be supporting those in a different space or a different role, such as those who use the technologies we build and support (both colleagues and patrons), and respecting them in that space.
      PubDate: Mon, 07 May 2018 15:05:59 +0000
       
  • What’s in a Name? On ‘Meaningfulness’ and Best Practices in
           Filenaming within the LAM Community

    • Authors: Drew Krewer; Mary Wahl
      Abstract: Cultural institutions such as libraries, archives and museums (LAM) face many challenges with managing digital collections, particularly when it comes to organizing the individual files that make up each collection. While tools such as metadata and collection management systems support identification and arrangement for digital files, administrative control depends significantly on the mere filenaming in use beneath the surface. Anecdotal evidence has shown that many LAM institutions have specialized filenaming schemes in place for their digital collections. This paper includes a literature review of filenaming practices in the LAM community, followed by a description and analysis of survey data regarding filenaming practices in the LAM community. The purpose of the survey was to learn about filenaming conventions in use within LAM organizations who have filenaming policies in place. The data suggests that: similarities and differences exist in filenaming approaches between museums/galleries, archives/special collections, and academic institutions; it is preferred that filenaming be simultaneously meaningful to both humans and computers; and conventions that affect sortability are deemed more important than those that affect readability. The data also indicate several subtopics related to filenaming that would benefit from further study.
      PubDate: Mon, 07 May 2018 15:05:58 +0000
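
      A small illustration of the sortability point above (not from the article): zero-padded sequence numbers keep files in the same order whether they are sorted by a person, a file browser, or a script. The collection code and field widths below are hypothetical.

          # Hypothetical convention: <collection>_<item>_<page>.tif, zero-padded so that
          # lexical (string) sorting matches numeric order.
          names = ["coll001_{:04d}_{:03d}.tif".format(item, page)
                   for item in (1, 2, 10)
                   for page in (1, 2)]
          print(sorted(names))
          # coll001_0001_001.tif, coll001_0001_002.tif, coll001_0002_001.tif, ...
          # Without padding, item "10" would sort before item "2".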
       
  • Centralized Accessioning Support for Born Digital Archives

    • Authors: Alice Sara Prael
      Abstract: Archives often receive obsolete digital storage media alongside paper acquisitions: CDs and DVDs mixed in with folders of correspondence, Zip disks, and floppy disks set aside by the donor with the intention to review the content later. Archives must not only have the expertise to work with digital media, but also the hardware and software to capture the content without the risk of altering the files merely by viewing them. This article will describe how Yale University Libraries and Museums addressed accessioning of born-digital archival content on physical media through a centralized digital accessioning support service. Centralizing the hardware and expertise required for working with physical media made it possible to accession media more quickly and return the files to the originating archives for arrangement and description.
      PubDate: Mon, 07 May 2018 15:05:57 +0000
       
  • Wikidata: a platform for your library’s linked open data

    • Authors: Stacy Allison-Cassin; Dan Scott
      Abstract: Seized with the desire to improve the visibility of Canadian music in the world, a ragtag band of librarians led by Stacy Allison-Cassin set out to host Wikipedia edit-a-thons in the style of Art+Feminism, but with a focus on addressing Canadian music instead. Along the way, they recognized that Wikidata offered a low-barrier, high-result method of making that data not only visible but reusable as linked open data, and consequently incorporated Wikidata into their edit-a-thons. This is their story.
      PubDate: Mon, 07 May 2018 15:05:56 +0000
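
      A minimal Python sketch of reusing such data (not from the article): querying the public Wikidata Query Service over SPARQL. It assumes the usual identifiers P27 (country of citizenship), Q16 (Canada), P106 (occupation) and Q639669 (musician); adjust as needed.

          import requests

          query = """
          SELECT ?person ?personLabel WHERE {
            ?person wdt:P27 wd:Q16 ;
                    wdt:P106 wd:Q639669 .
            SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
          }
          LIMIT 10
          """
          r = requests.get("https://query.wikidata.org/sparql",
                           params={"query": query, "format": "json"},
                           headers={"User-Agent": "lod-example/0.1"})
          for row in r.json()["results"]["bindings"]:
              print(row["personLabel"]["value"])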
       
  • Redux: Tabulating Transactions with Raspberry Pi and Visualizing Results

    • Authors: Tim Ribaric
      Abstract: Often in the library tech world we are not given the opportunity to attempt a project again. Effort spent re-doing a previous project in a different way, in some sense, means wasting time that could be used to work on new initiatives. This article describes a redux of a project, a revenge story so to speak. In 2013 the Arduino based Tabulatron first entered production at Brock University Library. The device had its flaws, an attempt to rectify those flaws was manifested in the creation of the PiTab, the story of which is presented here.
      PubDate: Mon, 07 May 2018 15:05:55 +0000
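
      A hedged sketch of the general approach only (the PiTab's actual wiring and code are described in the article, not reproduced here): a push button on a Raspberry Pi GPIO pin, with each press logged as one transaction. The pin number, debounce time and output file are assumptions.

          import csv, datetime, signal
          import RPi.GPIO as GPIO

          BUTTON_PIN = 17                      # hypothetical wiring

          GPIO.setmode(GPIO.BCM)
          GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

          def log_transaction(channel):
              # Append one timestamped row per button press.
              with open("transactions.csv", "a", newline="") as f:
                  csv.writer(f).writerow([datetime.datetime.now().isoformat(), "desk"])

          GPIO.add_event_detect(BUTTON_PIN, GPIO.FALLING,
                                callback=log_transaction, bouncetime=300)
          signal.pause()                       # wait for presses until interrupted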
       
  • FAIR Principles for Library, Archive and Museum Collections: A proposal
           for standards for reusable collections

    • Authors: Lukas Koster; Saskia Woutersen-Windhouwer
      Abstract: Many heritage institutions would like their collections to be open and reusable but fail to achieve that situation because of organizational, legal and technological barriers. A set of guidelines and best practices is proposed to facilitate the process of making heritage collections reusable. These guidelines are based on the FAIR Principles for scholarly output (FAIR data principles [2014]), taking into account a number of other recent initiatives for making data findable, accessible, interoperable and reusable. The resulting FAIR Principles for Heritage Library, Archive and Museum Collections focus on three levels: objects, metadata and metadata records. Clarifications and examples of these proposed principles are presented, as well as recommendations for the assessment of current situations and implementations of the principles.
      PubDate: Mon, 07 May 2018 15:05:54 +0000
       
  • Editorial: Musing on learning to be a selfish librarian

    • Authors: Terry Reese
      Abstract: One of the perks of being the coordinating editor is you get to write the opening editorial for the issue.  It’s an opportunity to think broadly about the community, the journal…current events.  And if you look back over the past year or so, those that have taken on this role have been more than up […]
      PubDate: Mon, 05 Feb 2018 20:44:59 +0000
       
  • Approaching the largest ‘API’: extracting information from the
           Internet with Python

    • Authors: Jonathan E. Germann
      Abstract: This article explores the need for libraries to algorithmically access and manipulate the world’s largest API: the Internet. The billions of pages on the ‘Internet API’ (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the world wide web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four packages (Urllib, Selenium, BeautifulSoup, Scrapy) in Python can automate almost any web page for all sized projects. An example warrant data project is explained to illustrate how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
      PubDate: Mon, 05 Feb 2018 20:44:58 +0000
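
      A minimal sketch of the extraction pattern described above, using urllib from the standard library plus BeautifulSoup (pip install beautifulsoup4). The URL is a placeholder; real projects add politeness (rate limiting, robots.txt) and error handling.

          import urllib.request
          from bs4 import BeautifulSoup

          url = "https://example.org/"                      # placeholder page
          html = urllib.request.urlopen(url).read()
          soup = BeautifulSoup(html, "html.parser")

          # Pull headings and outgoing links into plain Python data structures.
          headings = [h.get_text(strip=True) for h in soup.find_all(["h1", "h2"])]
          links = [a["href"] for a in soup.find_all("a", href=True)]
          print(headings, links[:10])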
       
  • Using R and the Tidyverse to Generate Library Usage Reports

    • Authors: Andy Meyer
      Abstract: Gathering, analyzing, and communicating library usage data provides a foundation for thoughtful assessment. However, the amount of time and expertise required creates a barrier to actually using this data. By using the statistical programming language R and the tools and approach of the Tidyverse, the process of gathering, analyzing, and communicating data can be automated in ways that reduce the amount of time and energy required. At the same time, this approach increases staff capacity for other data science projects and creates a shareable model and framework for other libraries. This article focuses on electronic resource usage reports - especially Counter DB1 Reports - but this approach could be extended to other data sources and needs.
      PubDate: Mon, 05 Feb 2018 20:44:57 +0000
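
      The article's own code uses R and the Tidyverse; as a rough Python analogue only (column names and file layout are invented here), the same gather-then-summarise pattern with pandas looks like this:

          import pandas as pd

          usage = pd.read_csv("db1_report.csv")            # hypothetical COUNTER DB1 export
          tidy = usage.melt(id_vars=["Database", "Metric"],
                            var_name="Month", value_name="Count")
          summary = (tidy.groupby(["Database", "Metric"], as_index=False)["Count"]
                         .sum()
                         .sort_values("Count", ascending=False))
          summary.to_csv("db1_summary.csv", index=False)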
       
  • Archidora: Integrating Archivematica and Islandora

    • Authors: Tim Hutchinson
      Abstract: “Archidora” is shorthand for the publicly available integration between the open source software packages Archivematica and Islandora. Sponsored by the University of Saskatchewan Library, this integration enables the automated ingest into Archivematica of objects created in Islandora. This will allow institutions that use Islandora as a digital asset management system, particularly for digitized material, to take advantage of Archivematica’s standards-based digital preservation functionality, without requiring staff doing digitization to interact with Archivematica. This paper outlines the basic functionality and workflow of archidora; provides an overview of the development process including challenges and lessons learned; and discusses related initiatives and possible future directions for development.
      PubDate: Mon, 05 Feb 2018 20:44:56 +0000
       
  • Microdata in the IR: A Low-Barrier Approach to Enhancing Discovery of
           Institutional Repository Materials in Google

    • Authors: Shayna Pekala
      Abstract: Georgetown University Library curates a multitude of open access resources in its institutional repository and digital collections portal, DigitalGeorgetown. Over the last several years, the Library has experimented with methods for making these items increasingly visible in search engine search results. This article describes the Library’s low-barrier approach to applying Schema.org vocabulary to its DSpace institutional repository using microdata, as well as the challenges with and strategies used for assessing this work. The effects of the application of Schema.org microdata to DigitalGeorgetown on Google search results were tracked over time using three different metrics, providing new insights about its impact.
      PubDate: Mon, 05 Feb 2018 20:44:55 +0000
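
      For illustration only (the field mapping below is invented and is not DigitalGeorgetown's actual markup): Schema.org microdata amounts to adding itemscope/itemtype/itemprop attributes to HTML the repository already renders, as in this Python-generated fragment.

          # Render one repository item as a Schema.org-annotated HTML fragment.
          item = {"name": "Sample thesis title", "author": "Jane Doe", "datePublished": "2017"}

          fragment = (
              '<div itemscope itemtype="https://schema.org/ScholarlyArticle">\n'
              '  <span itemprop="name">{name}</span>\n'
              '  <span itemprop="author">{author}</span>\n'
              '  <meta itemprop="datePublished" content="{datePublished}">\n'
              '</div>'
          ).format(**item)
          print(fragment)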
       
  • Getting Real in the Library: A Case Study at the University of Florida

    • Authors: Samuel R. Putnam; Sara Russell Gonzalez
      Abstract: In the fall of 2014, the University of Florida (UF) Marston Science Library, in partnership with UF IT, opened a new computer lab for students to learn and develop mobile applications. The Mobile Application Development Environment (MADE@UF) features both software and circulating technology for students to use in an unstructured and minimally-staffed environment. As the technological landscape has shifted in the past few years, virtual and augmented reality have become more prominent and prevalent, signaled by companies like Facebook, Google, and Microsoft making significant financial investments in these technologies. During this evolution, MADE@UF has migrated to focus more on virtual and augmented reality, and we will discuss the opportunities and challenges that hosting and managing such a space has provided to the science library and its staff.
      PubDate: Mon, 05 Feb 2018 20:44:54 +0000
       
  • Accio e-Libri: Magically Delivering Digital Resources to Patrons Using NFC
           Technology

    • Authors: Christopher M. Jimenez; Barbara M. Sorondo
      Abstract: To coincide with the 20th anniversary of the publication of Harry Potter and the Philosopher’s/Sorcerer’s Stone, our library created a Happee Birthdae Harry display incorporating Near Field Communication (NFC) technology alongside print materials in order to magically place electronic resources in our users’ hands. The display was a spellbinding success, increasing usage of both print and electronic items, and helping our students become familiar with this innovative technology in an engaging manner. This article will provide step-by-step instructions on the materials and procedures librarians need to implement NFC technology in their own libraries, and will discuss the challenges and opportunities associated with this rapidly spreading technology.
      PubDate: Mon, 05 Feb 2018 20:44:53 +0000
       
  • Ship It: Logistical tracking of ILL physical loans

    • Authors: Ryan Litsey & Scott Luker
      Abstract: The OBILLSK Shipment Tracking system is the first consolidated and comprehensive shipment information system for interlibrary loan. The system is unique because not only does it offer an interface for consolidating the items being shipped out of an ILL office, it also provides real time statistical data of global geographic shipping patterns, tracking of packages across all major couriers, and customized date range reporting for ILL shipment activity. This system takes advantage of several web-based technologies that make it easy to use for students, staff and library administrators. The web-based software utilizes a .NET platform and SQL Server database. Client-side frameworks include Bootstrap and jQuery for responsive design, Shield UI for data visualizations, and jVectorMap for geographical representation of shipments. The system is now available for all libraries. It is actively in use at 15 academic libraries nationwide and has over 190,000 items scanned since October of 2016. It is through the development of innovative technologies that libraries can continue to serve as incubators for practical solutions that can help the discipline and practice of librarianship.
      PubDate: Mon, 05 Feb 2018 20:44:52 +0000
       
  • The Automagic of the LII’s eCFR

    • Authors: Charlotte Schneider; Sylvia Kwakye
      Abstract: The Legal Information Institute (LII) began providing access to federal legal materials in 1992. This article discusses their work expanding and improving free public access to federal legal resources in the U.S., particularly developing their eCFR product for the Code of Federal Regulations, and plans to integrate DocketWrench.
      PubDate: Mon, 05 Feb 2018 20:44:51 +0000
       
  • Editorial: The Economics of Not Being an Organization

    • Authors: Carol Bean
      Abstract: Our successes have caught up with us. Now we get to choose the next step in our evolution.
      PubDate: Wed, 18 Oct 2017 15:00:59 +0000
       
  • Usability Analysis of the Big Ten Academic Alliance Geoportal: Findings
           and Recommendations for Improvement of the User Experience

    • Authors: Mara Blake; Karen Majewicz, Amanda Tickner, Jason Lam
      Abstract: The Big Ten Academic Alliance (BTAA) Geospatial Data Project is a collaboration between twelve member institutions of the consortium and works towards providing discoverability and access to geospatial data, scanned maps, and web mapping services. Usability tests and heuristic evaluations were chosen as methods of evaluation, as they have had a long standing in measuring and managing website engagement and are essential in the process of iterative design. The BTAA project hopes to give back to the community by publishing the results of our usability findings with the hope that it will benefit other portals built with GeoBlacklight.
      PubDate: Wed, 18 Oct 2017 15:00:58 +0000
       
  • Using the ‘rentrez’ R Package to Identify Repository Records
           for NCBI LinkOut

    • Authors: Yoo Young Lee; Erin D. Foster, David E. Polley, Jere Odell
      Abstract: In this article, we provide a brief overview of the National Center for Biotechnology Information (NCBI) LinkOut service for institutional repositories, a service that allows links from the PubMed database to full-text versions of articles in participating institutional repositories (IRs). We discuss the criteria for participation in NCBI LinkOut for IRs, current methods for participating, and outline our solution for automating the identification of eligible articles in a repository using R and the ‘rentrez’ package. Using our solution, we quickly processed 4,400 open access items from our repository, identified the 557 eligible records, and sent them to the NLM. Direct linking from PubMed resulted in a 17% increase in web traffic.
      PubDate: Wed, 18 Oct 2017 15:00:57 +0000
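
      The article's scripts use R and the 'rentrez' package; the same kind of lookup against the NCBI E-utilities can be sketched in Python (the search term below is a placeholder DOI, not a real record).

          import requests

          EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
          params = {"db": "pubmed", "term": "10.1234/placeholder-doi", "retmode": "json"}
          ids = requests.get(EUTILS, params=params).json()["esearchresult"]["idlist"]
          print("matching PubMed IDs:", ids or "none")      # candidates for LinkOut if found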
       
  • The Drawings of the Florentine Painters: From Print Catalog to Linked Open
           Data

    • Authors: Lukas Klic; Matt Miller, Jonathan K. Nelson, Cristina Pattuelli, Alexandra Provo
      Abstract: The Drawings of The Florentine Painters project created the first online database of Florentine Renaissance drawings by applying Linked Open Data (LOD) techniques to a foundational text of the same name, first published by Bernard Berenson in 1903 (revised and expanded editions, 1938 and 1961). The goal was to make Berenson’s catalog information—still an essential information resource today—available in a machine-readable format, allowing researchers to access the source content through open data services. This paper provides a technical overview of the methods and processes applied in the conversion of Berenson’s catalog to LOD using the CIDOC-CRM ontology; it also discusses the different phases of the project, focusing on the challenges and issues of data transformation and publishing. The project was funded by the Samuel H. Kress Foundation and organized by Villa I Tatti, The Harvard University Center for Italian Renaissance Studies. Catalog: http://florentinedrawings.itatti.harvard.edu; Data Endpoint: http://data.itatti.harvard.edu
      PubDate: Wed, 18 Oct 2017 15:00:56 +0000
       
  • Web-Scraping for Non-Programmers: Introducing OXPath for Digital Library
           Metadata Harvesting

    • Authors: Mandy Neumann; Jan Steinberg, Philipp Schaer
      Abstract: Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted which is usually done with the help of software developers as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes or funding agencies. As data curation is a typical task that is done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore we would like to present a web scraping tool that does not demand the digital library curators to program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath, that allows the user to define data to be extracted from websites in a declarative way. By taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
      PubDate: Wed, 18 Oct 2017 15:00:55 +0000
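
      OXPath itself is not shown here; as a point of comparison only, the plain-XPath core that OXPath extends (OXPath adds browser actions and extraction markers on top) can be run from Python with lxml. The URL, class names and expressions are placeholders.

          import requests
          from lxml import html

          page = html.fromstring(requests.get("https://example.org/publications").content)
          for row in page.xpath("//div[@class='publication']"):
              title = row.xpath("string(.//h2)")
              year = row.xpath("string(.//span[@class='year'])")
              print(title.strip(), year.strip())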
       
  • DIY DOI: Leveraging the DOI Infrastructure to Simplify Digital
           Preservation and Repository Management

    • Authors: Kyle Bannerjee; David Forero
      Abstract: This article describes methods for how staff with modest technical expertise can leverage the DOI (Digital Object Identifier) infrastructure in combination with third party storage and preservation solutions to build safer, more useful, and easier to manage repositories at much lower cost than is normally possible with standalone systems. It also demonstrates how understanding the underlying mechanisms and questioning the assumptions of technology metaphors such as filesystems can lead to seeing and using tools in new and more powerful ways.
      PubDate: Wed, 18 Oct 2017 15:00:54 +0000
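
      A minimal sketch of the underlying mechanism (not the authors' workflow): every DOI resolves through the doi.org proxy, so an object parked in third-party storage stays reachable from a stable identifier, and only the DOI's registered target needs updating if the storage location changes. The DOI below is a placeholder.

          import requests

          doi = "10.1234/placeholder"                       # placeholder DOI
          resp = requests.get(f"https://doi.org/{doi}", allow_redirects=False)
          print(resp.status_code)                           # 302 for a registered DOI
          print(resp.headers.get("Location"))               # current target URL, e.g. object storage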
       
  • Direct Database Access to OCLC Connexion’s Local Save File

    • Authors: Rebecca B. French
      Abstract: A feature of OCLC's Connexion cataloging client unknown to most librarians is the ability to directly work with the Microsoft Access database underlying the local save file. This article provides an overview of the metadata made available through this method, including fields that cannot be accessed through the regular Connexion interface, and discusses factors to be considered when deciding whether to migrate the data to another database system instead of continuing to work with Access. Descriptions of three projects illustrate how this functionality has been applied to efficiently catalog a gift collection, find OCLC numbers for e-books, and create bibliographic records for Early English Books Online/Text Creation Partnership titles using data from multiple sources. With the option to rely only on common, off-the-shelf software, this method of directly accessing the local save file database offers a way to expand Connexion’s functionality for those unable or unwilling to work with OCLC APIs. Other benefits include the ability to import external data and to use SQL for more advanced querying. A number of limitations are also discussed, and their implications for metadata access and use are explored.
      PubDate: Wed, 18 Oct 2017 15:00:53 +0000
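
      A hedged sketch of just the connection step on Windows (requires the Microsoft Access ODBC driver; the file path is a placeholder and the table names vary), useful as a first look before deciding whether to stay in Access or migrate elsewhere:

          import pyodbc

          conn = pyodbc.connect(
              r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
              r"DBQ=C:\path\to\LocalSaveFile.bib.db"        # placeholder path to the local save file
          )
          for t in conn.cursor().tables(tableType="TABLE"):
              print(t.table_name)                           # explore the schema before querying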
       
  • Between the Sheets: a Library-wide Inventory with Google

    • Authors: Craig Boman; Ray Voelker
      Abstract: When it comes to taking an inventory of physical items, libraries often rely on their traditional integrated library system’s (ILS) à la carte add ons; outside vendors; or other possibly outdated, complex, and often expensive methods. For libraries with shrinking budgets and other limited resources, high costs can put these methods out of reach. At the University of Dayton Libraries, we set out to develop an inexpensive and reasonably easy-to-use method for conducting a library-wide physical item inventory. In this article, we explain a custom built Google Sheets-based library inventory system, along with some code for the implementation of a RESTful API (written in PHP) that interacts with our ILS. We will also explain our use of Google Apps scripts in our Google Sheet, which are crucial to our systems. Although this method used a specific ILS (Innovative Interfaces' Sierra product) and custom-built RESTful APIs, it may be possible to use similar approaches with other ILS software. Additional notes include areas for improvement and recommendations for interoperability with other ILS systems.
      PubDate: Wed, 18 Oct 2017 15:00:52 +0000
       
  • Tools and Workflows for Collaborating on Static Website Projects

    • Authors: Kaitlin Newson
      Abstract: Static website generators have seen a significant increase in popularity in recent years, offering many advantages over their dynamic counterparts. While these generators were typically used for blogs, they have grown in usage for other web-based projects, including documentation, conference websites, and image collections. However, because of their technical complexity, these tools can be inaccessible to content creators depending on their level of technical skill and comfort with web development technologies. Drawing from experience with a collaborative static website project, this article will provide an overview of static website generators, review different tools available for managing content, and explore workflows and best practices for collaborating with teams on static website projects.
      PubDate: Wed, 18 Oct 2017 15:00:51 +0000
       
  • Leveraging Python to improve ebook metadata selection, ingest, and
           management

    • Authors: Kelly Thompson and Stacie Traill
      Abstract: Libraries face many challenges in managing descriptive metadata for ebooks, including quality control, completeness of coverage, and ongoing management. The recent emergence of library management systems that automatically provide descriptive metadata for e-resources activated in system knowledge bases means that ebook management models are moving toward both greater efficiency and more complex implementation and maintenance choices. Automated and data-driven processes for ebook management have always been desirable, but in the current environment, they become necessary. In addition to initial selection of a record source, automation can be applied to quality control processes and ongoing maintenance in order to keep manual, eyes-on work to a minimum while providing the best possible discovery and access. In this article, we describe how we are using Python scripts to address these challenges.
      PubDate: Wed, 18 Oct 2017 15:00:50 +0000
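
      The authors' own scripts are not reproduced here; this is a minimal sketch of the kind of quality-control pass the abstract describes, using pymarc (pip install pymarc) with invented checks and an invented filename.

          from pymarc import MARCReader

          with open("vendor_ebooks.mrc", "rb") as fh:
              for record in MARCReader(fh):
                  problems = []
                  if not record.get_fields("245"):
                      problems.append("missing title (245)")
                  if not record.get_fields("020"):
                      problems.append("missing ISBN (020)")
                  if not any(f.get_subfields("u") for f in record.get_fields("856")):
                      problems.append("missing URL (856 $u)")
                  if problems:
                      ids = record.get_fields("001")
                      print(ids[0].value() if ids else "(no 001)", "; ".join(problems))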
       
  • Testing Three Types of Raspberry Pi People Counters

    • Authors: Johnathan Cintron; Devlyn Courtier, John DeLooper
      Abstract: The Hudson County Community College (HCCC) Library tested three different types of Raspberry Pi based people counters between 6/14/2017 and 7/9/2017. This article will describe how we created each type of counter, will compare the accuracy of each sensor, and will compare them to the college’s existing 3M 3501 gate counters. It will also describe why and how our team decided to make this project, discuss lessons learned, and provide instructions for how other libraries can create their own gate counters.
      PubDate: Wed, 18 Oct 2017 15:00:49 +0000
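
      The article compares three sensor types; as a hedged sketch of just one of them (a PIR motion sensor on a GPIO pin, with an assumed pin number and cool-down interval, not the HCCC counters themselves), the counting loop is essentially:

          import time
          import RPi.GPIO as GPIO

          PIR_PIN = 4                                       # hypothetical wiring
          GPIO.setmode(GPIO.BCM)
          GPIO.setup(PIR_PIN, GPIO.IN)

          count = 0
          try:
              while True:
                  GPIO.wait_for_edge(PIR_PIN, GPIO.RISING)  # block until motion is detected
                  count += 1
                  print("count:", count)
                  time.sleep(2)                             # crude cool-down to limit double counts
          except KeyboardInterrupt:
              GPIO.cleanup()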
       
  • Editorial: Welcome New Editors, What We Know About Who We Are, and
           Submission Pro Tip!

    • Authors: Sara Amato
      Abstract: Want to see your work in C4LJ? Here's a pro tip!
      PubDate: Tue, 18 Jul 2017 21:03:59 +0000
       
  • A Practical Starter Guide on Developing Accessible Websites

    • Authors: Cynthia Ng; Michael Schofield
      Abstract: There is growing concern about the accessibility of the online content and services provided by libraries and public institutions. While many articles cover legislation, general benefits, and common opportunities to improve web accessibility on the surface (e.g., alt tags), few articles discuss web accessibility in more depth, and when they do, they are typically not specific to library web services. This article is meant to fill in this vacuum and will provide practical best practices and code.
      PubDate: Tue, 18 Jul 2017 21:03:58 +0000
       
  • Recount: Revisiting the 42nd Canadian Federal Election to Evaluate the
           Efficacy of Retroactive Tweet Collection

    • Authors: Anthony T. Pinter; Ben Goldman
      Abstract: In this paper, we report the development and testing of a methodology for collecting tweets from periods beyond the Twitter API’s seven-to-nine day limitation. To accomplish this, we used Twitter’s advanced search feature to search for tweets from past the seven to nine day limit, and then used JavaScript to automatically scan the resulting webpage for tweet IDs. These IDs were then rehydrated (tweet metadata retrieved) using twarc. To examine the efficacy of this method for retrospective collection, we revisited the case study of the 42nd Canadian Federal Election. Using comparisons between the two datasets, we found that our methodology does not produce as robust results as real-time streaming, but that it might be useful as a starting point for researchers or collectors. We close by discussing the implications of these findings.
      PubDate: Tue, 18 Jul 2017 21:03:57 +0000
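
      A hedged sketch of the rehydration step only (credentials are placeholders, and the JavaScript that scrapes tweet IDs from the search pages is not shown): with the twarc client current at the time, a file of IDs becomes a file of full tweet JSON.

          import json
          from twarc import Twarc                           # pip install twarc

          t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET",
                    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

          with open("tweet_ids.txt") as ids, open("tweets.jsonl", "w") as out:
              for tweet in t.hydrate(ids):                  # one JSON object per surviving tweet
                  out.write(json.dumps(tweet) + "\n")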
       
  • Extending Omeka for a Large-Scale Digital Project

    • Authors: Haley Antell; Joe Corall, Virginia Dressler, Cara Gilgenbach
      Abstract: In September 2016, the department of Special Collections and Archives, Kent State University Libraries, received a Digital Dissemination grant from the National Historical Publications and Records Commission (NHPRC) to digitize roughly 72,500 pages from the May 4 collection, which documents the May 1970 shootings of thirteen students by Ohio National Guardsmen at Kent State University. This article will highlight the project team’s efforts to adapt the Omeka instance with modifications to the interface and ingestion processes to assist the efforts of presenting unique archival collections online, including an automated method to create folder level links on the relevant finding aids upon ingestion; implementing open source Tesseract to provide OCR to uploaded files; automated PDF creation from the raw image files using Ghostscript; and integrating Mirador to present a folder level display to reflect archival organization as it occurs in the physical collections. These adaptations, which have been shared via GitHub, will be of interest to other institutions looking to present archival material in Omeka.
      PubDate: Tue, 18 Jul 2017 21:03:56 +0000
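
      A hedged sketch of the OCR step alone (paths are placeholders; the Omeka ingest hooks and Ghostscript PDF assembly described above are not shown): the Tesseract command line can emit a searchable PDF for each page image.

          import subprocess
          from pathlib import Path

          for tiff in sorted(Path("box01_folder01").glob("*.tif")):
              out_base = tiff.with_suffix("")               # tesseract adds its own extension
              # Produces <page>.pdf next to the image; drop "pdf" to get plain-text OCR instead.
              subprocess.run(["tesseract", str(tiff), str(out_base), "pdf"], check=True)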
       
 
 
Source: JournalTOCs, School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK (journaltocs@hw.ac.uk)
 