Code4Lib Journal
  Open Access journal
   ISSN (Online) 1940-5758
   Published by Code4Lib
  • Editorial: Welcome New Editors, What We Know About Who We Are, and Submission Pro Tip!

    • Authors: Sara Amato
      Abstract: Want to see your work in C4LJ? Here's a pro tip!
      PubDate: Tue, 18 Jul 2017 21:03:59 +000
       
  • A Practical Starter Guide on Developing Accessible Websites

    • Authors: Cynthia Ng; Michael Schofield
      Abstract: There is growing concern about the accessibility of the online content and services provided by libraries and public institutions. While many articles cover legislation, general benefits, and common opportunities to improve web accessibility on the surface (e.g., alt tags), few articles discuss web accessibility in more depth, and when they do, they are typically not specific to library web services. This article is meant to fill that gap and will provide practical best practices and code.
      PubDate: Tue, 18 Jul 2017 21:03:58 +000
       
  • Recount: Revisiting the 42nd Canadian Federal Election to Evaluate the Efficacy of Retroactive Tweet Collection

    • Authors: Anthony T. Pinter; Ben Goldman
      Abstract: In this paper, we report the development and testing of a methodology for collecting tweets from periods beyond the Twitter API’s seven-to-nine-day limitation. To accomplish this, we used Twitter’s advanced search feature to search for tweets from beyond that limit, and then used JavaScript to automatically scan the resulting webpage for tweet IDs. These IDs were then rehydrated (tweet metadata retrieved) using twarc. To examine the efficacy of this method for retrospective collection, we revisited the case study of the 42nd Canadian Federal Election. Using comparisons between the two datasets, we found that our methodology does not produce as robust results as real-time streaming, but that it might be useful as a starting point for researchers or collectors. We close by discussing the implications of these findings.
      PubDate: Tue, 18 Jul 2017 21:03:57 +000
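
      Sketch: A minimal version of the rehydration step described above,
      using twarc's hydrate() call. The credential placeholders and the
      ids.txt file of scraped tweet IDs are assumptions of this sketch.

        from twarc import Twarc

        # Placeholder Twitter API credentials -- supply real keys.
        t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET",
                  "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

        # ids.txt: one tweet ID per line, as scraped from the search pages.
        with open("ids.txt") as ids:
            for tweet in t.hydrate(ids):  # yields full tweet JSON per ID
                print(tweet["id_str"],
                      tweet.get("full_text", tweet.get("text")))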
       
  • Extending Omeka for a Large-Scale Digital Project

    • Authors: Haley Antell; Joe Corall, Virginia Dressler, Cara Gilgenbach
      Abstract: In September 2016, the department of Special Collections and Archives, Kent State University Libraries, received a Digital Dissemination grant from the National Historical Publications and Records Commission (NHPRC) to digitize roughly 72,500 pages from the May 4 collection, which documents the May 1970 shootings of thirteen students by Ohio National Guardsmen at Kent State University. This article will highlight the project team’s work to adapt the Omeka instance, modifying the interface and ingestion processes to support presenting unique archival collections online, including an automated method to create folder-level links on the relevant finding aids upon ingestion; implementing open source Tesseract to provide OCR for uploaded files; automated PDF creation from the raw image files using Ghostscript; and integrating Mirador to present a folder-level display that reflects archival organization as it occurs in the physical collections. These adaptations, which have been shared via GitHub, will be of interest to other institutions looking to present archival material in Omeka.
      PubDate: Tue, 18 Jul 2017 21:03:56 +000
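
      Sketch: One way to chain the two tools the abstract names, under the
      assumption of a folder of TIFF masters; Tesseract's own searchable-PDF
      output serves as the per-page input that Ghostscript then merges.

        import subprocess
        from pathlib import Path

        folder = Path("raw_images")        # hypothetical folder of scans
        pdf_pages = []
        for tiff in sorted(folder.glob("*.tif")):
            out = tiff.with_suffix("")     # Tesseract appends ".pdf" itself
            # The "pdf" config makes Tesseract emit a searchable PDF page.
            subprocess.run(["tesseract", str(tiff), str(out), "pdf"],
                           check=True)
            pdf_pages.append(str(out) + ".pdf")

        # Merge the single-page PDFs into one folder-level PDF.
        subprocess.run(["gs", "-dBATCH", "-dNOPAUSE", "-sDEVICE=pdfwrite",
                        "-sOutputFile=folder.pdf", *pdf_pages], check=True)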
       
  • Annotation-based enrichment of Digital Objects using open-source frameworks

    • Authors: Marcus Emmanuel Barnes; Natkeeran Ledchumykanthan, Kim Pham, Kirsta Stapelfeldt
      Abstract: The W3C Web Annotation Data Model, Protocol, and Vocabulary unify approaches to annotations across the web, enabling their aggregation, discovery and persistence over time. In addition, new JavaScript libraries provide the ability for users to annotate multi-format content. In this paper, we describe how we have leveraged these developments to provide annotation features alongside Islandora’s existing preservation, access, and management capabilities. We also discuss our experience developing with the Web Annotation Model as an open web architecture standard, as well as our approach to integrating mature external annotation libraries. The resulting software (the Web Annotation Utility Module for Islandora) accommodates annotation across multiple formats. This solution can be used in various digital scholarship contexts.
      PubDate: Tue, 18 Jul 2017 21:03:55 +000
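
      Sketch: For readers new to the data model, a minimal W3C Web
      Annotation serialized as JSON-LD; every identifier and the selector
      value are invented for illustration.

        import json

        annotation = {
            "@context": "http://www.w3.org/ns/anno.jsonld",
            "id": "http://example.org/anno/1",
            "type": "Annotation",
            "body": {
                "type": "TextualBody",
                "value": "A handwritten marginal note",
                "format": "text/plain",
            },
            "target": {
                "source": "http://example.org/islandora/object/demo:1",
                "selector": {
                    "type": "FragmentSelector",
                    "value": "xywh=100,100,300,150",  # x,y,width,height
                },
            },
        }
        print(json.dumps(annotation, indent=2))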
       
  • The FachRef-Assistant: Personalised, subject specific, and transparent stock management

    • Authors: Eike T. Spielberg; Frank Lützenkirchen
      Abstract: We present in this paper a personalized web application for the weeding of printed resources: the FachRef-Assistant. It offers an extensive range of tools for evidence-based stock management, based on the thorough analysis of usage statistics. Special attention is paid to the criteria of individualization, transparency of the parameters used, and generic functions. Currently, it is designed to work with the Aleph system from Ex Libris, but efforts were spent to keep the application as generic as possible. For example, all procedures specific to the local library system have been collected in one Java package. The inclusion of library-specific properties such as collections and systematics has been designed to be highly generic as well, by mapping the individual entries onto an in-memory database. Hence, simple adaptation of the package and the mappings would render the FachRef-Assistant compatible with other library systems. The personalization of the application allows for the inclusion of subject-specific usage properties as well as of variations between different collections within one subject area. The parameter sets used to analyse the stock and to prepare weeding and purchase proposal lists are included in the output XML files to facilitate a high degree of transparency, objectivity and reproducibility.
      PubDate: Tue, 18 Jul 2017 21:03:54 +000
       
  • The Semantics of Metadata: Avalon Media System and the Move to RDF

    • Authors: Juliet L. Hardesty; Jennifer B. Young
      Abstract: The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
      PubDate: Tue, 18 Jul 2017 21:03:53 +000
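
      Sketch: A toy illustration of the XML-to-RDF translation involved,
      restating a MODS title as an RDF triple with rdflib; the object URI
      and the choice of dcterms:title are assumptions, not Avalon's actual
      mapping.

        from lxml import etree
        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DCTERMS

        MODS_NS = {"mods": "http://www.loc.gov/mods/v3"}

        mods = etree.parse("item.mods.xml")
        title = mods.findtext(".//mods:titleInfo/mods:title",
                              namespaces=MODS_NS)

        g = Graph()
        obj = URIRef("http://example.org/fedora/object/1")  # placeholder
        g.add((obj, DCTERMS.title, Literal(title)))
        print(g.serialize(format="turtle"))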
       
  • OpeNumisma: A Software Platform Managing Numismatic Collections with A Particular Focus On Reflectance Transformation Imaging

    • Authors: Avgoustinos Avgousti; Andriana Nikolaidou, Ropertos Georgiou
      Abstract: This paper describes OpeNumisma, a reusable web-based platform focused on digital numismatic collections. The platform provides an innovative merge of digital imaging and data management systems that offers great new opportunities for research and the dissemination of numismatic knowledge online. A unique feature of the platform is the application of Reflectance Transformation Imaging (RTI), a computational photographic method that offers tremendous image-analysis possibilities for numismatic research. This computational photography technique allows the user to observe minor details in the browser, unseen with the naked eye, simply by moving the computer mouse rather than the actual object. The first successful implementation of OpeNumisma has been the creation of a digital library for the medieval coins from the collection of the Bank of Cyprus Cultural Foundation.
      PubDate: Tue, 18 Jul 2017 21:03:52 +000
       
  • DuEPublicA: Automated bibliometric reports based on the University Bibliography and external citation data

    • Authors: Eike T. Spielberg
      Abstract: This paper describes a web application to generate bibliometric reports based on the University Bibliography and the Scopus citation database. Our goal is to offer an alternative to easy-to-prepare automated reports from commercial sources. These often suffer from incomplete coverage of publication types and difficult attribution to people, institutes and universities. Using our University Bibliography as the source to select relevant publications solves both problems. As it is a local system, maintained and set up by the library, we can include every publication type we want. As the University Bibliography is linked to the identity management system of the university, it enables an easy selection of publications for people, institutes and the whole university. The program is designed as a web application, which collects publications from the University Bibliography, enriches them with citation data from Scopus and performs three kinds of analyses: 1) a general analysis (number and type of publications, publications per year, etc.), 2) a citation analysis (average citations per publication, h-index, uncitedness), and 3) an affiliation analysis (home and partner institutions). We tried to keep the code highly generic, so that the inclusion of other databases (Web of Science, IEEE) or other bibliographies is easily feasible. The application is written in Java and XML and uses XSL transformations and LaTeX to generate bibliometric reports as HTML pages and in PDF format. Warnings and alerts are automatically included if the citation analysis covers only a small fraction of the publications from the University Bibliography. In addition, we describe a small tool that helps to collect author details for an analysis.
      PubDate: Tue, 18 Jul 2017 21:03:51 +000
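
      Sketch: Of the metrics listed, the h-index is the least obvious to
      compute; a small self-contained example (in Python, rather than the
      application's Java).

        def h_index(citations):
            """Largest h such that h publications have >= h citations."""
            h = 0
            for i, c in enumerate(sorted(citations, reverse=True), start=1):
                if c >= i:
                    h = i
                else:
                    break
            return h

        print(h_index([10, 8, 5, 3, 1, 0]))  # -> 3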
       
  • New Metadata Recipes for Old Cookbooks: Creating and Analyzing a Digital Collection Using the HathiTrust Research Center Portal

    • Authors: Gioia Stevens
      Abstract: The Early American Cookbooks digital project is a case study in analyzing collections as data using HathiTrust and the HathiTrust Research Center (HTRC) Portal. The purposes of the project are to create a freely available, searchable collection of full-text early American cookbooks within the HathiTrust Digital Library, to offer an overview of the scope and contents of the collection, and to analyze trends and patterns in the metadata and the full text of the collection. The digital project has two basic components: a collection of 1450 full-text cookbooks published in the United States between 1800 and 1920 and a website to present a guide to the collection and the results of the analysis. This article will focus on the workflow for analyzing the metadata and the full text of the collection. The workflow will cover: 1) creating a searchable public collection of full-text titles within the HathiTrust Digital Library and uploading it to the HTRC Portal, 2) analyzing and visualizing legacy MARC data for the collection using MarcEdit, OpenRefine and Tableau, and 3) using the text analysis tools in the HTRC Portal to look for trends and patterns in the full text of the collection.
      PubDate: Tue, 18 Jul 2017 21:03:50 +000
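
      Sketch: The kind of legacy-MARC date analysis described, shown here
      with pymarc instead of MarcEdit/OpenRefine; the file name and the
      reliance on field 260 $c are assumptions about the data.

        from collections import Counter
        from pymarc import MARCReader

        years = Counter()
        with open("cookbooks.mrc", "rb") as fh:
            for record in MARCReader(fh):
                if not record:
                    continue
                field = record["260"]          # imprint (publication) field
                if field and field["c"]:
                    digits = "".join(ch for ch in field["c"]
                                     if ch.isdigit())[:4]
                    if digits:
                        years[digits] += 1

        for year, count in sorted(years.items()):
            print(year, count)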
       
  • Countering Stryker’s Punch: Algorithmically Filling the Black Hole

    • Authors: Michael J. Bennett
      Abstract: Two current digital image editing programs are examined in the context of filling in missing visual image data from hole-punched United States Farm Security Administration (FSA) negatives. Specifically, Photoshop's Content-Aware Fill feature and GIMP's Resynthesizer plugin are evaluated and contrasted against comparable images. A possible automated workflow geared towards large scale editing of similarly hole-punched negatives is also explored. Finally, potential future research based upon this study's results is proposed in the context of leveraging previously-enhanced, image-level metadata.
      PubDate: Tue, 18 Jul 2017 21:03:49 +000
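
      Sketch: Photoshop and GIMP expose this fill interactively; as a
      stand-in for the automated workflow explored in the article, OpenCV's
      inpainting (a different tool than the two evaluated) shows the shape
      of such a step. File names and the fixed hole position are invented.

        import cv2
        import numpy as np

        img = cv2.imread("negative.png")

        # Non-zero mask pixels mark the region to synthesize; a fixed
        # circle stands in for real hole-punch detection.
        mask = np.zeros(img.shape[:2], dtype=np.uint8)
        cv2.circle(mask, (120, 140), 40, 255, -1)

        filled = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
        cv2.imwrite("negative_filled.png", filled)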
       
  • Editorial: Reflecting on the success and risks to the Code4Lib Journal

    • Authors: Peter E. Murray
      Abstract:
      PubDate: Thu, 20 Apr 2017 18:18:59 +000
       
  • Linked Data is People: Building a Knowledge Graph to Reshape the Library Staff Directory

    • Authors: Jason A. Clark; Scott W. H. Young
      Abstract: One of our greatest library resources is people. Most libraries have staff directory information published on the web, yet most of this data is trapped in local silos, PDFs, or unstructured HTML markup. With this in mind, the library informatics team at Montana State University (MSU) Library set a goal of remaking our people pages by connecting the local staff database to the Linked Open Data (LOD) cloud. In pursuing linked data integration for library staff profiles, we have realized two primary use cases: improving the search engine optimization (SEO) for people pages and creating network graph visualizations. In this article, we will focus on the code to build this library graph model as well as the linked data workflows and ontology expressions developed to support it. Existing linked data work has largely centered around machine-actionable data and improvements for bots or intelligent software agents. Our work demonstrates that connecting your staff directory to the LOD cloud can reveal relationships among people in dynamic ways, thereby raising staff visibility and bringing an increased level of understanding and collaboration potential for one of our primary assets: the people that make the library happen.
      PubDate: Thu, 20 Apr 2017 18:18:58 +000
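
      Sketch: One concrete form this SEO work can take is schema.org
      JSON-LD embedded in each profile page. All values below are invented;
      the MSU Library's actual ontology expressions are richer.

        import json

        person = {
            "@context": "http://schema.org",
            "@type": "Person",
            "name": "Jane Librarian",
            "jobTitle": "Metadata Librarian",
            "worksFor": {"@type": "Library",
                         "name": "Montana State University Library"},
            "url": "http://example.org/staff/jane-librarian",
        }
        # Embed the output in a <script type="application/ld+json"> tag.
        print(json.dumps(person, indent=2))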
       
  • Recommendations for the application of Schema.org to aggregated Cultural Heritage metadata to increase relevance and visibility to search engines: the case of Europeana

    • Authors: Richard Wallis; Antoine Isaac, Valentine Charles, Hugo Manguinhas
      Abstract: Europeana provides access to more than 54 million cultural heritage objects through its portal Europeana Collections. It is crucial for Europeana to be recognized by search engines as a trusted authoritative repository of cultural heritage objects. Indeed, even though its portal is the main entry point, most Europeana users come to it via search engines. Europeana Collections is fuelled by metadata describing cultural objects, represented in the Europeana Data Model (EDM). This paper presents the research and consequent recommendations for publishing Europeana metadata using the Schema.org vocabulary and best practices. Schema.org metadata embedded in HTML is consumed by search engines to power rich services (such as the Google Knowledge Graph). Schema.org is an open and widely adopted initiative (used by over 12 million domains) backed by Google, Bing, Yahoo!, and Yandex for sharing metadata across the web. It underpins the emergence of new web techniques, such as so-called Semantic SEO. Our research addressed the representation of the embedded metadata as part of the Europeana HTML pages and sitemaps so that the re-use of this data can be optimized. The practical objective of our work is to produce a Schema.org representation of Europeana resources described in EDM that is as rich as possible and tailored to Europeana's realities and user needs, as well as to the search engines and their users.
      PubDate: Thu, 20 Apr 2017 18:18:57 +000
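
      Sketch: In the spirit of those recommendations, a toy mapping from a
      few EDM fields to schema.org counterparts; the field pairs are
      simplified and the input record is invented.

        EDM_TO_SCHEMA = {
            "dc:title": "name",
            "dc:creator": "creator",
            "dcterms:created": "dateCreated",
            "edm:isShownAt": "url",
        }

        def edm_to_schema(record):
            mapped = {"@context": "http://schema.org",
                      "@type": "CreativeWork"}
            for edm_key, schema_key in EDM_TO_SCHEMA.items():
                if edm_key in record:
                    mapped[schema_key] = record[edm_key]
            return mapped

        print(edm_to_schema({"dc:title": "Mona Lisa",
                             "edm:isShownAt": "http://example.org/item/1"}))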
       
  • Autoload: a pipeline for expanding the holdings of an Institutional Repository enabled by ResourceSync

    • Authors: James Powell; Martin Klein, Herbert Van de Sompel
      Abstract: Providing local access to locally produced content is a primary goal of the Institutional Repository (IR). Guidelines, requirements, and workflows are among the ways in which institutions attempt to ensure this content is deposited and preserved, but some content is always missed. At Los Alamos National Laboratory, the library implemented a service called LANL Research Online (LARO) to provide public access to a collection of publicly shareable LANL researcher publications authored between 2006 and 2016. LARO exposed the fact that we have full text for only about 10% of eligible publications for this time period, despite a review and release requirement that ought to have resulted in a much higher deposition rate. This discovery motivated a new effort to discover and add more full text content to LARO. Autoload attempts to locate and harvest items that were not deposited locally, but for which archivable copies exist. Here we describe the Autoload pipeline prototype and how it aggregates and utilizes Web services including Crossref, SHERPA/RoMEO, and oaDOI as it attempts to retrieve archivable copies of resources. Autoload employs a bootstrapping mechanism based on the ResourceSync standard, a NISO standard for resource replication and synchronization. We implemented support for ResourceSync atop the LARO Solr index, which exposes metadata contained in the local IR. This allowed us to utilize ResourceSync without modifying our IR. We close with a brief discussion of other uses we envision for our ResourceSync-Solr implementation, and describe how a new effort called Signposting can replace cumbersome screen scraping with a robust autodiscovery path to content which leverages Web protocols.
      PubDate: Thu, 20 Apr 2017 18:18:56 +000
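
      Sketch: The oaDOI step of the pipeline amounts to asking, per DOI,
      where an open copy lives. This sketch targets the Unpaywall v2 API
      (oaDOI's successor); treat the endpoint and response fields as
      assumptions subject to change.

        import requests

        def find_archivable_copy(doi, email="you@example.org"):
            resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                                params={"email": email})
            resp.raise_for_status()
            best = resp.json().get("best_oa_location")
            return best.get("url_for_pdf") if best else None

        print(find_archivable_copy("10.1234/EXAMPLE-DOI"))  # placeholder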
       
  • Outside The Box: Building a Digital Asset Management Ecosystem for Preservation and Access

    • Authors: Andrew Weidner; Sean Watkins, Bethany Scott, Drew Krewer, Anne Washington, Matthew Richardson
      Abstract: The University of Houston (UH) Libraries made an institutional commitment in late 2015 to migrate the data for its digitized cultural heritage collections to open source systems for preservation and access: Hydra-in-a-Box, Archivematica, and ArchivesSpace. This article describes the work that the UH Libraries implementation team has completed to date, including open source tools for streamlining digital curation workflows, minting and resolving identifiers, and managing SKOS vocabularies. These systems, workflows, and tools, collectively known as the Bayou City Digital Asset Management System (BCDAMS), represent a novel effort to solve common issues in the digital curation lifecycle and may serve as a model for other institutions seeking to implement flexible and comprehensive systems for digital preservation and access.
      PubDate: Thu, 20 Apr 2017 18:18:55 +000
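
      Sketch: As one illustration of identifier minting, a NOID-style
      minter over noid's extended-digit alphabet. Entirely hypothetical --
      not BCDAMS's actual tool.

        ALPHABET = "0123456789bcdfghjkmnpqrstvwxz"  # noid extended digits

        def mint(n, width=6):
            """Deterministic identifier for sequence number n."""
            chars = []
            for _ in range(width):
                n, r = divmod(n, len(ALPHABET))
                chars.append(ALPHABET[r])
            return "".join(reversed(chars))

        print(mint(12345))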
       
  • Medici 2: A Scalable Content Management System for Cultural Heritage Datasets

    • Authors: Constantinos Sophocleous; Luigi Marini, Ropertos Georgiou, Mohammed Elfarargy, Kenton McHenry
      Abstract: Digitizing large collections of Cultural Heritage (CH) resources and providing tools for their management, analysis and visualization is critical to CH research. A key element in achieving the above goal is to provide user-friendly software offering an abstract interface for interaction with a variety of digital content types. To address these needs, the Medici content management system is being developed in a collaborative effort between the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, Bibliotheca Alexandrina (BA) in Egypt, and the Cyprus Institute (CyI). The project is pursued within the framework of the European project “Linking Scientific Computing in Europe and Eastern Mediterranean 2” (LinkSCEEM2) and supported by work funded through the U.S. National Science Foundation (NSF), the U.S. National Archives and Records Administration (NARA), the U.S. National Institutes of Health (NIH), the U.S. National Endowment for the Humanities (NEH), the U.S. Office of Naval Research (ONR), the U.S. Environmental Protection Agency (EPA) as well as other private sector efforts. Medici is a Web 2.0 environment integrating analysis tools for the auto-curation of un-curated digital data, allowing automatic processing of input (CH) datasets, and visualization of both data and collections. It offers a simple user interface for dataset preprocessing, previewing, automatic metadata extraction, user input of metadata and provenance support, storage, archiving and management, representation and reproduction. Building on previous experience (Medici 1), NCSA and CyI are working towards the improvement of the technical, performance and functionality aspects of the system. The current version of Medici (Medici 2) is the result of these efforts. It is a scalable, flexible, robust distributed framework with wide data format support (including 3D models and Reflectance Transformation Imaging-RTI) and metadata functionality. We provide an overview of Medici 2’s current features supported by representative use cases as well as a discussion of future development directions.
      PubDate: Thu, 20 Apr 2017 18:18:54 +000
       
  • An Interactive Map for Showcasing Repository Impacts

    • Authors: Hui Zhang; Camden Lopez
      Abstract: Digital repository managers rely on usage metrics such as the number of downloads to demonstrate research visibility and impacts of the repositories. Increasingly, they find that current tools such as spreadsheets and charts are ineffective for revealing important elements of usage, including reader locations, and for attracting the targeted audiences. This article describes the design and development of a readership map that provides an interactive, near-real-time visualization of actual visits to an institutional repository using data from Google Analytics. The readership map exhibits the global impacts of a repository by displaying the city of every view or download together with the title of the scholarship being read and a hyperlink to its page in the repository. We will discuss project motivation and development issues such as authentication with Google API, metadata integration, performance tuning, and data privacy.
      PubDate: Thu, 20 Apr 2017 18:18:53 +000
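
      Sketch: A map like this ultimately consumes a feed of geolocated view
      events. Shaping such rows into GeoJSON for the front end might look
      as follows; the row layout is invented.

        import json

        rows = [  # as they might come back from a Google Analytics query
            {"city": "Portland", "lat": 45.52, "lon": -122.68,
             "title": "Sample Thesis", "url": "http://example.org/ir/123"},
        ]

        features = [{
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [r["lon"], r["lat"]]},
            "properties": {"city": r["city"], "title": r["title"],
                           "url": r["url"]},
        } for r in rows]

        print(json.dumps({"type": "FeatureCollection",
                          "features": features}, indent=2))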
       
  • Editorial: Introspection as Activism, or, Getting Our Houses in Order

    • Authors: Ruth Kitchin Tillman
      Abstract: Those of us in libraries like to trace our history to Alexandria or to the French governmental system of record-keeping, but the construction of the modern GLAM world is far more recent, almost as new as coding. It has evolved almost as rapidly. And its future is on us, whether we choose to passively accept a status quo others build or to act and grow and develop ourselves and our workplaces.
      PubDate: Mon, 30 Jan 2017 18:46:59 +000
       
  • Bridging Technologies to Efficiently Arrange and Describe Digital Archives: the Bentley Historical Library’s ArchivesSpace-Archivematica-DSpace Workflow Integration Project

    • Authors: Max Eckard; Dallas Pillen, Mike Shallcross
      Abstract: In recent years, ArchivesSpace and Archivematica have emerged as two of the most exciting open source platforms for working with digital archives. The former manages accessions and collections and provides a framework for entering descriptive, administrative, rights, and other metadata. The latter ingests digital content and prepares information packages for long-term preservation and access. In October 2016, the Bentley Historical Library wrapped up a two-year, $355,000 grant from the Andrew W. Mellon Foundation to partner with the University of Michigan Library on the integration of these two systems in an end-to-end workflow that will include the automated deposit of content into a DSpace repository. This article provides context for the project and offers an in-depth exploration of the project’s key development tasks, all of which were provided by Artefactual Systems, the developers of Archivematica (code available at https://github.com/artefactual-labs/appraisal-tab).
      PubDate: Mon, 30 Jan 2017 18:46:58 +000
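
      Sketch: Integrations like this typically drive ArchivesSpace through
      its REST API. A minimal session example, assuming a local backend on
      the default port and default admin credentials.

        import requests

        BASE = "http://localhost:8089"

        # Authenticate, then reuse the session token on later requests.
        resp = requests.post(f"{BASE}/users/admin/login",
                             data={"password": "admin"})
        headers = {"X-ArchivesSpace-Session": resp.json()["session"]}

        # List resource (collection) IDs in repository 2.
        ids = requests.get(f"{BASE}/repositories/2/resources",
                           params={"all_ids": "true"},
                           headers=headers).json()
        print(len(ids), "resources")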
       
  • The Devil’s Shoehorn: A case study of EAD to ArchivesSpace migration at a large university

    • Authors: Dave Mayo; Kate Bowers
      Abstract: A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system.  Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives.  These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify.  Portions of them are rigorously structured, while other parts are narrative.  Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent.  In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, and address the technical challenges, analysis, solutions, and decisions and provide information on the tools produced and lessons learned.  The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archives, and Dave Mayo, a Digital Library Software Engineer for Harvard’s Library and Technology Services.  Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
      PubDate: Mon, 30 Jan 2017 18:46:57 +000
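
      Sketch: Problem discovery on a corpus this size starts with surveying
      what the markup actually contains. lxml's iterparse keeps memory flat
      on large files; the file name is a placeholder.

        from collections import Counter
        from lxml import etree

        tags = Counter()
        for _, elem in etree.iterparse("finding_aid.xml", events=("end",)):
            tags[etree.QName(elem).localname] += 1
            elem.clear()  # free each subtree once counted

        for tag, count in tags.most_common(10):
            print(tag, count)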
       
  • Participatory Design Methods for Collaboration and Communication

    • Authors: Tara Wood; Cate Kompare
      Abstract: Website redesigns can be contentious and fraught in any type of organization, and libraries are no exception. Coming to consensus on priorities and design decisions is nearly impossible, as different groups compete to ensure their subject or specialty area is represented. To keep projects on track and on time, libraries may give a few staff members the authority to make all of the decisions, while keeping user research limited to a small number of usability tests. While these tactics are sometimes necessary, at best they can leave many feeling left out of the process, and at worst, can result in major oversights in the final design. Participatory design methods can bring users and stakeholders into the design process and ultimately lead to a better design and less friction in the project. The authors share their experience and lessons learned using participatory design techniques in a website redesign project at a large, multi-location academic library, and how these techniques facilitated communication, shaped design decisions, and kept a complex, difficult project on track.
      PubDate: Mon, 30 Jan 2017 18:46:56 +000
       
  • Python, Google Sheets, and the Thesaurus for Graphic Materials for Efficient Metadata Project Workflows

    • Authors: Jeremy Bartczak; Ivey Glendon
      Abstract: In 2017, the University of Virginia (U.Va.) will launch a two year initiative to celebrate the bicentennial anniversary of the University’s founding in 1819. The U.Va. Library is participating in this event by digitizing some 20,000 photographs and negatives that document student life on the U.Va. grounds in the 1960s and 1970s. Metadata librarians and archivists are well-versed in the challenges associated with generating digital content and accompanying description within the context of limited resources. This paper describes how technology and new approaches to metadata design have enabled the University of Virginia’s Metadata Analysis and Design Department to rapidly and successfully generate accurate description for these digital objects. Python’s pandas module improves efficiency by cleaning and repurposing data recorded at digitization, while the lxml module builds MODS XML programmatically from CSV tables. A simplified technique for subject heading selection and assignment in Google Sheets provides a collaborative environment for streamlined metadata creation and data quality control.
      PubDate: Mon, 30 Jan 2017 18:46:55 +000
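
      Sketch: A condensed version of the pandas-plus-lxml pattern the
      abstract describes; the CSV columns and output layout are invented,
      not the department's actual schema.

        import pandas as pd
        from lxml import etree

        MODS = "http://www.loc.gov/mods/v3"

        df = pd.read_csv("shoot_log.csv")      # columns: filename, title, date
        df["title"] = df["title"].str.strip()  # cheap cleanup via pandas

        for row in df.itertuples():
            mods = etree.Element("{%s}mods" % MODS, nsmap={None: MODS})
            ti = etree.SubElement(mods, "{%s}titleInfo" % MODS)
            etree.SubElement(ti, "{%s}title" % MODS).text = row.title
            oi = etree.SubElement(mods, "{%s}originInfo" % MODS)
            etree.SubElement(oi, "{%s}dateCreated" % MODS).text = str(row.date)
            etree.ElementTree(mods).write(f"{row.filename}.mods.xml",
                                          xml_declaration=True,
                                          encoding="UTF-8",
                                          pretty_print=True)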
       
  • Supporting Oral Histories in Islandora

    • Authors: Marcus Emmanuel Barnes; Natkeeran Ledchumykanthan, Kim Pham, Kirsta Stapelfeldt
      Abstract: Since 2014, the University of Toronto Scarborough Library’s Digital Scholarship Unit (DSU) has been working on an Islandora-based solution for creating and stewarding oral histories (the Oral Histories solution pack). Although regular updates regarding the status of this work have been presented at Open Repositories conferences, this is the first article to describe the goals and features associated with this codebase, as well as the roadmap for development. An Islandora-based approach is appropriate for addressing the challenges of Oral History, an interdisciplinary methodology with complex notions of authorship and audience that both brings a corresponding complexity of use cases and roots Oral Histories projects in the ever-emergent technical and preservation challenges associated with multimedia and born digital assets. By leveraging Islandora, those embarking on Oral Histories projects benefit from existing community-supported code. By writing and maintaining the Oral Histories solution pack, the library seeks to build on common ground for those supporting Oral Histories projects and encourage a sustainable solution and feature set.
      PubDate: Mon, 30 Jan 2017 18:46:54 +000
       
  • Building a Scalable and Flexible Library Data Dashboard

    • Authors: Nathan Mealey
      Abstract: Data dashboards provide libraries with the means to demonstrate their ongoing activities and usage in an engaging and communicative fashion. Yet, due to the number of service platforms used by libraries, and the wide-ranging technical specifications they entail, bringing all of this content together in a sustainable way is a significant challenge. This article describes Portland State University’s project to design and build a data dashboard based on a scalable and flexible infrastructure that would enable them to present data in a visually compelling and dynamic interface.
      PubDate: Mon, 30 Jan 2017 18:46:53 +000
       
  • What’s New? Deploying a Library New Titles Page with Minimal Programming

    • Authors: John Meyerhofer
      Abstract: With a new titles web page, a library has a place to show faculty, students, and staff the items it is purchasing for their community. However, heavy programming knowledge, a LAMP stack (Linux, Apache, MySQL, PHP), or APIs often separate a library’s data from making a new titles web page a reality. Without IT staff, a new titles page can become nearly impossible or not worth the effort. Here we will demonstrate how a small liberal arts college took its acquisitions data and combined it with a Google Sheet, HTML, and a little JavaScript to create a new titles web page that was dynamic and engaging to its users.
      PubDate: Mon, 30 Jan 2017 18:46:52 +000
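
      Sketch: The article's own approach is HTML and JavaScript; the same
      published-sheet trick in Python, with a placeholder sheet ID and
      invented column names.

        import csv
        import io
        import requests

        SHEET = ("https://docs.google.com/spreadsheets/d/SHEET_ID"
                 "/export?format=csv")  # works for publicly shared sheets

        rows = csv.DictReader(io.StringIO(requests.get(SHEET).text))
        items = "\n".join(f"<li>{r['title']} ({r['call_number']})</li>"
                          for r in rows)
        print(f"<ul>\n{items}\n</ul>")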
       
  • OPRM: Challenges to Including Open Peer Review in Open Access Repositories

    • Authors: Pandelis Perakakis; Agnes Ponsati, Isabel Bernal, Carles Sierra, Nardine Osman, Concha Mosquera-de-Arancibia, Emilio Lorenzo
      Abstract: The peer review system is the norm for many publications. It involves an editor and several experts in the field providing comments for a submitted article. The reviewer remains anonymous to the author, with only the editor knowing the reviewer’s identity. This model is now being challenged and open peer review (OPR) models are viewed as the new frontier of the review process. OPR is a term that encompasses diverse variations in the traditional review process. Examples of this are modifications in the way in which authors and reviewers are aware of each other’s identity (open identities), the visibility of the reviews carried out (open reviews) or the opening up of the review to the academic community (open participation). We present the project for the implementation of an Open Peer Review Module in two major Spanish repositories, DIGITAL.CSIC and e-IEO, together with some promising initial results and challenges in the take-up process. The OPR module, designed for integration with DSpace repositories, enables any scholar to provide a qualitative and quantitative evaluation of any research object hosted in these repositories.
      PubDate: Mon, 30 Jan 2017 18:46:51 +000
       
  • Adopting a Distributed Model for Data Services

    • Authors: Casey Gibbs; Marcos Hernandez, Pongracz Sennyey
      Abstract: This article describes how the Saint Edward’s University Library implemented a distributed model for its Institutional Repository. Built on cloud-based platforms and APIs, the Library has created an Institutional Repository that is scalable and modular, considerably lowering its implementation and maintenance costs while reducing its technical complexity.
      PubDate: Mon, 30 Jan 2017 18:46:50 +000
       
  • Developing an online platform for gamified library instruction

    • Authors: Jared Cowing
      Abstract: Gamification is a concept that has been catching fire for a while now in education, particularly in libraries. This article describes a pilot effort to create an online gamified platform for use in the Woodbury University Library’s information literacy course. The objectives of this project were both to increase student engagement and learning, and to serve as an opportunity for myself to further develop my web development skills. The platform was developed using the CodeIgniter web framework and consisted of several homework exercises ranging from a top-down two-dimensional library exploration game to a tutorial on cleaning up machine-generated APA citations. This article details the project’s planning and development process, the gamification concepts that helped guide the conceptualization of each exercise, reflections on the platform’s implementation in four course sections, and aspirations for the future of the project. It is hoped that this article will serve as an example of the opportunities–and challenges–that await both librarians and instructors who wish to add coding to their existing skill set.
      PubDate: Mon, 30 Jan 2017 18:46:49 +000
       
  • Editorial: Some Numbers

    • Authors: Andrew Darby
      Abstract: Wherein the Journal's most popular article and other small mysteries are revealed.
      PubDate: Tue, 25 Oct 2016 15:11:59 +000
       
 
 