Code4Lib Journal
This is an Open Access journal
ISSN (Online): 1940-5758
Published by Code4lib
  • Editorial: The Economics of Not Being an Organization

    • Authors: Carol Bean
      Abstract: Our successes have caught up with us. Now we get to choose the next step in our evolution.
      PubDate: Wed, 18 Oct 2017 15:00:59 +0000
       
  • Usability Analysis of the Big Ten Academic Alliance Geoportal: Findings
           and Recommendations for Improvement of the User Experience

    • Authors: Mara Blake; Karen Majewicz, Amanda Tickner, Jason Lam
      Abstract: The Big Ten Academic Alliance (BTAA) Geospatial Data Project is a collaboration between twelve member institutions of the consortium and works towards providing discoverability and access to geospatial data, scanned maps, and web mapping services. Usability tests and heuristic evaluations were chosen as evaluation methods because they have long been used to measure and manage website engagement and are essential to iterative design. The BTAA project hopes to give back to the community by publishing its usability findings so that they may benefit other portals built with GeoBlacklight.
      PubDate: Wed, 18 Oct 2017 15:00:58 +0000
       
  • Using the ‘rentrez’ R Package to Identify Repository Records
           for NCBI LinkOut

    • Authors: Yoo Young Lee; Erin D. Foster, David E. Polley, Jere Odell
      Abstract: In this article, we provide a brief overview of the National Center for Biotechnology Information (NCBI) LinkOut service for institutional repositories, a service that allows links from the PubMed database to full-text versions of articles in participating institutional repositories (IRs). We discuss the criteria for participation in NCBI LinkOut for IRs, current methods for participating, and outline our solution for automating the identification of eligible articles in a repository using R and the ‘rentrez’ package. Using our solution, we quickly processed 4,400 open access items from our repository, identified the 557 eligible records, and sent them to the NLM. Direct linking from PubMed resulted in a 17% increase in web traffic.
      PubDate: Wed, 18 Oct 2017 15:00:57 +0000
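      Code sketch: The authors' solution uses R and the 'rentrez' package; as a rough illustration of the same lookup in Python (not the authors' code), the snippet below asks NCBI's E-utilities whether a repository DOI has a PubMed record. The repository export and DOI are made up.

        import requests

        ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        def pmid_for_doi(doi):
            """Return a PubMed ID for the DOI, or None if PubMed has no record."""
            params = {"db": "pubmed", "term": doi, "retmode": "json"}
            ids = requests.get(ESEARCH, params=params).json()["esearchresult"]["idlist"]
            return ids[0] if ids else None

        # Hypothetical repository export: (item_url, doi) pairs.
        items = [("https://repository.example.edu/handle/123", "10.1000/xyz123")]
        eligible = [(url, doi) for url, doi in items if pmid_for_doi(doi)]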
       
  • The Drawings of the Florentine Painters: From Print Catalog to Linked Open
           Data

    • Authors: Lukas Klic; Matt Miller, Jonathan K. Nelson, Cristina Pattuelli, Alexandra Provo
      Abstract: The Drawings of The Florentine Painters project created the first online database of Florentine Renaissance drawings by applying Linked Open Data (LOD) techniques to a foundational text of the same name, first published by Bernard Berenson in 1903 (revised and expanded editions, 1938 and 1961). The goal was to make Berenson’s catalog information—still an essential information resource today—available in a machine-readable format, allowing researchers to access the source content through open data services. This paper provides a technical overview of the methods and processes applied in the conversion of Berenson’s catalog to LOD using the CIDOC-CRM ontology; it also discusses the different phases of the project, focusing on the challenges and issues of data transformation and publishing. The project was funded by the Samuel H. Kress Foundation and organized by Villa I Tatti, The Harvard University Center for Italian Renaissance Studies.
      Catalog: http://florentinedrawings.itatti.harvard.edu
      Data Endpoint: http://data.itatti.harvard.edu
      PubDate: Wed, 18 Oct 2017 15:00:56 +0000
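      Code sketch: A minimal sense of what "conversion to LOD using the CIDOC-CRM ontology" means in practice, written with Python's rdflib. The URIs and title here are invented; the project's real mapping is far richer.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF

        CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
        g = Graph()
        g.bind("crm", CRM)

        drawing = URIRef("http://data.example.org/drawing/1")      # hypothetical URI
        title = URIRef("http://data.example.org/drawing/1/title")  # hypothetical URI

        g.add((drawing, RDF.type, CRM["E22_Man-Made_Object"]))
        g.add((drawing, CRM["P102_has_title"], title))
        g.add((title, RDF.type, CRM["E35_Title"]))
        g.add((title, RDF.value, Literal("Study of a seated figure")))

        print(g.serialize(format="turtle"))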
       
  • Web-Scraping for Non-Programmers: Introducing OXPath for Digital Library
           Metadata Harvesting

    • Authors: Mandy Neumann; Jan Steinberg, Philipp Schaer
      Abstract: Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. Where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-sized publishers, research institutes, or funding agencies. As data curation is typically done by people with a library and information science background, who are usually proficient with XML technologies but are not full-stack programmers, we present a web scraping tool that does not require digital library curators to program custom web scrapers from scratch: the open-source tool OXPath, an extension of XPath, which allows the user to define the data to be extracted from websites in a declarative way. Taking one of our own use cases as an example, we walk through the process of creating an OXPath wrapper for metadata harvesting and point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
      PubDate: Wed, 18 Oct 2017 15:00:55 +0000
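      Code sketch: OXPath adds interaction and extraction markers on top of XPath; rather than risk misquoting its syntax here, this snippet shows the underlying declarative-XPath extraction style using Python's lxml. The URL and markup are invented.

        import requests
        from lxml import html

        page = html.fromstring(requests.get("https://publisher.example.org/articles").content)
        for item in page.xpath('//div[@class="article"]'):
            title = item.xpath('string(.//h2)')
            authors = item.xpath('.//span[@class="author"]/text()')
            print(title, "|", "; ".join(authors))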
       
  • DIY DOI: Leveraging the DOI Infrastructure to Simplify Digital
           Preservation and Repository Management

    • Authors: Kyle Bannerjee; David Forero
      Abstract: This article describes methods by which staff with modest technical expertise can leverage the DOI (Digital Object Identifier) infrastructure, in combination with third-party storage and preservation solutions, to build safer, more useful, and easier-to-manage repositories at much lower cost than is normally possible with standalone systems. It also demonstrates how understanding the underlying mechanisms and questioning the assumptions of technology metaphors such as filesystems can lead to seeing and using tools in new and more powerful ways.
      PubDate: Wed, 18 Oct 2017 15:00:54 +0000
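      Code sketch: One concrete piece of "leveraging the DOI infrastructure", assuming DOIs are registered with DataCite (the article does not name a registrar): repointing a DOI at a new storage location through the DataCite REST API. The DOI, URL, and credentials are placeholders.

        import requests

        doi = "10.5072/example-doi"  # 10.5072 is DataCite's test prefix
        payload = {"data": {"type": "dois",
                            "attributes": {"url": "https://storage.example.org/objects/42"}}}
        resp = requests.put(f"https://api.datacite.org/dois/{doi}",
                            json=payload,
                            auth=("REPOSITORY_ID", "PASSWORD"),
                            headers={"Content-Type": "application/vnd.api+json"})
        resp.raise_for_status()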
       
  • Direct Database Access to OCLC Connexion’s Local Save File

    • Authors: Rebecca B. French
      Abstract: A feature of OCLC's Connexion cataloging client unknown to most librarians is the ability to directly work with the Microsoft Access database underlying the local save file. This article provides an overview of the metadata made available through this method, including fields that cannot be accessed through the regular Connexion interface, and discusses factors to be considered when deciding whether to migrate the data to another database system instead of continuing to work with Access. Descriptions of three projects illustrate how this functionality has been applied to efficiently catalog a gift collection, find OCLC numbers for e-books, and create bibliographic records for Early English Books Online/Text Creation Partnership titles using data from multiple sources. With the option to rely only on common, off-the-shelf software, this method of directly accessing the local save file database offers a way to expand Connexion’s functionality for those unable or unwilling to work with OCLC APIs. Other benefits include the ability to import external data and to use SQL for more advanced querying. A number of limitations are also discussed, and their implications for metadata access and use are explored.
      PubDate: Wed, 18 Oct 2017 15:00:53 +0000
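      Code sketch: The article works in Microsoft Access itself; as a hedged illustration, the same database can also be reached from Python through the Access ODBC driver (Windows only). The file path and table name below are placeholders; the save file's real schema must be inspected locally.

        import pyodbc

        conn = pyodbc.connect(
            r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
            r"DBQ=C:\path\to\LocalSaveFile.db;"  # placeholder path
        )
        cur = conn.cursor()
        for row in cur.execute("SELECT * FROM SavedRecords"):  # hypothetical table name
            print(row)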
       
  • Between the Sheets: a Library-wide Inventory with Google

    • Authors: Craig Boman; Ray Voelker
      Abstract: When it comes to taking an inventory of physical items, libraries often rely on their traditional integrated library system’s (ILS) à la carte add-ons; outside vendors; or other possibly outdated, complex, and often expensive methods. For libraries with shrinking budgets and other limited resources, high costs can put these methods out of reach. At the University of Dayton Libraries, we set out to develop an inexpensive and reasonably easy-to-use method for conducting a library-wide physical item inventory. In this article, we explain a custom-built Google Sheets-based library inventory system, along with some code for the implementation of a RESTful API (written in PHP) that interacts with our ILS. We also explain our use of Google Apps Script in our Google Sheet, which is crucial to our system. Although this method used a specific ILS (Innovative Interfaces' Sierra product) and custom-built RESTful APIs, it may be possible to use similar approaches with other ILS software. Additional notes include areas for improvement and recommendations for interoperability with other ILS systems.
      PubDate: Wed, 18 Oct 2017 15:00:52 +0000
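      Code sketch: A rough sketch of one slice of such a system (not the authors' PHP/Apps Script code): reading scanned barcodes from a Google Sheet with Python's gspread and checking each against an ILS REST API. The sheet name, endpoint, and response fields are hypothetical.

        import gspread
        import requests

        gc = gspread.service_account(filename="credentials.json")
        sheet = gc.open("Inventory Scans").sheet1

        for barcode in sheet.col_values(1):
            item = requests.get(f"https://ils.example.edu/api/items/{barcode}").json()
            if item.get("status") != "AVAILABLE":
                print(barcode, "needs review:", item.get("status"))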
       
  • Tools and Workflows for Collaborating on Static Website Projects

    • Authors: Kaitlin Newson
      Abstract: Static website generators have seen a significant increase in popularity in recent years, offering many advantages over their dynamic counterparts. While these generators were typically used for blogs, they have grown in usage for other web-based projects, including documentation, conference websites, and image collections. However, because of their technical complexity, these tools can be inaccessible to content creators depending on their level of technical skill and comfort with web development technologies. Drawing from experience with a collaborative static website project, this article will provide an overview of static website generators, review different tools available for managing content, and explore workflows and best practices for collaborating with teams on static website projects.
      PubDate: Wed, 18 Oct 2017 15:00:51 +0000
       
  • Leveraging Python to improve ebook metadata selection, ingest, and
           management

    • Authors: Kelly Thompson and Stacie Traill
      Abstract: Libraries face many challenges in managing descriptive metadata for ebooks, including quality control, completeness of coverage, and ongoing management. The recent emergence of library management systems that automatically provide descriptive metadata for e-resources activated in system knowledge bases means that ebook management models are moving toward both greater efficiency and more complex implementation and maintenance choices. Automated and data-driven processes for ebook management have always been desirable, but in the current environment, they become necessary. In addition to initial selection of a record source, automation can be applied to quality control processes and ongoing maintenance in order to keep manual, eyes-on work to a minimum while providing the best possible discovery and access. In this article, we describe how we are using Python scripts to address these challenges.
      PubDate: Wed, 18 Oct 2017 15:00:50 +0000
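      Code sketch: In the spirit of the quality-control scripts described (the exact criteria are the authors'): flag MARC records missing fields a library considers required, using pymarc. The field list and file name are illustrative.

        from pymarc import MARCReader

        REQUIRED = ["245", "856"]  # e.g., title and electronic location

        with open("ebooks.mrc", "rb") as fh:
            for record in MARCReader(fh):
                missing = [tag for tag in REQUIRED if not record.get_fields(tag)]
                if missing:
                    ctrl = record.get_fields("001")
                    print(ctrl[0].value() if ctrl else "?", "missing", missing)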
       
  • Testing Three Types of Raspberry Pi People Counters

    • Authors: Johnathan Cintron; Devlyn Courtier, John DeLooper
      Abstract: The Hudson County Community College (HCCC) Library tested three different types of Raspberry Pi based people counters between 6/14/2017 and 7/9/2017. This article will describe how we created each type of counter, will compare the accuracy of each sensor, and will compare them to the college’s existing 3M 3501 gate counters. It will also describe why and how our team decided to make this project, discuss lessons learned, and provide instructions for how other libraries can create their own gate counters.
      PubDate: Wed, 18 Oct 2017 15:00:49 +0000
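      Code sketch: The simplest form such a counter can take, counting break-beam events on a GPIO pin with the RPi.GPIO library. The pin number and debounce interval are assumptions, and the article's three counters differ in sensor type.

        import time
        import RPi.GPIO as GPIO

        SENSOR_PIN = 17  # assumed wiring
        count = 0

        def on_break(channel):
            global count
            count += 1

        GPIO.setmode(GPIO.BCM)
        GPIO.setup(SENSOR_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
        GPIO.add_event_detect(SENSOR_PIN, GPIO.FALLING, callback=on_break, bouncetime=200)

        try:
            while True:
                time.sleep(60)
                print("count so far:", count)
        finally:
            GPIO.cleanup()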
       
  • Editorial: Welcome New Editors, What We Know About Who We Are, and
           Submission Pro Tip!

    • Authors: Sara Amato
      Abstract: Want to see your work in C4LJ? Here's a pro tip!
      PubDate: Tue, 18 Jul 2017 21:03:59 +0000
       
  • A Practical Starter Guide on Developing Accessible Websites

    • Authors: Cynthia Ng; Michael Schofield
      Abstract: There is growing concern about the accessibility of the online content and services provided by libraries and public institutions. While many articles cover legislation, general benefits, and common surface-level opportunities to improve web accessibility (e.g., alt tags), few discuss web accessibility in more depth, and those that do are typically not specific to library web services. This article is meant to fill that gap, and provides practical best practices and code.
      PubDate: Tue, 18 Jul 2017 21:03:58 +0000
       
  • Recount: Revisiting the 42nd Canadian Federal Election to Evaluate the
           Efficacy of Retroactive Tweet Collection

    • Authors: Anthony T. Pinter; Ben Goldman
      Abstract: In this paper, we report the development and testing of a methodology for collecting tweets from periods beyond the Twitter API’s seven-to-nine-day limitation. To accomplish this, we used Twitter’s advanced search feature to find tweets older than that limit, and then used JavaScript to automatically scan the resulting webpage for tweet IDs. These IDs were then rehydrated (their tweet metadata retrieved) using twarc. To examine the efficacy of this method for retrospective collection, we revisited the case study of the 42nd Canadian Federal Election. Comparing the two datasets, we found that our methodology does not produce results as robust as real-time streaming, but that it might be useful as a starting point for researchers or collectors. We close by discussing the implications of these findings.
      PubDate: Tue, 18 Jul 2017 21:03:57 +0000
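      Code sketch: The rehydration step, using the twarc library the paper also uses (twarc v1 API; the credentials and file names are placeholders).

        import json
        from twarc import Twarc

        t = Twarc(consumer_key="...", consumer_secret="...",
                  access_token="...", access_token_secret="...")

        with open("tweet_ids.txt") as fh, open("tweets.jsonl", "w") as out:
            for tweet in t.hydrate(line.strip() for line in fh):
                out.write(json.dumps(tweet) + "\n")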
       
  • Extending Omeka for a Large-Scale Digital Project

    • Authors: Haley Antell; Joe Corall, Virginia Dressler, Cara Gilgenbach
      Abstract: In September 2016, the department of Special Collections and Archives, Kent State University Libraries, received a Digital Dissemination grant from the National Historical Publications and Records Commission (NHPRC) to digitize roughly 72,500 pages from the May 4 collection, which documents the May 1970 shootings of thirteen students by Ohio National Guardsmen at Kent State University. This article will highlight the project team’s efforts to adapt the Omeka instance with modifications to the interface and ingestion processes to assist the efforts of presenting unique archival collections online, including an automated method to create folder level links on the relevant finding aids upon ingestion; implementing open source Tesseract to provide OCR to uploaded files; automated PDF creation from the raw image files using Ghostscript; and integrating Mirador to present a folder level display to reflect archival organization as it occurs in the physical collections. These adaptations, which have been shared via GitHub, will be of interest to other institutions looking to present archival material in Omeka.
      PubDate: Tue, 18 Jul 2017 21:03:56 +0000
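      Code sketch: The OCR and PDF steps named above are command-line tools; this is a minimal way to drive them from Python. File names are placeholders, and the project's actual integration lives inside Omeka.

        import subprocess

        # OCR one page image; Tesseract writes page.txt
        subprocess.run(["tesseract", "page.tif", "page"], check=True)

        # Combine page-level PDFs into a folder-level PDF with Ghostscript
        subprocess.run(["gs", "-dBATCH", "-dNOPAUSE", "-sDEVICE=pdfwrite",
                        "-sOutputFile=folder.pdf", "page1.pdf", "page2.pdf"],
                       check=True)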
       
  • Annotation-based enrichment of Digital Objects using open-source
           frameworks

    • Authors: Marcus Emmanuel Barnes; Natkeeran Ledchumykanthan, Kim Pham, Kirsta Stapelfeldt
      Abstract: The W3C Web Annotation Data Model, Protocol, and Vocabulary unify approaches to annotations across the web, enabling their aggregation, discovery, and persistence over time. In addition, new JavaScript libraries provide the ability for users to annotate multi-format content. In this paper, we describe how we have leveraged these developments to provide annotation features alongside Islandora’s existing preservation, access, and management capabilities. We also discuss our experience developing with the Web Annotation Model as an open web architecture standard, as well as our approach to integrating mature external annotation libraries. The resulting software (the Web Annotation Utility Module for Islandora) accommodates annotation across multiple formats. This solution can be used in various digital scholarship contexts.
      PubDate: Tue, 18 Jul 2017 21:03:55 +0000
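      Code sketch: What a single W3C Web Annotation Data Model document looks like, built here as a Python dict. The target URL is a placeholder; real annotations from the module carry more fields.

        import json

        annotation = {
            "@context": "http://www.w3.org/ns/anno.jsonld",
            "type": "Annotation",
            "body": {"type": "TextualBody",
                     "value": "A handwritten note appears in the margin here.",
                     "format": "text/plain"},
            "target": "https://islandora.example.edu/islandora/object/demo%3A1",
        }
        print(json.dumps(annotation, indent=2))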
       
  • The FachRef-Assistant: Personalised, subject specific, and transparent
           stock management

    • Authors: Eike T. Spielberg; Frank Lützenkirchen
      Abstract: We present in this paper a personalized web application for the weeding of printed resources: the FachRef-Assistant. It offers an extensive range of tools for evidence-based stock management, based on thorough analysis of usage statistics. Special attention is paid to the individualization of criteria, transparency of the parameters used, and generic functions. Currently, it is designed to work with the Aleph system from Ex Libris, but effort was spent to keep the application as generic as possible. For example, all procedures specific to the local library system have been collected in one Java package. The inclusion of library-specific properties such as collections and systematics has been designed to be highly generic as well, by mapping the individual entries onto an in-memory database. Hence, simple adaptation of the package and the mappings would render the FachRef-Assistant compatible with other library systems. The personalization of the application allows for the inclusion of subject-specific usage properties as well as of variations between different collections within one subject area. The parameter sets used to analyse the stock and to prepare weeding and purchase proposal lists are included in the output XML files to facilitate a high degree of transparency, objectivity, and reproducibility.
      PubDate: Tue, 18 Jul 2017 21:03:54 +0000
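      Code sketch: Purely illustrative of evidence-based weeding from usage statistics; the weights, threshold, and record layout below are invented and are not the FachRef-Assistant's actual parameters.

        def weeding_score(loans_last_5y, years_since_last_loan, copies):
            # Higher scores argue for keeping; negative scores go on the proposal list.
            return loans_last_5y - 2 * years_since_last_loan - (copies - 1)

        holdings = [
            {"callno": "QA76.9 .D3", "loans_last_5y": 0, "years_since_last_loan": 8, "copies": 2},
            {"callno": "Z699 .A1", "loans_last_5y": 12, "years_since_last_loan": 1, "copies": 1},
        ]
        proposal = [h for h in holdings
                    if weeding_score(h["loans_last_5y"], h["years_since_last_loan"], h["copies"]) < 0]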
       
  • The Semantics of Metadata: Avalon Media System and the Move to RDF

    • Authors: Juliet L. Hardesty; Jennifer B. Young
      Abstract: The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and the Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
      PubDate: Tue, 18 Jul 2017 21:03:53 +0000
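      Code sketch: The flavor of the migration described, reduced to one statement: a title that lived in MODS XML becomes an RDF triple, shown with Python's rdflib. The URI and predicate choice are illustrative, not Avalon's actual mapping.

        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DCTERMS

        g = Graph()
        obj = URIRef("http://repository.example.edu/objects/av123")  # hypothetical URI
        g.add((obj, DCTERMS.title, Literal("Oral history interview, 1968")))
        print(g.serialize(format="nt"))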
       
  • OpeNumisma: A Software Platform Managing Numismatic Collections with A
           Particular Focus On Reflectance Transformation Imaging

    • Authors: Avgoustinos Avgousti; Andriana Nikolaidou, Ropertos Georgiou
      Abstract: This paper describes OpeNumisma, a reusable web-based platform focused on digital numismatic collections. The platform provides an innovative merger of digital imaging and data management systems, offering great new opportunities for research and the online dissemination of numismatic knowledge. A unique feature of the platform is its application of Reflectance Transformation Imaging (RTI), a computational photography method that offers tremendous possibilities for image analysis in numismatic research. This technique allows the user to observe minor details in the browser, unseen by the naked eye, simply by moving the computer mouse over the image rather than handling the actual object. The first successful implementation of OpeNumisma has been the creation of a digital library for the medieval coins of the Bank of Cyprus Cultural Foundation collection.
      PubDate: Tue, 18 Jul 2017 21:03:52 +0000
       
  • DuEPublicA: Automated bibliometric reports based on the University
           Bibliography and external citation data

    • Authors: Eike T. Spielberg
      Abstract: This paper describes a web application that generates bibliometric reports based on the University Bibliography and the Scopus citation database. Our goal is to offer an alternative to easy-to-prepare automated reports from commercial sources, which often suffer from incomplete coverage of publication types and difficult attribution to people, institutes, and universities. Using our University Bibliography as the source for selecting relevant publications solves both problems. As it is a local system, maintained and set up by the library, we can include every publication type we want. And as the University Bibliography is linked to the identity management system of the university, it enables an easy selection of publications for people, institutes, and the whole university. The program is designed as a web application which collects publications from the University Bibliography, enriches them with citation data from Scopus, and performs three kinds of analyses: 1) a general analysis (number and type of publications, publications per year, etc.), 2) a citation analysis (average citations per publication, h-index, uncitedness), and 3) an affiliation analysis (home and partner institutions). We tried to keep the code highly generic, so that the inclusion of other databases (Web of Science, IEEE) or other bibliographies is easily feasible. The application is written in Java and XML and uses XSL transformations and LaTeX to generate bibliometric reports as HTML pages and in PDF format. Warnings and alerts are automatically included if the citation analysis covers only a small fraction of the publications from the University Bibliography. In addition, we describe a small tool that helps collect author details for an analysis.
      PubDate: Tue, 18 Jul 2017 21:03:51 +0000
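      Code sketch: A worked example of one metric in the citation analysis, the h-index, computed from per-publication citation counts. The application itself is Java/XML; this is just the definition in Python.

        def h_index(citations):
            """Largest h such that h publications have at least h citations each."""
            ranked = sorted(citations, reverse=True)
            return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

        assert h_index([10, 8, 5, 4, 3]) == 4
        assert h_index([0, 0]) == 0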
       
  • New Metadata Recipes for Old Cookbooks: Creating and Analyzing a Digital
           Collection Using the HathiTrust Research Center Portal

    • Authors: Gioia Stevens
      Abstract: The Early American Cookbooks digital project is a case study in analyzing collections as data using HathiTrust and the HathiTrust Research Center (HTRC) Portal. The purposes of the project are to create a freely available, searchable collection of full-text early American cookbooks within the HathiTrust Digital Library, to offer an overview of the scope and contents of the collection, and to analyze trends and patterns in the metadata and the full text of the collection. The digital project has two basic components: a collection of 1450 full-text cookbooks published in the United States between 1800 and 1920 and a website to present a guide to the collection and the results of the analysis. This article will focus on the workflow for analyzing the metadata and the full-text of the collection. The workflow will cover: 1) creating a searchable public collection of full-text titles within the HathiTrust Digital Library and uploading it to the HTRC Portal, 2) analyzing and visualizing legacy MARC data for the collection using MarcEdit, OpenRefine and Tableau, and 3) using the text analysis tools in the HTRC Portal to look for trends and patterns in the full text of the collection.
      PubDate: Tue, 18 Jul 2017 21:03:50 +0000
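      Code sketch: One small step from the metadata-analysis workflow (step 2), shown with pymarc instead of MarcEdit/OpenRefine: tallying publication years from the MARC 008 control field. The input file name is a placeholder.

        from collections import Counter
        from pymarc import MARCReader

        years = Counter()
        with open("cookbooks.mrc", "rb") as fh:
            for record in MARCReader(fh):
                f008 = record.get_fields("008")
                if f008 and f008[0].data[7:11].isdigit():  # Date 1, bytes 07-10
                    years[f008[0].data[7:11]] += 1

        for year, n in sorted(years.items()):
            print(year, n)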
       
  • Countering Stryker’s Punch: Algorithmically Filling the Black Hole

    • Authors: Michael J. Bennett
      Abstract: Two current digital image editing programs are examined in the context of filling in missing visual image data from hole-punched United States Farm Security Administration (FSA) negatives. Specifically, Photoshop's Content-Aware Fill feature and GIMP's Resynthesizer plugin are evaluated and contrasted against comparable images. A possible automated workflow geared towards large-scale editing of similarly hole-punched negatives is also explored. Finally, potential future research based upon this study's results is proposed in the context of leveraging previously enhanced, image-level metadata.
      PubDate: Tue, 18 Jul 2017 21:03:49 +0000
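      Code sketch: Neither Photoshop's Content-Aware Fill nor GIMP's Resynthesizer is scriptable from here, so as a stand-in for the automated-workflow idea this uses OpenCV's inpainting, a related algorithmic fill, to patch a masked hole. File names are placeholders.

        import cv2

        img = cv2.imread("negative.png")
        mask = cv2.imread("hole_mask.png", cv2.IMREAD_GRAYSCALE)  # white = hole punch
        filled = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)
        cv2.imwrite("negative_filled.png", filled)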
       
  • Editorial: Reflecting on the success and risks to the Code4Lib Journal

    • Authors: Peter E. Murray
      Abstract:
      PubDate: Thu, 20 Apr 2017 18:18:59 +0000
       
  • Linked Data is People: Building a Knowledge Graph to Reshape the Library
           Staff Directory

    • Authors: Jason A. Clark; Scott W. H. Young
      Abstract: One of our greatest library resources is people. Most libraries have staff directory information published on the web, yet most of this data is trapped in local silos, PDFs, or unstructured HTML markup. With this in mind, the library informatics team at Montana State University (MSU) Library set a goal of remaking our people pages by connecting the local staff database to the Linked Open Data (LOD) cloud. In pursuing linked data integration for library staff profiles, we have realized two primary use cases: improving the search engine optimization (SEO) for people pages and creating network graph visualizations. In this article, we will focus on the code to build this library graph model as well as the linked data workflows and ontology expressions developed to support it. Existing linked data work has largely centered around machine-actionable data and improvements for bots or intelligent software agents. Our work demonstrates that connecting your staff directory to the LOD cloud can reveal relationships among people in dynamic ways, thereby raising staff visibility and bringing an increased level of understanding and collaboration potential for one of our primary assets: the people that make the library happen.
      PubDate: Thu, 20 Apr 2017 18:18:58 +0000
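      Code sketch: The SEO half of the project rests on embedding structured data in people pages; here is a minimal schema.org Person description as JSON-LD. Names and URLs are invented, and MSU's actual graph model is far richer.

        import json

        person = {
            "@context": "http://schema.org",
            "@type": "Person",
            "name": "Jane Librarian",
            "jobTitle": "Metadata Librarian",
            "worksFor": {"@type": "Library", "name": "Example State University Library"},
            "sameAs": ["https://orcid.org/0000-0000-0000-0000"],
            "url": "https://www.lib.example.edu/directory/jane-librarian",
        }
        print('<script type="application/ld+json">%s</script>' % json.dumps(person))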
       
  • Recommendations for the application of Schema.org to aggregated Cultural
           Heritage metadata to increase relevance and visibility to search engines:
           the case of Europeana

    • Authors: Richard Wallis; Antoine Isaac, Valentine Charles, Hugo Manguinhas
      Abstract: Europeana provides access to more than 54 million cultural heritage objects through its portal, Europeana Collections. It is crucial for Europeana to be recognized by search engines as a trusted, authoritative repository of cultural heritage objects: even though the portal is the main entry point, most Europeana users come to it via search engines. Europeana Collections is fuelled by metadata describing cultural objects, represented in the Europeana Data Model (EDM). This paper presents the research and consequent recommendations for publishing Europeana metadata using the Schema.org vocabulary and best practices. Schema.org metadata is embedded in HTML to be consumed by search engines to power rich services (such as the Google Knowledge Graph). Schema.org is an open and widely adopted initiative (used by over 12 million domains), backed by Google, Bing, Yahoo!, and Yandex, for sharing metadata across the web, and it underpins the emergence of new web techniques such as so-called semantic SEO. Our research addressed the representation of the embedded metadata as part of Europeana's HTML pages and sitemaps so that the reuse of this data can be optimized. The practical objective of our work is to produce a Schema.org representation of Europeana resources described in EDM that is as rich as possible and tailored to Europeana's realities and user needs, as well as to the search engines and their users.
      PubDate: Thu, 20 Apr 2017 18:18:57 +0000
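      Code sketch: The flavor of Schema.org output the recommendations aim at, for one cultural heritage object as JSON-LD. The values are invented and far simpler than the EDM-derived mapping the paper specifies.

        import json

        item = {
            "@context": "http://schema.org",
            "@type": "VisualArtwork",
            "name": "Portrait of a Lady",
            "creator": {"@type": "Person", "name": "Unknown Flemish master"},
            "dateCreated": "1620",
            "provider": {"@type": "Organization", "name": "Example Museum"},
            "image": "https://images.example.org/portrait.jpg",
        }
        print(json.dumps(item, indent=2))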
       
  • Autoload: a pipeline for expanding the holdings of an Institutional
           Repository enabled by ResourceSync

    • Authors: James Powell; Martin Klein, Herbert Van de Sompel
      Abstract: Providing local access to locally produced content is a primary goal of the Institutional Repository (IR). Guidelines, requirements, and workflows are among the ways in which institutions attempt to ensure this content is deposited and preserved, but some content is always missed. At Los Alamos National Laboratory, the library implemented a service called LANL Research Online (LARO), to provide public access to a collection of publicly shareable LANL researcher publications authored between 2006 and 2016. LARO exposed the fact that we have full text for only about 10% of eligible publications for this time period, despite a review and release requirement that ought to have resulted in a much higher deposition rate. This discovery motivated a new effort to discover and add more full text content to LARO. Autoload attempts to locate and harvest items that were not deposited locally, but for which archivable copies exist. Here we describe the Autoload pipeline prototype and how it aggregates and utilizes Web services including Crossref, SHERPA/RoMEO, and oaDOI as it attempts to retrieve archivable copies of resources. Autoload employs a bootstrapping mechanism based on ResourceSync, a NISO standard for resource replication and synchronization. We implemented support for ResourceSync atop the LARO Solr index, which exposes metadata contained in the local IR. This allowed us to utilize ResourceSync without modifying our IR. We close with a brief discussion of other uses we envision for our ResourceSync-Solr implementation, and describe how a new effort called Signposting can replace cumbersome screen scraping with a robust autodiscovery path to content which leverages Web protocols.
      PubDate: Thu, 20 Apr 2017 18:18:56 +0000
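      Code sketch: The oaDOI lookup at the heart of the harvesting step, shown against the service's current incarnation as the Unpaywall API. The DOI and contact email are placeholders.

        import requests

        def open_copy_url(doi, email="you@example.edu"):
            data = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                                params={"email": email}).json()
            loc = data.get("best_oa_location")
            return (loc.get("url_for_pdf") or loc.get("url")) if loc else None

        print(open_copy_url("10.1000/xyz123"))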
       
  • Outside The Box: Building a Digital Asset Management Ecosystem for
           Preservation and Access

    • Authors: Andrew Weidner; Sean Watkins, Bethany Scott, Drew Krewer, Anne Washington, Matthew Richardson
      Abstract: The University of Houston (UH) Libraries made an institutional commitment in late 2015 to migrate the data for its digitized cultural heritage collections to open source systems for preservation and access: Hydra-in-a-Box, Archivematica, and ArchivesSpace. This article describes the work that the UH Libraries implementation team has completed to date, including open source tools for streamlining digital curation workflows, minting and resolving identifiers, and managing SKOS vocabularies. These systems, workflows, and tools, collectively known as the Bayou City Digital Asset Management System (BCDAMS), represent a novel effort to solve common issues in the digital curation lifecycle and may serve as a model for other institutions seeking to implement flexible and comprehensive systems for digital preservation and access.
      PubDate: Thu, 20 Apr 2017 18:18:55 +0000
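      Code sketch: One of the named workflow pieces, identifier minting and resolving, reduced to an illustrative minimum: generate an opaque identifier and record its target for later resolution. This is a toy, not the BCDAMS minter.

        import sqlite3
        import uuid

        db = sqlite3.connect("identifiers.db")
        db.execute("CREATE TABLE IF NOT EXISTS ids (id TEXT PRIMARY KEY, target TEXT)")

        def mint(target_url):
            new_id = uuid.uuid4().hex[:10]  # opaque, fixed-length token
            db.execute("INSERT INTO ids VALUES (?, ?)", (new_id, target_url))
            db.commit()
            return new_id

        def resolve(identifier):
            row = db.execute("SELECT target FROM ids WHERE id = ?", (identifier,)).fetchone()
            return row[0] if row else None

        pid = mint("https://repository.example.edu/object/42")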
       
  • Medici 2: A Scalable Content Management System for Cultural Heritage
           Datasets

    • Authors: Constantinos Sophocleous; Luigi Marini, Ropertos Georgiou, Mohammed Elfarargy, Kenton McHenry
      Abstract: Digitizing large collections of Cultural Heritage (CH) resources and providing tools for their management, analysis and visualization is critical to CH research. A key element in achieving the above goal is to provide user-friendly software offering an abstract interface for interaction with a variety of digital content types. To address these needs, the Medici content management system is being developed in a collaborative effort between the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, Bibliotheca Alexandrina (BA) in Egypt, and the Cyprus Institute (CyI). The project is pursued in the framework of the European Project “Linking Scientific Computing in Europe and Eastern Mediterranean 2” (LinkSCEEM2) and supported by work funded through the U.S. National Science Foundation (NSF), the U.S. National Archives and Records Administration (NARA), the U.S. National Institutes of Health (NIH), the U.S. National Endowment for the Humanities (NEH), the U.S. Office of Naval Research (ONR), the U.S. Environmental Protection Agency (EPA), as well as other private sector efforts. Medici is a Web 2.0 environment integrating analysis tools for the auto-curation of un-curated digital data, allowing automatic processing of input (CH) datasets, and visualization of both data and collections. It offers a simple user interface for dataset preprocessing, previewing, automatic metadata extraction, user input of metadata and provenance support, storage, archiving and management, representation and reproduction. Building on previous experience (Medici 1), NCSA and CyI are working towards the improvement of the technical, performance and functionality aspects of the system. The current version of Medici (Medici 2) is the result of these efforts. It is a scalable, flexible, robust distributed framework with wide data format support (including 3D models and Reflectance Transformation Imaging (RTI)) and metadata functionality. We provide an overview of Medici 2’s current features, supported by representative use cases, as well as a discussion of future development directions.
      PubDate: Thu, 20 Apr 2017 18:18:54 +0000
       
  • An Interactive Map for Showcasing Repository Impacts

    • Authors: Hui Zhang; Camden Lopez
      Abstract: Digital repository managers rely on usage metrics such as the number of downloads to demonstrate research visibility and impacts of the repositories. Increasingly, they find that current tools such as spreadsheets and charts are ineffective for revealing important elements of usage, including reader locations, and for attracting the targeted audiences. This article describes the design and development of a readership map that provides an interactive, near-real-time visualization of actual visits to an institutional repository using data from Google Analytics. The readership map exhibits the global impacts of a repository by displaying the city of every view or download together with the title of the scholarship being read and a hyperlink to its page in the repository. We will discuss project motivation and development issues such as authentication with Google API, metadata integration, performance tuning, and data privacy.
      PubDate: Thu, 20 Apr 2017 18:18:53 +0000
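      Code sketch: The display half of such a map, plotted with folium (Leaflet for Python) rather than the authors' stack; the Google Analytics query that would feed the events list is omitted, and the sample rows are invented.

        import folium

        events = [
            {"city": "Corvallis", "lat": 44.56, "lon": -123.26,
             "title": "Thesis on watershed modeling"},
            {"city": "Nairobi", "lat": -1.29, "lon": 36.82,
             "title": "Article on open access policies"},
        ]

        m = folium.Map(location=[20, 0], zoom_start=2)
        for e in events:
            folium.CircleMarker(location=[e["lat"], e["lon"]], radius=5,
                                popup=f'{e["city"]}: {e["title"]}').add_to(m)
        m.save("readership_map.html")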
       
  • Editorial: Introspection as Activism, or, Getting Our Houses in Order

    • Authors: Ruth Kitchin Tillman
      Abstract: Those of us in libraries like to trace our history to Alexandria or to the French governmental system of record-keeping, but the construction of the modern GLAM world is far more recent, almost as new as coding. It has evolved almost as rapidly. And its future is on us, whether we choose to passively accept a status quo others build or to act and grow and develop ourselves and our workplaces.
      PubDate: Mon, 30 Jan 2017 18:46:59 +0000
       
 
 