Code4Lib Journal
  Open Access journal
   ISSN (Online) 1940-5758
   Published by Code4lib
  • Editorial: Musing on learning to be a selfish librarian

    • Authors: Terry Reese
      Abstract: One of the perks of being the coordinating editor is you get to write the opening editorial for the issue.  It’s an opportunity to think broadly about the community, the journal…current events.  And if you look back over the past year or so, those that have taken on this role have been more than up […]
      PubDate: Mon, 05 Feb 2018 20:44:59 +0000
  • Approaching the largest ‘API’: extracting information from the Internet with Python

    • Authors: Jonathan E. Germann
      Abstract: This article explores the need for libraries to algorithmically access and manipulate the world’s largest API: the Internet. The billions of pages on the ‘Internet API’ (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the world wide web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four packages (Urllib, Selenium, BeautifulSoup, Scrapy) in Python can automate almost any web page for projects of all sizes. An example warrant data project is explained to illustrate how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
      PubDate: Mon, 05 Feb 2018 20:44:58 +0000
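      Example: To give a flavor of the kind of extraction the abstract describes, a minimal sketch using urllib and BeautifulSoup; the URL is a stand-in and this is an illustration, not code from the article:

        import urllib.request
        from bs4 import BeautifulSoup

        # Fetch a page over HTTP (any target page could stand in for example.com)
        with urllib.request.urlopen("https://example.com/") as response:
            html = response.read()

        # Parse the HTML and extract the text and target of every link
        soup = BeautifulSoup(html, "html.parser")
        for a in soup.find_all("a", href=True):
            print(a.get_text(strip=True), "->", a["href"])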
  • Using R and the Tidyverse to Generate Library Usage Reports

    • Authors: Andy Meyer
      Abstract: Gathering, analyzing, and communicating library usage data provides a foundation for thoughtful assessment. However, the amount of time and expertise required creates a barrier to actually using this data. By using the statistical programming language R and the tools and approach of the Tidyverse, the process of gathering, analyzing, and communicating data can be automated in ways that reduce the amount of time and energy required. At the same time, this approach increases staff capacity for other data science projects and creates a shareable model and framework for other libraries. This article focuses on electronic resource usage reports, especially COUNTER DB1 reports, but this approach could be extended to other data sources and needs.
      PubDate: Mon, 05 Feb 2018 20:44:57 +0000
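      Example: The article's workflow is in R with the Tidyverse; as a rough Python analogue of the same gather-and-summarize step, a sketch with pandas (the file and column names are hypothetical):

        import pandas as pd

        # Hypothetical COUNTER DB1-style export: one row per database per month
        usage = pd.read_csv("db1_report.csv")  # assumed columns: database, month, searches

        # Total and average monthly searches per database, busiest first
        report = (
            usage.groupby("database")["searches"]
                 .agg(total="sum", monthly_mean="mean")
                 .sort_values("total", ascending=False)
        )
        report.to_csv("usage_summary.csv")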
  • Archidora: Integrating Archivematica and Islandora

    • Authors: Tim Hutchinson
      Abstract: “Archidora” is shorthand for the publicly available integration between the open source software packages Archivematica and Islandora. Sponsored by the University of Saskatchewan Library, this integration enables the automated ingest into Archivematica of objects created in Islandora. This will allow institutions that use Islandora as a digital asset management system, particularly for digitized material, to take advantage of Archivematica’s standards-based digital preservation functionality, without requiring staff doing digitization to interact with Archivematica. This paper outlines the basic functionality and workflow of archidora; provides an overview of the development process including challenges and lessons learned; and discusses related initiatives and possible future directions for development.
      PubDate: Mon, 05 Feb 2018 20:44:56 +0000
  • Microdata in the IR: A Low-Barrier Approach to Enhancing Discovery of Institutional Repository Materials in Google

    • Authors: Shayna Pekala
      Abstract: Georgetown University Library curates a multitude of open access resources in its institutional repository and digital collections portal, DigitalGeorgetown. Over the last several years, the Library has experimented with methods for making these items increasingly visible in search engine search results. This article describes the Library’s low-barrier approach to applying schema.org vocabulary to its DSpace institutional repository using microdata, as well as the challenges with and strategies used for assessing this work. The effects of the application of microdata to DigitalGeorgetown on Google search results were tracked over time using three different metrics, providing new insights about its impact.
      PubDate: Mon, 05 Feb 2018 20:44:55 +0000
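      Example: A sketch of what schema.org microdata on a repository item page might look like, checked with BeautifulSoup; the markup is invented for illustration and is not DigitalGeorgetown's actual template:

        from bs4 import BeautifulSoup

        # Hypothetical item page marked up with schema.org microdata
        html = '''
        <div itemscope itemtype="https://schema.org/ScholarlyArticle">
          <h1 itemprop="name">Sample Repository Item</h1>
          <span itemprop="author">Jane Doe</span>
          <meta itemprop="datePublished" content="2018-02-05">
        </div>
        '''

        # Confirm the itemprop attributes are present and machine-readable
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup.find_all(attrs={"itemprop": True}):
            print(tag["itemprop"], "=", tag.get("content") or tag.get_text(strip=True))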
  • Getting Real in the Library: A Case Study at the University of Florida

    • Authors: Samuel R. Putnam; Sara Russell Gonzalez
      Abstract: In the fall of 2014, the University of Florida (UF) Marston Science Library, in partnership with UF IT, opened a new computer lab for students to learn and develop mobile applications. The Mobile Application Development Environment (MADE@UF) features both software and circulating technology for students to use in an unstructured and minimally-staffed environment. As the technological landscape has shifted in the past few years, virtual and augmented reality have become more prominent and prevalent, signaled by companies like Facebook, Google, and Microsoft making significant financial investments in these technologies. During this evolution, MADE@UF has migrated to focus more on virtual and augmented reality, and we will discuss the opportunities and challenges that hosting and managing such a space has provided to the science library and its staff.
      PubDate: Mon, 05 Feb 2018 20:44:54 +0000
  • Accio e-Libri: Magically Delivering Digital Resources to Patrons Using NFC

    • Authors: Christopher M. Jimenez; Barbara M. Sorondo
      Abstract: To coincide with the 20th anniversary of the publication of Harry Potter and the Philosopher’s/Sorcerer’s Stone, our library created a Happee Birthdae Harry display incorporating Near Field Communication (NFC) technology alongside print materials in order to magically place electronic resources in our users’ hands. The display was a spellbinding success, increasing usage of both print and electronic items, and helping our students become familiar with this innovative technology in an engaging manner. This article will provide step-by-step instructions on the materials and procedures librarians need to implement NFC technology in their own libraries, and will discuss the challenges and opportunities associated with this rapidly spreading technology.
      PubDate: Mon, 05 Feb 2018 20:44:53 +0000
  • Ship It: Logistical tracking of ILL physical loans

    • Authors: Ryan Litsey; Scott Luker
      Abstract: The OBILLSK Shipment Tracking system is the first consolidated and comprehensive shipment information system for interlibrary loan. The system is unique because not only does it offer an interface for consolidating the items being shipped out of an ILL office, it also provides real-time statistical data of global geographic shipping patterns, tracking of packages across all major couriers, and customized date-range reporting for ILL shipment activity. This system takes advantage of several web-based technologies that make it easy to use for students, staff, and library administrators. The web-based software utilizes a .NET platform and SQL Server database. Client-side frameworks include Bootstrap and jQuery for responsive design, Shield UI for data visualizations, and jVectorMap for geographical representation of shipments. The system is now available for all libraries. It is actively in use at 15 academic libraries nationwide and has scanned over 190,000 items since October of 2016. It is through the development of innovative technologies that libraries can continue to serve as incubators for practical solutions that can help the discipline and practice of librarianship.
      PubDate: Mon, 05 Feb 2018 20:44:52 +0000
  • The Automagic of the LII’s eCFR

    • Authors: Charlotte Schneider; Sylvia Kwakye
      Abstract: The Legal Information Institute (LII) began providing access to federal legal materials in 1992. This article discusses their work expanding and improving free public access to federal legal resources in the U.S., particularly developing their eCFR product for the Code of Federal Regulations, and plans to integrate DocketWrench.
      PubDate: Mon, 05 Feb 2018 20:44:51 +0000
  • Editorial: The Economics of Not Being an Organization

    • Authors: Carol Bean
      Abstract: Our successes have caught up with us. Now we get to choose the next step in our evolution.
      PubDate: Wed, 18 Oct 2017 15:00:59 +0000
  • Usability Analysis of the Big Ten Academic Alliance Geoportal: Findings and Recommendations for Improvement of the User Experience

    • Authors: Mara Blake; Karen Majewicz, Amanda Tickner, Jason Lam
      Abstract: The Big Ten Academic Alliance (BTAA) Geospatial Data Project is a collaboration between twelve member institutions of the consortium and works towards providing discoverability and access to geospatial data, scanned maps, and web mapping services. Usability tests and heuristic evaluations were chosen as methods of evaluation, as they have a long-standing role in measuring and managing website engagement and are essential to iterative design. The BTAA project hopes to give back to the community by publishing the results of our usability findings with the hope that it will benefit other portals built with GeoBlacklight.
      PubDate: Wed, 18 Oct 2017 15:00:58 +0000
  • Using the ‘rentrez’ R Package to Identify Repository Records for NCBI LinkOut

    • Authors: Yoo Young Lee; Erin D. Foster, David E. Polley, Jere Odell
      Abstract: In this article, we provide a brief overview of the National Center for Biotechnology Information (NCBI) LinkOut service for institutional repositories, a service that allows links from the PubMed database to full-text versions of articles in participating institutional repositories (IRs). We discuss the criteria for participation in NCBI LinkOut for IRs, current methods for participating, and outline our solution for automating the identification of eligible articles in a repository using R and the ‘rentrez’ package. Using our solution, we quickly processed 4,400 open access items from our repository, identified the 557 eligible records, and sent them to the NLM. Direct linking from PubMed resulted in a 17% increase in web traffic.
      PubDate: Wed, 18 Oct 2017 15:00:57 +0000
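      Example: rentrez is an R wrapper around NCBI's E-utilities web service, which can be called from any language; a rough Python equivalent of one lookup step (the query term is illustrative, not the authors' actual criteria):

        import requests

        # The esearch endpoint that rentrez wraps
        ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        # Search PubMed for an article title (illustrative term)
        params = {"db": "pubmed",
                  "term": "institutional repositories[Title]",
                  "retmode": "json"}
        result = requests.get(ESEARCH, params=params).json()
        print(result["esearchresult"]["idlist"])  # matching PubMed IDs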
  • The Drawings of the Florentine Painters: From Print Catalog to Linked Open Data

    • Authors: Lukas Klic; Matt Miller, Jonathan K. Nelson, Cristina Pattuelli, Alexandra Provo
      Abstract: The Drawings of The Florentine Painters project created the first online database of Florentine Renaissance drawings by applying Linked Open Data (LOD) techniques to a foundational text of the same name, first published by Bernard Berenson in 1903 (revised and expanded editions, 1938 and 1961). The goal was to make Berenson’s catalog information, still an essential information resource today, available in a machine-readable format, allowing researchers to access the source content through open data services. This paper provides a technical overview of the methods and processes applied in the conversion of Berenson’s catalog to LOD using the CIDOC-CRM ontology; it also discusses the different phases of the project, focusing on the challenges and issues of data transformation and publishing. The project was funded by the Samuel H. Kress Foundation and organized by Villa I Tatti, The Harvard University Center for Italian Renaissance Studies.
      Catalog: http://florentinedrawings.itatti.harvard.edu
      Data Endpoint:
      PubDate: Wed, 18 Oct 2017 15:00:56 +0000
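      Example: A minimal sketch, using Python's rdflib, of the kind of CIDOC-CRM statement such a conversion produces; the drawing URI and title are invented, and the project's actual modelling is far richer:

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, RDFS

        CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")

        g = Graph()
        g.bind("crm", CRM)

        # Hypothetical URI for one catalogued drawing
        drawing = URIRef("http://example.org/drawing/1")
        g.add((drawing, RDF.type, CRM["E22_Man-Made_Object"]))
        g.add((drawing, RDFS.label, Literal("Study of a seated figure")))

        print(g.serialize(format="turtle"))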
  • Web-Scraping for Non-Programmers: Introducing OXPath for Digital Library Metadata Harvesting

    • Authors: Mandy Neumann; Jan Steinberg, Philipp Schaer
      Abstract: Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes, or funding agencies. As data curation is a typical task done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore we would like to present a web scraping tool that does not require digital library curators to program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath, that allows the user to define data to be extracted from websites in a declarative way. By taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
      PubDate: Wed, 18 Oct 2017 15:00:55 +0000
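      Example: OXPath itself adds interaction and extraction markers to XPath; for orientation, the non-interactive core of such a wrapper corresponds roughly to plain XPath, shown here in Python with lxml (the page URL, structure, and class names are hypothetical):

        import requests
        from lxml import html

        # Fetch a hypothetical publication listing and harvest metadata with XPath
        page = html.fromstring(requests.get("https://example.org/publications").content)

        # Assumed structure: one <div class="pub"> per publication
        for pub in page.xpath('//div[@class="pub"]'):
            title = pub.xpath('string(.//h2)').strip()
            year = pub.xpath('string(.//span[@class="year"])').strip()
            print(title, year)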
  • DIY DOI: Leveraging the DOI Infrastructure to Simplify Digital Preservation and Repository Management

    • Authors: Kyle Bannerjee; David Forero
      Abstract: This article describes how staff with modest technical expertise can leverage the DOI (Digital Object Identifier) infrastructure in combination with third-party storage and preservation solutions to build safer, more useful, and easier-to-manage repositories at much lower cost than is normally possible with standalone systems. It also demonstrates how understanding the underlying mechanisms and questioning the assumptions of technology metaphors such as filesystems can lead to seeing and using tools in new and more powerful ways.
      PubDate: Wed, 18 Oct 2017 15:00:54 +0000
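      Example: One building block of the DOI infrastructure is that any DOI resolves through the public Handle System proxy, which also exposes a JSON API; a minimal sketch in Python (10.1000/1 is the DOI Handbook's own DOI, used here as a test value):

        import requests

        # Look up where a DOI currently points via the handle proxy's API
        doi = "10.1000/1"
        resp = requests.get(f"https://doi.org/api/handles/{doi}").json()

        for value in resp["values"]:
            if value["type"] == "URL":
                print(value["data"]["value"])  # the registered target URL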
  • Direct Database Access to OCLC Connexion’s Local Save File

    • Authors: Rebecca B. French
      Abstract: A feature of OCLC's Connexion cataloging client unknown to most librarians is the ability to directly work with the Microsoft Access database underlying the local save file. This article provides an overview of the metadata made available through this method, including fields that cannot be accessed through the regular Connexion interface, and discusses factors to be considered when deciding whether to migrate the data to another database system instead of continuing to work with Access. Descriptions of three projects illustrate how this functionality has been applied to efficiently catalog a gift collection, find OCLC numbers for e-books, and create bibliographic records for Early English Books Online/Text Creation Partnership titles using data from multiple sources. With the option to rely only on common, off-the-shelf software, this method of directly accessing the local save file database offers a way to expand Connexion’s functionality for those unable or unwilling to work with OCLC APIs. Other benefits include the ability to import external data and to use SQL for more advanced querying. A number of limitations are also discussed, and their implications for metadata access and use are explored.
      PubDate: Wed, 18 Oct 2017 15:00:53 +0000
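      Example: A sketch of querying the local save file directly from Python with pyodbc on Windows; the file path, table, and column names below are placeholders to be replaced after inspecting your own save file:

        import pyodbc

        # Path to a Connexion local save file (location varies by installation)
        dbfile = r"C:\path\to\LocalSaveFile.bib.db"

        conn = pyodbc.connect(
            r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=" + dbfile)

        # The table name is hypothetical; list the file's tables to find the real one
        for row in conn.cursor().execute("SELECT TOP 10 * FROM SavedRecords"):
            print(row)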
  • Between the Sheets: a Library-wide Inventory with Google

    • Authors: Craig Boman; Ray Voelker
      Abstract: When it comes to taking an inventory of physical items, libraries often rely on their traditional integrated library system’s (ILS) à la carte add-ons; outside vendors; or other possibly outdated, complex, and often expensive methods. For libraries with shrinking budgets and other limited resources, high costs can put these methods out of reach. At the University of Dayton Libraries, we set out to develop an inexpensive and reasonably easy-to-use method for conducting a library-wide physical item inventory. In this article, we explain a custom-built Google Sheets-based library inventory system, along with some code for the implementation of a RESTful API (written in PHP) that interacts with our ILS. We will also explain our use of Google Apps scripts in our Google Sheet, which are crucial to our system. Although this method used a specific ILS (Innovative Interfaces' Sierra product) and custom-built RESTful APIs, it may be possible to use similar approaches with other ILS software. Additional notes include areas for improvement and recommendations for interoperability with other ILS systems.
      PubDate: Wed, 18 Oct 2017 15:00:52 +0000
  • Tools and Workflows for Collaborating on Static Website Projects

    • Authors: Kaitlin Newson
      Abstract: Static website generators have seen a significant increase in popularity in recent years, offering many advantages over their dynamic counterparts. While these generators were typically used for blogs, they have grown in usage for other web-based projects, including documentation, conference websites, and image collections. However, because of their technical complexity, these tools can be inaccessible to content creators depending on their level of technical skill and comfort with web development technologies. Drawing from experience with a collaborative static website project, this article will provide an overview of static website generators, review different tools available for managing content, and explore workflows and best practices for collaborating with teams on static website projects.
      PubDate: Wed, 18 Oct 2017 15:00:51 +0000
  • Leveraging Python to improve ebook metadata selection, ingest, and management

    • Authors: Kelly Thompson and Stacie Traill
      Abstract: Libraries face many challenges in managing descriptive metadata for ebooks, including quality control, completeness of coverage, and ongoing management. The recent emergence of library management systems that automatically provide descriptive metadata for e-resources activated in system knowledge bases means that ebook management models are moving toward both greater efficiency and more complex implementation and maintenance choices. Automated and data-driven processes for ebook management have always been desirable, but in the current environment, they become necessary. In addition to initial selection of a record source, automation can be applied to quality control processes and ongoing maintenance in order to keep manual, eyes-on work to a minimum while providing the best possible discovery and access. In this article, we describe how we are using Python scripts to address these challenges.
      PubDate: Wed, 18 Oct 2017 15:00:50 +0000
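      Example: The authors' scripts are not reproduced here, but a minimal sketch of the kind of quality-control pass the abstract describes, using the pymarc library (the required-field policy and file name are assumptions):

        from pymarc import MARCReader

        # Flag ebook records missing fields we require for discovery
        REQUIRED = ["245", "856"]  # title and electronic location, as an example policy

        with open("ebooks.mrc", "rb") as fh:
            for record in MARCReader(fh):
                if record is None:  # skip records pymarc could not parse
                    continue
                missing = [tag for tag in REQUIRED if not record.get_fields(tag)]
                if missing:
                    ctrl = record.get_fields("001")
                    print(ctrl[0].value() if ctrl else "?", "is missing", missing)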
  • Testing Three Types of Raspberry Pi People Counters

    • Authors: Johnathan Cintron; Devlyn Courtier, John DeLooper
      Abstract: The Hudson County Community College (HCCC) Library tested three different types of Raspberry Pi-based people counters between 6/14/2017 and 7/9/2017. This article will describe how we created each type of counter, will compare the accuracy of each sensor, and will compare them to the college’s existing 3M 3501 gate counters. It will also describe why and how our team decided to undertake this project, discuss lessons learned, and provide instructions for how other libraries can create their own gate counters.
      PubDate: Wed, 18 Oct 2017 15:00:49 +0000
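      Example: A minimal sketch of one counter variant, a PIR motion sensor polled through the RPi.GPIO library; the pin number and the two-second debounce are assumptions, not the HCCC build:

        import time
        import RPi.GPIO as GPIO

        PIR_PIN = 17  # GPIO pin wired to the PIR sensor's output (assumed)

        GPIO.setmode(GPIO.BCM)
        GPIO.setup(PIR_PIN, GPIO.IN)

        count = 0
        try:
            while True:
                if GPIO.input(PIR_PIN):  # motion detected
                    count += 1
                    print("count:", count)
                    time.sleep(2)  # crude debounce so one visitor isn't counted twice
                time.sleep(0.1)
        finally:
            GPIO.cleanup()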
  • Editorial: Welcome New Editors, What We Know About Who We Are, and Submission Pro Tip!

    • Authors: Sara Amato
      Abstract: Want to see your work in C4LJ? Here's a pro tip!
      PubDate: Tue, 18 Jul 2017 21:03:59 +0000
  • A Practical Starter Guide on Developing Accessible Websites

    • Authors: Cynthia Ng; Michael Schofield
      Abstract: There is growing concern about the accessibility of the online content and services provided by libraries and public institutions. While many articles cover legislation, general benefits, and common surface-level opportunities to improve web accessibility (e.g., alt tags), few articles discuss web accessibility in more depth, and when they do, they are typically not specific to library web services. This article is meant to fill that gap and will provide practical best practices and code.
      PubDate: Tue, 18 Jul 2017 21:03:58 +0000
  • Recount: Revisiting the 42nd Canadian Federal Election to Evaluate the Efficacy of Retroactive Tweet Collection

    • Authors: Anthony T. Pinter; Ben Goldman
      Abstract: In this paper, we report the development and testing of a methodology for collecting tweets from periods beyond the Twitter API’s seven-to-nine-day limit. To accomplish this, we used Twitter’s advanced search feature to find tweets older than that limit, and then used JavaScript to automatically scan the resulting webpage for tweet IDs. These IDs were then rehydrated (their full tweet metadata retrieved) using twarc. To examine the efficacy of this method for retrospective collection, we revisited the case study of the 42nd Canadian Federal Election. Comparing the two datasets, we found that our methodology does not produce results as robust as real-time streaming, but that it might be useful as a starting point for researchers or collectors. We close by discussing the implications of these findings.
      PubDate: Tue, 18 Jul 2017 21:03:57 +0000
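      Example: The rehydration step can be reproduced with twarc's Python API (twarc 1.x shown; the credential strings are placeholders obtained by registering a Twitter app):

        from twarc import Twarc

        # Placeholder Twitter API credentials
        t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET",
                  "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

        # ids.txt holds one scraped tweet ID per line
        with open("ids.txt") as fh:
            for tweet in t.hydrate(fh):
                print(tweet["id_str"], tweet["full_text"][:80])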
  • Extending Omeka for a Large-Scale Digital Project

    • Authors: Haley Antell; Joe Corall, Virginia Dressler, Cara Gilgenbach
      Abstract: In September 2016, the department of Special Collections and Archives, Kent State University Libraries, received a Digital Dissemination grant from the National Historical Publications and Records Commission (NHPRC) to digitize roughly 72,500 pages from the May 4 collection, which documents the May 1970 shootings of thirteen students by Ohio National Guardsmen at Kent State University. This article will highlight the project team’s efforts to adapt the Omeka instance, modifying the interface and ingestion processes to support presenting unique archival collections online. The modifications include an automated method to create folder-level links on the relevant finding aids upon ingestion; implementing the open source Tesseract engine to provide OCR for uploaded files; automated PDF creation from the raw image files using Ghostscript; and integrating Mirador to present a folder-level display reflecting the archival organization of the physical collections. These adaptations, which have been shared via GitHub, will be of interest to other institutions looking to present archival material in Omeka.
      PubDate: Tue, 18 Jul 2017 21:03:56 +0000
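      Example: The project's own integration code is on GitHub; as a sketch of the two processing steps named above, the Tesseract and Ghostscript command-line tools can be driven from Python (the file names are illustrative):

        import subprocess

        # OCR one scanned page image; Tesseract writes page-0001.txt
        subprocess.run(["tesseract", "page-0001.tif", "page-0001"], check=True)

        # Assemble per-page PDFs into a single folder-level PDF
        subprocess.run(
            ["gs", "-dBATCH", "-dNOPAUSE", "-sDEVICE=pdfwrite",
             "-sOutputFile=folder.pdf", "page-0001.pdf", "page-0002.pdf"],
            check=True)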
  • Annotation-based enrichment of Digital Objects using open-source frameworks

    • Authors: Marcus Emmanuel Barnes; Natkeeran Ledchumykanthan, Kim Pham, Kirsta Stapelfeldt
      Abstract: The W3C Web Annotation Data Model, Protocol, and Vocabulary unify approaches to annotations across the web, enabling their aggregation, discovery and persistence over time. In addition, new JavaScript libraries provide the ability for users to annotate multi-format content. In this paper, we describe how we have leveraged these developments to provide annotation features alongside Islandora’s existing preservation, access, and management capabilities. We also discuss our experience developing with the Web Annotation Model as an open web architecture standard, as well as our approach to integrating mature external annotation libraries. The resulting software (the Web Annotation Utility Module for Islandora) accommodates annotation across multiple formats. This solution can be used in various digital scholarship contexts.
      PubDate: Tue, 18 Jul 2017 21:03:55 +0000
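      Example: The W3C Web Annotation Data Model is expressed as JSON-LD; a minimal annotation conforming to the model, built in Python (the target URL is an invented Islandora object):

        import json

        # A textual comment targeting a web resource, per the W3C model
        annotation = {
            "@context": "http://www.w3.org/ns/anno.jsonld",
            "type": "Annotation",
            "body": {"type": "TextualBody",
                     "value": "A note about this image",
                     "format": "text/plain"},
            "target": "http://example.org/islandora/object/demo:1",
        }
        print(json.dumps(annotation, indent=2))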
  • The FachRef-Assistant: Personalised, subject specific, and transparent stock management

    • Authors: Eike T. Spielberg; Frank Lützenkirchen
      Abstract: We present in this paper a personalized web application for the weeding of printed resources: the FachRef-Assistant. It offers an extensive range of tools for evidence-based stock management, based on thorough analysis of usage statistics. Special attention is paid to the criteria of individualization, transparency of the parameters used, and generic functions. Currently, it is designed to work with the Aleph system from Ex Libris, but effort was spent to keep the application as generic as possible. For example, all procedures specific to the local library system have been collected in one Java package. The inclusion of library-specific properties such as collections and systematics has been designed to be highly generic as well, by mapping the individual entries onto an in-memory database. Hence, simple adaption of the package and the mappings would render the FachRef-Assistant compatible with other library systems. The personalization of the application allows for the inclusion of subject-specific usage properties as well as of variations between different collections within one subject area. The parameter sets used to analyse the stock and to prepare weeding and purchase proposal lists are included in the output XML files to facilitate a high degree of transparency, objectivity, and reproducibility.
      PubDate: Tue, 18 Jul 2017 21:03:54 +0000
  • The Semantics of Metadata: Avalon Media System and the Move to RDF

    • Authors: Juliet L. Hardesty; Jennifer B. Young
      Abstract: The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
      PubDate: Tue, 18 Jul 2017 21:03:53 +0000
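      Example: To give a flavor of the descriptive-metadata move, a sketch that lifts a title out of MODS XML and restates it as an RDF triple with lxml and rdflib; Dublin Core is used here for brevity and is not Avalon's actual mapping:

        from lxml import etree
        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DCTERMS

        NS = {"mods": "http://www.loc.gov/mods/v3"}

        # An illustrative fragment of MODS descriptive metadata
        mods = etree.fromstring(
            '<mods xmlns="http://www.loc.gov/mods/v3">'
            '<titleInfo><title>Oral History Interview</title></titleInfo></mods>')
        title = mods.findtext(".//mods:titleInfo/mods:title", namespaces=NS)

        # The same title as an RDF statement about the repository object
        g = Graph()
        g.add((URIRef("http://example.org/object/1"), DCTERMS.title, Literal(title)))
        print(g.serialize(format="turtle"))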
  • OpeNumisma: A Software Platform Managing Numismatic Collections with A Particular Focus On Reflectance Transformation Imaging

    • Authors: Avgoustinos Avgousti; Andriana Nikolaidou, Ropertos Georgiou
      Abstract: This paper describes OpeNumisma, a reusable web-based platform focused on digital numismatic collections. The platform provides an innovative merge of digital imaging and data management systems that offers great new opportunities for research and the dissemination of numismatic knowledge online. A unique feature of the platform is the application of Reflectance Transformation Imaging (RTI), a computational photographic method that offers tremendous image analysis possibilities for numismatic research. This technique allows the user to observe minor details in the browser, unseen by the naked eye, simply by moving the computer mouse rather than handling the actual object. The first successful implementation of OpeNumisma has been the creation of a digital library for the medieval coins from the collection of the Bank of Cyprus Cultural Foundation.
      PubDate: Tue, 18 Jul 2017 21:03:52 +0000
  • DuEPublicA: Automated bibliometric reports based on the University Bibliography and external citation data

    • Authors: Eike T. Spielberg
      Abstract: This paper describes a web application to generate bibliometric reports based on the University Bibliography and the Scopus citation database. Our goal is to offer an alternative to easy-to-prepare automated reports from commercial sources. These often suffer from an incomplete coverage of publication types and a difficult attribution to people, institutes, and universities. Using our University Bibliography as the source to select relevant publications solves both problems. As it is a local system, maintained and set up by the library, we can include every publication type we want. As the University Bibliography is linked to the identity management system of the university, it enables an easy selection of publications for people, institutes, and the whole university. The program is designed as a web application which collects publications from the University Bibliography, enriches them with citation data from Scopus, and performs three kinds of analyses: 1) a general analysis (number and type of publications, publications per year, etc.), 2) a citation analysis (average citations per publication, h-index, uncitedness), and 3) an affiliation analysis (home and partner institutions). We tried to keep the code highly generic, so that the inclusion of other databases (Web of Science, IEEE) or other bibliographies is easily feasible. The application is written in Java and XML and uses XSL transformations and LaTeX to generate bibliometric reports as HTML pages and in PDF format. Warnings and alerts are automatically included if the citation analysis covers only a small fraction of the publications from the University Bibliography. In addition, we describe a small tool that helps to collect author details for an analysis.
      PubDate: Tue, 18 Jul 2017 21:03:51 +0000
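      Example: Of the citation measures mentioned, the h-index is simple to state: the largest h such that h publications have at least h citations each. A direct Python rendering:

        def h_index(citations):
            # Largest h such that h papers have at least h citations each
            counts = sorted(citations, reverse=True)
            h = 0
            for i, c in enumerate(counts, start=1):
                if c >= i:
                    h = i
                else:
                    break
            return h

        # Five papers with these citation counts yield an h-index of 3
        print(h_index([10, 8, 5, 2, 1]))  # -> 3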
  • New Metadata Recipes for Old Cookbooks: Creating and Analyzing a Digital Collection Using the HathiTrust Research Center Portal

    • Authors: Gioia Stevens
      Abstract: The Early American Cookbooks digital project is a case study in analyzing collections as data using HathiTrust and the HathiTrust Research Center (HTRC) Portal. The purposes of the project are to create a freely available, searchable collection of full-text early American cookbooks within the HathiTrust Digital Library, to offer an overview of the scope and contents of the collection, and to analyze trends and patterns in the metadata and the full text of the collection. The digital project has two basic components: a collection of 1450 full-text cookbooks published in the United States between 1800 and 1920 and a website to present a guide to the collection and the results of the analysis. This article will focus on the workflow for analyzing the metadata and the full-text of the collection. The workflow will cover: 1) creating a searchable public collection of full-text titles within the HathiTrust Digital Library and uploading it to the HTRC Portal, 2) analyzing and visualizing legacy MARC data for the collection using MarcEdit, OpenRefine and Tableau, and 3) using the text analysis tools in the HTRC Portal to look for trends and patterns in the full text of the collection.
      PubDate: Tue, 18 Jul 2017 21:03:50 +0000
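      Example: Step 2 of that workflow, pulling publication years out of legacy MARC, can be approximated in Python with pymarc; this sketch reads only 260 $c, whereas the project's MarcEdit/OpenRefine cleanup handled messier cases (the file name is illustrative):

        import re
        from collections import Counter
        from pymarc import MARCReader

        years = Counter()
        with open("cookbooks.mrc", "rb") as fh:
            for record in MARCReader(fh):
                if record is None:  # skip records pymarc could not parse
                    continue
                for field in record.get_fields("260"):
                    # Pull a four-digit year (1800-1999) from the date subfield
                    match = re.search(r"1[89]\d\d", " ".join(field.get_subfields("c")))
                    if match:
                        years[match.group()] += 1

        print(years.most_common(10))  # the collection's most common publication years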