International Journal on Digital Libraries   [SJR: 0.203]   [H-I: 24]   Hybrid journal (may contain Open Access articles)   ISSN (Print) 1432-5012 - ISSN (Online) 1432-1300   Published by Springer-Verlag
• … exploration
• Abstract: Despite ongoing advances in information retrieval algorithms, people continue to experience difficulties when conducting online searches within digital libraries. Because their information-seeking goals are often complex, searchers may find it difficult to describe precisely what they are seeking, and current search interfaces provide limited support for navigating and exploring the search results or for helping searchers articulate what they are looking for. In this paper, we present a novel visual library search interface designed to provide interactive support for common library search tasks and behaviours. The system takes advantage of the rich metadata available in academic collections and employs information visualization techniques to support search-result evaluation, forward and backward citation exploration, and interactive query refinement.
PubDate: 2016-04-26

• Font attributes enrich knowledge maps and information retrieval
• Abstract: Typography is overlooked in knowledge maps (KM) and information retrieval (IR), and some deficiencies in these systems can potentially be addressed by encoding information into font attributes. A review of font use across domains is used to itemize font attributes, and information visualization theory is used to characterize each attribute. Tasks associated with KM and IR, such as skimming, opinion analysis, character analysis, topic modelling and sentiment analysis, can be aided by novel representations built on font attributes, such as skim formatting, proportional encoding, textual stem-and-leaf plots and multi-attribute labels.
PubDate: 2016-02-08

• Location-triggered mobile access to a digital library of audio books using
Tipple
• Abstract: This paper explores the role of audio as a means of accessing books while at the locations referred to within them, through a mobile app called Tipple. The books are sourced from a digital library (either self-contained on the mobile phone or retrieved over the network) and can either be accompanied by pre-recorded audio or synthesized using text-to-speech. The paper details the functional requirements, design and implementation of Tipple. The concept was explored and evaluated through three field studies.
PubDate: 2015-10-19

• Recent applications of Knowledge Organization Systems: introduction to a
special issue
• Abstract: This special issue of the International Journal on Digital Libraries evolved from the 13th Networked Knowledge Organization Systems (NKOS) workshop, held at the joint Digital Libraries conference 2014 in London. The focus of the workshop was ‘Mapping between Linked Data vocabularies of KOS’ and ‘Meaningful Concept Display and Meaningful Visualization of KOS’. The issue presents six papers covering both conceptual aspects and technical implementations of NKOS. We dedicate this special issue to our long-term colleague and friend Johan De Smedt, who died in June 2015 while we were editing it.
PubDate: 2015-09-04

• Alignment of conceptual structures in controlled vocabularies in the
domain of Chinese art: a discussion of issues and patterns
• PubDate: 2015-09-03

• Guest editors’ introduction to the special issue on the digital
libraries conference 2014
• PubDate: 2015-09-01

• Improving interoperability using vocabulary linked data
• Abstract: The concept of Linked Data has been an emerging theme within the computing and digital heritage areas in recent years. The growth and scale of Linked Data have underlined the need for greater commonality in concept referencing, to avoid local redefinition and duplication of reference resources. Achieving domain-wide agreement on common vocabularies would be an unreasonable expectation; however, datasets often already have local vocabulary resources defined, and so the prospects for large-scale interoperability can be substantially improved by creating alignment links from these local vocabularies out to common external reference resources. The ARIADNE project is undertaking large-scale integration of archaeology dataset metadata records to create a cross-searchable research repository resource. Key to enabling this cross search will be the ‘subject’ metadata originating from multiple data providers, containing terms from multiple multilingual controlled vocabularies. This paper discusses various aspects of vocabulary mapping. Experience from the previous SENESCHAL project in publishing controlled vocabularies as Linked Open Data is discussed, emphasizing the importance of unique URI identifiers for vocabulary concepts. There is a need to align legacy indexing data to these uniquely defined concepts, and examples of SENESCHAL data alignment work are discussed. A case study for the ARIADNE project presents work on mapping between vocabularies, based on the Getty Art and Architecture Thesaurus as a central hub and employing an interactive vocabulary mapping tool developed for the project, which generates SKOS mapping relationships in JSON and other formats. The potential use of such vocabulary mappings to assist cross search over archaeological datasets from different countries is illustrated in a pilot experiment. The results demonstrate the enhanced opportunities for interoperability and cross searching that the approach offers.
PubDate: 2015-08-27
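The SKOS mapping relationships that the abstract says the tool emits as JSON can be pictured with a small sketch. The record layout, field names, and concept URIs below are illustrative placeholders, not the actual ARIADNE tool output:

```python
import json

# Hypothetical alignment link from a local vocabulary concept to a Getty AAT
# concept, expressed with the SKOS mapping property skos:closeMatch.
# Both concept URIs are placeholders for illustration only.
mapping = {
    "sourceConcept": "http://example.org/local-vocab/concepts/1234",
    "sourceLabel": "coin",
    "mappingRelation": "http://www.w3.org/2004/02/skos/core#closeMatch",
    "targetConcept": "http://vocab.getty.edu/aat/0000000",
    "targetLabel": "coins (money)",
}

serialized = json.dumps(mapping, indent=2)
print(serialized)
```

A cross-search service can then expand a query term from one provider's vocabulary into the aligned hub concepts before matching records from other providers.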

• On the composition of ISO 25964 hierarchical relations (BTG, BTP, BTI)
• PubDate: 2015-08-20

• A sharing-oriented design strategy for networked knowledge organization
systems
• Abstract: Designers of networked knowledge organization systems often follow a service-oriented design strategy, assuming an organizational model in which one party outsources clearly delineated business processes to another party. But the logic of outsourcing is a poor fit for some knowledge organization practices. When knowledge organization is understood as a process of exchange among peers, a sharing-oriented design strategy makes more sense. As an example of a sharing-oriented strategy for designing networked knowledge organization systems, we describe the design of the PeriodO period gazetteer. We analyze the PeriodO data model, its representation using JavaScript Object Notation for Linked Data (JSON-LD), and the management of changes to the PeriodO dataset. We conclude by discussing why a sharing-oriented design strategy is appropriate for organizing scholarly knowledge.
PubDate: 2015-08-14
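A period definition of the kind PeriodO curates might look roughly as follows in JSON-LD. The field names, dates, and structure here are simplified assumptions for illustration; the real PeriodO data model is richer (authority provenance, gazetteer-linked spatial coverage, fuzzy start/stop intervals):

```python
import json

# Hypothetical, simplified JSON-LD sketch of a named period entry.
period = {
    "@context": {"skos": "http://www.w3.org/2004/02/skos/core#"},
    "label": "Early Bronze Age",
    "spatialCoverage": [{"label": "Britain"}],
    "start": {"label": "2500 BCE", "year": -2499},
    "stop": {"label": "1500 BCE", "year": -1499},
    "source": "hypothetical authority citation",
}

doc = json.dumps(period, indent=2)
print(doc)
```

Because each period carries its own spatial coverage and source, peers can share, compare, and merge period definitions without first agreeing on a single canonical periodization.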

• CRMba: a CRM extension for the documentation of standing buildings
• Abstract: Exploring the connections between successive phases and overlapping layers from different ages in an ancient building is paramount for its understanding and study. Archaeologists and cultural heritage experts are always eager to unveil the hidden relations within an archaeological building in order to reconstruct its history and interpret it. This paper presents CRMba, a CIDOC CRM extension developed to facilitate the discovery and interpretation of archaeological resources through the definition of new concepts required to describe the complexity of historic buildings. CRMba contributes to solving the dataset interoperability issue by exploiting the CIDOC CRM to overcome data fragmentation and to investigate the semantics of building components, functional spaces and the construction phases of historic buildings and complexes, making their physical and topological relations through time and space explicit. The approach used to develop CRMba makes the model valid for the documentation of different kinds of buildings, across periods, styles and states of conservation.
PubDate: 2015-08-04

• Research-paper recommender systems: a literature survey
• PubDate: 2015-07-26

• Knowledge infrastructures in science: data, diversity, and digital
libraries
• Abstract: Digital libraries can be deployed at many points throughout the life cycles of scientific research projects, from their inception through data collection, analysis, documentation, publication, curation, preservation, and stewardship. Requirements for digital libraries to manage research data vary along many dimensions, including life cycle, scale, research domain, and types and degrees of openness. This article addresses the role of digital libraries in knowledge infrastructures for science, presenting evidence from long-term studies of four research sites. Findings are based on interviews (n = 208), ethnographic fieldwork, document analysis, and historical archival research about scientific data practices, conducted over the course of more than a decade. The Transformation of Knowledge, Culture, and Practice in Data-Driven Science: A Knowledge Infrastructures Perspective project is based on a 2 × 2 design, comparing two “big science” astronomy sites with two “little science” sites that span physical sciences, life sciences, and engineering, on the dimensions of project scale and temporal stage of life cycle. The two astronomy sites invested in digital libraries for data management as part of their initial research design, whereas the smaller sites made smaller investments at later stages. Role specialization varies along the same lines, with the larger projects investing in information professionals and the smaller teams carrying out their own activities internally. Sites making the largest investments in digital libraries appear to view their datasets as their primary scientific legacy, while other sites stake their legacy elsewhere. Those investing in digital libraries are more concerned with the release and reuse of data; types and degrees of openness vary accordingly. The need for expertise in digital libraries, data science, and data stewardship is apparent throughout all four sites. Examples are presented of the challenges in designing digital libraries and knowledge infrastructures to manage and steward research data.
PubDate: 2015-07-25

• Representing gazetteers and period thesauri in four-dimensional
space–time
• Abstract: Gazetteers, i.e., lists of place names, provide a global view of places of interest by assigning a point, or a region, to a place name. However, identifying the location corresponding to a place name is often difficult. There is no one-to-one correspondence between places and names, because of name variants, different names for the same place, and homonymy; the location corresponding to a place name may vary in time, changing its extension or even its position; and, in general, there is the imprecision deriving from associating a concept belonging to language (the place name) with a precise spatial concept (the location). The situation is similar for named time periods, e.g., the early Bronze Age, which are in current use in archaeology: they depend on the location to which they refer, as the same period may have different time-spans in different locations. The present paper builds on a recent extension of the CIDOC CRM called CRMgeo, which embeds events in a four-dimensional spatio-temporal framework. The paper uses concepts from CRMgeo and introduces extensions to model gazetteers and period thesauri. This approach enables dealing with time-varying location appellations as well as space-varying period appellations on a robust basis. For this purpose a refinement/extension of CRMgeo is proposed, and a discretization of space and time is used to approximate the real space–time extents occupied by events. Such an approach solves the problem and suggests further investigations in various directions.
PubDate: 2015-07-21
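The abstract's key point, that a named period such as the early Bronze Age occupies different time-spans in different locations, can be made concrete with a small lookup structure. The spans below are invented placeholders, not scholarly datings, and the structure is a toy stand-in for the CRMgeo-based model:

```python
# Map (period name, region) -> (start_year, end_year); negative = BCE.
# All spans here are invented for illustration only.
period_spans = {
    ("Early Bronze Age", "Britain"): (-2500, -1500),
    ("Early Bronze Age", "Aegean"): (-3200, -2000),
}

def span_of(period, region):
    """Return the (start, end) span a period occupies in a given region."""
    return period_spans.get((period, region))

def overlaps(period, region_a, region_b):
    """Check whether the same period name overlaps in time across regions."""
    a, b = span_of(period, region_a), span_of(period, region_b)
    return a is not None and b is not None and a[0] <= b[1] and b[0] <= a[1]
```

A query for “early Bronze Age sites” must therefore resolve the period against the region of each record before comparing dates, which is exactly what a location-aware period thesaurus supports.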

• On the combination of domain-specific heuristics for author name
disambiguation: the nearest cluster method
• Abstract: Author name disambiguation has been one of the hardest problems faced by digital libraries since their early days. Historically, supervised solutions have empirically outperformed those based on heuristics, but with the burden of relying on manually labeled training sets for the learning process. Moreover, most supervised solutions simply apply some type of generic machine learning and do not exploit specific knowledge about the problem. In this article, we follow similar reasoning, but in the opposite direction. Instead of extending an existing supervised solution, we propose a set of carefully designed heuristics and similarity functions, and apply supervision only to optimize their parameters for each particular dataset. As our experiments show, the result is a very effective, efficient and practical author name disambiguation method that can be used in many different scenarios. In fact, we show that our method can beat state-of-the-art supervised methods in terms of effectiveness in many situations while being orders of magnitude faster. It can also run without any training information, using only default parameters, and still be very competitive when compared to these supervised methods (beating several of them) and better than most existing unsupervised author name disambiguation solutions.
PubDate: 2015-07-07
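The nearest-cluster idea can be sketched as follows: compare an unassigned record against existing author clusters with a domain-specific similarity function, assign it to the nearest cluster above a threshold, and otherwise start a new cluster. The similarity function below is a toy stand-in (surname and first-initial match plus coauthor overlap), not the paper's carefully designed heuristics:

```python
def similarity(record, cluster):
    """Toy heuristic: require matching surname and first initial, then
    score by Jaccard overlap of coauthor sets."""
    if record["surname"] != cluster["surname"]:
        return 0.0
    if record["name"][0] != cluster["name"][0]:
        return 0.0
    shared = record["coauthors"] & cluster["coauthors"]
    union = record["coauthors"] | cluster["coauthors"]
    return len(shared) / len(union) if union else 0.0

def assign(record, clusters, threshold=0.2):
    """Assign the record to its nearest cluster, or create a new one."""
    best = max(clusters, key=lambda c: similarity(record, c), default=None)
    if best is not None and similarity(record, best) >= threshold:
        best["coauthors"] |= record["coauthors"]
        return best
    new = {"surname": record["surname"], "name": record["name"],
           "coauthors": set(record["coauthors"])}
    clusters.append(new)
    return new

clusters = []
assign({"surname": "silva", "name": "ana", "coauthors": {"x", "y"}}, clusters)
assign({"surname": "silva", "name": "a.", "coauthors": {"y", "z"}}, clusters)
```

The supervised step the abstract describes would then tune parameters such as `threshold` per dataset, rather than learn a generic model from scratch.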

• When should I make preservation copies of myself?
• Abstract: We investigate how replication policies ranging from least to most aggressive affect the level of preservation achieved by the autonomic processes used by web objects (WOs). Based on simulations of small-world graphs of WOs created by the Unsupervised Small-World algorithm, we report quantitative and qualitative results for graphs ranging in order from 10 to 5000 WOs. Our results show that a moderately aggressive replication policy makes the best use of distributed host resources, meeting preservation goals without causing spikes in CPU usage or network activity. We also examine different approaches by which WOs can communicate with each other, and determine how long it would take for a message from one WO to reach a specific WO, or all WOs.
PubDate: 2015-06-21
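How long a message takes to travel between web objects can be estimated as a hop count over the small-world graph; a minimal breadth-first-search sketch (the five-node graph below is a toy example, not Unsupervised Small-World output):

```python
from collections import deque

def hops(graph, source, target=None):
    """BFS over an adjacency dict. Returns the hop count from `source` to
    `target`, or, when `target` is None, the hops needed to reach all WOs."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    if target is not None:
        return dist.get(target)
    return max(dist.values())

# Toy ring of 5 WOs with one shortcut edge (5-3), small-world style.
g = {1: [2, 5], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 1, 3]}
```

On such graphs the maximum hop count from any WO bounds how quickly a preservation request can propagate to every copy.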

• Systems integration of heterogeneous cultural heritage information systems
in museums: a case study of the National Palace Museum
• Abstract: This study addresses the process of information systems integration in museums. Research emphasis has concentrated on systems integration in the business community following the restructuring of commercial enterprises. Museums fundamentally differ from commercial enterprises and thus cannot wholly rely on the business model for systems integration. A case study of the National Palace Museum in Taiwan was conducted to investigate its integration of five legacy systems into one information system for museum and public use. Participatory observation methods were used to collect data for inductive analysis. The results suggest that museums are motivated to integrate their systems by internal cultural and administrative operations, external cultural and creative industries, public expectations, and information technology attributes. Four factors were related to the success of the systems integration project: (1) the unique attributes of a museum’s artifacts, (2) the attributes and needs of a system’s users, (3) the unique demands of museum work, and (4) the attributes of existing information technology resources within a museum. The results provide useful reference data for other museums carrying out systems integration.
PubDate: 2015-06-06

• Lost but not forgotten: finding pages on the unarchived web
• Abstract: Web archives attempt to preserve the fast-changing web, yet they will always be incomplete. Due to limits on crawling depth and frequency, and to restrictive selection policies, large parts of the Web are unarchived and, therefore, lost to posterity. In this paper, we propose an approach to uncover unarchived web pages and websites and to reconstruct different types of descriptions for these pages and sites, based on links and anchor text in the set of crawled pages. We experiment with this approach on the Dutch Web Archive and evaluate the usefulness of page- and host-level representations of unarchived content. Our main findings are as follows. First, the crawled web contains evidence of a remarkable number of unarchived pages and websites, potentially increasing the coverage of a Web archive dramatically. Second, link and anchor text have a highly skewed distribution: popular pages such as home pages have more links pointing to them and more terms in their anchor text, but the richness tapers off quickly. Aggregating web page evidence to the host level leads to significantly richer representations, but the distribution remains skewed. Third, the succinct representation is generally rich enough to uniquely identify pages on the unarchived web: in a known-item search setting we can retrieve unarchived web pages within the first ranks on average, with host-level representations leading to further improvement of retrieval effectiveness for websites.
PubDate: 2015-06-03
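The page- and host-level representations described above boil down to aggregating anchor text by link target; a minimal illustration of the idea (the links are invented, and this is not the Dutch Web Archive pipeline):

```python
from collections import defaultdict
from urllib.parse import urlparse

# (source_page, target_url, anchor_text) triples extracted from crawled pages,
# where the targets themselves were never archived. All URLs are made up.
links = [
    ("http://a.example/p1", "http://lost.example/about", "about us"),
    ("http://b.example/p2", "http://lost.example/about", "company info"),
    ("http://c.example/p3", "http://lost.example/jobs", "vacancies"),
]

page_repr = defaultdict(list)   # unarchived URL  -> anchor-text words
host_repr = defaultdict(list)   # unarchived host -> anchor-text words
for _source, target, anchor in links:
    words = anchor.split()
    page_repr[target].extend(words)
    host_repr[urlparse(target).netloc].extend(words)
```

These word bags then serve as surrogate documents: a known-item query is matched against them to retrieve the unarchived page or site it describes.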

• Bridging the gap between real world repositories and scalable preservation
environments
• Abstract: Integrating large-scale processing environments, such as Hadoop, with traditional repository systems, such as Fedora Commons 3, has long proved to be a daunting task. In this paper, we show how this integration can be achieved using software developed in the Scalable Preservation Environments (SCAPE) project, and also how it can be achieved using a more direct local implementation at the Danish State and University Library inspired by the SCAPE project. Both allow full use of the Hadoop system for massively distributed processing without placing excessive load on the repository. We present a proof-of-concept SCAPE integration and an in-production local integration based on repository systems at the Danish State and University Library and the Hadoop execution environment. Both use data from the Newspaper Digitisation Project, a collection that will grow to more than 32 million JP2 images. The use case for the SCAPE integration is to perform feature extraction and validation of the JP2 images. The validation is done against an institutional preservation policy expressed in the machine-readable SCAPE Control Policy vocabulary. The feature extraction is done using the Jpylyzer tool. We perform an experiment with various-sized sets of JP2 images to test the scalability and correctness of the solution. The first use case considered for the local Danish State and University Library integration is also feature extraction and validation of the JP2 images, this time using Jpylyzer and Schematron requirements translated from the project specification by hand. We further look at two other use cases: generation of histograms of the tonal distributions of the images, and generation of dissemination copies. We discuss the challenges and benefits of the two integration approaches when preservation actions must be performed on massive collections stored in traditional digital repositories.
PubDate: 2015-05-29
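Jpylyzer produces a per-file XML report combining a validity verdict with extracted properties, which downstream policy checks can consume. A sketch of such a check follows; note that the sample document and element names below are simplified assumptions, not real Jpylyzer output (the actual reports are namespaced and far more detailed):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for a Jpylyzer-style report.
report_xml = """
<report>
  <fileInfo><fileName>page-0001.jp2</fileName></fileInfo>
  <isValid>True</isValid>
  <properties><compressionRatio>8.2</compressionRatio></properties>
</report>
"""

root = ET.fromstring(report_xml)
is_valid = root.findtext("isValid") == "True"
name = root.findtext("fileInfo/fileName")
# A policy rule (e.g. expressed in Schematron) would assert conditions
# over such properties for every image in the collection.
```

Running this per record as a Hadoop map task is what keeps the heavy processing off the repository itself.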

• Introduction to the focused issue of award-nominated papers from JCDL 2013
• PubDate: 2015-05-14

• Not all mementos are created equal: measuring the impact of missing
resources
• Abstract: Web archives do not always capture every resource on every page that they attempt to archive, so archived pages may be missing a portion of their embedded resources. These embedded resources vary in historical value, utility, and importance, and the proportion of missing embedded resources therefore does not accurately measure their impact on the Web page; some embedded resources matter more to the utility of a page than others. We propose a method to measure the relative value of embedded resources and assign a damage rating to archived pages as a way to evaluate archival success. In this paper, we show that Web users’ perceptions of damage are not accurately estimated by the proportion of missing embedded resources; in fact, that proportion is a less accurate estimate of resource damage than a random selection. We propose a damage rating algorithm that aligns more closely with Web user perception, improving overall agreement with users on memento damage by 17%, and by 51% for mementos with a damage rating delta > 0.30. We use our algorithm to measure damage in the Internet Archive, showing that it is getting better at mitigating damage over time (going from a damage rating of 0.16 in 1998 to 0.13 in 2013). However, we show that a greater number of important embedded resources (2.05 per memento on average) are missing over time. By contrast, damage in WebCite is increasing over time (going from 0.375 in 2007 to 0.475 in 2014), while the missing embedded resources remain constant (13% of resources are missing on average). Finally, we investigate the impact of JavaScript on archive damage, showing that a crawler that can archive JavaScript-dependent representations would reduce memento damage by 13.5%.
PubDate: 2015-05-06
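The core of a damage rating of this kind is a weighted, rather than uniform, count of missing embedded resources. A minimal sketch follows; the weights are invented for illustration, and the paper's actual algorithm derives resource importance from richer features than a single number:

```python
def damage(resources):
    """Weighted fraction of missing embedded resources.

    `resources` is a list of (weight, missing) pairs, where the weight
    reflects a resource's importance to the page (placeholder values here).
    """
    total = sum(w for w, _ in resources)
    lost = sum(w for w, missing in resources if missing)
    return lost / total if total else 0.0

# A page missing one important resource (say, its stylesheet) while keeping
# three minor ones is more damaged than the raw 1/4 proportion suggests.
page = [(0.6, True), (0.2, False), (0.1, False), (0.1, False)]
```

Here `damage(page)` is 0.6, versus the 0.25 a uniform missing-resource proportion would report, which is the gap between the two measures that the abstract highlights.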

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +44 (0)131 4513762
Fax: +44 (0)131 4513327