International Journal on Digital Libraries
  [SJR: 0.203]   [H-I: 24]   [696 followers]
   Hybrid journal (may contain Open Access articles)
   ISSN (Print) 1432-5012 - ISSN (Online) 1432-1300
   Published by Springer-Verlag
  • Exploring publication metadata graphs with the LODmilla browser and editor
    • Abstract: With the LODmilla browser, we aim to support linked data exploration in a generic way, drawing on 20 years of web browser evolution as well as on the opinions of scholars who use it as a research exploration tool. In this paper, generic functions for linked open data (LOD) browsing are presented, and we explain what kinds of information search tactics they enable with linked data describing publications. Furthermore, LODmilla also supports the sharing of graph views and the correction of LOD data during browsing.
      PubDate: 2015-05-01
       
  • Digital field scholarship and the liberal arts: results from a 2012–13 sandbox
    • Abstract: We summarize a recent multi-institutional collaboration in digital field scholarship involving four liberal arts colleges: Davidson College, Lewis & Clark College, Muhlenberg College, and Reed College. Digital field scholarship (DFS) can be defined as scholarship in the arts and sciences for which field-based research and concepts are significant, and digital tools support data collection, analysis, and communication; DFS thus gathers together and extends a wide range of existing scholarship, offering new possibilities for appreciating the connections that define liberal education. Our collaboration occurred as a sandbox, a collective online experiment using a modified WordPress platform (including mapping and other advanced capabilities) built and supported by Lewis & Clark College, with sponsorship provided by the National Institute for Technology in Liberal Education. Institutions selected course-based DFS projects for fall 2012 and/or spring 2013. Projects ranged from documentary photojournalism to home energy efficiency assessment. One key feature was the use of readily available mobile devices and apps for field-based reconnaissance and data collection; another was our public digital scholarship approach, in which student participants shared the process and products of their work via public posts on the DFS website. Descriptive and factor analysis results from anonymous assessment data suggest strong participant response and likely future potential of digital field scholarship across class level and gender. When set into the context of the four institutions that supported the 2012–2013 sandbox, we see further opportunities for digital field scholarship on our and other campuses, provided that an optimal balance is struck between challenges and rewards along technical, pedagogical, and practical axes. Ultimately, digital field scholarship will be judged for its scholarship and for extending the experimental, open-ended inquiry that characterizes liberal education.
      PubDate: 2015-05-01
       
  • Towards robust tags for scientific publications from natural language processing tools and Wikipedia
    • Abstract: In this work, two simple methods of tagging scientific publications with labels reflecting their content are presented and compared. As a first source of labels, Wikipedia is employed. A second label set is constructed from the noun phrases occurring in the analyzed corpus. The corpus itself consists of abstracts from 0.7 million scientific documents deposited in the arXiv preprint collection. We present a comparison of both approaches, which shows that the discussed methods are to a large extent complementary. Moreover, the results give interesting insights into the completeness of Wikipedia knowledge in various scientific domains. As a next step, we examine the statistical properties of the obtained tags. It turns out that both methods show a qualitatively similar rank–frequency dependence, which is best approximated by a stretched exponential curve. The number of distinct tags per document likewise follows the same distribution for both methods and is well described by the negative binomial distribution (a fitting sketch follows this entry). The developed tags are meant for use as features in various text mining tasks. Therefore, as a final step, we show preliminary results on their application to topic modeling.
      PubDate: 2015-05-01
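
A reader wishing to check distributional claims like the above on their own tag data could start from something like the following sketch. The data here are synthetic stand-ins (not the paper's corpus), and method-of-moments fitting plus SciPy's curve_fit are one reasonable choice, not necessarily the authors':

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Negative binomial fit for distinct tags per document (method of moments).
# Stand-in data, NOT the paper's corpus.
tags_per_doc = rng.negative_binomial(n=3, p=0.4, size=10_000)
mean, var = tags_per_doc.mean(), tags_per_doc.var()
p_hat = mean / var                  # requires overdispersion (var > mean)
n_hat = mean * p_hat / (1 - p_hat)
print(f"negative binomial fit: n={n_hat:.2f}, p={p_hat:.2f}")

# Stretched-exponential fit to a tag rank-frequency curve.
def stretched_exp(rank, a, b, c):
    return a * np.exp(-b * rank ** c)

# Synthetic rank-frequency values with a decaying shape plus noise.
ranks = np.arange(1, 501, dtype=float)
freqs = stretched_exp(ranks, 1000.0, 0.2, 0.6) * rng.uniform(0.9, 1.1, ranks.size)
(a, b, c), _ = curve_fit(stretched_exp, ranks, freqs, p0=(1000.0, 0.1, 0.5))
print(f"stretched exponential: a={a:.1f}, b={b:.3f}, c={c:.3f}")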
       
  • Linked data authority records for Irish place names
    • Abstract: Linked Data technologies are increasingly being implemented to enhance cataloguing workflows in libraries, archives and museums. We review current best practice in library cataloguing, how Linked Data is used to link collections and provide consistency in indexing, and briefly describe the relationship between Linked Data, library data models and descriptive standards. As an example we look at the Logainm.ie dataset, an online database holding the authoritative hierarchical list of Irish and English language place names in Ireland. This paper describes the process of creating the new Linked Logainm dataset, including the transformation of the data from XML to RDF (sketched after this entry) and the generation of links to external geographic datasets like DBpedia and the Faceted Application of Subject Terminology. This dataset was then used to enhance the National Library of Ireland's MARCXML metadata records for its Longfield maps collection. We also describe the potential benefits of Linked Data for libraries, focusing on the use of the Linked Logainm dataset and its future potential for Irish heritage institutions.
      PubDate: 2015-04-01
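
The XML-to-RDF step can be illustrated with rdflib. The identifier, labels, and property choices below are illustrative assumptions, not the actual Linked Logainm vocabulary:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, SKOS

# Illustrative namespace and identifier; the real dataset may differ.
LOG = Namespace("http://data.logainm.ie/place/")

g = Graph()
place = LOG["12345"]                        # hypothetical place identifier
g.add((place, RDF.type, SKOS.Concept))
g.add((place, SKOS.prefLabel, Literal("Baile Átha Cliath", lang="ga")))
g.add((place, SKOS.prefLabel, Literal("Dublin", lang="en")))
# Link to an external geographic dataset, as described in the abstract.
g.add((place, OWL.sameAs, URIRef("http://dbpedia.org/resource/Dublin")))

print(g.serialize(format="turtle"))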
       
  • A linked open data architecture for the historical archives of the Getulio Vargas Foundation
    • Abstract: This paper presents an architecture for historical archive maintenance based on Linked Open Data technologies and an open-source, distributed development model and tools. The proposed architecture is being implemented for the archives of the Centro de Pesquisa e Documentação de História Contemporânea do Brasil (Center for Research and Documentation of Brazilian Contemporary History) of the Fundação Getulio Vargas (Getulio Vargas Foundation). We discuss the benefits of this initiative and suggest ways of implementing it, as well as describe the preliminary milestones already achieved. We also present some of the possibilities, both in progress and planned, for extending the accessibility and usefulness of the archives' information using semantic web technologies, natural language processing, image analysis tools, and audio–textual alignment.
      PubDate: 2015-03-19
       
  • Introduction to the focussed issue on Semantic Digital Archives
    • PubDate: 2015-03-12
       
  • A quantitative approach to evaluate Website Archivability using the CLEAR+ method
    • Abstract: Website Archivability (WA) is a notion established to capture the core aspects of a website that are crucial in diagnosing whether it has the potential to be archived with completeness and accuracy. In this work, aiming at measuring WA, we introduce and elaborate on all aspects of CLEAR+, an extended version of the Credible Live Evaluation Method for Archive Readiness (CLEAR). We use a systematic approach to evaluate WA from multiple perspectives, which we call Website Archivability Facets (a schematic score-aggregation sketch follows this entry). We then analyse archiveready.com, a web application we created as the reference implementation of CLEAR+, and discuss the implementation of the evaluation workflow. Finally, we conduct thorough evaluations of all aspects of WA to support the validity, the reliability and the benefits of our method using real-world web data.
      PubDate: 2015-03-12
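
As a rough illustration of the facet idea only: a WA score can be thought of as a weighted aggregate of per-facet ratings. The facet names and equal weights below are placeholders, not the published CLEAR+ definitions:

# Placeholder facet names and weights; NOT the published CLEAR+ values.
FACET_WEIGHTS = {
    "accessibility": 0.25,
    "standards_compliance": 0.25,
    "cohesion": 0.25,
    "metadata_usage": 0.25,
}

def wa_score(facet_ratings: dict[str, float]) -> float:
    """Weighted mean of per-facet ratings, each in [0, 1]."""
    return sum(FACET_WEIGHTS[f] * facet_ratings[f] for f in FACET_WEIGHTS)

print(wa_score({"accessibility": 0.9, "standards_compliance": 0.7,
                "cohesion": 0.8, "metadata_usage": 0.5}))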
       
  • Scalable continual quality control of formative assessment items in an educational digital library: an empirical study
    • Abstract: An essential component of any library of online learning objects is assessment items, for example, homework, quizzes, and self-study questions. As opposed to exams, these items are formative in nature, as they help the learner to assess his or her own progress through the material. When it comes to quality control of these items, their formative nature poses additional challenges: there is no particular time interval in which learners interact with them, learners come to them with very different levels of preparation and seriousness, guessing generates noise in the data, and the numbers of items and learners can be several orders of magnitude larger than in summative settings. This empirical study aims to find a highly scalable mechanism for continual quality control of this class of digital content with a minimal amount of additional metadata and transactional data, while also taking into account characteristics of the learners. In a subsequent evaluation of the model on a limited set of transactions, we find that taking into account the learner characteristic of ability improves the quality of item metadata, and in a comparison to Item Response Theory (IRT; a minimal model sketch follows this entry), we find that the developed model in fact performs slightly better at predicting the outcome of formative assessment transactions, while never matching the performance of IRT at predicting the outcome of summative assessment.
      PubDate: 2015-03-11
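
For reference, a common IRT baseline of the kind mentioned in the comparison is the one-parameter logistic (Rasch) model; a minimal sketch with made-up ability and difficulty values:

import math

def rasch_p_correct(ability: float, difficulty: float) -> float:
    """1PL (Rasch) model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Made-up values: a learner slightly above average facing a hard item.
print(rasch_p_correct(ability=0.3, difficulty=1.0))  # ~0.33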
       
  • A generalized topic modeling approach for automatic document annotation
    • Abstract: Ecological and environmental sciences have become more advanced and complex, requiring observational and experimental data from multiple places, times, and thematic scales to verify their hypotheses. Over time, such data have increased not only in amount, but also in the diversity and heterogeneity of their sources, which are spread throughout the world. This heterogeneity poses a huge challenge for scientists who have to manually search for desired data. ONEMercury has recently been implemented as part of the DataONE project to alleviate such problems and to serve as a portal for accessing environmental and observational data across the globe. ONEMercury harvests metadata records from multiple archives and repositories, and makes them searchable. However, harvested metadata records are sometimes poorly annotated or lack meaningful keywords, which can impede effective retrieval. We propose a methodology that learns from well-annotated collections of metadata records to automatically annotate poorly annotated ones. The problem is first transformed into a tag recommendation problem with a controlled tag library. Then, two variants of an algorithm for automatic tag recommendation are presented (the general idea is sketched after this entry). Experiments on four datasets of environmental science metadata records show that our methods perform well and also shed light on the nature of the different datasets. We also discuss relevant topics such as using topical coherence to fine-tune parameters and experiments on cross-archive annotation.
      PubDate: 2015-03-07
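
The general idea of tag recommendation via topic similarity (not the paper's specific algorithms) can be sketched as follows: infer a topic mixture for a poorly annotated record and borrow tags from the most similar well-annotated record. All records and tags below are invented:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

annotated = {  # hypothetical well-annotated metadata records
    "soil carbon flux measurements in tundra": ["soil", "carbon"],
    "stream temperature sensor network data": ["hydrology", "sensors"],
    "vegetation cover from satellite imagery": ["remote-sensing"],
}
query = "continuous carbon dioxide flux observations"  # poorly annotated

vec = CountVectorizer()
X = vec.fit_transform(list(annotated) + [query])
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)

# Cosine similarity between the query's topic mixture and each record's.
q = topics[-1]
sims = topics[:-1] @ q / (np.linalg.norm(topics[:-1], axis=1) * np.linalg.norm(q))
best = int(np.argmax(sims))
print("recommended tags:", list(annotated.values())[best])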
       
  • Using ontologies to capture the semantics of a (business) process for digital preservation
    • Abstract: IT-supported business processes and computationally intensive science (called e-science) have become increasingly ubiquitous in the last decades. Along with this trend comes the need to make at least the most important of these processes available for the long term, to allow later analysis of their execution, or even a re-execution. As such, the preservation of scientific experiments and their results enables others to reproduce and verify the results as well as build on the result of earlier work. All but the simplest processes must be described by a multitude of information objects, together with their interconnections and relations, to be successfully preserved. To enable a semantic description of these objects in a structured manner, we developed a formal meta-model that can be utilised in the digital preservation of a process. The meta-model describes classes of elements and their relations, in the form of ontologies, with a core ontology describing the generic concepts and extension mechanisms to map supplementary ontologies describing more specific aspects (sketched after this entry). In this paper, we present the overall architecture and individual ontologies, and motivate their usefulness via the application to use cases from different domains.
      PubDate: 2015-03-04
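
The core-plus-extension design can be sketched with rdflib; the class and property names below are illustrative, not the paper's actual meta-model:

from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Illustrative names only; the paper's meta-model differs.
CORE = Namespace("http://example.org/preservation/core#")
EXT = Namespace("http://example.org/preservation/escience#")

g = Graph()
# Core ontology: generic concepts for describing a preserved process.
g.add((CORE.Process, RDF.type, OWL.Class))
g.add((CORE.InformationObject, RDF.type, OWL.Class))
g.add((CORE.uses, RDF.type, OWL.ObjectProperty))
# Extension ontology: domain-specific aspects mapped onto the core.
g.add((EXT.Experiment, RDF.type, OWL.Class))
g.add((EXT.Experiment, RDFS.subClassOf, CORE.Process))

print(g.serialize(format="turtle"))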
       
  • The new knowledge infrastructure
    • PubDate: 2015-02-27
       
  • Introduction to the special issue on digital scholarship
    • PubDate: 2015-02-24
       
  • Visions and open challenges for a knowledge-based culturomics
    • Abstract: The concept of culturomics was born out of the availability of massive amounts of textual data and the interest to make sense of cultural and language phenomena over time. Thus far, however, culturomics has only made use of, and shown the great potential of, statistical methods. In this paper, we present a vision for a knowledge-based culturomics that complements traditional culturomics. We discuss the possibilities and challenges of combining knowledge-based methods with statistical methods and address major challenges that arise due to the nature of the data: diversity of sources, changes in language over time, as well as temporal dynamics of information in general. We address all layers needed for knowledge-based culturomics, from natural language processing and relations to summaries and opinions.
      PubDate: 2015-02-18
       
  • What lies beneath?: Knowledge infrastructures in the subseafloor biosphere and beyond
    • Abstract: We present preliminary findings from a three-year research project comprising longitudinal qualitative case studies of data practices in four large, distributed, highly multidisciplinary scientific collaborations. This project follows a 2 × 2 research design: two of the collaborations are big science while two are little science, and two have completed data collection activities while two are ramping up data collection. This paper is centered on one of these collaborations, a project bringing together scientists to study subseafloor microbial life. This collaboration is little science, characterized by small teams using small amounts of data to address specific questions. Our case study employs participant observation in a laboratory, interviews (n = 49 to date) with scientists in the collaboration, and document analysis. We present a data workflow that is typical for many of the scientists working in the observed laboratory. In particular, we show that, although this workflow results in datasets apparently similar in form, a large degree of heterogeneity nevertheless exists across scientists in this laboratory in the methods they employ to produce these datasets—even between scientists working on adjacent benches. To date, most studies of data in little science focus on heterogeneity in terms of the types of data produced: this paper adds another dimension of heterogeneity to existing knowledge about data in little science. This additional dimension complicates the task of managing and curating data for subsequent reuse. Furthermore, the nature of the factors that contribute to heterogeneity of methods suggests that this dimension of heterogeneity is a persistent and unavoidable feature of little science.
      PubDate: 2015-02-15
       
  • A metadata model and mapping approach for facilitating access to heterogeneous cultural heritage assets
    • Abstract: In the last decade, Europe has put a tremendous effort into making cultural, educational and scientific resources publicly available. Based on national or thematic aggregators, initiatives like Europeana nowadays provide a plethora of cultural resources for people worldwide. Although such massive amounts of rich cultural heritage content are available, the potential of its use for educational and scientific purposes still remains largely untapped. Much valuable content is only available in the so-called long tail, i.e. in niche resources such as specifically themed cultural heritage collections, and is difficult to access from mainstream hubs like major search engines, social networks or online encyclopaedias. The vision of the EEXCESS project is to push high-quality content from the long tail to platforms and devices which are used every day. The realisation of such use cases requires as a basis (in addition to the functional components) a common metadata representation and tools for mapping between the data sources' specific data models and this common representation. In this paper, we propose a data model for such a system that combines federated search results from different cultural heritage data sources. We then propose an approach for metadata mapping, with a focus on easy configurability of mappings (the idea is sketched after this entry), which—once properly configured—can then be executed on the fly by an automatic service. We demonstrate the approach using a real-world example.
      PubDate: 2015-01-29
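
A declarative source-to-common-model mapping that a service applies on the fly might look like the following sketch. The field names are invented, not the EEXCESS data model:

# Sketch of configurable metadata mapping: a declarative source-to-common
# field mapping applied to each federated search result. Field names are
# invented; the actual EEXCESS data model differs.
MAPPING = {  # source field -> common-model field
    "dc:title": "title",
    "dc:creator": "creator",
    "museum:objectDate": "date",
}

def map_record(source_record: dict) -> dict:
    """Apply a mapping configuration to one federated search result."""
    return {common: source_record[src]
            for src, common in MAPPING.items() if src in source_record}

print(map_record({"dc:title": "Ardagh Chalice", "museum:objectDate": "8th c."}))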
       
  • PREMIS OWL
    • Abstract: In this article, we present PREMIS OWL, a semantic formalisation of the PREMIS 2.2 data dictionary of the Library of Congress. PREMIS 2.2 is a set of metadata implementation guidelines for digitally archiving information for the long term. Nowadays, the need for digital preservation is growing. A lot of the digital information produced merely a decade ago is in danger of getting lost as technologies change and become obsolete. This also threatens a lot of information from heritage institutions. PREMIS OWL is a semantic long-term preservation schema. Preservation metadata are actually a mixture of provenance information, technical information on the digital objects to be preserved, and rights information. PREMIS OWL is an OWL schema that can be used as a data model supporting digital archives. It can be used for the dissemination of preservation metadata as Linked Open Data on the Web and, at the same time, for supporting semantic web technologies in the preservation processes. The model incorporates 24 preservation vocabularies, published by the LOC as SKOS vocabularies. Via these vocabularies, PREMIS descriptions from different institutions become highly interoperable. The schema is approved and now managed by the Library of Congress, and is published at http://www.loc.gov/premis/rdf/v1 (a usage sketch follows this entry).
      PubDate: 2015-01-11
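
A minimal usage sketch with rdflib, using the namespace from the published schema URL; the class and property names here are my reading of the vocabulary and should be checked against the schema itself:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Namespace from the published schema URL; names below are assumptions.
PREMIS = Namespace("http://www.loc.gov/premis/rdf/v1#")

g = Graph()
obj = URIRef("http://example.org/archive/object/42")   # hypothetical object
event = URIRef("http://example.org/archive/event/1")   # hypothetical event
g.add((obj, RDF.type, PREMIS.Object))
g.add((event, RDF.type, PREMIS.Event))
g.add((event, PREMIS.hasEventDateTime, Literal("2015-01-11T12:00:00")))
g.add((obj, PREMIS.hasEvent, event))

print(g.serialize(format="turtle"))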
       
  • Named entity evolution recognition on the Blogosphere
    • Abstract: Advancements in technology and culture lead to changes in our language. These changes create a gap between the language known by users and the language stored in digital archives, affecting users' ability first to find content and then to interpret it. In a previous work, we introduced our approach for named entity evolution recognition (NEER) in newspaper collections. Lately, increasing efforts in Web preservation have led to increased availability of Web archives covering longer time spans. However, language on the Web is more dynamic than in traditional media, and many of the basic assumptions from the newspaper domain do not hold for Web data. In this paper we discuss the limitations of existing methodology for NEER. We approach these by adapting an existing NEER method to work on noisy data like the Web, and the Blogosphere in particular. We develop novel filters that reduce the noise and make use of Semantic Web resources to obtain more information about terms. Our evaluation shows the potential of the proposed approach.
      PubDate: 2014-12-23
       
  • VisInfo: a digital library system for time series research data based on exploratory search—a user-centered design approach
    • Abstract: To this day, data-driven science is a widely accepted concept in the digital library (DL) context (Hey et al. in The fourth paradigm: data-intensive scientific discovery. Microsoft Research, 2009). In the same way, domain knowledge from information visualization, visual analytics, and exploratory search has found its way into the DL workflow. This trend is expected to continue, considering future DL challenges such as content-based access to new document types, visual search and exploration for information landscapes, or big data in general. To cope with these challenges, DL actors need to collaborate with external specialists from different domains to complement each other and succeed in given tasks such as making research data publicly available. Through these interdisciplinary approaches, the DL ecosystem may contribute to applications focused on data-driven science and digital scholarship. In this work, we present VisInfo (2014), a web-based digital library system (DLS) with the goal of providing visual access to time series research data. Based on an exploratory search (ES) concept (White and Roth in Synth Lect Inf Concepts Retr Serv 1(1):1–98, 2009), VisInfo first provides a content-based overview visualization of large amounts of time series research data. Further, the system enables the user to define visual queries by example or by sketch. Finally, VisInfo presents visual-interactive capabilities for the exploration of search results. The development process of VisInfo was based on the user-centered design principle. Experts from computer science, a scientific digital library, and usability engineering, as well as scientists from the earth and environmental sciences, were involved in an interdisciplinary approach. We report on comprehensive user studies in the requirement analysis phase based on paper prototyping, user interviews, screen casts, and user questionnaires. Heuristic evaluations and two usability testing rounds were applied during the system implementation and deployment phases and certify measurable improvements for our DLS. Based on the lessons learned in VisInfo, we suggest a generalized project workflow that may be applied in related, prospective approaches.
      PubDate: 2014-12-03
       
  • A pipeline for digital restoration of deteriorating photographic negatives
    • Abstract: Extending work presented at the second International Workshop on Historical Document Imaging and Processing, we demonstrate a digitization pipeline to capture and restore negatives in low-dynamic range file formats. The majority of early photographs were captured on acetate-based film. However, it has been determined that these negatives will deteriorate beyond repair even with proper conservation, and no suitable restoration method is available that does not physically alter each negative. In this paper, we present an automatic method to remove various non-linear illumination distortions caused by deteriorating photographic support material. First, using a high-dynamic range structured-light scanning method, a 2D Gaussian model for light transmission is estimated for each pixel of the negative image. The estimated amplitude at each pixel provides an accurate model of light transmission, but also includes regions of lower transmission caused by damaged areas. Principal component analysis is then used to estimate the photometric error (a schematic follows this entry) and effectively restore the original illumination information of the negative. A novel tone mapping approach is then used to produce the final restored image. Using both the shift in the Gaussian light stripes between pixels and their variations in standard deviation, a 3D surface estimate is calculated. Experiments on real historical negatives show promising results for widespread implementation in memory institutions.
      PubDate: 2014-11-08
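
The PCA step can be illustrated schematically on synthetic data: treat per-pixel transmission amplitudes as samples, reconstruct them from the leading components, and read the residual as the estimated photometric error. This is a rough reconstruction of the idea, not the paper's pipeline:

# Schematic of PCA-based photometric correction on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in: per-pixel light-transmission amplitudes for
# 200 image rows of 100 pixels (NOT real scan data).
amplitude = rng.normal(1.0, 0.05, (200, 100))
amplitude[50:60, 20:30] *= 0.4           # simulated damaged region

# PCA via SVD on mean-centred rows.
mean = amplitude.mean(axis=0)
U, S, Vt = np.linalg.svd(amplitude - mean, full_matrices=False)
k = 5                                    # keep the leading components
restored = mean + (U[:, :k] * S[:k]) @ Vt[:k]

error = amplitude - restored             # residual ~ localized damage
print("largest residual magnitude:", np.abs(error).max())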
       
  • Assisting digital interoperability and preservation through advanced dependency reasoning
    • Abstract: Digital material has to be preserved not only against loss or corruption, but also against changes in its ecosystem. A quite general view of the digital preservation problem is to approach it from a dependency management point of view. In this paper, we present a rule-based approach for dependency management which can also model converters and emulators. We show that this modeling approach enables the automatic reasoning needed to reduce the human effort required for checking (and monitoring) whether a task on a digital object is performable. We provide examples demonstrating how real-world converters and emulators can be modeled (a toy sketch follows this entry), and show how the preservation services can be implemented. Subsequently, we detail an implementation based on semantic web technologies, describe the prototype system Epimenides which demonstrates the feasibility of the approach, and finally report various promising evaluation results.
      PubDate: 2014-10-29
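
A toy reconstruction of the dependency-reasoning idea (not the actual Epimenides rule set): a task is performable if each dependency is available directly or reachable through a chain of converters or emulators:

# Toy rule-based dependency reasoning. All names are invented.
AVAILABLE = {"pdf-viewer", "docx-to-pdf-converter"}
# converter: (from-dependency, to-dependency) — having the converter lets
# a requirement on `from` be satisfied by support for `to`.
CONVERTERS = {"docx-to-pdf-converter": ("docx-viewer", "pdf-viewer")}

def performable(dependency: str, seen: frozenset = frozenset()) -> bool:
    """True if the dependency is available, directly or via converters."""
    if dependency in AVAILABLE:
        return True
    for conv, (src, dst) in CONVERTERS.items():
        if conv in AVAILABLE and src == dependency and dst not in seen:
            if performable(dst, seen | {dependency}):
                return True
    return False

print(performable("docx-viewer"))  # True: converter bridges to pdf-viewer
print(performable("wav-player"))   # False: no rule applies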
       
 
 