International Journal on Digital Libraries
Journal Prestige (SJR): 0.441
Citation Impact (citeScore): 2
Number of Followers: 691  
  Hybrid journal (may contain Open Access articles)
ISSN (Print) 1432-5012 - ISSN (Online) 1432-1300
Published by Springer-Verlag
  • Neural ParsCit: a deep learning-based reference string parser
    • Authors: Animesh Prasad; Manpreet Kaur; Min-Yen Kan
      Pages: 323 - 337
      Abstract: We present a deep learning approach to the core digital libraries task of parsing bibliographic reference strings. We deploy the state-of-the-art long short-term memory (LSTM) neural network architecture, a variant of the recurrent neural network, to capture long-range dependencies in reference strings. We explore word embeddings and character-based word embeddings as an alternative to handcrafted features. We incrementally experiment with features, architectural configurations, and the diversity of the dataset. Our final model is an LSTM-based architecture that layers a linear-chain conditional random field (CRF) over the LSTM output. In extensive experiments on English in-domain (computer science) and out-of-domain (humanities) test cases, as well as multilingual data, our results show a significant gain ( \(p<0.01\) ) over the reported state-of-the-art CRF-only-based parser.
      PubDate: 2018-11-01
      DOI: 10.1007/s00799-018-0242-1
      Issue No: Vol. 19, No. 4 (2018)
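The CRF layer that the abstract above describes can be illustrated with a minimal Viterbi decoder over per-token emission scores (standing in for LSTM outputs). The labels, transition scores, and emission scores below are invented for illustration, not the paper's trained model.

```python
# Minimal Viterbi decoder for a linear-chain CRF over sequence labels,
# as layered on top of LSTM emission scores. All scores are toy values.

def viterbi(emissions, transitions, labels):
    """emissions: list of {label: score} per token (stand-in for LSTM output);
    transitions: {(prev_label, cur_label): score}; returns best label path."""
    # best[i][lab] = (score of best path ending in lab at token i, backpointer)
    best = [{lab: (emissions[0][lab], None) for lab in labels}]
    for i in range(1, len(emissions)):
        row = {}
        for cur in labels:
            score, prev = max(
                (best[i - 1][p][0] + transitions[(p, cur)] + emissions[i][cur], p)
                for p in labels
            )
            row[cur] = (score, prev)
        best.append(row)
    # trace the highest-scoring path back through the backpointers
    lab = max(best[-1], key=lambda l: best[-1][l][0])
    path = [lab]
    for i in range(len(best) - 1, 0, -1):
        lab = best[i][lab][1]
        path.append(lab)
    return list(reversed(path))

labels = ["AUTHOR", "TITLE", "YEAR"]
# small bonus for staying in the same field, as reference fields are contiguous
transitions = {(p, c): (0.5 if p == c else 0.0) for p in labels for c in labels}
emissions = [
    {"AUTHOR": 2.0, "TITLE": 0.1, "YEAR": 0.0},   # e.g. "Prasad,"
    {"AUTHOR": 1.5, "TITLE": 0.3, "YEAR": 0.0},   # e.g. "A."
    {"AUTHOR": 0.2, "TITLE": 1.8, "YEAR": 0.1},   # e.g. "Neural"
    {"AUTHOR": 0.0, "TITLE": 0.2, "YEAR": 2.5},   # e.g. "2018"
]
print(viterbi(emissions, transitions, labels))
# → ['AUTHOR', 'AUTHOR', 'TITLE', 'YEAR']
```

The same-label transition bonus is what distinguishes CRF decoding from picking the best emission per token independently: it encourages contiguous field spans.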
  • Promoting user engagement with digital cultural heritage collections
    • Authors: Maristella Agosti; Nicola Orio; Chiara Ponchia
      Pages: 353 - 366
      Abstract: In the context of cooperating in a project whose central aim has been the production of a corpus-agnostic research environment supporting access to and exploitation of digital cultural heritage collections, we have worked towards promoting user engagement with the collections. The aim of this paper is to present the methods and the solutions that have been envisaged and implemented to engage a diversified range of users with digital collections. Innovative solutions to stimulate and enhance user engagement have been achieved through a sustained and broad-based involvement of different cohorts of users. In particular, we propose the use of narratives to support and guide users within the collection and to present to them the main available tools. In moving beyond the specialized, search-based and stereotyped norm, the environment that we have contributed to developing offers a new approach for accessing and interacting with cultural heritage collections. It shows the value of an adaptive interface that dynamically responds to support the user, whatever his or her level of experience with digital environments or familiarity with the content may be.
      PubDate: 2018-11-01
      DOI: 10.1007/s00799-018-0245-y
      Issue No: Vol. 19, No. 4 (2018)
  • Introduction to the special issue on bibliometric-enhanced information
           retrieval and natural language processing for digital libraries (BIRNDL)
    • Authors: Philipp Mayr; Ingo Frommholz; Guillaume Cabanac; Muthu Kumar Chandrasekaran; Kokil Jaidka; Min-Yen Kan; Dietmar Wolfram
      Pages: 107 - 111
      Abstract: The large scale of scholarly publications poses a challenge for scholars in information seeking and sensemaking. Bibliometric, information retrieval (IR), text mining, and natural language processing techniques can help address this challenge, but have yet to be widely used in digital libraries (DL). This special issue on bibliometric-enhanced information retrieval and natural language processing for digital libraries (BIRNDL) was compiled after the first joint BIRNDL workshop, held at the Joint Conference on Digital Libraries (JCDL 2016) in Newark, New Jersey, USA. It brought together IR and DL researchers and professionals to elaborate on new approaches in natural language processing, information retrieval, scientometrics, and recommendation techniques that can advance the state of the art in scholarly document understanding, analysis, and retrieval at scale. This special issue includes 14 papers: four extended papers originating from the first BIRNDL workshop 2016 and the BIR workshop at ECIR 2016, four extended system reports from the CL-SciSumm Shared Task 2016, and six original research papers submitted via the open call for papers.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0230-x
      Issue No: Vol. 19, No. 2-3 (2018)
  • Reuse and plagiarism in Speech and Natural Language Processing
    • Authors: Joseph Mariani; Gil Francopoulo; Patrick Paroubek
      Pages: 113 - 126
      Abstract: The aim of this experiment is to present an easy way to compare fragments of text in order to detect (supposed) results of copy-and-paste operations between articles in the domain of Natural Language Processing (NLP), including Speech Processing. The search space of the comparisons is a corpus labeled as NLP4NLP, which includes 34 different conferences and journals and gathers a large part of the NLP activity over the past 50 years. This study considers the similarity between the papers of each individual event and the complete set of papers in the whole corpus, according to four different types of relationship (self-reuse, self-plagiarism, reuse and plagiarism) and in both directions: a paper borrowing a fragment of text from another paper of the corpus (which we will call the source paper), or, in the reverse direction, fragments of text from the source paper being borrowed and inserted into another paper of the corpus. The results show that self-reuse is a rather common practice, but that plagiarism seems to be very unusual, and that both stay within legal and ethical limits.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0211-0
      Issue No: Vol. 19, No. 2-3 (2018)
  • The context of multiple in-text references and their signification
    • Authors: Marc Bertin; Iana Atanassova
      Pages: 127 - 138
      Abstract: In this paper, we consider sentences that contain multiple in-text references (MIR) and their position in the rhetorical structure of articles. We carry out the analysis of MIR in a large-scale dataset of about 80,000 research articles published by the Public Library of Science in 7 journals. We analyze two major characteristics of MIR: their positions in the IMRaD structure of articles and the number of in-text references that make up a MIR in the different journals. We show that MIR are rather frequent in all sections of the rhetorical structure. In the Introduction section, sentences containing MIR account for more than half of the sentences with references. We examine the syntactic patterns that are most used in the contexts of both multiple and single in-text references and show that they are composed, for the most part, of noun groups. We point out the specificity of the Methods section in this respect.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0225-7
      Issue No: Vol. 19, No. 2-3 (2018)
  • Bag of works retrieval: TF*IDF weighting of works co-cited with a seed
    • Authors: Howard D. White
      Pages: 139 - 149
      Abstract: Although not presently possible in any system, the style of retrieval described here combines familiar components—co-citation linkages of documents and TF*IDF weighting of terms—in a way that could be implemented in future databases. Rather than entering keywords, the user enters a string identifying a work—a seed—to retrieve the strings identifying other works that are co-cited with it. Each of the latter is part of a “bag of works,” and it presumably has both a co-citation count with the seed and an overall citation count in the database. These two counts can be plugged into a standard formula for TF*IDF weighting such that all the co-cited items can be ranked for relevance to the seed, given that the entire retrieval is relevant to it by evidence from multiple co-citing authors. The result is analogous to, but different from, traditional “bag of words” retrieval, which it supplements. Some properties of the ranking are illustrated by works co-cited with three seeds: an article on search behavior, an information retrieval textbook, and an article on centrality in networks. While these are case studies, their properties apply to bag of works retrievals in general and have implications for users (e.g., humanities scholars, domain analysts) that go beyond any one example.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0217-7
      Issue No: Vol. 19, No. 2-3 (2018)
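White's weighting can be sketched directly: a work's co-citation count with the seed plays the role of term frequency, and its overall citation count in the database plays the role of document frequency. The works and counts below are invented for illustration.

```python
import math

def bag_of_works_rank(cocited, total_docs):
    """Rank works co-cited with a seed by a TF*IDF-style weight.
    cocited: {work: (cocitation_count_with_seed, overall_citation_count)}
    total_docs: number of citing documents in the database."""
    weights = {
        work: cocite * math.log(total_docs / citations)
        for work, (cocite, citations) in cocited.items()
    }
    return sorted(weights, key=weights.get, reverse=True)

# Invented counts: a niche but tightly coupled work can outrank a
# textbook that is cited everywhere (high count, hence low IDF).
cocited = {
    "Salton & McGill 1983 (IR textbook)": (40, 9000),
    "Bates 1989 (berrypicking)": (25, 800),
    "Freeman 1978 (centrality)": (10, 3000),
}
print(bag_of_works_rank(cocited, total_docs=100_000))
```

Here the textbook has the highest raw co-citation count with the seed, but its ubiquity depresses its IDF, so the more specifically related article ranks first — exactly the relevance discrimination the abstract describes.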
  • The references of references: a method to enrich humanities library
           catalogs with citation data
    • Authors: Giovanni Colavizza; Matteo Romanello; Frédéric Kaplan
      Pages: 151 - 161
      Abstract: The advent of large-scale citation indexes has greatly impacted the retrieval of scientific information in several domains of research. The humanities have largely remained outside of this shift, despite their increasing reliance on digital means for information seeking. Given that publications in the humanities have a longer than average life-span, mainly due to the importance of monographs for the field, this article proposes to use domain-specific reference monographs to bootstrap the enrichment of library catalogs with citation data. Reference monographs are works considered to be of particular importance in a research library setting, and likely to possess characteristic citation patterns. The article shows how to select a corpus of reference monographs, and proposes a pipeline to extract the network of publications they refer to. Results using a set of reference monographs in the domain of the history of Venice show that only 7% of extracted citations are made to publications already within the initial seed. Furthermore, the resulting citation network suggests the presence of a core set of works in the domain, cited more frequently than average.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0210-1
      Issue No: Vol. 19, No. 2-3 (2018)
  • Insights from CL-SciSumm 2016: the faceted scientific document
           summarization Shared Task
    • Authors: Kokil Jaidka; Muthu Kumar Chandrasekaran; Sajal Rustagi; Min-Yen Kan
      Pages: 163 - 171
      Abstract: We describe the participation and the official results of the 2nd Computational Linguistics Scientific Summarization Shared Task (CL-SciSumm), held as a part of the BIRNDL workshop at the Joint Conference on Digital Libraries 2016 in Newark, New Jersey. CL-SciSumm is the first medium-scale Shared Task on scientific document summarization in the computational linguistics (CL) domain. Participants were provided a training corpus of 30 topics, each comprising a reference paper (RP) and 10 or more citing papers, all of which cite the RP. For each citation, the text spans (i.e., citances) that pertain to the RP have been identified. Participants solved three sub-tasks in automatic research paper summarization using this text corpus. Fifteen teams from six countries registered for the Shared Task, of which ten teams ultimately submitted and presented their results. The annotated corpus comprised 30 target papers—currently the largest available corpus of its kind—and is available for free download and use.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0221-y
      Issue No: Vol. 19, No. 2-3 (2018)
  • Computational linguistics literature and citations oriented citation
           linkage, classification and summarization
    • Authors: Lei Li; Liyuan Mao; Yazhao Zhang; Junqi Chi; Taiwen Huang; Xiaoyue Cong; Heng Peng
      Pages: 173 - 190
      Abstract: Scientific literature is currently the most important resource for scholars, and citations provide researchers with a powerful latent way to analyze scientific trends, influences, and relationships between works and authors. This paper focuses on automatic citation analysis and summarization for the scientific literature of computational linguistics, which were also the shared tasks of the 2nd Computational Linguistics Scientific Document Summarization workshop at BIRNDL 2016 (the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries). Each citation linkage between a citation and the spans of text in the reference paper is recognized according to their content similarity via various computational methods. Each cited text span is then classified into five pre-defined facets, i.e., Hypothesis, Implication, Aim, Results and Method, based on various lexicon and rule features via a Support Vector Machine and a voting method. Finally, a summary of the reference paper, within 250 words, is generated from the cited text spans. The hLDA (hierarchical Latent Dirichlet Allocation) topic model is adopted for content modeling, providing knowledge about sentence clustering (subtopics) and word distributions (abstractiveness) for summarization. We combine hLDA knowledge with several other classical features, using different weights and proportions, to evaluate the sentences in the reference paper. Our systems were ranked first and second in the evaluation results published by BIRNDL 2016, which verifies the effectiveness of our methods.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0219-5
      Issue No: Vol. 19, No. 2-3 (2018)
  • Identifying reference spans: topic modeling and word embeddings help IR
    • Authors: Luis Moraes; Shahryar Baki; Rakesh Verma; Daniel Lee
      Pages: 191 - 202
      Abstract: The CL-SciSumm 2016 shared task introduced an interesting problem: given a document D and a piece of text that cites D, how do we identify the text spans of D being referenced by the piece of text? The shared task provided the first annotated dataset for studying this problem. We present an analysis of our continued work in improving our system’s performance on this task. We demonstrate how topic models and word embeddings can be used to surpass the previously best performing system.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0220-z
      Issue No: Vol. 19, No. 2-3 (2018)
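The span-identification setup can be illustrated with a plain term-frequency cosine similarity between a citance and candidate reference sentences; the topic-model and embedding similarities in the paper follow the same ranking pattern over richer representations. The citance and sentences below are invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_span(citance, reference_sentences):
    """Return the reference sentence most similar to the citance."""
    cv = Counter(citance.lower().split())
    scored = [(cosine(cv, Counter(s.lower().split())), s)
              for s in reference_sentences]
    return max(scored)[1]

citance = "they use topic models to identify the cited text spans"
reference = [
    "we describe the annotation of the corpus",
    "topic models identify latent themes in the cited text",
    "evaluation uses rouge scores",
]
print(best_span(citance, reference))
```

Replacing the raw term vectors with topic distributions or averaged word embeddings — as the paper does — changes what "similar" means while keeping this same best-match retrieval loop.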
  • Automatic summarization of scientific publications using a feature
           selection approach
    • Authors: Hazem Al Saied; Nicolas Dugué; Jean-Charles Lamirel
      Pages: 203 - 215
      Abstract: Feature Maximization is a feature selection method that deals efficiently with textual data: it makes it possible to design systems that are language-agnostic, parameter-free, and require no additional corpora to function. We propose to evaluate its use in text summarization, in particular in cases where documents are structured. We first experiment with this approach in a single-document summarization context. We evaluate it on the DUC AQUAINT corpus and show that, despite the unstructured nature of the corpus, our system is above the baseline and produces encouraging results. We also observe that the produced summaries seem robust to redundancy. Next, we evaluate our method in the more appropriate context of the SciSumm challenge, which is dedicated to the summarization of research publications. These publications are structured in sections, so our class-based approach is relevant. We focus more specifically on the task of summarizing papers using those that refer to them. We consider and evaluate several systems built on our approach, each dealing with specific bags of words. In these systems, we also evaluate cosine and graph-based distances for sentence weighting and comparison. We show that our Feature Maximization-based approach performs very well in the SciSumm 2016 context for the considered task, providing better results than those reported so far and obtaining high recall. We thus demonstrate the flexibility and relevance of Feature Maximization in this context.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0214-x
      Issue No: Vol. 19, No. 2-3 (2018)
  • Task-oriented search for evidence-based medicine
    • Authors: Bevan Koopman; Jack Russell; Guido Zuccon
      Pages: 217 - 229
      Abstract: Research on how clinicians search shows that they pose queries according to three common clinical tasks: searching for diagnoses, searching for treatments and searching for tests. We hypothesise, therefore, that structuring an information retrieval system around these three tasks would be beneficial when searching for evidence-based medicine (EBM) resources in medical digital libraries. Task-oriented (diagnosis, test and treatment) information was extracted from free-text medical articles using a natural language processing pipeline. This information was integrated into a retrieval and visualisation system for EBM search that allowed searchers to interact with the system via task-oriented filters. The effectiveness of the system was empirically evaluated using TREC CDS—a gold standard of medical articles and queries designed for EBM search. Task-oriented information was successfully extracted from 733,138 articles taken from a medical digital library. Task-oriented search led to improvements in the quality of search results and savings in searcher workload. An analysis of how different tasks affected retrieval showed that searching for treatments was the most challenging and that the task-oriented approach improved search for treatments. The greatest workload savings were observed when searching for treatments and tests. Overall, taking different clinical tasks into account can improve search according to these tasks. Each task displayed different results, making systems that adapt to the clinical task type desirable. A future user study would help quantify the actual cost-saving estimates.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0209-7
      Issue No: Vol. 19, No. 2-3 (2018)
  • Investigating exploratory search activities based on the stratagem level
           in digital libraries
    • Authors: Zeljko Carevic; Maria Lusky; Wilko van Hoek; Philipp Mayr
      Pages: 231 - 251
      Abstract: In this paper, we present the results of a user study on exploratory search activities in a social science digital library. We conducted a user study with 32 participants with a social sciences background—16 postdoctoral researchers and 16 students—who were asked to solve a task on searching related work to a given topic. The exploratory search task was performed in a 10-min time slot. The use of certain search activities is measured and compared to gaze data recorded with an eye tracking device. We use a novel tree graph representation to visualise the users’ search patterns and introduce a way to combine multiple search session trees. The tree graph representation is capable of creating one single tree for multiple users and identifying common search patterns. In addition, the information behaviour of students and postdoctoral researchers is compared. The results show that search activities on the stratagem level are frequently utilised by both user groups. The most heavily used search activities were keyword search, followed by browsing through references and citations, and author searching. The eye tracking results showed an intense examination of document metadata, especially on the level of citations and references. When comparing the group of students and postdoctoral researchers, we found significant differences regarding gaze data on the area of the journal name of the seed document. In general, we found a tendency for the postdoctoral researchers to examine the metadata records more intensively with regard to dwell time and number of fixations. By creating combined session trees and deriving subtrees from those, we were able to identify common patterns such as economic (explorative) and exhaustive (navigational) behaviour. Our results show that participants utilised multiple search strategies starting from the seed document, which means that they examined different paths to find related publications.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0226-6
      Issue No: Vol. 19, No. 2-3 (2018)
  • Retrieval by recommendation: using LOD technologies to improve digital
           library search
    • Authors: Lisa Wenige; Johannes Ruhland
      Pages: 253 - 269
      Abstract: This paper investigates how Linked Open Data (LOD) can be used for recommendations and information retrieval within digital libraries. While numerous studies on both research paper recommender systems and Linked Data-enabled recommender systems have been conducted, no previous attempt has been undertaken to explore opportunities of LOD in the context of search and discovery interfaces. We identify central advantages of Linked Open Data with regard to scientific search and propose two novel recommendation strategies, namely flexible similarity detection and constraint-based recommendations. These strategies take advantage of key characteristics of data that adheres to LOD principles. The viability of Linked Data recommendations was extensively evaluated within the scope of a web-based user experiment in the domain of economics. Findings indicate that the proposed methods are well suited to enhance established search functionalities and thus offer novel ways of accessing resources. In addition, RDF triples from LOD repositories can complement local bibliographic records that are sparse or of poor quality.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0224-8
      Issue No: Vol. 19, No. 2-3 (2018)
  • Extracting discourse elements and annotating scientific documents using
           the SciAnnotDoc model: a use case in gender documents
    • Authors: Hélène de Ribaupierre; Gilles Falquet
      Pages: 271 - 286
      Abstract: When scientists are searching for information, they generally have a precise objective in mind. Instead of looking for documents “about a topic T”, they try to answer specific questions such as finding the definition of a concept, finding results for a particular problem, checking whether an idea has already been tested, or comparing the scientific conclusions of two articles. Answering these precise or complex queries on a corpus of scientific documents requires precise modelling of the full content of the documents. In particular, each document element must be characterised by its discourse type (hypothesis, definition, result, method, etc.). In this paper, we present a scientific document model (the SciAnnotDoc ontology), developed from an empirical study conducted with scientists, that models the discourse types. We developed an automated process that analyses documents, effectively identifying the discourse type of each element using syntactic rules (patterns). We evaluated the process output in terms of precision and recall against a previously annotated corpus in Gender Studies. We chose to annotate documents in the Humanities, as these documents are well known to be less formalised than those in the “hard sciences”. The process output has been used to create a SciAnnotDoc representation of the corpus, on top of which we built a faceted search interface. Experiments with users show that searches using this interface clearly outperform standard keyword searches for precise or complex queries.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0227-5
      Issue No: Vol. 19, No. 2-3 (2018)
  • Scientific document summarization via citation contextualization and
           scientific discourse
    • Authors: Arman Cohan; Nazli Goharian
      Pages: 287 - 303
      Abstract: The rapid growth of scientific literature has made it difficult for researchers to quickly learn about developments in their respective fields. Scientific summarization addresses this challenge by providing summaries of the important contributions of scientific papers. We present a framework for scientific summarization which takes advantage of the citations and the scientific discourse structure. Citation texts often lack the evidence and context to support the content of the cited paper and are sometimes even inaccurate. We first address the problem of inaccuracy of the citation texts by finding the relevant context from the cited paper. We propose three approaches for contextualizing citations which are based on query reformulation, word embeddings, and supervised learning. We then train a model to identify the discourse facets for each citation. We finally propose a method for summarizing scientific papers by leveraging the faceted citations and their corresponding contexts. We evaluate our proposed method on two scientific summarization datasets in the biomedical and computational linguistics domains. Extensive evaluation results show that our methods can improve over the state of the art by large margins.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0216-8
      Issue No: Vol. 19, No. 2-3 (2018)
  • Section mixture models for scientific document summarization
    • Authors: John M. Conroy; Sashka T. Davis
      Pages: 305 - 322
      Abstract: In this paper, we present a system for summarization of scientific and structured documents that has three components: section mixture models are used for estimation of the weights of terms; a hypothesis test selects a subset of these terms; and a sentence extractor based on techniques for combinatorial optimization builds the summary. The section mixture models approach is an adaptation of a bigram mixture model based on the main sections of a scientific document and a collection of citing sentences (citances) from papers that reference the document. The model was adapted from earlier work done on Biomedical documents used in the summarization task of the 2014 Text Analysis Conference (TAC 2014). The mixture model trained on the Biomedical data was also used on the data for the Computational Linguistics scientific summarization task of the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (CL-SciSumm 2016). This model gives rise to machine-generated summaries with ROUGE scores that are nearly as strong as those seen on the Biomedical data and was also the highest-scoring submission to the task of generating a human summary. For sentence extraction, we use the OCCAMS algorithm (Davis et al., in: Vreeken, Ling, Zaki, Siebes, Yu, Goethals, Webb, Wu (eds) ICDM workshops, IEEE Computer Society, pp 454–463, 2012), which takes the sentences from the original document and the term weights computed by the language models and outputs a set of minimally overlapping sentences whose combined term coverage is maximized. Finally, we explore the importance of an appropriate background model for the hypothesis test used to select terms, to achieve the best quality summaries.
      PubDate: 2018-09-01
      DOI: 10.1007/s00799-017-0218-6
      Issue No: Vol. 19, No. 2-3 (2018)
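The section-mixture idea can be sketched as interpolating per-section (and citance) unigram models into a single term-weight distribution, p(term) = Σ_s λ_s · p(term | section s); the paper's model works over bigrams, and the sections, tokens, and mixture weights below are hypothetical.

```python
from collections import Counter

def mixture_weights(sections, lambdas):
    """sections: {name: list of tokens}; lambdas: {name: mixture weight}.
    Returns p(term) = sum over sections s of lambda_s * p(term | s)."""
    weights = Counter()
    for name, tokens in sections.items():
        counts = Counter(tokens)
        total = sum(counts.values())
        for term, c in counts.items():
            weights[term] += lambdas[name] * c / total
    return weights

# Toy document: terms appearing in several weighted sections rise to the top.
sections = {
    "abstract": "summarization via section mixture models".split(),
    "method":   "mixture of bigram models weights terms".split(),
    "citances": "citing sentences describe the summarization method".split(),
}
lambdas = {"abstract": 0.4, "method": 0.4, "citances": 0.2}
w = mixture_weights(sections, lambdas)
print(w.most_common(3))
```

A sentence extractor like OCCAMS would then consume these term weights, choosing minimally overlapping sentences that maximize the total weight of the terms they cover.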
  • Heuristic and supervised approaches to handwritten annotation extraction
           for musical score images
    • Authors: Eamonn Bell; Laurent Pugin
      Abstract: Performers’ copies of musical scores are typically rich in handwritten annotations, which capture historical and institutional performance practices. The development of interactive interfaces to explore digital archives of these scores and the systematic investigation of their meaning and function will be facilitated by the automatic extraction of handwritten score annotations. We present several approaches to the extraction of handwritten annotations of arbitrary content from digitized images of musical scores. First, we show promising results in certain contexts when using simple unsupervised clustering techniques to identify handwritten annotations in conductors’ scores. Next, we compare annotated scores to unannotated copies and use a printed sheet music comparison tool, Aruspix, to recover handwritten annotations as additions to the clean copy. Using both of these techniques in a combined annotation pipeline qualitatively improves the recovery of handwritten annotations. Recent work has shown the effectiveness of reframing classical optical music recognition tasks as supervised machine learning classification tasks. In the same spirit, we pose the problem of handwritten annotation extraction as a supervised pixel classification task, where the feature space for the learning task is derived from the intensities of neighboring pixels. After an initial investment of time required to develop dependable training data, this approach can reliably extract annotations for entire volumes of score images without further supervision. These techniques are demonstrated using a sample of orchestral scores annotated by professional conductors of the New York Philharmonic Orchestra. Handwritten annotation extraction in musical scores has applications to the systematic investigation of score annotation practices by performers, to annotator attribution, and to the interactive presentation of annotated scores, which we briefly discuss.
      PubDate: 2018-07-11
      DOI: 10.1007/s00799-018-0249-7
  • Image libraries and their scholarly use in the field of art and
           architectural history
    • Authors: Sander Münster; Christina Kamposiori; Kristina Friedrichs; Cindy Kröber
      Abstract: The use of image libraries in the field of art and architectural history has been the subject of numerous research studies over the years. However, since previous investigations have focused, primarily, either on user behavior or reviewed repositories, our aim is to bring together both approaches. Against this background, this paper identifies the main characteristics of research and information behavior of art and architectural history scholars and students in the UK and Germany and presents a structured overview of currently available scholarly image libraries. Finally, the implications for a user-centered design of information resources and, in particular, image libraries are provided.
      PubDate: 2018-07-07
      DOI: 10.1007/s00799-018-0250-1
  • Open information extraction as an intermediate semantic structure for
           Persian text summarization
    • Authors: Mahmoud Rahat; Alireza Talebpour
      Abstract: Semantic applications typically exploit structures such as dependency parse trees, phrase chunking, semantic role labeling, or open information extraction. In this paper, we introduce a novel application of Open IE as an intermediate layer for text summarization. Text summarization is an important method for providing relevant information in large digital libraries. Open IE refers to the process of extracting machine-understandable structural propositions from text. We use these propositions as building blocks to shorten sentences and generate a summary of the text. The proposed system offers a new form of summarization that is able to break the structure of a sentence and extract its most significant sub-sentential elements. Other advantages include the ability to identify and eliminate less important sections of a sentence (such as adverbs, adjectives, appositions, or dependent clauses) or duplicated pieces of sentences, which in turn opens up space for entering more sentences into the summary, enhancing its coverage and coherence. The proposed system is localized for the Persian language; however, it can be adapted to other languages. Experiments performed on the standard data set “Pasokh” with a standard comparison tool showed promising results for the proposed approach. We used summaries produced by the system in a real-world application in the virtual library of Shahid Beheshti University and received good feedback from users.
      PubDate: 2018-06-28
      DOI: 10.1007/s00799-018-0244-z
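The proposition-based compression described above can be sketched with toy (subject, relation, object) triples: drop propositions whose arguments carry no content after filtering modifiers, then regenerate shortened sentences from the surviving triples. The extractor output and word lists below are naive stand-ins for a real Open IE system.

```python
def compress(propositions, modifiers):
    """Keep propositions whose object still carries content after
    modifier filtering, then regenerate short sentences from them."""
    kept = []
    for subj, rel, obj in propositions:
        content = [w for w in obj.split() if w not in modifiers]
        if content:  # drop triples that contributed only adverbs/adjectives
            kept.append(f"{subj} {rel} {' '.join(content)}")
    return ". ".join(kept) + "."

# Hypothetical Open IE output for a couple of source sentences.
propositions = [
    ("the system", "extracts", "machine-understandable propositions"),
    ("the system", "runs", "very quickly"),          # adverbial only: dropped
    ("the summary", "covers", "the main sentences"),
]
modifiers = {"very", "quickly", "the"}
print(compress(propositions, modifiers))
```

Dropping whole low-content propositions, rather than trimming words inside a fixed sentence, is what frees space in the length budget for additional sentences, which is the coverage benefit the abstract claims.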