Program: Electronic Library and Information Systems
  [SJR: 0.554]   [H-I: 14]   [285 followers]
   Hybrid journal (may contain Open Access articles)
   ISSN (Print): 0033-0337
   Published by Emerald  [312 journals]
  • De-duplicating a Large Crowd-sourced Catalogue of Bibliographic Records
    • Authors: Ilija Subasic, Nebojsa Gvozdenovic, Kris Jack
      First page: 138
      Abstract: Program, Volume 50, Issue 2, April 2016.
      Purpose: Describes a large-scale algorithm for generating a catalogue of scientific publication records (citations) from crowdsourced data, demonstrates how to learn an optimal combination of distance metrics for duplicate detection, and introduces a parallel duplicate clustering algorithm.
      Design/methodology/approach: The authors developed the algorithm and compared it with state-of-the-art systems tackling the same problem, using benchmark datasets (3k data points) to test its effectiveness and real-life data (>90 million records) to test its efficiency and scalability.
      Findings: Duplicate detection can be improved by an additional step the authors call duplicate clustering, and the efficiency of the map/reduce similarity calculation algorithm can be improved by introducing a sampling step. The system is comparable to state-of-the-art duplicate detection systems and can scale to hundreds of millions of data points.
      Research limitations/implications: Academic researchers can use this paper to understand some of the issues of transitivity in duplicate detection and its effects on digital catalogue generation.
      Practical implications: Industry practitioners can use this paper as a case study for building a large-scale, real-life catalogue generation system that deals with millions of records in a scalable and efficient way.
      Originality/value: In contrast to other similarity calculation algorithms developed for map/reduce frameworks, the paper presents a variant optimized for duplicate detection of bibliographic records, extending a previously proposed algorithm based on inverted index creation. It goes beyond duplicate detection to investigate how to group detected duplicates, developing distinct algorithms for duplicate detection and duplicate clustering and using the canopy clustering idea for multi-pass clustering. The work extends the state of the art by including the duplicate clustering step and demonstrating new strategies for speeding up map/reduce similarity calculations.
      Citation: Program
      PubDate: 2016-02-24T10:03:28Z
      DOI: 10.1108/PROG-02-2015-0021
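The inverted-index "blocking" idea this abstract builds on can be sketched briefly: rather than comparing all pairs of records, candidate pairs are generated only among records that share a token, and similarity is computed within those candidates. Below is a minimal sketch with invented record titles, an assumed token-level Jaccard similarity and an arbitrary threshold; it is not the authors' map/reduce implementation and omits the duplicate clustering step.

```python
# Candidate-pair generation via an inverted index (token -> record ids),
# followed by Jaccard similarity on the surviving candidate pairs only.
from collections import defaultdict
from itertools import combinations

def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def candidate_pairs(records):
    index = defaultdict(set)          # token -> ids of records containing it
    for rid, title in records.items():
        for token in title.split():
            index[token].add(rid)
    pairs = set()
    for ids in index.values():        # only records sharing a token pair up
        pairs.update(combinations(sorted(ids), 2))
    return pairs

def find_duplicates(records, threshold=0.7):
    return [(i, j) for i, j in sorted(candidate_pairs(records))
            if jaccard(records[i], records[j]) >= threshold]

records = {
    1: "de-duplicating a large crowdsourced catalogue of records",
    2: "deduplicating a large crowdsourced catalogue of records",
    3: "semantic wiki for technology forecast and monitoring",
}
print(find_duplicates(records))       # only the near-identical titles pair up
```

Record 3 shares no token with the other two, so it never enters a similarity computation at all, which is the efficiency gain the blocking step provides.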
       
  • Exploring the Relationship between Web Presence and Web Usability for
           Universities: A Case Study from Turkey
    • Authors: Serhat Peker, Seyma Kucukozer Cavdar, Kursat Cagiltay
      First page: 157
      Abstract: Program, Volume 50, Issue 2, April 2016.
      Purpose: The purpose of this paper is to statistically explore the relationship between web usability and web presence of universities. As a case study, five Turkish universities of different rankings, selected from the Webometrics rankings, were evaluated and compared.
      Design/methodology/approach: Two methods were employed for the usability evaluation of the selected university websites: user testing to measure user performance on selected tasks, and a questionnaire to assess user satisfaction with website use. Both methods were applied to pre-determined tasks for each university with the participation of 20 subjects. After the usability evaluation, the universities were ranked by usability results and the relationship between web usability and web presence was statistically investigated using Kendall’s rank correlation.
      Findings: Several common usability problems reported by previous studies were identified, and the results revealed that the selected Turkish university websites suffer from numerous usability problems. Further, a strong positive correlation (p<.05) between the usability of the university websites and their web presence was identified: participants showed higher success and satisfaction when performing tasks on university websites with a strong web presence.
      Practical implications: The correlation results suggest that universities can improve their web usability by attending to their web presence. Universities can estimate their web usability levels from their web presence rankings, and can raise their Webometrics rankings by improving the usability of their websites. Moreover, university web developers can design more usable and user-friendly websites by avoiding the usability and design problems identified through usability evaluation.
      Originality/value: Unlike prior research focusing on the usability of educational web pages, this study contributes to the growing literature by statistically exploring the relationship between web presence and web usability of universities. It is also one of the first attempts to evaluate and compare the usability of a set of university websites from Turkey.
      Citation: Program
      PubDate: 2016-02-24T10:03:29Z
      DOI: 10.1108/PROG-04-2014-0024
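Kendall's rank correlation, the statistic the study uses to relate usability rankings to web-presence rankings, counts concordant versus discordant rank pairs. A minimal sketch (tau-a, no tie handling) with invented ranks for five universities; the study's actual data are not reproduced here.

```python
# Kendall's tau-a: (concordant - discordant) / total pairs.
from itertools import combinations

def kendall_tau(x, y):
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1        # the two rankings agree on this pair
        elif s < 0:
            discordant += 1        # the two rankings disagree
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

web_presence_rank = [1, 2, 3, 4, 5]   # hypothetical Webometrics ranks
usability_rank    = [1, 3, 2, 4, 5]   # hypothetical usability-test ranks
print(kendall_tau(web_presence_rank, usability_rank))
```

With only one swapped pair, tau comes out at 0.8: a strong positive association of the kind the abstract reports.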
       
  • Adoption of DSpace by Indian Libraries
    • Authors: Surendran Cherukodan, Humayoon Kabir S
      First page: 175
      Abstract: Program, Volume 50, Issue 2, April 2016.
      Purpose: India occupies second position among the countries that have adopted the DSpace open source software for institutional repositories (IRs) and digital libraries (DLs). This paper examines the reasons for DSpace being preferred over other software, and inspects various instances of Indian web-based DSpace systems, including the nature and growth of collections, the geographical distribution of DSpace systems, and the types of institutions that adopt and maintain DSpace.
      Design/methodology/approach: Data were collected from the official DSpace website, Google searches and online discussion forums between July 2013 and January 2014. The 132 Indian DSpace links listed on the official DSpace website were examined, and further Google searches identified Indian IRs and DLs not listed there. A list of Indian DSpace websites prepared by the authors was sent to the online discussion forums for further updating, and an online directory of Indian DSpace websites that allowed comments was created using WordPress. The study is limited to web-based DSpace IRs and DLs in India.
      Findings: The majority of the Indian IRs and DLs listed on the official DSpace website are not visible on the Internet; many links lead to institutional websites, commercial websites or authors' personal pages. Of the 132 DSpace links, only 55 are visible on the Internet to search and use. Indian libraries prefer DSpace over Eprints, Fedora and Greenstone. Many institutions could not keep their DSpace systems continuously online; institutions with more funding maintain web-based systems. The size, economic conditions, literacy rate and number of universities in an Indian state do not influence the maintenance of digital systems. Academic institutions, including universities, technical institutions and colleges, lead in the adoption and maintenance of web-based DSpace in India. Private universities and colleges have adopted DSpace for IRs/DLs, while public libraries constitute a minimal percentage of web-based DSpace instances. Indian courts and Parliament have adopted DSpace, and Shodhganga, the Indian ETD project, runs on DSpace.
      Originality/value: This is the first paper examining the adoption of DSpace by Indian libraries with a focus on online visibility and collection strength. It can be used to understand the pattern of technology adoption by Indian libraries over the years.
      Citation: Program
      PubDate: 2016-02-24T10:03:23Z
      DOI: 10.1108/PROG-08-2014-0059
       
  • Proposed Metrics for Data Accessibility in the Context of Linked Open Data
    • Authors: Mahdi Zahedi Nooghabi, Akram Fathian Dastgerdi
      First page: 184
      Abstract: Program, Volume 50, Issue 2, April 2016.
      Purpose: One of the most important categories in Linked Open Data (LOD) quality models is “data accessibility”. The purpose of this paper is to propose metrics and indicators for assessing data accessibility in the LOD and semantic web context.
      Design/methodology/approach: The paper first reviews data quality and LOD quality models and the subcategories they propose for the data accessibility dimension. Then, based on the Goal Question Metric (GQM) approach, it specifies the project goals, main issues and questions. Finally, it proposes metrics for assessing data accessibility in the context of the semantic web.
      Findings: Based on the GQM approach, three main issues were determined for data accessibility: data availability, data performance, and data security policy. Four main questions related to these issues were created, and 27 metrics were proposed for measuring them.
      Originality/value: One of the main current challenges regarding data quality is the lack of agreement on widespread quality metrics and practical instruments for evaluating quality. Accessibility is an important aspect of data quality, yet little research has provided metrics and indicators for assessing it in the context of the semantic web. This research therefore considers the data accessibility dimension and proposes a comparatively comprehensive set of metrics.
      Citation: Program
      PubDate: 2016-02-24T10:03:18Z
      DOI: 10.1108/PROG-01-2015-0007
       
  • Selecting new product development team members with knowledge sharing
           approach: A fuzzy bi-objective optimization model
    • Authors: S. Mahdi Hosseini, Peyman Akhavan, Morteza Abbasi
      First page: 195
      Abstract: Program, Volume 50, Issue 2, April 2016.
      Purpose: This paper aims to provide a method for selecting new product development (NPD) project team members that maximizes the expertise level of team members while optimizing knowledge sharing in the organization.
      Design/methodology/approach: Knowledge sharing antecedents were determined according to the Motivation-Opportunity-Ability (MOA) framework. The problem of selecting appropriate project team members was then formulated as a bi-objective integer nonlinear programming model. Because of uncertainty in the evaluation of candidates, the fuzzy sets approach was used for modeling. To solve the problem, the nonlinear programming model was first converted to a linear model, and the fuzzy bi-objective linear programming problem was then solved using an approximate algorithm.
      Findings: Results of applying the proposed method to an Iranian ship-building company showed its effectiveness in selecting appropriate project team members.
      Practical implications: With the proposed approach, project managers can form effective project teams that, while increasing the probability of project success, facilitate the retention of knowledge acquired during the project lifecycle.
      Originality/value: This paper is the first to provide a method for selecting NPD project team members that, while selecting the candidates with the highest expertise, maximizes the sharing of knowledge among them.
      Citation: Program
      PubDate: 2016-02-24T10:03:24Z
      DOI: 10.1108/PROG-04-2015-0033
       
  • A Hybrid PSO with Naïve Bayes Classifier for Disengagement Detection
           in Online Learning
    • Authors: GopalaKrishnan T, P Sengottuvelan
      First page: 215
      Abstract: Program, Volume 50, Issue 2, April 2016.
      Purpose: The main aim of an e-learning system is to meet learners' requirements efficiently, taking their characteristics and the complexity of the material into consideration. Many attempts have been made to address learners' motivational states, mainly through design, and motivation can be assessed through analysis of log files: disengaged learners are first identified, and then examined against several motivational characteristics of learning.
      Design/methodology/approach: E-learning systems can be improved by detecting disengaged learners, which allows involvement to be personalized at the proper times. To this end, online learning is studied using educational data mining techniques, and various attributes for disengagement prediction are identified through log-file analysis. Although there have been several attempts to include motivation in e-learning systems, its influence on cognition is now acknowledged and taken into consideration.
      Findings: Classification is a predictive data mining technique that predicts data values using known results from other data. To identify disengaged learners, a Naïve Bayes classifier was combined with a PSO algorithm to classify the dataset, followed by an independent analysis of significant differences. However, Naïve Bayes is oversensitive to unnecessary or irrelevant attributes, and PSO suffers from local optima, so accuracy in predicting disengagement is reduced when features are correlated.
      Originality/value: The computational results indicate that using variables apparently unrelated to the class attribute tends to reduce the accuracy and reliability of a classification model. The hybrid PSO algorithm clearly tends to find smaller subsets of attributes than PSO with the Naïve Bayes classifier alone. The Naïve Bayes classifier combined with hybrid PSO feature selection proves to offer the best feature selection capability without degrading classification accuracy, and is further shown to be an effective method for mining large structural data in much less computation time.
      Citation: Program
      PubDate: 2016-02-24T10:03:19Z
      DOI: 10.1108/PROG-07-2015-0047
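The classification step described above rests on Naïve Bayes over features extracted from log files. Below is a toy Bernoulli Naïve Bayes with Laplace smoothing over invented binary log features; the feature names, data and labels are assumptions for illustration, and the paper's PSO-based feature selection stage is not reproduced.

```python
# Bernoulli Naive Bayes from scratch: per-class priors plus
# Laplace-smoothed per-feature probabilities, scored in log space.
from math import log

def train_nb(X, y):
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        # probability of each binary feature being 1 in class c (Laplace +1)
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(len(X[0]))]
        model[c] = (prior, probs)
    return model

def predict(model, x):
    def score(c):
        prior, probs = model[c]
        return log(prior) + sum(
            log(p) if xi else log(1 - p) for xi, p in zip(x, probs))
    return max(model, key=score)

# invented features: [few_page_views, long_idle_time, skipped_quiz]
X = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [0, 1, 0], [0, 0, 1]]
y = ["disengaged", "disengaged", "engaged", "engaged", "engaged"]
model = train_nb(X, y)
print(predict(model, [1, 1, 1]))
```

A learner whose log shows all three warning signs scores highest under the "disengaged" class, which is the kind of prediction the system then uses to time its interventions.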
       
  • Using a Semantic Wiki for Technology Forecast and Technology Monitoring
    • First page: 225
      Abstract: Program, Volume 50, Issue 2, April 2016.
      Purpose: This paper presents extensions of Semantic MediaWiki for the purpose of technology forecast and technology monitoring. The user-friendliness and applicability of the components are evaluated through task-based user studies.
      Design/methodology/approach: Based on requirements given by potential end users (technology experts), visualization possibilities were designed and implemented. Potential users exercised the new technology forecast and monitoring features within a semantic wiki and were observed for effectiveness.
      Findings: Although semantic wikis are ideal tools for knowledge management in industry settings, especially because of their user-friendly way of storing and retrieving knowledge, they have rarely been used for technology forecast and monitoring so far. The paper shows that the additional requirements for such purposes can be met, providing established technology analysis possibilities within Semantic MediaWiki and thereby introducing a new application area for it.
      Originality/value: Tools and techniques for Semantic MediaWiki are presented, opening its application to technology and innovation management. The research provides evidence that the open-source visualization and storage techniques can be applied in real-world settings where, so far, costly dedicated software has mainly had to be used.
      Citation: Program
      PubDate: 2016-02-24T10:03:20Z
      DOI: 10.1108/PROG-06-2015-0043
       
  • The Information Resources Utilization Index: a Case Study in China
    • Authors: Chaolemen Borjigin, Huiling Feng, Bin Zhang
      First page: 2
      Abstract: Program, Volume 50, Issue 1, February 2016.
      Purpose: This paper introduces a novel method for measuring the utilization of information resources, providing a complementary index to existing information development indices and revealing the links between the use of information resources and ICT readiness.
      Design/methodology/approach: The research employs three types of methodology: a literature study to define the term "information resources" and to find the common features of the relevant indices; composite-indicator construction methods to develop a theoretical framework, select variables, impute missing data, normalize data, and weight and aggregate the novel index; and a case study to provide a typical application of the index and to reveal the underlying links between the use of information resources and ICT readiness.
      Findings: The paper proposes, for the first time, a method to measure the utilization of information resources from a Chinese perspective, providing its theoretical foundations, conceptual framework, main steps and crucial techniques. Correlations between the use of information resources and ICT readiness in China between 2009 and 2011 are also described.
      Practical implications: Measuring the utilization of information resources provides authorities with an alternative tool to monitor a country's evolution toward an information society. The novel index can also serve as a method to identify gaps among regions in deploying their information resources.
      Originality/value: This is the first paper to introduce a new measure for the utilization of information resources, and the first to reveal links between the use of information resources and ICT readiness, as well as GDP, in China.
      Citation: Program
      PubDate: 2015-11-13T09:19:42Z
      DOI: 10.1108/PROG-07-2013-0041
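The construction steps listed under Design/methodology/approach (normalization, weighting, aggregation) follow the standard composite-indicator pipeline, which can be sketched as below. The indicator names, values and weights are invented for illustration and do not come from the paper.

```python
# Generic composite-indicator pipeline: min-max normalize each indicator
# to [0, 1], then aggregate with a weighted sum per region.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators, weights):
    # indicators: {name: [value per region]}, weights: {name: weight}
    normalized = {k: min_max_normalize(v) for k, v in indicators.items()}
    n_regions = len(next(iter(indicators.values())))
    return [sum(weights[k] * normalized[k][i] for k in indicators)
            for i in range(n_regions)]

indicators = {
    "web_pages_per_capita": [10, 40, 70],   # hypothetical regional values
    "database_usage":       [5, 25, 45],
}
weights = {"web_pages_per_capita": 0.5, "database_usage": 0.5}
print(composite_index(indicators, weights))
```

Normalization puts indicators measured in different units on a common scale, so the weights (rather than the raw magnitudes) determine each indicator's contribution to the index.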
       
  • Unifying Heterogeneous and Distributed Information about Marine Species
           through the Top Level Ontology MarineTLO
    • Authors: Yannis Tzitzikas, Carlo Allocca, Chryssoula Bekiari, Yannis Marketakis, Pavlos Fafalios, Martin Doerr, Nikos Minadakis, Theodore Patkos, Leonardo Candela
      First page: 16
      Abstract: Program, Volume 50, Issue 1, February 2016.
      Purpose: Marine species data are scattered across a series of heterogeneous repositories and information systems; no single repository can claim to hold all marine species data, and the information is made available through different formats and protocols. This research aims to provide models and methods for integrating such information for publishing, browsing or querying. To provide a valid and reliable knowledge ground for the semantic interoperability of marine species data, the paper motivates a top-level ontology, called MarineTLO, and discusses its use for creating MarineTLO-based warehouses.
      Design/methodology/approach: The paper introduces a set of motivating scenarios that highlight the need for a top-level ontology, then describes the main data sources (FLOD, ECOSCOPE, WoRMS, FishBase, and DBpedia) used as a basis for constructing MarineTLO.
      Findings: The paper discusses the exploitation of MarineTLO for the construction of a warehouse, and reports a series of uses of the MarineTLO-based warehouse.
      Originality/value: The paper describes the design of a top-level ontology for the marine domain able to satisfy the need for maintaining integrated sets of facts about marine species, thereby assisting ongoing biodiversity research. Apart from the ontology, it also elaborates on the mappings required for building integrated warehouses.
      Citation: Program
      PubDate: 2015-11-13T09:19:40Z
      DOI: 10.1108/PROG-10-2014-0072
       
  • Exploring the dynamic contribution behavior of editors in wikis based on
           time series analysis
    • Authors: Linghe Huang, Qinghua Zhu, Jia Tina Du, Baozhen Lee
      First page: 41
      Abstract: Program, Volume 50, Issue 1, February 2016.
      Purpose: The wiki is a form of information production and organization that has become one of the most important knowledge resources. In recent years, as the number of wiki users has grown, the “free-rider problem” has become serious. To motivate editors to contribute more to a wiki system, it is important to fully understand their contribution behavior. The purpose of this paper is to explore the dynamics of editors' contribution behavior in wikis.
      Design/methodology/approach: After developing a dynamic model of contribution behavior, the authors employed both metrological and clustering methods to process the time series data. The experimental data were collected from Baidu Baike, a renowned Chinese wiki system similar to Wikipedia.
      Findings: There are four categories of editors: “testers”, “dropouts”, “delayers” and “stickers”. Testers contribute the least content and stop contributing rapidly after editing a few articles. Dropouts stop contributing completely after editing a large amount of content. Delayers do not stop contributing during the observation period, but may stop in the near future. Stickers, who keep contributing and edit the most content, are the core editors. In addition, there are significant time-of-day and holiday effects on the number of editors' contributions.
      Originality/value: Using time series analysis, new characteristics of editors and editor types were found, and the study used a larger sample than previous research. The results are therefore more representative, and can help managers better optimize wiki systems and formulate incentive strategies for editors.
      Citation: Program
      PubDate: 2015-11-13T09:19:42Z
      DOI: 10.1108/PROG-06-2013-0034
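As a rough illustration of the four editor types, per-editor contribution time series can be separated by total volume and recent activity. The thresholds and series below are invented, and the paper derives its categories by clustering time series rather than by fixed rules like these.

```python
# Rule-of-thumb separation of editors by total contribution volume and
# whether they were active in the most recent months of the observation.
def classify_editor(monthly_edits, low=5, idle=0):
    total = sum(monthly_edits)
    recent = monthly_edits[-3:]                 # last three observed months
    still_active = any(m > idle for m in recent)
    if total <= low and not still_active:
        return "tester"      # few edits, stopped quickly
    if not still_active:
        return "dropout"     # substantial edits, then stopped completely
    if total <= low * 4:
        return "delayer"     # still active but contributing little
    return "sticker"         # sustained, high-volume contributor

series = {
    "A": [3, 1, 0, 0, 0, 0],
    "B": [20, 15, 10, 0, 0, 0],
    "C": [2, 3, 1, 2, 1, 2],
    "D": [10, 12, 9, 14, 11, 13],
}
print({name: classify_editor(s) for name, s in series.items()})
```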
       
  • EM3B2—A Semantic Integration Engine for Materials Science
    • Authors: Chongchong Zhao, Chao Dong, Xiaoming Zhang
      First page: 58
      Abstract: Program, Volume 50, Issue 1, February 2016.
      Purpose: The integration and retrieval of vast amounts of data have attracted considerable attention, and the W3C working group released R2RML to standardize the transformation from relational data to semantic-aware data. However, R2RML only provides a mechanism for transforming data to RDF; generating the mapping alignments still requires manual work or other algorithms. The purpose of this paper is therefore to propose a domain-oriented automatic mapping method and an application of the R2RML standard.
      Design/methodology/approach: Materials science is used as an example of domain-oriented mapping. SFC (Source Field Concept) and M3B2 (Metal Materials Mapping Background Base) knowledge bases are established to support the auto-recommending algorithm. For the generation of RDF files, the triples and the links are generated separately: links between the triples follow the object-subject relationship, and links for the object properties are obtained from the range individuals and the trail path.
      Findings: Building on the authors' previous work, the paper proposes EM3B2, a semantic integration engine for materials science. EM3B2 offers friendly graphical interfaces and provides auto-recommended mappings based on materials knowledge, sparing users extensive manual work. The experimental results indicate that EM3B2 produces accurate mappings, and its running time is competitive with classical methods.
      Originality/value: The paper proposes the EM3B2 semantic integration engine, which contributes to RDB-to-RDF mapping through the application of the W3C R2RML standard and domain-oriented mapping.
      Citation: Program
      PubDate: 2015-11-13T09:19:45Z
      DOI: 10.1108/PROG-01-2015-0004
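The RDB-to-RDF transformation that R2RML standardizes can be illustrated in miniature: each table row becomes a subject URI, and each column value becomes an object triple. The table, URI templates and property names below are invented, and this omits the mapping recommendation that EM3B2 itself contributes.

```python
# Row-to-triples sketch of the RDB-to-RDF idea behind R2RML:
# subject from the row key, one predicate/object per remaining column.
rows = [
    {"id": 1, "name": "AISI 304", "density_g_cm3": 7.9},
    {"id": 2, "name": "Ti-6Al-4V", "density_g_cm3": 4.43},
]

def row_to_triples(row, base="http://example.org/material/"):
    subject = f"<{base}{row['id']}>"
    return [(subject, f"<{base}prop/{col}>", repr(val))
            for col, val in row.items() if col != "id"]

triples = [t for row in rows for t in row_to_triples(row)]
for s, p, o in triples:
    print(s, p, o, ".")
```

In a real R2RML mapping the URI templates and predicates come from the mapping document; the point of EM3B2's auto-recommendation is precisely to propose those column-to-property alignments instead of writing them by hand.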
       
  • Performance and Accuracy Analysis of Semantic Kernel Functions
    • Authors: Manoj Manuja, Deepak Garg
      First page: 83
      Abstract: Program, Volume 50, Issue 1, February 2016.
      Purpose: Syntax-based text classification (TC) mechanisms have increasingly been replaced by semantics-based systems in recent years. Semantics-based TC systems are particularly useful where similarity among documents is computed from the semantic relationships among their terms. Kernel functions have received major attention because of the unprecedented popularity of SVMs in the field of TC. Most kernel functions exploit syntactic structures of the text, but quite a few also use a-priori semantic information for knowledge extraction. The purpose of this research is to investigate semantic kernel functions in the context of TC.
      Design/methodology/approach: The work presents a performance and accuracy analysis of seven semantic kernel functions (semantic smoothing kernel, latent semantic kernel, semantic WordNet-based kernel, semantic smoothing kernel with implicit superconcept expansions, CoBD kernel, Omiotis-based S-VSM semantic kernel, and Top-k S-VSM semantic kernel) implemented with SVM as the kernel method, using the SVM-Light tool.
      Findings: The performance and accuracy of the seven semantic kernel functions were evaluated and compared. The experimental results show that the Top-k S-VSM semantic kernel has the highest performance and accuracy among the evaluated kernel functions, which makes it a preferred building block for kernel methods for text classification and retrieval.
      Research limitations/implications: Combining semantic kernel functions with syntactic kernel functions remains to be investigated, as there is scope for further improvement in the accuracy and performance of all seven semantic kernel functions.
      Practical implications: This research provides insight into text classification using a-priori semantic knowledge. Three commonly used datasets are exploited; it would be interesting to explore these kernel functions on live web data, which may test their actual utility in real business scenarios.
      Originality/value: The comparison of performance and accuracy parameters is the novel point of this research. To the best of the authors' knowledge, this type of comparison has not been done previously.
      Citation: Program
      PubDate: 2015-11-13T09:19:43Z
      DOI: 10.1108/PROG-04-2014-0028
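The common idea behind several of the kernels compared here (e.g. semantic smoothing) is to score documents through a term-similarity matrix S, so that K(d1, d2) = d1·S·d2 rather than the plain dot product of term vectors. A tiny illustration with an invented three-word vocabulary and hand-set similarities; real kernels derive S from WordNet, latent semantics or co-occurrence data.

```python
# Semantic kernel sketch: documents that share no terms can still score
# high if their terms are semantically related in S.
vocab = ["car", "automobile", "banana"]
S = [                       # S[i][j]: similarity of vocab[i] and vocab[j]
    [1.0, 0.9, 0.0],        # car ~ automobile
    [0.9, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

def semantic_kernel(d1, d2):
    return sum(d1[i] * S[i][j] * d2[j]
               for i in range(len(vocab)) for j in range(len(vocab)))

doc_car = [1, 0, 0]         # mentions "car" only
doc_auto = [0, 1, 0]        # mentions "automobile" only
plain_dot = sum(a * b for a, b in zip(doc_car, doc_auto))
print(plain_dot, semantic_kernel(doc_car, doc_auto))
```

The plain dot product scores the two documents 0 because they share no terms, while the semantic kernel scores them 0.9 because "car" and "automobile" are related in S, which is exactly the advantage semantics-based TC systems exploit.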
       
  • COMAT: Core Ontology of Matter
    • Authors: Devika Madalli, Anila Sulochana, Abhinav Kumar Singh
      First page: 103
      Abstract: Program, Volume 50, Issue 1, February 2016.
      Purpose: Matter has been an important topic of science since the discipline's inception, and with the evolution of the semantic web it has gained equal importance among ontology developers. This work describes an ontology of matter developed in the authors' lab. The main purpose is to produce an exhaustive list of concepts and relations covering the matter domain under one umbrella, after identifying the gaps in existing ontologies.
      Design/methodology/approach: The ontology was developed following the faceted analytico-synthetic approach to knowledge organization, using a hybrid development strategy that combines top-down and bottom-up approaches for creating classes and subclasses. The matter domain was modeled comprehensively, considering its different aspects, and the theories behind the modeling approach help maintain the consistency of further extensions.
      Findings: The final ontology has around 280 concepts and as many as 60 properties, including both object properties and datatype properties.
      Research limitations/implications: Definitions of concepts are often vague and vary across subject areas, since matter is a domain of study in physics, chemistry, materials science, metallurgy and other fields, and the same material is treated differently depending on the purpose of its study or use in each field. For example, aspirin is simply a chemical compound in chemistry, whereas in medicine it is also an agent.
      Practical implications: The present work aims to encourage ontology engineers to develop extensions to this core ontology of matter. It will also find use in information retrieval, semantic annotation and other semantic knowledge-based systems.
      Originality/value: The core ontology of matter is the most recent work in the domain. Its originality lies in the wide perspective from which the matter domain has been examined, as well as in the approach to modeling the domain.
      Citation: Program
      PubDate: 2015-11-13T09:19:44Z
      DOI: 10.1108/PROG-01-2015-0002
       
  • Analyzing the complementarity of web infrastructure and eInnovation for
           business value generation
    • Authors: Simona Popa, Pedro Soto-Acosta, Euripidis Loukis
      First page: 118
      Abstract: Program, Volume 50, Issue 1, February 2016.
      Purpose: In recent years there has been much debate about the value generated by firms' investments in information technology (IT). Although the literature suggests that technology itself rarely creates superiority, Web infrastructure can be critical for knowledge sharing and the formation of virtual teams to execute innovation processes, which in turn may enhance e-Innovation and business value. Building on these antecedents, this study explores whether and how Web infrastructure and e-Innovation can create business value by complementing each other.
      Design/methodology/approach: Based on the Resource-Based View (RBV) of the firm, the paper develops a conceptual model to assess the effects of Web infrastructure and e-Innovation on business value, as well as the complementarity between these resources. To test the associated hypotheses, a regression model was estimated and tested on a large sample of Spanish firms from different industries.
      Findings: The results show that Web infrastructure is not positively related to business value, whereas e-Innovation has a positive impact on it. However, support for complementarity between Web infrastructure and e-Innovation was not found.
      Originality/value: The study tests the RBV logic that not all IT resources are a source of competitive advantage. In the same vein, it shows that e-Innovation, because it requires combining IT infrastructure with other unique intangible resources, is much more difficult to imitate, leading to competitive advantages.
      Citation: Program
      PubDate: 2015-11-13T09:19:39Z
      DOI: 10.1108/PROG-09-2015-0065
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
 

JournalTOCs © 2009-2015