International Journal of Web Information Systems
  Hybrid journal (it can contain Open Access articles)
ISSN (Print) 1744-0084
This journal is no longer being updated because:
    RSS feed has been removed by the publisher
  • Review of short-text classification
    • Pages: 155 - 182
      Abstract: International Journal of Web Information Systems, Volume 15, Issue 2, Page 155-182, June 2019.
      Purpose: Rapid developments in social networks and their use in everyday life have caused an explosion in the number of short electronic documents. Classifying these documents into relevant classes according to their textual content is therefore important for many practical reasons. Short-text classification is an essential step in many applications, such as spam filtering, sentiment analysis, Twitter personalization and customer review analysis. Reviews of short text and its applications are limited. Thus, this paper aims to discuss the characteristics of short text and the challenges and difficulties it poses for classification. The paper introduces all stages of a typical classification pipeline, the techniques used in each stage and the possible development trends in each stage.
      Design/methodology/approach: The paper is a review of the main aspects of short-text classification, structured according to the stages of the classification task.
      Findings: This paper discusses related issues and approaches to these problems. Further research could be conducted to address the challenges of short texts and avoid poor classification accuracy. Low performance can be addressed with optimized solutions, such as genetic algorithms, which are powerful in enhancing the quality of selected features. Soft-computing solutions such as fuzzy logic also make short-text classification a promising area of research.
      Originality/value: Using a powerful short-text classification method significantly affects many applications in terms of efficiency. Current solutions still show low performance, implying the need for improvement. This paper discusses related issues and approaches to these problems.
      Citation: International Journal of Web Information Systems
      PubDate: 2019-01-21T01:52:26Z
      DOI: 10.1108/IJWIS-12-2017-0083
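The review above surveys the stages of a short-text classification pipeline. As a minimal, stdlib-only illustration of the classification stage (not the survey's own method), a bag-of-words Naive Bayes classifier with add-one smoothing can be sketched as follows; the toy training texts and labels are invented for the example:

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label). Returns (log-priors, per-class word counts, vocab)."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        class_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    total = sum(class_counts.values())
    priors = {c: math.log(n / total) for c, n in class_counts.items()}
    return priors, word_counts, vocab

def classify(text, priors, word_counts, vocab):
    """Score each class with log P(c) + sum of log P(w|c), add-one smoothing."""
    scores = {}
    for c in priors:
        denom = sum(word_counts[c].values()) + len(vocab)
        s = priors[c]
        for w in text.lower().split():
            s += math.log((word_counts[c][w] + 1) / denom)
        scores[c] = s
    return max(scores, key=scores.get)
```

In practice the feature-selection and preprocessing stages discussed in the review matter at least as much as the classifier itself for short, sparse texts.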
       
  • From monitoring of seismic fields to the automatic forecasting of earthquakes
    • Abstract: International Journal of Web Information Systems, Ahead of Print.
      Purpose: The purpose of this paper is to offer two Web-based platforms for the systematic analysis of seismic processes. Both platforms are designed to analyze and forecast the state of the environment and, in particular, the level of seismic hazard. The first platform analyzes the fields representing the properties of the seismic process; the second forecasts strong earthquakes. Earthquake forecasting is based on a new one-class classification method.
      Design/methodology/approach: The paper suggests an approach to the systematic forecasting of earthquakes and examines the results of tests. The approach is based on a new machine learning method, called the method of the minimum area of alarm. The method constructs a forecast rule that optimizes the probability of detecting target earthquakes in a learning sample, provided that the area of the alarm zone does not exceed a predetermined limit.
      Findings: The paper presents two platforms alongside the method of analysis. It was shown that these platforms can be used for the systematic analysis of seismic processes. Testing of the earthquake forecasting method in several regions showed that the method of the minimum area of alarm has satisfactory forecast quality.
      Originality/value: The described technology has two advantages: simplicity of configuration for a new problem area, and a combination of easy interactive analysis (with intuitive operations and a simplified user interface) and the detailed, comprehensive analysis of spatio-temporal processes intended for specialists. The method of the minimum area of alarm solves the problem of one-class classification. The method is original: it trains on precedents of anomalous objects and statistically takes normal objects into account.
      Citation: International Journal of Web Information Systems
      PubDate: 2019-06-25T02:12:18Z
      DOI: 10.1108/IJWIS-12-2018-0087
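The forecast rule described above trades detection probability against a bounded alarm-zone area. A schematic, stdlib-only sketch of that idea (not the actual method of the minimum area of alarm, which is more involved) ranks grid cells by an anomaly score and grows the alarm zone greedily until an area budget is exhausted; cell names and scores are invented:

```python
def alarm_zone(cell_scores, max_cells):
    """Greedily pick the highest-scoring grid cells until the area budget is spent."""
    ranked = sorted(cell_scores, key=cell_scores.get, reverse=True)
    return set(ranked[:max_cells])

def detection_probability(zone, target_cells):
    """Fraction of target earthquakes whose cell lies inside the alarm zone."""
    return sum(1 for c in target_cells if c in zone) / len(target_cells)
```

Raising `max_cells` increases the detection probability but enlarges the alarm zone, which is exactly the constraint the method optimizes against.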
       
  • A machine learning-based methodology to predict learners’ dropout, success or failure in MOOCs
    • Abstract: International Journal of Web Information Systems, Ahead of Print.
      Purpose: Even though MOOCs (massive open online courses) are becoming a trend in distance learning, they suffer from a very high rate of learner dropout; on average, only 10 per cent of enrolled learners manage to obtain their certificates of achievement. This paper aims to give tutors a clearer vision for effective and personalized intervention as a solution to “retain” each type of learner at risk of dropping out.
      Design/methodology/approach: This paper presents a methodology to predict learners’ behaviors. The work, which uses a Stanford data set, was divided into several phases: data extraction, an exploratory study and a multivariate analysis to reduce dimensionality and extract the most relevant features. The next step was a comparison of five machine learning algorithms. Finally, the authors used association rules to extract similarities between the behaviors of learners who dropped out of the MOOC.
      Findings: The results show that deep learning ensures the best predictions in terms of accuracy, averaging 95.8 per cent, with comparable results on other measures such as precision, AUC, recall and F1 score.
      Originality/value: Many research studies have tried to tackle the MOOC dropout problem by proposing different dropout prediction models. The present proposal follows in the same context, but the authors predict not only learners at risk of dropping out of a MOOC but also those who will succeed or fail.
      Citation: International Journal of Web Information Systems
      PubDate: 2019-06-21T07:02:58Z
      DOI: 10.1108/IJWIS-11-2018-0080
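The final phase above mines association rules over the behaviors of learners who dropped out. A minimal support/confidence miner for single-item rules can be sketched with the standard library; the behavior labels below are invented for illustration:

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Mine single-item rules lhs -> rhs meeting support and confidence thresholds.
    transactions: list of sets of behavior labels."""
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for t in transactions:
        items = set(t)
        for i in items:
            item_counts[i] += 1
        for a, b in combinations(sorted(items), 2):
            pair_counts[(a, b)] += 1
    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n
        if support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            confidence = c / item_counts[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules
```

A rule such as `("dropout", "no_video_views")` would then read: learners who dropped out also tended to stop watching videos, with the stated support and confidence.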
       
  • On verifying the authenticity of e-commercial crawling data by a semi-crosschecking method
    • Abstract: International Journal of Web Information Systems, Ahead of Print.
      Purpose: Data crawling in e-commerce for market research often comes with the risk of poor authenticity due to modification attacks. The purpose of this paper is to propose a novel data authentication model for such systems.
      Design/methodology/approach: The data modification problem requires careful examination in which the data are re-collected to verify their reliability by overlapping the two data sets. The approach uses different anomaly detection techniques to determine which data are potentially fraudulent and should be re-collected. The paper also proposes a data selection model that uses weights of importance in addition to anomaly detection. The target is to significantly reduce the amount of data in need of verification while still guaranteeing high authenticity. Empirical experiments are conducted with real-world data sets to evaluate the efficiency of the proposed scheme.
      Findings: The authors examine several techniques for detecting anomalies in user and product data, which achieve an accuracy of approximately 80 per cent. The integration with the weight selection model also proves able to detect more than 80 per cent of existing fraudulent records while being careful not to accidentally include genuine ones, especially when the proportion of frauds is high.
      Originality/value: With the rapid development of e-commerce, fraud detection on e-commerce data and in Web crawling systems is a new and necessary research area. This paper contributes a novel approach to the crawling-system data authentication problem, which has not been studied much.
      Citation: International Journal of Web Information Systems
      PubDate: 2019-06-03T08:36:25Z
      DOI: 10.1108/IJWIS-10-2018-0075
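The selection model above combines an anomaly score with an importance weight to decide which crawled records are worth re-collecting. A stdlib-only sketch of that idea (the field choice and scoring are invented, not the paper's model) ranks records by absolute z-score times weight and returns the top candidates within a re-crawl budget:

```python
import statistics

def recheck_candidates(prices, weights, budget):
    """Rank records by |z-score| * importance weight and return the indices of
    the top `budget` records to re-crawl for verification."""
    mu = statistics.mean(prices)
    sigma = statistics.pstdev(prices) or 1.0  # guard against zero variance
    scored = [(abs((p - mu) / sigma) * w, i)
              for i, (p, w) in enumerate(zip(prices, weights))]
    scored.sort(reverse=True)
    return [i for _, i in scored[:budget]]
```

Only the flagged records need to be crawled a second time and crosschecked, which is the source of the verification savings the paper targets.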
       
  • Augmented reality-based approach for interactivity in MOOCs
    • Pages: 134 - 154
      Abstract: International Journal of Web Information Systems, Volume 15, Issue 2, Page 134-154, June 2019.
      Purpose: MOOCs offer learning content to participants freely, anywhere and anytime. However, they suffer from several unsolved problems, such as a high dropout percentage, a low completion rate and an uncontrollable understanding level among participants, which can be caused by the lack of practical activities and simulations. This article aims to propose a solution that integrates virtual manipulations into MOOCs.
      Design/methodology/approach: This paper proposes the integration of virtual manipulations (simulations and practical activities) relying on augmented reality. To manipulate the 3D objects used, two methods are proposed, based on markers or on hand gestures. Customized markers, easily recognized by users, are used to visualize the objects and support interaction. Hand gestures allow manipulations to be performed easily; accordingly, hand detection and gesture classification using hand contour detection and an HSV filter are applied.
      Findings: Two pedagogically similar MOOCs were created to evaluate the effectiveness of the proposed solution; the only difference is that the second MOOC contains virtual manipulations that participants can perform to understand better and interact during the courses. The results show that participants’ understanding and satisfaction levels in the second MOOC were higher, and the dropout rate lower, than in the first.
      Originality/value: The integration of practical activities/simulations into MOOCs using augmented reality is the key novelty of this work. Two manipulation methods are proposed, so the instructor is free to choose the method best suited to the manipulations at hand.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-11-19T12:21:57Z
      DOI: 10.1108/IJWIS-04-2018-0033
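The gesture pipeline above relies on an HSV filter to isolate the hand before contour detection. A rough, stdlib-only sketch of HSV skin thresholding using `colorsys` is shown below; the threshold values are illustrative assumptions, not the paper's calibration, and a real system would operate on camera frames rather than pixel lists:

```python
import colorsys

def skin_mask(pixels, h_max=0.14, s_min=0.2, v_min=0.35):
    """Return a boolean mask over (r, g, b) pixels (0-255), True where the pixel
    falls inside a rough skin-tone region of HSV space."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(h <= h_max and s >= s_min and v >= v_min)
    return mask
```

The resulting binary mask is what a contour-detection step would then trace to classify the hand gesture.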
       
  • Analyzing street crimes in Kobe city using PRISM
    • Pages: 183 - 200
      Abstract: International Journal of Web Information Systems, Volume 15, Issue 2, Page 183-200, June 2019.
      Purpose: In previous research, the authors proposed a security information service, called Personalized Real-time Information with Security Map (PRISM), which personalizes incident information based on the living area of individual users. The purpose of this paper is to extend PRISM to conduct a more sophisticated analysis of street crimes. The extended features make it possible to look back on past incident information and perform statistical analysis.
      Design/methodology/approach: To analyze street crimes around a living area in more detail, the authors add three new features to PRISM: showing a past heat map, showing a heat map focused on a specified type of incident and showing statistics of incidents for every type. Using these features, the authors visualize the dynamic transition of street crimes in a specific area and across the whole of Kobe city. They also compare different districts by street-crime statistics.
      Findings: Dynamic visualization clarifies when, where and what kind of incident occurs frequently. Most incidents occurred along three train lines in Kobe city. Wild boars are witnessed only in a certain region. The statistics show that the characteristics of street crimes differ completely depending on the living area.
      Originality/value: Many previous studies have been conducted to clarify factors relevant to street crimes. However, they mainly focus on regions of interest as a whole and do not consider an individual’s living area. In this paper, the authors analyze street crimes according to users’ living areas using the personalized security information service PRISM.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-12-11T10:19:24Z
      DOI: 10.1108/IJWIS-04-2018-0032
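The heat-map features above amount to bucketing incidents into grid cells, optionally filtered by incident type and time period. A stdlib sketch of that aggregation is shown below; the incident tuple layout `(lat, lon, type, month)` is an assumption for illustration, not PRISM's actual schema:

```python
from collections import Counter

def heat_map(incidents, cell=0.01, incident_type=None, months=None):
    """Count incidents per (lat, lon) grid cell of side `cell` degrees,
    optionally filtered by incident type and a set of months."""
    counts = Counter()
    for lat, lon, itype, month in incidents:
        if incident_type is not None and itype != incident_type:
            continue
        if months is not None and month not in months:
            continue
        counts[(int(lat / cell), int(lon / cell))] += 1
    return counts
```

Rendering the resulting cell counts over a map for successive periods gives the "past heat map" view, and filtering by type gives the focused heat map.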
       
  • Enhanced prediction of vulnerable Web components using Stochastic Gradient Boosting Trees
    • Pages: 201 - 214
      Abstract: International Journal of Web Information Systems, Volume 15, Issue 2, Page 201-214, June 2019.
      Purpose: Effective and efficient software security inspection is crucial, as the existence of vulnerabilities represents severe risks to software users. The purpose of this paper is to empirically evaluate the potential application of Stochastic Gradient Boosting Trees (SGBT) as a novel model for enhanced prediction of vulnerable Web components compared with common, popular and recent machine learning models.
      Design/methodology/approach: An empirical study was conducted in which SGBT and 16 other prediction models were trained, optimized and cross-validated using vulnerability data sets from multiple versions of two open-source Web applications written in PHP. The prediction performance of these models has been evaluated and compared based on accuracy, precision, recall and F-measure.
      Findings: The results indicate that the SGBT models offer improved prediction over the other 16 models and are thus more effective and reliable in predicting vulnerable Web components.
      Originality/value: This paper proposed a novel application of SGBT for enhanced prediction of vulnerable Web components and showed its effectiveness.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-11-21T04:47:21Z
      DOI: 10.1108/IJWIS-05-2018-0041
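The models above are compared on accuracy, precision, recall and F-measure. As a small stdlib helper (not the paper's evaluation code), these metrics can be computed from binary labels, with 1 marking a vulnerable component:

```python
def evaluate(y_true, y_pred):
    """Accuracy, precision, recall and F-measure for binary labels (1 = vulnerable)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_measure
```

Because vulnerable components are typically a small minority, F-measure is usually more informative than accuracy alone in this setting.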
       
  • A hybrid model for exploring the antecedents of cloud ERP continuance
    • Pages: 215 - 235
      Abstract: International Journal of Web Information Systems, Volume 15, Issue 2, Page 215-235, June 2019.
      Purpose: This study aims to propose a hybrid model based on the expectation–confirmation model (ECM), the DeLone and McLean IS success model and the task-technology fit (TTF) model to examine whether quality factors and TTF, as antecedents to user beliefs, can affect organizational users’ continuance intention toward cloud enterprise resource planning (ERP).
      Design/methodology/approach: Sample data for this study were collected from end-users of cloud ERP working in companies in Taiwan. A total of 370 questionnaires were distributed in the 37 sample companies, and 315 (85.1 per cent) usable questionnaires were analyzed using structural equation modeling.
      Findings: The findings indicate that users’ perceptions of system quality, information quality and TTF all contributed significantly to their satisfaction, confirmation and perceived usefulness (PU) of cloud ERP, which in turn directly or indirectly led to their continuance intention; that is, the results strongly supported the hybrid model integrating the ECM, the DeLone and McLean IS success model and the TTF model, with all hypothesized links being significant.
      Originality/value: Empirically, this study’s research model capturing the ECM, the DeLone and McLean IS success model and the TTF model sheds significant light on the possible formulation of a richer post-adoption model. Notably, considerably more attention should be paid to the fit between information-related characteristics and cloud ERP users’ work goals and needs, as TTF, based on both system-related and information-related technology characteristics, significantly contributes to users’ satisfaction, confirmation and PU, which together directly or indirectly explain their intention to continue using cloud ERP.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-11-27T08:39:20Z
      DOI: 10.1108/IJWIS-07-2018-0056
       
  • Open Taiwan Government data recommendation platform using DBpedia and Semantic Web based on cloud computing
    • Pages: 236 - 254
      Abstract: International Journal of Web Information Systems, Volume 15, Issue 2, Page 236-254, June 2019.
      Purpose: In recent years, governments around the world have been actively promoting Open Government Data (OGD) to facilitate the reuse of open data and the development of information applications. Currently, more than 35,000 data sets are available on the Taiwan OGD website. However, the existing Taiwan OGD website only provides keyword queries and lacks a friendly query interface. This study aims to address these issues by defining a DBpedia cloud computing framework (DCCF) for integrating DBpedia and Semantic Web technologies into a Spark cluster cloud computing environment.
      Design/methodology/approach: The proposed DCCF is used to develop a Taiwan OGD recommendation platform (TOGDRP) that provides a friendly query interface to automatically filter out the relevant data sets and visualize the relationships between them.
      Findings: To demonstrate the feasibility of TOGDRP, the experimental results illustrate the efficiency of different cloud computing models, including the Hadoop YARN cluster model, the Spark standalone cluster model and the Spark YARN cluster model.
      Originality/value: The novel solution proposed in this study is a hybrid approach for integrating Semantic Web technologies into a Hadoop and Spark cloud computing environment to provide OGD data set recommendation.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-11-19T12:18:17Z
      DOI: 10.1108/IJWIS-02-2018-0015
       
  • Implementation of a volunteer fire corps activity support system during fires
    • Abstract: International Journal of Web Information Systems, Ahead of Print.
      Purpose: The purpose of this paper is to implement an activity support system for volunteer fire corps using Web-GIS technology.
      Design/methodology/approach: The authors implemented a volunteer fire corps activity support system comprising a dispatch request system, a water sources geographic information system, a fire-vehicle location confirmation system, a route history system and an integrated management system. The system was implemented as a Web application with a responsive design, on the assumption that it would be used on mobile terminals.
      Findings: To evaluate the effectiveness of the proposed system, the authors conducted a survey of 18 fire corps volunteers, who were asked to evaluate the system for operability, relevance, functionality, usability and effectiveness. The results of the survey indicate an overwhelmingly positive response to the system on all five measures.
      Originality/value: The authors implemented a volunteer fire corps activity support system using Web-GIS technology and validated it through a questionnaire survey of active fire corps volunteers.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-12-19T02:53:19Z
      DOI: 10.1108/IJWIS-09-2018-0068
       
  • Ontology-based approach to enhance medical web information extraction
    • Abstract: International Journal of Web Information Systems, Ahead of Print.
      Purpose: The purpose of this study is to propose a framework for extracting medical information from the Web using domain ontologies. Patient–doctor conversations have become prevalent on the Web; for instance, services like HealthTap or AskTheDoctors allow patients to ask doctors health-related questions. However, most online health-care consumers still struggle to express their questions efficiently, due mainly to the expert/layman language and knowledge discrepancy. Extracting information from these layman descriptions, which typically lack expert terminology, is challenging, and this hinders the efficiency of underlying applications such as information retrieval. Herein, an ontology-driven approach is proposed that extracts information from such sparse descriptions using a meta-model.
      Design/methodology/approach: A meta-model is designed to bridge the gap between the vocabulary of medical experts and that of consumers of health services. The meta-model is mapped to SNOMED-CT to access its comprehensive medical vocabulary, and to WordNet to improve the coverage of layman terms during information extraction. To assess the potential of the approach, an information extraction prototype based on syntactic patterns was implemented.
      Findings: The evaluation of the approach on the gold standard corpus defined in Task 1 of ShARe CLEF 2013 showed promising results: an F-score of 0.79 for recognizing medical concepts in real-life medical documents.
      Originality/value: The originality of the proposed approach lies in the way information is extracted. The context defined through the meta-model proved efficient for information extraction, especially from layman descriptions.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-12-11T02:05:41Z
      DOI: 10.1108/IJWIS-03-2018-0017
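The prototype above extracts concepts with syntactic patterns and normalizes layman phrases to expert vocabulary. A toy sketch of that idea is shown below; the pattern, the phrase dictionary and the expert terms are all invented for illustration and stand in for the paper's meta-model and its SNOMED-CT/WordNet mappings:

```python
import re

# Toy layman -> expert concept dictionary (illustrative only, not SNOMED-CT).
LAYMAN_TO_EXPERT = {
    "tummy ache": "abdominal pain",
    "runny nose": "rhinorrhea",
    "can't sleep": "insomnia",
}

# One syntactic pattern: "I have/get (a|an) <phrase>" up to punctuation or end.
PATTERN = re.compile(r"i (?:have|get) (?:a |an )?([a-z' ]+?)(?:[.,]|$)")

def extract_concepts(text):
    """Apply the pattern to a layman description and normalize matched phrases
    to expert concepts via the dictionary."""
    concepts = []
    for match in PATTERN.finditer(text.lower()):
        phrase = match.group(1).strip()
        if phrase in LAYMAN_TO_EXPERT:
            concepts.append(LAYMAN_TO_EXPERT[phrase])
    return concepts
```

In the actual approach, unmatched phrases would be expanded through WordNet before the SNOMED-CT lookup, which is what lifts coverage on sparse layman text.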
       
  • Learning representations of Web entities for entity resolution
    • Abstract: International Journal of Web Information Systems, Ahead of Print.
      Purpose: Matching instances of the same entity, a task known as entity resolution, is a key step in data integration. This paper aims to propose a deep learning network that learns different representations of Web entities for entity resolution.
      Design/methodology/approach: To match Web entities, the proposed network learns the following representations: embeddings, which are vector representations of the words in the entities in a low-dimensional space; convolutional vectors from a convolutional layer, which capture short-distance patterns in word sequences; and bag-of-words vectors, created by a BoW layer that learns weights for vocabulary words based on the task at hand. Given a pair of entities, the similarity between their learned representations is used as a feature for a binary classifier that identifies a possible match. In addition to those features, the classifier uses a modification of inverse document frequency for pairs, which identifies discriminative words in pairs of entities.
      Findings: The proposed approach was evaluated on two commercial and two academic entity resolution benchmark data sets. The results show that the proposed strategy outperforms previous approaches on the commercial data sets, which are more challenging, and achieves results similar to its competitors on the academic data sets.
      Originality/value: No previous work has used a single deep learning framework to learn different representations of Web entities for entity resolution.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-12-10T01:39:59Z
      DOI: 10.1108/IJWIS-07-2018-0059
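The matching step above feeds pairwise similarities between entity representations into a binary classifier. A stdlib sketch of the simplest such feature, cosine similarity over raw bag-of-words counts rather than the learned representations the paper uses, looks like this:

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pair_features(entity1, entity2):
    """One similarity feature for a candidate entity pair."""
    return [cosine_similarity(Counter(entity1.lower().split()),
                              Counter(entity2.lower().split()))]
```

In the full model, analogous similarities over embedding, convolutional and learned BoW vectors are concatenated into the classifier's feature vector.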
       
  • Ontology-based heuristic patent search
    • Abstract: International Journal of Web Information Systems, Ahead of Print.
      Purpose: Large collections of patent documents disclosing novel, non-obvious technologies are publicly available and beneficial to academia and industry. To exploit their potential fully, searching these patent documents has become an increasingly important topic. Although much research has processed large collections, few studies have attempted to integrate both patent classifications and specifications for analyzing user queries. Consequently, queries are often insufficiently analyzed, limiting the accuracy of search results. This paper aims to address this limitation by exploiting semantic relationships between patent contents and their classification.
      Design/methodology/approach: The contributions are fourfold. First, the authors enhance similarity measurement between two short sentences, making it 20 per cent more accurate. Second, the Graph-embedded Tree ontology is enriched by integrating both patent documents and the classification scheme. Third, the ontology does not rely on rule-based methods or text matching; instead, a heuristic meaning comparison is applied to extract semantic relationships between concepts. Finally, the patent search approach uses the ontology effectively, with results sorted by their most common order.
      Findings: An experiment searching 600 patent documents in the field of logistics yields a 15 per cent improvement in F-measure compared with traditional approaches.
      Research limitations/implications: The research still requires improvement: some of the extracted noun and noun-phrase terms make little sense and thus might not yield high accuracy. The large collection of extracted relationships could be further optimized for conciseness. In addition, parallel processing such as MapReduce could be used to improve search processing performance.
      Practical implications: The experimental results could be used by scientists and technologists to search for novel, non-obvious technologies in patents.
      Social implications: High-quality patent search results will reduce patent infringement.
      Originality/value: The proposed ontology is semantically enriched by integrating both patent documents and their classification. This ontology facilitates the analysis of user queries, enhancing the accuracy of patent search results.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-12-03T11:29:52Z
      DOI: 10.1108/IJWIS-06-2018-0053
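The first contribution above enhances similarity measurement between two short sentences. As a baseline point of comparison (not the authors' enhanced measure), a token-set Jaccard similarity is the simplest such measure and can be sketched in a few lines:

```python
def sentence_similarity(s1, s2):
    """Jaccard similarity between the token sets of two short sentences:
    |intersection| / |union| of their lowercased word sets."""
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    if not t1 or not t2:
        return 0.0
    return len(t1 & t2) / len(t1 | t2)
```

Measures like this ignore word meaning entirely, which is the gap the paper's heuristic meaning comparison over the enriched ontology is meant to close.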
       
  • Utilising ontology for “heterogeneous data analysis in organizational goals”
    • Abstract: International Journal of Web Information Systems, Ahead of Print.
      Purpose: The purpose of this paper is to assess the level of organizational goal accomplishment by assessing the reliance relationship between organizational data and organizational goals.
      Design/methodology/approach: The evaluation of organizational goals is based on the design and operational levels, which can serve to rank organizational goal achievement and hence assist decision-making toward achieving those goals. To this end, the authors propose an ontology to model the relationship between organizational data and organizational goals.
      Findings: Data–goal dependency shows the dependency relationship between organizational data and organizational goals. At the same time, data–goal dependency assists the process of identifying data attributes, where the authors suggest that these attributes are relevant to the organizational goals.
      Originality/value: The contribution of this paper serves as a first step toward evaluating the relevance of organizational data to assist decision-making in relation to organizational goals.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-11-27T08:48:02Z
      DOI: 10.1108/IJWIS-05-2018-0046
       
  • Guiding the selection of service identification methods based on SOA maturity level
    • Abstract: International Journal of Web Information Systems, Ahead of Print.
      Purpose: A mature adoption of a service-oriented architecture (SOA) proceeds steadily through different levels of maturity, whereby each level has its own types of services (e.g. software services or business services). However, identifying such services is not an easy task, even though many service identification methods (SIMs) exist. This paper aims to propose a new approach for selecting SIMs that uses the desired SOA maturity level as the main guidance, assisting organizations in selecting a suitable SIM for each level of SOA maturity.
      Design/methodology/approach: The methodology consists of three activities: surveying and selecting a suitable evaluation framework for SIMs, surveying and selecting a suitable SOA maturity model (SOAMM) and using the selected evaluation framework to decide on a suitable SIM for the desired SOA maturity level with respect to the selected SOAMM.
      Findings: Welke’s SOAMM and two existing evaluation frameworks for SIMs were found suitable for validating the proposed selection approach. The two selected frameworks utilized the proposed selection approach to different degrees. To utilize the approach fully, a comprehensive evaluation framework is required that addresses the most significant aspects of existing SIMs.
      Originality/value: The authors propose a new way of using Welke’s SOAMM to guide organizations in selecting a suitable SIM from existing evaluation frameworks based on the desired SOA maturity level. In addition, the proposed selection approach improves the applicability of existing evaluation frameworks, as it provides organizations with a new way to select methods.
      Citation: International Journal of Web Information Systems
      PubDate: 2018-11-21T09:20:09Z
      DOI: 10.1108/IJWIS-05-2018-0045
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
 
 

JournalTOCs © 2009-