
International Journal of Granular Computing, Rough Sets and Intelligent Systems
 
  Hybrid journal (it can contain Open Access articles)
ISSN (Print) 1757-2703 - ISSN (Online) 1757-2711
Published by Inderscience Publishers
  • Association rule mining algorithm DB-growth based on relational
           database

      Authors: Sixue Bai, Shilin Duan
      Pages: 1 - 12
      Abstract: Mining potentially valuable multidimensional association rules from big data has wide application. The classic association rule mining algorithm Apriori suffers from two bottlenecks: it scans the database repeatedly and generates a large number of candidate sets. FP-growth avoids candidate generation, but its FP-tree cannot cope with the storage and traversal demands of big data. In addition, both Apriori and FP-growth must reconstruct their association rules when performing incremental mining, which makes them unsuitable for continuously growing data. To address these problems, we design the DB-growth algorithm around a relational database table, SourceIndex: string combination is used to generate patterns, frequent sets are built by inserting into or updating the database, and association rules are mined by querying it. The algorithm also supports incremental mining and depth mining (a sketch of the general idea follows this entry).
      Keywords: association rules mining; apriori algorithm; FP-growth algorithm; DB-growth algorithm; increment mining; depth mining; relational databases; big data; data mining
      Citation: International Journal of Granular Computing, Rough Sets and Intelligent Systems, Vol. 4, No. 1 (2015) pp. 1 - 12
      PubDate: 2016-02-16T23:20:50-05:00
      DOI: 10.1504/IJGCRSIS.2015.074721
      Issue No: Vol. 4, No. 1 (2016)
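
      A minimal, hypothetical sketch of the idea described above: frequent sets are built and association rules are mined through SQL statements on a relational table. The single-column SourceIndex layout, the toy data and the thresholds here are illustrative assumptions, not the authors' actual DB-growth schema or code.

```python
# Hypothetical sketch of mining association rules with SQL queries, in the
# spirit of the DB-growth idea described above (not the authors' code).
import sqlite3

MIN_SUPPORT = 2      # assumed absolute support threshold
MIN_CONFIDENCE = 0.6 # assumed confidence threshold

transactions = [
    {"milk", "bread", "butter"},
    {"milk", "bread"},
    {"bread", "butter"},
    {"milk", "butter"},
]

con = sqlite3.connect(":memory:")
cur = con.cursor()
# SourceIndex is the paper's table name; this one-item-per-row layout is assumed.
cur.execute("CREATE TABLE SourceIndex (tid INTEGER, item TEXT)")
cur.executemany(
    "INSERT INTO SourceIndex VALUES (?, ?)",
    [(tid, item) for tid, items in enumerate(transactions) for item in items],
)

# Frequent 1-itemsets straight from a GROUP BY query.
cur.execute(
    "SELECT item, COUNT(*) FROM SourceIndex GROUP BY item HAVING COUNT(*) >= ?",
    (MIN_SUPPORT,),
)
freq1 = dict(cur.fetchall())

# Frequent 2-itemsets via a self-join; longer patterns would extend the same idea.
cur.execute(
    """SELECT a.item, b.item, COUNT(*)
       FROM SourceIndex a JOIN SourceIndex b
         ON a.tid = b.tid AND a.item < b.item
       GROUP BY a.item, b.item HAVING COUNT(*) >= ?""",
    (MIN_SUPPORT,),
)
freq2 = {(x, y): c for x, y, c in cur.fetchall()}

# Rules X -> Y with confidence = support(X and Y together) / support(X).
for (x, y), c in freq2.items():
    for ante, cons in ((x, y), (y, x)):
        conf = c / freq1[ante]
        if conf >= MIN_CONFIDENCE:
            print(f"{ante} -> {cons}  support={c}  confidence={conf:.2f}")

con.close()
```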
       
  • A multi-class boosting method for learning from imbalanced data

      Authors: Xiaohui Yuan, Mohamed Abouelenien
      Pages: 13 - 29
      Abstract: The acquisition of face images is usually limited by policy and economic considerations, so the number of training examples per subject varies greatly. Face recognition with imbalanced training data has drawn researchers' attention: it is desirable to understand in which circumstances an imbalanced dataset affects learning outcomes, and robust methods are needed that maximise the information embedded in the training data without relying heavily on user-introduced bias. In this article, we study the effect of an uneven number of training images on automatic face recognition and propose a multi-class boosting method that suppresses recognition errors by training an ensemble on subsets of examples. By restoring the balance among classes within these subsets, the proposed multiBoost.imb method circumvents class skewness and delivers improved performance. Experiments are conducted on four popular face datasets and two synthetic datasets. Our method outperforms AdaBoost.M1, SAMME, RUSBoost, SMOTEBoost, SAMME with SMOTE sampling and SAMME with random undersampling in highly imbalanced scenarios. A further advantage of training the ensemble on subsets of examples is a significant gain in efficiency (a balanced-subset sketch follows this entry).
      Keywords: classification; imbalanced data; multi-class boosting; learning; biometrics; image acquisition; facial images; face recognition; training data
      Citation: International Journal of Granular Computing, Rough Sets and Intelligent Systems, Vol. 4, No. 1 (2015) pp. 13 - 29
      PubDate: 2016-02-16T23:20:50-05:00
      DOI: 10.1504/IJGCRSIS.2015.074722
      Issue No: Vol. 4, No. 1 (2016)
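
      The following is a generic illustration of the balanced-subset idea behind ensemble methods for imbalanced data: each member is trained on a subset in which every class is undersampled to the size of the rarest class, and predictions are combined by majority vote. It is not the paper's multiBoost.imb algorithm; the toy data and the nearest-centroid base learner are assumptions for illustration.

```python
# Generic sketch of an ensemble trained on class-balanced subsets of an
# imbalanced dataset (not the paper's multiBoost.imb algorithm).
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Toy imbalanced 3-class problem (class 2 is rare).
X = rng.normal(size=(600, 2)) + np.repeat([[0, 0], [3, 0], [0, 3]], [400, 180, 20], axis=0)
y = np.repeat([0, 1, 2], [400, 180, 20])

def balanced_subset(X, y, rng):
    """Undersample every class to the size of the rarest one."""
    n_min = min(Counter(y).values())
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in np.unique(y)
    ])
    return X[idx], y[idx]

class NearestCentroid:
    """Tiny base learner: classify by the nearest class centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Train each ensemble member on its own balanced subset, then majority-vote.
ensemble = [NearestCentroid().fit(*balanced_subset(X, y, rng)) for _ in range(11)]
votes = np.stack([m.predict(X) for m in ensemble])
pred = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print("per-class accuracy:",
      {int(c): float((pred[y == c] == c).mean()) for c in np.unique(y)})
```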
       
  • An improved statistical disclosure attack

      Authors: Bin Tang, Rajiv Bagai, Huabo Lu
      Pages: 30 - 38
      Abstract: The statistical disclosure attack (SDA) is known to be an effective long-term intersection attack against mix-based anonymising systems: an attacker observes a large volume of a system's incoming and outgoing traffic and correlates senders with the receivers they often send messages to. In this paper, we further strengthen the effectiveness of this attack. We show, by both an example and a proof, that by employing a weighted mean of the observed relative receiver popularity, the attacker can determine the set of receivers a user sends messages to more accurately than with the existing arithmetic-mean-based technique (a numerical illustration follows this entry).
      Keywords: statistical disclosure attack; SDA; anonymity; traffic analysis; security; intersection attacks
      Citation: International Journal of Granular Computing, Rough Sets and Intelligent Systems, Vol. 4, No. 1 (2015) pp. 30 - 38
      PubDate: 2016-02-16T23:20:50-05:00
      DOI: 10.1504/IJGCRSIS.2015.074731
      Issue No: Vol. 4, No. 1 (2016)
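
      A toy numerical illustration of the difference between an arithmetic mean and a weighted mean of observed relative receiver popularity. The specific weighting used here (inverse batch size) is an assumption chosen for illustration, not necessarily the weighting derived in the paper.

```python
# Toy illustration of arithmetic-mean vs. weighted-mean receiver-popularity
# estimates in a statistical disclosure attack.  The 1/batch-size weighting
# is an assumed example, not necessarily the paper's estimator.
import numpy as np

# Each row: observed relative receiver popularity in one mix round where the
# target sender was active (3 possible receivers), plus that round's batch size.
rounds = np.array([
    [0.50, 0.25, 0.25],   # small batch: the target's messages dominate
    [0.40, 0.30, 0.30],
    [0.34, 0.33, 0.33],   # large batch: the target's contribution is diluted
    [0.35, 0.33, 0.32],
])
batch_sizes = np.array([4, 10, 50, 60])

arithmetic = rounds.mean(axis=0)

# Weight small batches more heavily, since they say more about the target.
weights = 1.0 / batch_sizes
weighted = (weights[:, None] * rounds).sum(axis=0) / weights.sum()

print("arithmetic mean:", np.round(arithmetic, 3))
print("weighted mean:  ", np.round(weighted, 3))
# In this toy example the weighted estimate separates the target's likely
# receiver (index 0) from the background more sharply than the plain mean.
```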
       
  • Optimum weights and biases for feed forward neural network by particle
           swarm optimisation

      Authors: Pratik R. Hajare, Narendra G. Bawane, Poonam T. Agarkar
      Pages: 39 - 46
      Abstract: This paper introduces particle swarm intelligence into feed-forward neural network (FFNN) training with backpropagation, using it to find the network's initial weights and biases. Combining particle swarm optimisation (PSO) with an FFNN greatly speeds up convergence on benchmark classification and prediction problems by overcoming backpropagation's tendency to get stuck in local minima or maxima. The neural-network benchmarking databases contain datasets from many different domains; all represent realistic problems that could be called diagnosis tasks, and all but one consist of real-world data. Two such benchmark problems are selected in this paper for comparison: PSO-based selection of weights and biases is implemented and compared with random initialisation in a standard FFNN. The results show that using PSO reduces the prediction error (a PSO-initialisation sketch follows this entry).
      Keywords: particle swarm optimisation; PSO; feedforward neural networks; FFNN; backpropagation; convergence; benchmarking; realistic problems; prediction error; local minima; local maxima; optimum weights; biases
      Citation: International Journal of Granular Computing, Rough Sets and Intelligent Systems, Vol. 4, No. 1 (2015) pp. 39 - 46
      PubDate: 2016-02-16T23:20:50-05:00
      DOI: 10.1504/IJGCRSIS.2015.074737
      Issue No: Vol. 4, No. 1 (2016)
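
      A minimal sketch of using global-best PSO to choose initial weights and biases for a small feed-forward network before backpropagation takes over. The 2-2-1 network, the XOR task and the swarm parameters are illustrative assumptions, not the paper's benchmark setup.

```python
# Minimal sketch: PSO searches for initial weights and biases of a tiny
# feed-forward network; these would then seed ordinary backpropagation.
import numpy as np

rng = np.random.default_rng(1)

# Toy task: XOR with a 2-2-1 network (illustrative, not the paper's benchmark).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 2, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # total weights + biases

def unpack(v):
    """Split a flat particle vector into weight matrices and bias vectors."""
    i = 0
    W1 = v[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = v[i:i + N_HID]; i += N_HID
    W2 = v[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = v[i:]
    return W1, b1, W2, b2

def mse(v):
    W1, b1, W2, b2 = unpack(v)
    h = np.tanh(X @ W1 + b1)
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return float(((y - t) ** 2).mean())

# Standard global-best PSO with typical inertia/acceleration constants.
N_PART, ITERS, INERTIA, C1, C2 = 20, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (N_PART, DIM))
vel = np.zeros((N_PART, DIM))
pbest, pbest_err = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((N_PART, DIM)), rng.random((N_PART, DIM))
    vel = INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print("PSO-found initial weights give MSE:", round(mse(gbest), 4))
```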
       
  • A steganography system and implementation utilising pseudo random number
           embedding and dynamic image production with graphs

      Authors: Ruben Aguilar, Jianchao Han
      Pages: 47 - 63
      Abstract: This paper proposes a new steganographic system that matches the image to the message by generating coherent noise. We use graphs as a means of organising the coherent noise over the message. Previous steganography methodologies are reviewed, the implementation of the proposed new system, RandSteg, is presented, and the algorithms employed in the implementation are discussed. Finally, further possibilities for research in this field are suggested (a generic embedding sketch follows this entry).
      Keywords: steganography; pseudo random number embedding; dynamic image production; graphics; ciphers; encoding; decoding; graphs; information hiding
      Citation: International Journal of Granular Computing, Rough Sets and Intelligent Systems, Vol. 4, No. 1 (2015) pp. 47 - 63
      PubDate: 2016-02-16T23:20:50-05:00
      DOI: 10.1504/IJGCRSIS.2015.074745
      Issue No: Vol. 4, No. 1 (2016)
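
      A generic sketch of pseudo-random-number-driven embedding: message bits are written into the least significant bits of pixels selected by a seeded PRNG, and the same seed recovers them. This is not the RandSteg system itself, which additionally generates the cover image as coherent noise organised with graphs; the function names and parameters below are assumptions.

```python
# Generic LSB embedding at PRNG-chosen pixel positions (illustrative only;
# NOT the RandSteg system described in the paper).
import numpy as np

def embed(cover, message, seed):
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = cover.flatten()
    rng = np.random.default_rng(seed)                  # shared secret: the seed
    positions = rng.choice(flat.size, size=bits.size, replace=False)
    flat[positions] = (flat[positions] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(cover.shape)

def extract(stego, n_chars, seed):
    rng = np.random.default_rng(seed)                  # same seed -> same positions
    positions = rng.choice(stego.size, size=n_chars * 8, replace=False)
    bits = stego.flatten()[positions] & 1
    return np.packbits(bits).tobytes().decode()

cover = np.random.default_rng(42).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed(cover.copy(), "hello", seed=1234)
print(extract(stego, 5, seed=1234))   # -> "hello"
```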
       
  • Comparisons between near open sets and rough approximations

      Authors: Ruben Aguilar, Jianchao Han
      Pages: 64 - 83
      Abstract: This paper introduces the new concept of 'j-near approximations' and studies its applications. These operators can be regarded as simple tools for reducing the vagueness (uncertainty) of rough sets. The basic notions of j-near approximations are introduced and illustrated, and the accuracies of the different types of approximations are compared. We further investigate new generalised definitions of rough membership functions, from which several types of fuzzy sets are constructed. Finally, several examples and counterexamples are given to indicate the connections between these notions (a classical rough-approximation sketch follows this entry).
      Keywords: j-neighbourhood spaces; rough sets; rough membership functions; fuzzy sets; topology; near open sets; rough approximations
      Citation: International Journal of Granular Computing, Rough Sets and Intelligent Systems, Vol. 4, No. 1 (2015) pp. 64 - 83
      PubDate: 2016-02-16T23:20:50-05:00
      DOI: 10.1504/IJGCRSIS.2015.074749
      Issue No: Vol. 4, No. 1 (2016)
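
      A short sketch of the classical (Pawlak) rough-set constructions that j-near approximations generalise: lower and upper approximations, the boundary region, accuracy and the rough membership function. The universe, equivalence classes and target set are arbitrary illustrative choices; the paper's j-neighbourhood operators are not implemented here.

```python
# Classical rough-set baseline: lower/upper approximations, boundary region,
# accuracy and rough membership, computed from an equivalence-class partition.
from fractions import Fraction

universe = {1, 2, 3, 4, 5, 6, 7, 8}
# Partition of the universe induced by an indiscernibility (equivalence) relation.
classes = [{1, 2}, {3, 4, 5}, {6}, {7, 8}]
X = {1, 2, 3, 6}                      # the target set to approximate

lower = set().union(*(c for c in classes if c <= X))   # classes fully inside X
upper = set().union(*(c for c in classes if c & X))    # classes meeting X
boundary = upper - lower
accuracy = Fraction(len(lower), len(upper))

def rough_membership(x, X):
    """mu_X(x) = |[x] intersect X| / |[x]| for the class [x] containing x."""
    (cls,) = (c for c in classes if x in c)
    return Fraction(len(cls & X), len(cls))

print("lower:", lower)                       # {1, 2, 6}
print("upper:", upper)                       # {1, 2, 3, 4, 5, 6}
print("boundary:", boundary)                 # {3, 4, 5}
print("accuracy:", accuracy)                 # 1/2
print("mu_X(3) =", rough_membership(3, X))   # 1/3
```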
       
 