Subjects -> MATHEMATICS (Total: 1013 journals)
    - APPLIED MATHEMATICS (92 journals)
    - GEOMETRY AND TOPOLOGY (23 journals)
    - MATHEMATICS (714 journals)
    - MATHEMATICS (GENERAL) (45 journals)
    - NUMERICAL ANALYSIS (26 journals)
    - PROBABILITIES AND MATH STATISTICS (113 journals)

MATHEMATICS (714 journals)

Journals sorted alphabetically
Results in Control and Optimization     Open Access  
Results in Mathematics     Hybrid Journal  
Results in Nonlinear Analysis     Open Access  
Review of Symbolic Logic     Full-text available via subscription   (Followers: 2)
Reviews in Mathematical Physics     Hybrid Journal   (Followers: 1)
Revista Baiana de Educação Matemática     Open Access  
Revista Bases de la Ciencia     Open Access  
Revista BoEM - Boletim online de Educação Matemática     Open Access  
Revista Colombiana de Matemáticas     Open Access   (Followers: 1)
Revista de Ciencias     Open Access  
Revista de Educación Matemática     Open Access  
Revista de la Escuela de Perfeccionamiento en Investigación Operativa     Open Access  
Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales. Serie A. Matematicas     Partially Free  
Revista de Matemática : Teoría y Aplicaciones     Open Access   (Followers: 1)
Revista Digital: Matemática, Educación e Internet     Open Access  
Revista Electrónica de Conocimientos, Saberes y Prácticas     Open Access  
Revista Integración : Temas de Matemáticas     Open Access  
Revista Internacional de Sistemas     Open Access  
Revista Latinoamericana de Etnomatemática     Open Access  
Revista Latinoamericana de Investigación en Matemática Educativa     Open Access  
Revista Matemática Complutense     Hybrid Journal  
Revista REAMEC : Rede Amazônica de Educação em Ciências e Matemática     Open Access  
Revista SIGMA     Open Access  
Ricerche di Matematica     Hybrid Journal  
RMS : Research in Mathematics & Statistics     Open Access  
Royal Society Open Science     Open Access   (Followers: 7)
Russian Journal of Mathematical Physics     Full-text available via subscription  
Russian Mathematics     Hybrid Journal  
Sahand Communications in Mathematical Analysis     Open Access  
Sampling Theory, Signal Processing, and Data Analysis     Hybrid Journal  
São Paulo Journal of Mathematical Sciences     Hybrid Journal  
Science China Mathematics     Hybrid Journal   (Followers: 1)
Science Progress     Full-text available via subscription   (Followers: 1)
Sciences & Technologie A : sciences exactes     Open Access  
Selecta Mathematica     Hybrid Journal   (Followers: 1)
SeMA Journal     Hybrid Journal  
Semigroup Forum     Hybrid Journal   (Followers: 1)
Set-Valued and Variational Analysis     Hybrid Journal  
SIAM Journal on Applied Mathematics     Hybrid Journal   (Followers: 11)
SIAM Journal on Computing     Hybrid Journal   (Followers: 11)
SIAM Journal on Control and Optimization     Hybrid Journal   (Followers: 18)
SIAM Journal on Discrete Mathematics     Hybrid Journal   (Followers: 8)
SIAM Journal on Financial Mathematics     Hybrid Journal   (Followers: 3)
SIAM Journal on Mathematics of Data Science     Hybrid Journal   (Followers: 1)
SIAM Journal on Matrix Analysis and Applications     Hybrid Journal   (Followers: 3)
SIAM Journal on Optimization     Hybrid Journal   (Followers: 12)
Siberian Advances in Mathematics     Hybrid Journal  
Siberian Mathematical Journal     Hybrid Journal  
Sigmae     Open Access  
SILICON     Hybrid Journal  
SN Partial Differential Equations and Applications     Hybrid Journal  
Soft Computing     Hybrid Journal   (Followers: 7)
Statistics and Computing     Hybrid Journal   (Followers: 14)
Stochastic Analysis and Applications     Hybrid Journal   (Followers: 3)
Stochastic Partial Differential Equations : Analysis and Computations     Hybrid Journal   (Followers: 2)
Stochastic Processes and their Applications     Hybrid Journal   (Followers: 6)
Stochastics and Dynamics     Hybrid Journal   (Followers: 2)
Studia Scientiarum Mathematicarum Hungarica     Full-text available via subscription   (Followers: 1)
Studia Universitatis Babeș-Bolyai Informatica     Open Access  
Studies In Applied Mathematics     Hybrid Journal   (Followers: 1)
Studies in Mathematical Sciences     Open Access   (Followers: 1)
Superficies y vacio     Open Access  
Suska Journal of Mathematics Education     Open Access   (Followers: 1)
Swiss Journal of Geosciences     Hybrid Journal   (Followers: 1)
Synthesis Lectures on Algorithms and Software in Engineering     Full-text available via subscription   (Followers: 2)
Synthesis Lectures on Mathematics and Statistics     Full-text available via subscription   (Followers: 1)
Tamkang Journal of Mathematics     Open Access  
Tatra Mountains Mathematical Publications     Open Access  
Teaching Mathematics     Full-text available via subscription   (Followers: 10)
Teaching Mathematics and its Applications: An International Journal of the IMA     Hybrid Journal   (Followers: 4)
Teaching Statistics     Hybrid Journal   (Followers: 8)
Technometrics     Full-text available via subscription   (Followers: 8)
The Journal of Supercomputing     Hybrid Journal   (Followers: 1)
The Mathematica Journal     Open Access  
The Mathematical Gazette     Full-text available via subscription   (Followers: 1)
The Mathematical Intelligencer     Hybrid Journal  
The Ramanujan Journal     Hybrid Journal  
The VLDB Journal     Hybrid Journal   (Followers: 2)
Theoretical and Mathematical Physics     Hybrid Journal   (Followers: 7)
Theory and Applications of Graphs     Open Access  
Topological Methods in Nonlinear Analysis     Full-text available via subscription  
Transactions of the London Mathematical Society     Open Access   (Followers: 1)
Transformation Groups     Hybrid Journal  
Turkish Journal of Mathematics     Open Access  
Ukrainian Mathematical Journal     Hybrid Journal  
Uniciencia     Open Access  
Uniform Distribution Theory     Open Access  
Unisda Journal of Mathematics and Computer Science     Open Access  
Unnes Journal of Mathematics     Open Access   (Followers: 1)
Unnes Journal of Mathematics Education     Open Access   (Followers: 2)
Unnes Journal of Mathematics Education Research     Open Access   (Followers: 1)
Ural Mathematical Journal     Open Access  
Vestnik Samarskogo Gosudarstvennogo Tekhnicheskogo Universiteta. Seriya Fiziko-Matematicheskie Nauki     Open Access  
Vestnik St. Petersburg University: Mathematics     Hybrid Journal  
VFAST Transactions on Mathematics     Open Access   (Followers: 1)
Vietnam Journal of Mathematics     Hybrid Journal  
Vinculum     Full-text available via subscription  
Visnyk of V. N. Karazin Kharkiv National University. Ser. Mathematics, Applied Mathematics and Mechanics     Open Access   (Followers: 2)
Water SA     Open Access   (Followers: 1)
Water Waves     Hybrid Journal  
ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik     Hybrid Journal   (Followers: 1)
ZDM     Hybrid Journal   (Followers: 2)
Zeitschrift für angewandte Mathematik und Physik     Hybrid Journal   (Followers: 2)
Zeitschrift für Energiewirtschaft     Hybrid Journal  
Zetetike     Open Access  


Similar Journals
The Journal of Supercomputing
Journal Prestige (SJR): 0.407
Citation Impact (CiteScore): 2
Number of Followers: 1  
 
  Hybrid Journal (It can contain Open Access articles)
ISSN (Print) 0920-8542 - ISSN (Online) 1573-0484
Published by Springer-Verlag  [2469 journals]
  • Machine learning-based techniques for fault diagnosis in the semiconductor
           manufacturing process: a comparative study

      Abstract: Industries are going through the fourth industrial revolution (Industry 4.0), where technologies like the Industrial Internet of Things, big data analytics, and machine learning (ML) are extensively utilized to improve the productivity and efficiency of manufacturing systems and processes. This work aims to further investigate the applicability and improve the effectiveness of ML prediction models for fault diagnosis in the smart manufacturing process. Hence, we propose several methodologies and ML models for fault diagnosis in smart manufacturing applications. A case study has been conducted on a real dataset from a semiconductor manufacturing (SECOM) process. This dataset contains missing values, noisy features, and a class imbalance problem; the imbalance makes it difficult to predict the minority class accurately because of its size difference from the majority class. In the literature, efforts have been made to alleviate the class imbalance problem using several synthetic data generation techniques (SDGT) on the UCI machine learning repository SECOM dataset. In this work, to handle the imbalance problem, we employed, compared, and evaluated the feasibility of three SDGT on this dataset. To handle the missing values and the noisy features, we implemented two missing-value imputation techniques and feature selection techniques, respectively. We then developed and compared the performance of ten predictive ML models against these proposed methodologies. The results obtained across several evaluation metrics were significant. A comparative analysis shows the feasibility and validates the effectiveness of the SDGT and the proposed methodologies, some of which reach an accuracy in the range of 99.5% to 100%. Furthermore, in a comparison with similar models from the literature, our proposed models outperformed those previously reported.
      PubDate: 2022-08-06
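
      The pipeline the abstract describes (missing-value imputation, feature selection, synthetic oversampling of the fault class, then a classifier) can be sketched with standard Python tooling. The snippet below is a minimal illustration using scikit-learn and imbalanced-learn on synthetic data that stands in for the SECOM dataset; it is not the authors' exact methodology, models, or results.

```python
# Minimal sketch of an imbalance-aware fault-diagnosis pipeline
# (imputation -> feature selection -> synthetic oversampling -> classifier).
# Synthetic data stands in for the SECOM dataset; not the authors' exact setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import VarianceThreshold
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Imbalanced data (pass/fail roughly 93:7) with injected missing values.
X, y = make_classification(n_samples=1500, n_features=60, n_informative=15,
                           weights=[0.93, 0.07], random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.02] = np.nan

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

imputer = SimpleImputer(strategy="mean")          # missing-value imputation
selector = VarianceThreshold(threshold=1e-3)      # drop near-constant features
X_tr = selector.fit_transform(imputer.fit_transform(X_tr))
X_te = selector.transform(imputer.transform(X_te))

# Oversample the minority (fault) class on the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```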
       
  • MAN and CAT: mix attention to nn and concatenate attention to YOLO

      Abstract: CNNs have achieved remarkable image classification and object detection results over the past few years. Because the convolution operation is local, CNNs can extract rich features of an object itself but can hardly obtain global context in images. As a result, CNN-based networks are poor candidates for detecting objects by exploiting information from nearby objects, especially when a partially obscured object is hard to detect. ViTs capture rich context through multi-head self-attention and dramatically improve prediction in complex scenes, but they suffer from long inference times and large parameter counts, which makes ViT-based detection networks hard to deploy in real-time detection systems. In this paper, we first design a novel plug-and-play attention module called mix attention (MA). MA combines channel, spatial and global contextual attention, enhancing the feature representation of individual objects and the correlations between multiple objects. Secondly, we propose a backbone network based on mix attention called MANet; MANet-Base achieves state-of-the-art performance on ImageNet and CIFAR. Finally, we propose a lightweight object detection network called CAT-YOLO that trades off precision and speed. It achieves an AP of 25.7% on COCO 2017 test-dev with only 9.17 million parameters, making it possible to deploy models containing ViT components on hardware while ensuring real-time detection. CAT-YOLO detects obscured objects better than other state-of-the-art lightweight models.
      PubDate: 2022-08-06
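
      As a rough illustration of what a plug-and-play attention module looks like, the PyTorch sketch below implements a generic squeeze-and-excitation style channel-attention block that reweights feature-map channels from global context. It shows the general pattern only; it is not the paper's mix attention (MA) module, MANet, or CAT-YOLO.

```python
# A minimal squeeze-and-excitation style channel-attention block in PyTorch.
# Generic pattern only, not the paper's mix-attention (MA) design.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global context
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x):
        w = torch.sigmoid(self.fc(self.pool(x)))      # per-channel weights
        return x * w                                  # reweight the feature map

x = torch.randn(2, 32, 56, 56)                        # (batch, C, H, W)
print(ChannelAttention(32)(x).shape)                  # torch.Size([2, 32, 56, 56])
```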
       
  • Traffic-aware dynamic controller placement in SDN using NFV

      Abstract: Software-Defined Networking (SDN) and Network Function Virtualization (NFV) are promising technologies for delivering software-based networks to the user community. Applying Machine Learning (ML) in SDN and NFV enables innovation and simplifies network management. The shift towards network softwarization opens many doors for innovation as well as challenges. As the number of devices connected to the Internet increases swiftly, an SDNFV traffic management mechanism can provide a better solution. Many ML techniques applied to SDN focus on classification problems such as network attack patterns, routing techniques, and QoE/QoS provisioning, while the application of ML to SDNFV and SDN controller placement still offers much scope to explore. This work aims to develop an ML approach for network traffic management by predicting the number of controllers to be placed in the network. The proposed prediction mechanism is centralized and is deployed as a Virtual Network Function (VNF) in the NFV environment. The number of controllers is estimated from the predicted traffic, and the controllers are placed in optimal locations using the K-Medoid algorithm. The proposed method is analysed against suitable performance metrics and effectively combines SDN, NFV and ML to better achieve network automation.
      PubDate: 2022-08-06
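
      The placement step can be illustrated with a small PAM-style k-medoids routine that picks controller locations from among the switch positions. The coordinates below are made up and the traffic-prediction VNF from the paper is omitted; this is only a sketch of the clustering idea.

```python
# Toy k-medoids placement of SDN controllers at switch locations.
# Coordinates are made up; the traffic-prediction step from the paper is omitted.
import numpy as np

def k_medoids(points, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    medoids = rng.choice(len(points), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)     # assign to nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            # pick the member minimising total distance within its cluster
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

switches = np.random.default_rng(1).uniform(0, 100, size=(40, 2))  # (x, y) positions
controller_sites, assignment = k_medoids(switches, k=3)
print("controllers placed at switches:", controller_sites)
```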
       
  • Traffic sign detection based on multi-scale feature extraction and cascade
           feature fusion

      Abstract: Abstract Existing algorithms have difficulty in solving the two tasks of localization and classification simultaneously when performing traffic sign detection on realistic images of complex traffic scenes. In order to solve the above problems, a new road traffic sign dataset is created, and based on the YOLOv4 algorithm, for the complexity of realistic traffic scene images and the large variation in the size of traffic signs in the images, the multi-scale feature extraction module, cascade feature fusion module and attention mechanism module are designed to improve the algorithm’s ability to locate and classify traffic signs simultaneously. Experimental results on the newly created dataset show that the improved algorithm achieves a mean average precision of 84.44%, which is higher than several major CNN-based object detection algorithms for the same type of task.
      PubDate: 2022-08-06
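
      The reported mean average precision rests on IoU-based matching between predicted and ground-truth boxes. The sketch below shows that matching and a per-image precision/recall at a fixed IoU threshold, which is a simplification of a full mAP computation; the boxes and scores are made up.

```python
# Minimal IoU matching for detection metrics (a simplification of full mAP).
# Boxes are (x1, y1, x2, y2); values below are made up.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, truths, thr=0.5):
    matched, tp = set(), 0
    for p in sorted(preds, key=lambda d: d["score"], reverse=True):
        best, best_iou = None, thr
        for i, t in enumerate(truths):
            if i not in matched and iou(p["box"], t) >= best_iou:
                best, best_iou = i, iou(p["box"], t)
        if best is not None:
            matched.add(best)
            tp += 1
    return tp / max(len(preds), 1), tp / max(len(truths), 1)

truths = [(10, 10, 50, 50), (60, 60, 90, 90)]
preds = [{"box": (12, 8, 48, 52), "score": 0.9},
         {"box": (55, 65, 95, 88), "score": 0.7},
         {"box": (0, 0, 20, 20), "score": 0.4}]
print(precision_recall(preds, truths))   # -> roughly (0.67, 1.0)
```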
       
  • On design and performance analysis of improved shuffle exchange gamma
           interconnection network layouts

      Abstract: Abstract Multistage Interconnection Networks (MINs) are an effective means of communication between multiple processors and memory modules in many parallel processing systems. Literature consists of numerous fault-tolerant MIN designs. However, due to the recent advances in the field of parallel processing, requiring large processing power, an increase in the demand to design and develop more reliable, cost-effective and fault-tolerant MINs is being observed. This work proposes two novel MIN designs, namely, Augmented-Shuffle Exchange Gamma Interconnection Network (A-SEGIN) and Enhanced A-SEGIN (EA-SEGIN). The proposed MINs utilize chaining of switches, and multiplexers & demultiplexers for providing a large number of alternative paths and thereby better fault tolerance. Different reliability measures, namely, 2-terminal, multi-source multi-destination, broadcast and network/global, respectively, of the proposed MINs have been evaluated with the help of all enumerated paths and well-known Sum-of-Disjoint Products approach. Further, overall performance, with respect to the number of paths, different reliability measures, hardware cost and cost per unit, of the proposed MINs has been compared with 19 other well-studied MIN layouts. The results suggest that the proposed MINs are very strong competitors of the preexisting MINs of their class owing to their better reliability and cost effectiveness.
      PubDate: 2022-08-05
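
      Two-terminal reliability can be written as the probability that at least one minimal path is fully operational; Sum-of-Disjoint-Products is an efficient way to evaluate that quantity. The sketch below computes it directly by inclusion-exclusion for a toy path set with an assumed per-link reliability; the paths are illustrative and are not the A-SEGIN/EA-SEGIN layouts.

```python
# 2-terminal reliability from minimal path sets via inclusion-exclusion.
# (Sum-of-Disjoint-Products is an efficient reformulation of the same quantity.)
# Link identities and the per-link reliability p are illustrative only.
from itertools import combinations

def two_terminal_reliability(paths, p):
    """paths: list of sets of link ids; p: per-link working probability."""
    total, n = 0.0, len(paths)
    for r in range(1, n + 1):
        for subset in combinations(paths, r):
            links = set().union(*subset)            # links needed by all chosen paths
            term = p ** len(links)                  # P(all those links work)
            total += term if r % 2 == 1 else -term  # inclusion-exclusion sign
    return total

# Three alternative source->destination paths sharing one stage.
paths = [{"a1", "b1", "c1"}, {"a1", "b2", "c2"}, {"a2", "b3", "c1"}]
print(two_terminal_reliability(paths, p=0.95))
```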
       
  • An experimental design for facial and color emotion expression of a social
           robot

      Abstract: In this paper, we investigate the relationship between emotions and colors by showing robot animated emotion faces and colors to participants through a series of surveys. We focused on representing a visualized emotion through a robot's facial expression and background colors. To complete the emotion design with animated faces and color backgrounds, we devised an experimental design for surveying users' impressions, taking the ASUS Zenbo as an example of a robot animated face. We selected 11 colors as background colors and 24 facial expressions from Zenbo. To analyze the questionnaire results, we used histograms to show the basic distribution of the data and multiple logistic regression analysis (MLRA) to examine the marginal relationships. We separated the questionnaires into positive and negative sets and divided the dataset into three cases to discuss the different relationships between color and emotion. Results showed that people preferred the blue color regardless of whether the face showed a positive or negative emotion. The MLRA also showed that the percentage of correct classification was outstanding in case 2 for both positive and negative emotions, and participants perceived Zenbo's animated face as expressing what they expected. Through this experimental design, we hope designers will consider combining colors with emotion in human-robot interfaces that better match users' expectations and make interactions with robots more colorful and comfortable.
      PubDate: 2022-08-05
       
  • Simulating hybrid SysML models: a model transformation approach under the
           DEVS framework

      Abstract: As the complexity of cyber-physical systems (CPSs) increases, system modeling and simulation tend to be performed on different platforms: collaborative modeling activities are performed on distributed clients, while system simulations are carried out in specific simulation environments, such as high-performance computing (HPC). However, there is a great gap between system models, usually designed in the systems modeling language (SysML), and simulation code, and the existing model transformation-based simulation methods and tools mainly focus on either discrete or continuous models, ignoring the fact that simulating hybrid models is quite important when designing complex systems. To this end, a model transformation approach is proposed to simulate hybrid SysML models under a discrete event system specification (DEVS) framework. In this approach, to depict hybrid models, simulation-related meta-models with discrete and continuous features are extracted from SysML views without additional extension. Following the meta object facility (MOF), DEVS meta-models are constructed based on the formal definition of DEVS models, including discrete, hybrid and coupled models. Moreover, a series of concrete mapping rules is defined to transform the discrete and continuous behaviors separately, based on the existing state machine mechanism and constraints of SysML. Such an approach may help a SysML system engineer use a DEVS-based simulator to validate system models without needing to understand DEVS theory. Finally, the effectiveness of the proposed method is verified on a defense system case.
      PubDate: 2022-08-05
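
      To make the target formalism of the transformation concrete, the sketch below is a minimal atomic DEVS model in plain Python (state, time advance, internal/external transition and output functions) driven by a hand-written event loop. It illustrates DEVS itself under simplifying assumptions; it is not the paper's mapping rules or generated simulation code.

```python
# A minimal atomic DEVS model and a hand-driven simulation loop.
# This is an illustration of the formalism, not the paper's generated code.
INF = float("inf")

class Buffer:
    """Atomic DEVS: stores a job and emits it after a fixed processing delay."""
    def __init__(self, delay=2.0):
        self.delay, self.job, self.sigma = delay, None, INF
    def ta(self):                      # time advance
        return self.sigma
    def delta_ext(self, e, x):         # external transition: a job arrives
        if self.job is None:
            self.job, self.sigma = x, self.delay
        else:
            self.sigma -= e            # busy: ignore the new job, keep remaining time
    def lam(self):                     # output function (called before delta_int)
        return self.job
    def delta_int(self):               # internal transition: job leaves
        self.job, self.sigma = None, INF

m, t = Buffer(), 0.0
events = [(1.0, "job-A"), (5.0, "job-B")]          # (arrival time, job)
for arrival, job in events:
    if t + m.ta() <= arrival:                      # pending internal event fires first
        t += m.ta(); print(t, "output:", m.lam()); m.delta_int()
    m.delta_ext(arrival - t, job); t = arrival
while m.ta() < INF:                                # flush remaining internal events
    t += m.ta(); print(t, "output:", m.lam()); m.delta_int()
```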
       
  • Test case generation for enterprise business services based on enterprise
           architecture design

      Abstract: Based on the service orientation, a business service represents a coherent functionality that offers added value to the environment, regardless of how it is realized internally. The enterprise business service is a crucial section of enterprise architecture. Although many leading-edge enterprise architecture frameworks describe architecture in levels of abstraction, they still cannot provide an accurate syntactic and semantic description. If test cases are generated based on accurate descriptions of enterprise business services, the subsequent revisions and changes can be reduced. This research has one main contribution: it starts from the enterprise level, gains benefits from the enriched descriptions for enterprise business service, continues to generate appropriate syntactic and semantic models, and generates test cases from the formal model. In the suggested method, the goals of the enterprise will initially be extracted based on The Open Group Architecture Framework. Then, it will be subjected to syntactic modeling based on the ArchiMate language. Next, the semantics are added in terms of the Web Service Modeling Ontology framework and are manually formalized in B language by applying the defined transformation rules. Finally, the test coverage set will be examined on the formal model to generate test cases. The suggested method has been implemented in the marketing department of a petrochemical company. The results indicate the validity and efficiency of the method.
      PubDate: 2022-08-04
       
  • A novel technique to optimize quality of service for directed acyclic
           graph (DAG) scheduling in cloud computing environment using heuristic
           approach

      Abstract: At present, the cloud computing environment (CCE) has emerged as one of the significant technologies in communication, computing, and the Internet. It facilitates on-demand services of different types, such as platforms, applications and infrastructure, on a pay-per-use basis. Because of its growing popularity, massive numbers of requests need to be served efficiently, which presents researchers with a challenging problem known as task scheduling. These requests are handled through efficient allocation of resources, and when there are dependencies between tasks, the allocation becomes a Directed Acyclic Graph (DAG) scheduling problem. DAG scheduling is among the most important scheduling problems because of its wide range of applications in areas such as environmental technology, resources, and energy optimization. It is a well-known NP-complete problem, and various models addressing it have been suggested in the literature. However, as Quality of Service (QoS)-aware services in the CCE platform have become an attractive and prevalent way to provide computing resources, QoS-aware DAG scheduling emerges as a new critical issue. Therefore, the key aim of this manuscript is to develop a novel DAG scheduling model that optimizes the QoS parameters in the CCE platform, validated with the help of extensive simulation. Each simulated result is compared with existing results, and the newly developed algorithm is found to perform better than the state-of-the-art algorithms.
      PubDate: 2022-08-04
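
      A classic baseline for DAG scheduling on heterogeneous resources is list scheduling with an upward-rank priority and an earliest-finish-time processor choice (HEFT-style). The sketch below implements that baseline on a made-up four-task graph and two VM speeds; it is not the paper's QoS-aware algorithm, only the kind of baseline it builds on.

```python
# Compact DAG list scheduling: upward-rank priority + earliest-finish-time VM
# choice (HEFT-style). The task graph and VM speeds are made up.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
work = {"A": 4, "B": 3, "C": 6, "D": 2}           # task sizes
speed = [1.0, 2.0]                                # two VMs

def rank(t, memo={}):
    """Upward rank: average execution time plus longest downstream rank."""
    if t not in memo:
        avg = work[t] / (sum(speed) / len(speed))
        memo[t] = avg + max((rank(s) for s in succ[t]), default=0.0)
    return memo[t]

order = sorted(work, key=rank, reverse=True)      # high rank = schedule first
vm_free = [0.0] * len(speed)
finish = {}
for t in order:
    ready = max((finish[p] for p in succ if t in succ[p]), default=0.0)
    # pick the VM giving the earliest finish time for this task
    options = [(max(ready, vm_free[i]) + work[t] / speed[i], i)
               for i in range(len(speed))]
    end, i = min(options)
    vm_free[i], finish[t] = end, end
print(finish)   # per-task completion times; makespan = max(finish.values())
```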
       
  • Design of novel area-efficient coplanar reversible arithmetic and logic
           unit with an energy estimation in quantum-dot cellular automata

      Abstract: The quantum-dot cellular automaton is a new technology that addresses the challenges CMOS technology faces. Quantum-dot cellular automata-based computations run at ultra-high speed with very high device density and low power consumption. Reversible logic design, which quantum-dot cellular automata support, permits fully invertible computation. The arithmetic and logic unit is a major component of all microprocessor-based systems and serves as the heart of the processing device. This paper discusses an area-efficient, coplanar, reversible arithmetic and logic unit in quantum-dot cellular automata technology, built from double Peres and Feynman gates. With a latency of 2.5 clock cycles and a total area of 0.1 μm², the proposed arithmetic and logic unit performs 19 logic and arithmetic operations. QCADesigner and QD-E are used to simulate the proposed design and its energy consumption, respectively. The design's total energy dissipation, as measured by QCADesigner-E, is 5.45e−002 eV, and the average energy dissipation is 4.95e−003 eV. The proposed method offers considerable improvements in latency, the number of operations, and area compared to earlier work.
      PubDate: 2022-08-04
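
      The Feynman (CNOT) and Peres gates mentioned in the abstract have standard Boolean definitions, and reversibility simply means the input-to-output mapping is a bijection. The snippet below encodes those textbook definitions and checks the bijection property; it says nothing about the paper's QCA cell layout, latency, or energy figures.

```python
# Standard Boolean definitions of the Feynman (CNOT) and Peres gates used in
# reversible logic, plus a check that each mapping is a bijection (reversible).
from itertools import product

def feynman(a, b):
    return a, a ^ b                    # (A, A xor B)

def peres(a, b, c):
    return a, a ^ b, (a & b) ^ c       # (A, A xor B, AB xor C)

for gate, width in ((feynman, 2), (peres, 3)):
    outputs = {gate(*bits) for bits in product((0, 1), repeat=width)}
    assert len(outputs) == 2 ** width, f"{gate.__name__} is not reversible"
    print(gate.__name__, "is reversible; e.g.", gate(*([1] * width)))
```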
       
  • Enabling efficient and reliable IoT deployment in 5G and LTE cellular
           areas for optimized service provisioning

      Abstract: Abstract Reliable and efficient delivery of diverse services with different requirements is among the main challenges of IoT systems. The challenges become particularly significant for IoT deployment in larger areas and high-performance services. The low-rate wireless personal area networks, as standard IoT systems, are well suited for a wide range of multi-purpose IoT services. However, their coverage distance and data rate constraints can limit the given IoT applications and restrict the creation of new ones. Accordingly, this work proposes a model that aims to correlate and expand the standard IoT systems from personal to wide areas, thus improving performance in terms of providing fast data processing and distant connectivity for IoT data access. The model develops two IoT systems for these purposes. The first system, 5GIoT, is based on 5G cellular, while the second, LTEIoT, is based on 4G long-term evolution (LTE). The precise assessment requires a reference system, for which the model further includes a standard IoT system. The model is implemented and results are obtained to determine the performance of the systems for diverse IoT use cases. The level of improvement provided by the 5GIoT and LTEIoT systems is determined by comparing them to each other as well as to the standard IoT system to evaluate their advantages and limitations in the IoT domain. The results show the relatively close performance of 5GIoT and LTEIoT systems while they both outperform the standard IoT by offering higher speed and distance coverage.
      PubDate: 2022-08-04
       
  • A deep learning-based framework for accurate identification and crop
           estimation of olive trees

      Abstract: Abstract Over the last several years, olive cultivation has grown throughout the Mediterranean countries. Among them, Spain is the world’s leading producer of olives. Due to its high economic significance, it is in the best interest of these countries to maintain the crop spread and its yield. Manual enumeration of trees over such extensive fields is impractical and humanly infeasible. There are several methods presented in the existing literature; nonetheless, the optimal method is of greater significance. In this paper, we propose an automated method of olive tree detection as well as crop estimation. The proposed approach is a two-step procedure that includes a deep learning-based classification model followed by regression-based crop estimation. During the classification phase, the foreground tree information is extracted using an enhanced segmentation approach, specifically the K-Mean clustering technique, followed by the synthesis of a super-feature vector comprised of statistical and geometric features. Subsequently, these extracted features are utilized to estimate the expected crop yield. Furthermore, the suggested method is validated using satellite images of olive fields obtained from Google Maps. In comparison with existing methods, the proposed method contributed in terms of novelty and accuracy, outperforming the rest by an overall classification accuracy of 98.1% as well as yield estimate with a root mean squared error of 0.185 respectively.
      PubDate: 2022-08-03
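
      The two-step recipe (cluster-based foreground segmentation, then regression on image-level features) can be sketched with scikit-learn on synthetic data, as below. The features, the "canopy cover" proxy, and the yields are all made up; this is not the paper's super-feature vector or its trained models.

```python
# Tiny sketch of K-means foreground segmentation on pixel colours followed by a
# linear yield regression on per-image features. All data here are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))                      # stand-in aerial RGB tile
pixels = image.reshape(-1, 3)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
tree_fraction = labels.mean()                        # crude "canopy cover" feature

# Regress yield on simple per-image features (here: canopy cover + mean green).
X = rng.random((40, 2))                              # features for 40 past fields
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.1, 40)   # synthetic yields
model = LinearRegression().fit(X, y)
print("predicted yield:", model.predict([[tree_fraction, image[..., 1].mean()]]))
```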
       
  • Stable parallel training of Wasserstein conditional generative adversarial
           neural networks

      Abstract: Abstract We propose a stable, parallel approach to train Wasserstein conditional generative adversarial neural networks (W-CGANs) under the constraint of a fixed computational budget. Differently from previous distributed GANs training techniques, our approach avoids inter-process communications, reduces the risk of mode collapse and enhances scalability by using multiple generators, each one of them concurrently trained on a single data label. The use of the Wasserstein metric also reduces the risk of cycling by stabilizing the training of each generator. We illustrate the approach on the CIFAR10, CIFAR100, and ImageNet1k datasets, three standard benchmark image datasets, maintaining the original resolution of the images for each dataset. Performance is assessed in terms of scalability and final accuracy within a limited fixed computational time and computational resources. To measure accuracy, we use the inception score, the Fréchet inception distance, and image quality. An improvement in inception score and Fréchet inception distance is shown in comparison to previous results obtained by performing the parallel approach on deep convolutional conditional generative adversarial neural networks as well as an improvement of image quality of the new images created by the GANs approach. Weak scaling is attained on both datasets using up to 2000 NVIDIA V100 GPUs on the OLCF supercomputer Summit.
      PubDate: 2022-08-03
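
      One of the evaluation metrics mentioned, the Fréchet inception distance, compares two Gaussians fitted to feature activations: FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2(C1 C2)^(1/2)). The sketch below computes it with random vectors standing in for Inception features; it illustrates the metric only, not the parallel W-CGAN training scheme.

```python
# Frechet inception distance between two sets of feature activations.
# Random vectors stand in for Inception features.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    mu1, mu2 = feats_a.mean(axis=0), feats_b.mean(axis=0)
    c1 = np.cov(feats_a, rowvar=False)
    c2 = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):        # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(c1 + c2 - 2.0 * covmean)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 64))
fake = rng.normal(0.3, 1.1, size=(500, 64))
print("FID(real, real-like):", fid(real, rng.normal(0.0, 1.0, size=(500, 64))))
print("FID(real, fake):     ", fid(real, fake))
```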
       
  • A cost and makespan aware scheduling algorithm for dynamic multi-workflow
           in cloud environment

      Abstract: With the development of cloud computing, a growing number of workflows are deployed on cloud platforms that can dynamically provide cloud resources on demand. In clouds, a basic problem is how to schedule workflows so as to minimize execution cost and workflow completion time. Aiming at the problem that the maximum completion time and cost of multiple workflows are too high, this paper proposes a model of dynamic multi-workflow scheduling in the cloud environment and a new scheduling algorithm named MT (multi-workflow scheduling technology). In MT, the heterogeneity of resources is considered when calculating task priorities. The technique for order of preference by similarity to ideal solution (TOPSIS) is then used to rank the resources when selecting resources for tasks. Finally, MT takes the estimated minimum completion time of the workflow and the cost of the task as the two attribute indexes in the TOPSIS decision matrix. It also uses a fixed reference point instead of a computed ideal solution, which keeps the evaluation criteria consistent when the number of resources changes. Simulation experiments verify the effectiveness of the proposed algorithm in reducing the maximum completion time and cost of multiple workflows. Compared with state-of-the-art methods, the maximum completion time and cost can be reduced by up to 17% and 9%, respectively.
      PubDate: 2022-08-02
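
      TOPSIS ranks alternatives by closeness to an ideal point in a weighted, normalised criteria space. The sketch below ranks candidate VMs on two cost-type criteria (estimated completion time and monetary cost) with made-up numbers; note that the paper replaces the computed ideal solution with a fixed reference point, which is not reproduced here.

```python
# Compact TOPSIS ranking of candidate VMs on two cost-type criteria
# (estimated completion time, monetary cost). Numbers are illustrative.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j]=True if larger is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)          # vector normalisation
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)                     # closeness in [0, 1]

vms = np.array([[12.0, 0.40],     # [completion time, cost] per candidate VM
                [ 9.0, 0.55],
                [15.0, 0.25]])
score = topsis(vms, weights=np.array([0.6, 0.4]),
               benefit=np.array([False, False]))
print("closeness:", score, "-> pick VM", int(score.argmax()))
```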
       
  • Enhanced genetic algorithm with some heuristic principles for task graph
           scheduling

      Abstract: Abstract Multiprocessor systems with parallel computing play an important role in data processing. Considering the optimal use of existing computing systems, scheduling on parallel systems has gained great significance. Usually, a sequential program to run on parallel systems is modeled by a task graph. Because scheduling of task graphs onto processors is considered the most crucial NP-complete problem, many attempts have been made to find the most approximate optimal scheduling using genetic algorithms. Its chromosomal representation largely influences the performance of the genetic algorithm. The chromosomal structure used in the existing genetic algorithms does not entirely scan the solution space. As a result, these algorithms fail to produce an appropriate schedule frequently. To overcome this constraint, the present study proposed a new method for constructing chromosomal representation. The proposed approach was divided into three phases: ranking, clustering, and cluster scheduling, where a genetic algorithm schedules clusters. To optimize the proposed genetic algorithm’s performance, it was equipped with four heuristic principles: load balancing, reuse of idle time, task duplication, and critical path. Finally, by comparing the obtained results for 6 task graphs in 3 types, the amount of optimization was equal to the results of previous best algorithm, but in the other 3 types, the amount of optimization was a value between 4.25 and 6.88%.
      PubDate: 2022-08-02
       
  • Graph entropies-graph energies indices for quantifying network structural
           irregularity

      Abstract: Quantifying similarities and dissimilarities among different graph models and studying the irregularity (heterogeneity) of graphs and complex networks are fundamental issues, as well as challenges of scientific and practical importance, in many fields of science and engineering. This paper is motivated by the need to establish novel and efficient entropy-based methods to quantify the structural irregularity properties of graphs, measure structural complexity, and classify and compare complex graphs and networks. In particular, we explore how criteria such as Shannon entropy, von Neumann entropy, and generalized graph entropies, already defined for graphs, can be used to evaluate and measure irregularities in complex graphs and networks. To do so, we use results from graph spectral theory related to the construction of matrices obtained from graphs. We show how to use these entropy-based irregularity indices and demonstrate the usefulness and efficiency of each of these complexity measures on both synthetic networks and real-world data sets.
      PubDate: 2022-08-02
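
      Two simple quantities in the spirit of the indices discussed are the Shannon entropy of a graph's degree distribution and the graph (adjacency) energy, the sum of the absolute adjacency eigenvalues. The sketch below computes both for a regular and an irregular toy graph using networkx; it does not reproduce the paper's generalized entropies or its specific irregularity indices.

```python
# Two simple structure descriptors for a small graph: Shannon entropy of the
# degree distribution and graph (adjacency) energy = sum of |eigenvalues|.
import numpy as np
import networkx as nx

def degree_entropy(g):
    deg = np.array([d for _, d in g.degree()], dtype=float)
    p = deg / deg.sum()                         # degree-based probability
    return float(-(p * np.log2(p)).sum())

def graph_energy(g):
    a = nx.to_numpy_array(g)
    eig = np.linalg.eigvalsh(a)                 # adjacency spectrum (symmetric)
    return float(np.abs(eig).sum())

regular = nx.cycle_graph(8)                     # perfectly regular
star = nx.star_graph(7)                         # highly irregular (hub + leaves)
for name, g in (("cycle", regular), ("star", star)):
    print(name, "entropy:", round(degree_entropy(g), 3),
          "energy:", round(graph_energy(g), 3))
```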
       
  • Decision-making for the anomalies in IIoTs based on 1D convolutional
           neural networks and Dempster–Shafer theory (DS-1DCNN)

      Abstract: The main motivation of the Internet of Things (IoT) is to enable everyday physical objects to sense and process data and communicate with other objects. Its industrial applications are called the Industrial Internet of Things (IIoT), or Industry 4.0. One of the main goals of the IIoT is to automatically monitor and detect unexpected events, changes, and alterations in the collected data. Anomaly detection includes all techniques that identify data patterns deviating from the expected behavior. Deep learning (DL) can search for specific relationships in billions of corporate IoT records and reach meaningful conclusions by analyzing and classifying collected data, leading to the right decisions. The realization of the IoT depends entirely on making proper decisions, yet conventional methods are not adequate for processing voluminous IIoT data; hence, DL is indispensable for drawing the intended inferences from big IIoT data. Likewise, thanks to advances in sensor technology, various sensor sources such as sound, vibration, and current can be used to support such inferences, and decision fusion theory can be used to make optimal decisions when there are multiple sources of information. Therefore, this paper proposes a method that combines one-dimensional convolutional neural networks (1DCNNs) and the Dempster-Shafer (DS) decision-fusion method (DS-1DCNN) for decision-making on IIoT anomalies. According to the simulation results, the proposed method increases decision accuracy and significantly decreases uncertainty. It was compared with long short-term memory, random forest and CNN models and outperformed all of them. On the Mill dataset, the proposed method achieved an average recall of 0.9763 and an average precision of 0.9899, which is an acceptable and reliable result for decision-making.
      PubDate: 2022-08-02
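
      The fusion step is Dempster's rule of combination. The sketch below combines two mass functions over the frame {normal, anomaly}, standing in for the outputs of two per-sensor 1D-CNNs; the masses are made up and the CNN part is omitted.

```python
# Dempster's rule of combination over the frame {normal, anomaly}.
# The two mass functions stand in for per-sensor 1D-CNN outputs.
from itertools import product

def combine(m1, m2):
    """m1, m2: dict mapping frozenset of hypotheses -> mass."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                      # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

N, A = frozenset({"normal"}), frozenset({"anomaly"})
theta = N | A                                      # total ignorance
m_vibration = {A: 0.6, N: 0.2, theta: 0.2}
m_current   = {A: 0.5, N: 0.1, theta: 0.4}
fused = combine(m_vibration, m_current)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```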
       
  • uLog: a software-based approximate logarithmic number system for
           computations on SIMD processors

      Abstract: This paper presents a new number representation based on the logarithmic number system (LNS), called the unsigned logarithmic number system (ulog), as an alternative to the conventional floating-point (FP) number format for use in approximate computing applications. ulog is tailored for software implementation on commercial general-purpose processors and uses the same dynamic range as conventional IEEE Standard FP formats to prevent overflow and underflow. ulog converts FP numbers to fixed-point numbers and uses integer operations for all computations. Moreover, vectorization and approximate logarithmic addition are used to increase the performance of the software implementation of ulog. We then used different BLAS benchmarks to compare the performance of the proposed format with the IEEE standard formats: 16- and 32-bit ulog improve the runtime over double precision by at most 70.26% and 46.36%, respectively. In addition, an accuracy analysis of ulog with different logarithm bases showed that base 4 has the lowest error in most cases.
      PubDate: 2022-08-02
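
      The core LNS idea can be sketched in a few lines: store a positive number as a fixed-point approximation of its base-2 logarithm, so multiplication becomes integer addition and addition needs a logarithmic correction term. The toy code below illustrates that idea only; it is not the paper's ulog bit layout, dynamic-range handling, or vectorized implementation.

```python
# Toy log-domain arithmetic for positive reals: store x as a fixed-point
# approximation of log2(x); multiply = add logs, add = max + log2(1 + 2^-d).
import math

FRAC_BITS = 16                                   # fixed-point fractional bits

def encode(x):            # positive real -> fixed-point log2
    return round(math.log2(x) * (1 << FRAC_BITS))

def decode(l):            # fixed-point log2 -> real
    return 2.0 ** (l / (1 << FRAC_BITS))

def log_mul(la, lb):      # multiplication is exact up to quantisation
    return la + lb

def log_add(la, lb):      # log2(a + b) = max + log2(1 + 2^-(difference))
    hi, lo = max(la, lb), min(la, lb)
    d = (hi - lo) / (1 << FRAC_BITS)
    return hi + encode(1.0 + 2.0 ** (-d))

a, b = 3.7, 12.5
la, lb = encode(a), encode(b)
print("mul:", decode(log_mul(la, lb)), "exact:", a * b)
print("add:", decode(log_add(la, lb)), "exact:", a + b)
```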
       
  • On the performance evaluation of object classification models in low
           altitude aerial data

      Abstract: Abstract This paper compares the classification performance of machine learning classifiers vs. deep learning-based handcrafted models and various pretrained deep networks. The proposed study performs a comprehensive analysis of object classification techniques implemented on low-altitude UAV datasets using various machine and deep learning models. Multiple UAV object classification is performed through widely deployed machine learning-based classifiers such as K nearest neighbor, decision trees, naïve Bayes, random forest, a deep handcrafted model based on convolutional layers, and pretrained deep models. The best result obtained using random forest classifiers on the UAV dataset is 90%. The handcrafted deep model's accuracy score suggests the efficacy of deep models over machine learning-based classifiers in low-altitude aerial images. This model attains 92.48% accuracy, which is a significant improvement over machine learning-based classifiers. Thereafter, we analyze several pretrained deep learning models, such as VGG-D, InceptionV3, DenseNet, Inception-ResNetV4, and Xception. The experimental assessment demonstrates nearly 100% accuracy values using pretrained VGG16- and VGG19-based deep networks. This paper provides a compilation of machine learning-based classifiers and pretrained deep learning models and a comprehensive classification report for the respective performance measures.
      PubDate: 2022-08-01
       
  • A Shapley value-based thermal-efficient workload distribution in
           heterogeneous data centers

      Abstract: Abstract Thermal-aware (TA) task allocation is one of the most effective software-based dynamic thermal management techniques to minimize energy consumption in data centers (DCs). Compared to its counterparts, TA scheduling attains significant gains in energy consumption. However, the existing literature overlooks the heterogeneity of computing elements in terms of thermal constraints while allocating or migrating user jobs, which may significantly affect the reliability of racks and all the equipment therein. Moreover, the workload distribution among these racks/servers is not fair and efficient in terms of thermal footprints; it is potentially beneficial to determine the workload proportion for each computing node (rack/server) based on its marginal contribution in disturbing the thermal uniformity (TU) in a DC environment. To solve the said problems, we model the workload distribution in DCs as a coalition formation game with the Shapley Value (SV) solution concept. Also, we devise Shapley Workload (SW), a TA scheduling scheme based on the SV to optimize the TU and minimize the cooling cost of DCs. Specifically, the scheduling decisions are based on the ambient effect of the neighboring nodes, for the ambient temperature is affected by the following two factors: (1) the current temperature of computing components and (2) the physical organization of computing elements. This results in lower temperature values and better TU, consequently leading to lower cooling costs. Simulation results demonstrate that the proposed strategy greatly reduces the total energy consumption compared to the existing state-of-the-art.
      PubDate: 2022-08-01
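
      The Shapley value of a node is its marginal contribution averaged over all orders in which nodes could join the coalition. The sketch below computes exact Shapley values for a three-rack toy game with a made-up "thermal benefit" characteristic function; it is not the paper's data-center model or scheduling policy.

```python
# Exact Shapley values for a tiny coalition game: average each rack's marginal
# contribution over all join orders. The characteristic function v() is made up.
from itertools import permutations

racks = ("r1", "r2", "r3")

def v(coalition):
    """Made-up 'thermal benefit' of hosting the workload on these racks."""
    base = {"r1": 5.0, "r2": 3.0, "r3": 4.0}
    synergy = 2.0 if {"r1", "r3"} <= set(coalition) else 0.0   # good airflow pair
    return sum(base[r] for r in coalition) + synergy

shapley = {r: 0.0 for r in racks}
orders = list(permutations(racks))
for order in orders:
    joined = []
    for r in order:
        before = v(joined)
        joined.append(r)
        shapley[r] += (v(joined) - before) / len(orders)    # marginal contribution

print(shapley)                                   # shares sum to v(all racks)
print(sum(shapley.values()), "==", v(racks))
```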
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 

