Authors: Jiangting Song, Fujiang Jin, Lichun Zhou Abstract: International Journal of Cooperative Information Systems, Ahead of Print. To address the lengthy modeling time and substantial error involved in simultaneously determining the content of each component of a multi-component system by spectrophotometry, a new determination method is proposed. If the test light is imagined as a one-dimensional optical quantum and the multi-component system as a one-dimensional square potential barrier, then simultaneous determination of each component's content by spectrophotometer becomes the process of a photon tunneling through the potential barrier of the multi-component system. The potential energy matrix of the multi-component system is established and transformed into a similar Jordan matrix. Light-quantum tunneling across the barrier of a multi-component system is then the superposition of tunneling through each individual single-component system, with each component system corresponding to a Jordan block on the diagonal. For a multi-component system, the concentration ratio of the components can be calculated by determining the weight coefficient of each single-molecule system's transmission wave within the total transmission wave using the multiple kernel learning (MKL) approach. A multi-component spectral soft-sensing model is then created as a linear superposition of photon-tunneling single-molecule models. Results show that this technique offers a new theoretical foundation for spectrophotometers in measuring multi-component solution concentrations and removes the need to derive an unbounded number of calibration formulas in production; moreover, the simultaneous determination technique for spectrophotometers based on the Lambert–Beer law turns out to be a special case of this technique. Citation: International Journal of Cooperative Information Systems PubDate: 2024-05-29T07:00:00Z DOI: 10.1142/S0218843024500084
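The abstract states that the classical Lambert–Beer simultaneous determination is a special case of the proposed method. A minimal sketch of that classical special case, assuming a known molar absorptivity matrix measured from single-component standards; all values here are illustrative, not the paper's data:

```python
import numpy as np

# Classical Lambert-Beer multi-component determination (the special case
# the abstract cites). E is the molar absorptivity matrix (wavelengths x
# components), assumed measured from single-component standards.
E = np.array([[0.90, 0.15],   # absorptivity of components 1, 2 at wavelength 1
              [0.20, 0.80],   # ... at wavelength 2
              [0.55, 0.40]])  # ... at wavelength 3
path_length = 1.0             # cuvette path length in cm

# Measured absorbance of the mixture at the three wavelengths
# (synthetic values consistent with concentrations [0.35, 0.45]).
A = np.array([0.3825, 0.43, 0.3725])

# Beer-Lambert: A = (E * l) @ c. Solve for concentrations in the
# least-squares sense (overdetermined when wavelengths > components).
c, *_ = np.linalg.lstsq(E * path_length, A, rcond=None)
print("estimated concentrations:", c.round(3))   # -> [0.35, 0.45]
print("concentration ratio:", (c / c.sum()).round(3))
```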
Authors: P. Sathyaprakash, Poovendran Alagarsundaram, Mohanarangan Veerappermal Devarajan, Ahmed Alkhayyat, Parthasarathy Poovendran, Deevi Radha Rani, V Savitha Abstract: International Journal of Cooperative Information Systems, Ahead of Print. From a Licensed Medical Practitioner's (LMP) perspective, e-healthcare risk prediction plays a vital role in health big data. It is also a pressing issue in e-healthcare because of the lack of security and privacy protections. To overcome this deficiency, this article proposes heterogeneous network systems (HNS), an efficient and privacy-preserving e-healthcare risk prediction method. In contrast to existing work, the proposed HNS accomplishes disease risk prediction in two steps: analysis of the HNS, and a Heterogeneous Network (HetNet) centered on the LMP that analyzes in-hospital care by collecting and interpreting the health big data from the LMP's point of view, which helps patients access hospital services. In the LMP-centric HetNet-powered efficient e-healthcare risk prediction phase, a polygenic score is calculated for risk prediction over the health big data. Through the characteristics of non-predictive and predictive applications, the procedural aspects of the LMP-centric HetNet are analyzed against efficient e-healthcare risk prediction and applied to large-scale medical data integration and clustering for handling health big data. Finally, the LMP-centric HetNet-powered risk prediction treats the LMP perspective efficiently. The proposed system increased prediction accuracy to 45.9%, raised the monogenic score from 3% to 19%, increased the density accuracy range from 13.9% to 39%, and improved execution time from 29.95% to 36.05%; the comprehensive prediction analysis is 73.98% accurate. Citation: International Journal of Cooperative Information Systems PubDate: 2024-05-23T07:00:00Z DOI: 10.1142/S0218843024500126
Authors: Shweta S. Aladakatti, Senthil Kumar Swami Durai Abstract: International Journal of Cooperative Information Systems, Ahead of Print. Nowadays, records in many languages are available in digitized form. For easy retrieval, these documents should be classified according to their content. Text categorization, an area of text mining, addresses this challenge: text classification is the task of assigning classes to documents. This paper investigates text classification work on foreign languages, regional languages and book contents. Text in different languages poses difficulties for NLP approaches. This study shows that supervised ML algorithms such as logistic regression, the Naive Bayes classifier, the k-Nearest-Neighbor classifier, decision trees and SVMs perform well on text classification tasks. Automated document classification is useful in day-to-day life for identifying the language of a text and sorting books by department based on their content. The foreign and regional languages classified here include Tamil, Telugu, Kannada, Bengali, English, Spanish, French, Russian and German. We use one-versus-all SVMs for multi-class classification with 3-fold cross-validation in all cases and observe that SVMs outperform the other classifiers. The implementation uses hybrid classifiers and reports analyses with both soft-margin linear SVMs and kernel-based SVMs. Citation: International Journal of Cooperative Information Systems PubDate: 2024-05-20T07:00:00Z DOI: 10.1142/S0218843023500041
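A minimal sketch of the one-versus-all linear SVM with 3-fold cross-validation the abstract describes, using scikit-learn; the toy multilingual snippets stand in for the paper's (non-public) corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy multilingual snippets and language labels; a real corpus would be
# far larger and include the abstract's Tamil/Telugu/Kannada/... data.
texts = ["the cat sat on the mat", "el gato se sienta en la alfombra",
         "le chat est assis sur le tapis", "die katze sitzt auf der matte",
         "a dog runs in the park", "el perro corre en el parque",
         "le chien court dans le parc", "der hund lauft im park",
         "birds sing in the morning", "los pajaros cantan por la manana",
         "les oiseaux chantent le matin", "die vogel singen am morgen"]
labels = ["en", "es", "fr", "de"] * 3

# Character n-grams work well for language identification; a one-vs-all
# soft-margin linear SVM is trained on top of the TF-IDF features.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
    OneVsRestClassifier(LinearSVC()),
)

# 3-fold cross-validation, as in the abstract.
scores = cross_val_score(clf, texts, labels, cv=3)
print("fold accuracies:", scores, "mean:", scores.mean())
```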
Authors: Hao Liu Abstract: International Journal of Cooperative Information Systems, Ahead of Print. The purpose is to study new applications of artificial intelligence (AI) technology in distribution modes for the agricultural logistics industry. This work summarizes the problems in agricultural logistics by surveying the current logistics distribution modes (LDMs) for agricultural products, and establishes a joint LDM integrating logistics with the agricultural industry chain (AIC). An intelligent virtual center is established for the joint LDM according to the specific situation of Xi'an, and experts are invited to evaluate the proposed LDM. The results show that the existing third-party logistics (3PL), agriculture-supermarket docking, and company+farmer shared LDMs each have advantages and disadvantages. The virtual center for the proposed joint LDM multiplies the weight matrix by the fuzzy evaluation matrix A to obtain a comprehensive fuzzy evaluation result, which is assessed according to the maximum membership criterion. Most experts rate the joint LDM favorably: in the comprehensive fuzzy evaluation, the comprehensive "excellent" score is 0.3755 and the comprehensive "good" score is 0.2678. By the maximum membership principle, the AIC logistics integrating agricultural product logistics performs excellently, and overall satisfaction with its performance exceeds 85%. In addition, Yonghui supermarket has adopted the joint LDM, and its fruit and vegetable prices are lower than under other LDMs. Therefore, the proposed AI-based joint LDM can optimize distribution routes, improve distribution efficiency, and substantially reduce logistics costs. Citation: International Journal of Cooperative Information Systems PubDate: 2024-04-24T07:00:00Z DOI: 10.1142/S0218843024500096
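A minimal sketch of the fuzzy comprehensive evaluation step the abstract describes (weight matrix times fuzzy evaluation matrix, then the maximum membership criterion); the weights and membership values below are illustrative, not the paper's survey data:

```python
import numpy as np

# Fuzzy comprehensive evaluation: B = W . R, then maximum membership.
criteria_weights = np.array([0.4, 0.35, 0.25])   # W: one weight per criterion

# R: each row is one criterion's membership over the grade set
# (excellent, good, fair, poor), e.g. the share of experts giving that grade.
R = np.array([[0.45, 0.30, 0.15, 0.10],
              [0.35, 0.25, 0.25, 0.15],
              [0.30, 0.20, 0.30, 0.20]])

B = criteria_weights @ R                          # comprehensive result
grades = ["excellent", "good", "fair", "poor"]
print("membership vector:", B.round(4))
verdict = grades[int(np.argmax(B))]               # maximum membership criterion
print("overall rating:", verdict)
```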
Authors: Weijian Yang, Han Yang Abstract: International Journal of Cooperative Information Systems, Ahead of Print. Since the 1970s, China's economy has been changing rapidly, especially after the country joined the World Trade Organization; with the growth of multinational enterprises, developing overseas markets has become more convenient, further strengthening China's economic power. The financial industry plays an essential role in a country's development, and commercial banks, the leaders of the financial sector, have made significant contributions to national economic growth. Against this background of substantial economic development, the economic management decisions of commercial banks have become more complex, and the risks they face continue to increase. Meanwhile, the rapid growth of global informatization has effectively boosted development in every industry. Drawing on the intelligent characteristics of big data, this paper analyzes the factors influencing commercial banks' economic management decisions, with the aim of helping commercial banks achieve the financial management ideal of "reducing risk and creating value" and enhancing their competitiveness in the market. Citation: International Journal of Cooperative Information Systems PubDate: 2024-04-12T07:00:00Z DOI: 10.1142/S0218843024500114
Authors: P. Punitha, Lakshmana Kumar, S. Revathi, R. Premalatha, R. S. Aiswarya Abstract: International Journal of Cooperative Information Systems, Ahead of Print. As with IoT devices, cloud computing extends its applicability to other areas that use information, placing it in a common location for computation and analysis. IoT devices need the cloud to store and retrieve data, since they cannot store and process data on their own. Cloud computing offers consumers a variety of services, including IaaS, PaaS, and SaaS. A key disadvantage of cloud computing is that data stored on cloud resources may be accessible to all users associated with the cloud. Public Key Encryption with Keyword Search (PEKS) secures publicly encrypted keys against untrusted third-party search capabilities without disclosing the data's contents. However, PEKS is exposed to a security risk whenever Inside Keyword Guessing Attacks (IKGA) occur, since an unauthorized service can estimate the keywords in the trapdoor. This issue can be addressed with methodologies such as Certificateless Hashed Public Key Authenticated Encryption with Keyword Search (CL-HPAEKS), which uses Modified Elliptic Curve Cryptography (MECC), together with the Mutation-Centred Flower Pollination Algorithm (CM-FPA), which optimizes the keys to improve the algorithm's performance. System security is reinforced by adding a Message Digest 5 (MD5) hashing mechanism. The proposed system achieves a security level of 96% and takes less time to run than earlier encryption methods. Citation: International Journal of Cooperative Information Systems PubDate: 2024-03-18T07:00:00Z DOI: 10.1142/S0218843024500011
Authors: Amani K. Samha, Ghalib H. Alshammri, Sasidhar Attuluri, Preetam Suman, Arvind Yadav Abstract: International Journal of Cooperative Information Systems, Ahead of Print. Distributed denial of service (DDoS) flooding attacks are among the most serious risks to cloud computing security. Their primary objective is to exhaust the resources of the targeted system so that it becomes unavailable to authorized users. Internet attackers conduct DDoS flooding attacks mainly at the application and network levels. Attacks are difficult to detect when the computing infrastructure is multi-mesh geo-distributed, offers many parallel services, and spans a large number of domains, and the situation becomes still more complicated when many independent administrative users consume the services. The main objective of this research is to identify indicators that can be used to detect DDoS flooding attacks. Over the course of our study, we therefore established a composite metric that considers application, system, network, and infrastructure elements as possible indicators of the incidence of DDoS attacks; our research shows such attacks may be triggered by a combination of variables. Simulated traffic is investigated in the cloud, where high traffic may be the result of flooding attacks. The resulting one-of-a-kind intrusion detection system (IDS), named the composite metric-based intrusion detection system (ICMIDS), uses K-Means clustering and a Genetic Algorithm (GA) to detect attempts to flood the cloud environment. ICMIDS employs a multi-threshold algorithmic strategy, a technology created by Cisco, to identify malicious traffic on a cloud-based network; this strategy requires a comprehensive investigation of all factors, which is crucial for keeping cloud-based computing activities running. The system develops, administers, and stores a profile database (Profile DB) that records the composite metric for each virtual machine. A series of tests is compared against the ISCX benchmark dataset and statistical settings; the results indicate that ICMIDS achieves a reasonably high detection rate and the lowest false alarm rate in the majority of situations examined. Citation: International Journal of Cooperative Information Systems PubDate: 2024-03-18T07:00:00Z DOI: 10.1142/S0218843024500035
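A minimal sketch of the K-Means half of the ICMIDS idea under stated assumptions: cluster per-VM composite metrics learned from normal traffic and flag points far from every centroid. The GA-driven tuning is omitted (a simple percentile stands in for the evolved threshold), and the feature columns are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: app latency, CPU load, net packets/s, infra queue depth.
normal = rng.normal([50, 0.4, 1e3, 10], [10, 0.1, 200, 3], size=(500, 4))
attack = rng.normal([200, 0.9, 2e4, 80], [30, 0.05, 3e3, 10], size=(20, 4))
X = StandardScaler().fit_transform(np.vstack([normal, attack]))

# Fit K-Means on the normal profile (a stand-in for the Profile DB).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X[:500])
dist = np.min(km.transform(X), axis=1)        # distance to nearest centroid

threshold = np.percentile(dist[:500], 99)     # stand-in for the GA-tuned threshold
alerts = dist > threshold
print(f"flagged {alerts[500:].sum()}/20 attack points, "
      f"{alerts[:500].sum()}/500 false alarms")
```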
Authors: Zilan Cao, Senyao Hu, Hangyu Cao, Zheng Tao Abstract: International Journal of Cooperative Information Systems, Ahead of Print. As awareness of environmental issues grows, Tesla, a pillar of the global electric vehicle market, has become a hot commodity in recent years, and predicting Tesla's share price is a popular topic in the investment market. This experiment extracts sentiment factors from tweets commenting on Tesla and combines them with Tesla's historical stock prices as the dataset for training, testing and validation. Time series prediction models are built with the LSTM, XGBoost and random forest (RF) algorithms to predict Tesla's stock price, and the prediction effectiveness of the three algorithms is evaluated by the fit and error of the prediction results. The analysis shows that XGBoost has the best fit and the lowest error among the three algorithms, and that the sentiment factor has unique utility as raw data. The experimental results also empirically demonstrate the applicability of sentiment factor analysis and the LSTM, XGBoost and RF algorithms to stock price prediction. Citation: International Journal of Cooperative Information Systems PubDate: 2024-03-18T07:00:00Z DOI: 10.1142/S0218843024500072
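A minimal sketch of the XGBoost variant from the abstract, assuming next-day close is predicted from lagged prices plus a daily tweet-sentiment factor; the price series and sentiment scores are synthetic stand-ins for the Tesla data:

```python
import numpy as np
import xgboost as xgb  # assumes the xgboost package is installed

rng = np.random.default_rng(42)
price = 200 + np.cumsum(rng.normal(0, 2, 600))    # synthetic close prices
sentiment = rng.uniform(-1, 1, 600)               # daily sentiment in [-1, 1]

LAGS = 5
X = np.column_stack(
    [price[i:len(price) - LAGS + i] for i in range(LAGS)] +
    [sentiment[LAGS - 1:-1]]                      # yesterday's sentiment factor
)
y = price[LAGS:]                                  # next-day close

split = int(0.8 * len(y))                         # time-ordered split, no shuffling
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print("test RMSE:", round(float(rmse), 3))
```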
Authors: Yuling Zhang Abstract: International Journal of Cooperative Information Systems, Ahead of Print. With the rapid development of the IoT, low-power wireless networks are becoming increasingly important in engineering applications, and a variety of low-power wireless technologies have been studied and used for different IoT applications. Among them, ZigBee technology is widely used for long-distance wireless data communication networks that require low power consumption, low communication rates and large-capacity self-organizing networks. However, the standard ZigBee module is limited by the large noise figure of its receiver and the small radiated power of its transmitter, and cannot meet the requirements of long-distance communication. In this paper, we combine RF and communication technologies to design a long-range, low-power transceiver based on ZigBee technology for the simulation design and application of long-range wireless data communication. Experimental results show that within a 500 m urban range, the system's packet loss rate stays around a low 0.03 and the average RSSI stays around [math] dBm, which meets general communication requirements and achieves the expected communication effect. At the same packet rate, the model combines the improved AODVjr and Cluster_Tree algorithms, and the packet delivery rate reaches 98.5%, which greatly extends the lifetime of long-distance wireless data communication. Citation: International Journal of Cooperative Information Systems PubDate: 2024-03-15T07:00:00Z DOI: 10.1142/S0218843024500047
Authors: Chiranjit Dutta, R. M. Rani, Amar Jain, I. Poonguzhali, Dipmala Salunke, Ruchi Patel Abstract: International Journal of Cooperative Information Systems, Ahead of Print. Cloud computing has attracted significant attention as businesses outsource computationally intensive tasks to data centers, whose hardware resources consume a great deal of energy and release harmful levels of carbon dioxide. Cloud data centers demand massive amounts of electrical power as modern applications and organizations grow. To prevent resource waste and promote energy efficiency, virtual machines (VMs) must be dispersed over numerous physical machines (PMs) in a cloud data center. Allocating VMs to PMs can involve complex decisions about resource utilization, load balancing, performance requirements, and system constraints, and advanced techniques such as intelligent placement algorithms or dynamic resource allocation may be employed to optimize utilization and distribute VMs efficiently across PMs. Cloud providers aim to lower operational expenses by reducing energy consumption while offering clients competitive services, so minimizing large-scale data center power usage while maintaining quality of service (QoS), especially for social media-based cloud computing systems, is crucial. Consolidating VMs has been highlighted as a promising method for improving resource efficiency and saving energy in data centers. This research provides a deep learning-augmented reinforcement learning (RL)-based, energy-efficient and QoS-aware virtual machine consolidation (VMC) approach to meet these challenges. The proposed deep learning modified reinforcement learning-virtual machine consolidation (DLMRL-VMC) model can motivate both cloud providers and customers to distribute cloud infrastructure resources for high CPU utilization and good energy efficiency, as measured by power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE). DLMRL-VMC offers a VM placement approach based on resource usage and dynamic energy consumption to determine the best-matched host, together with a VM selection strategy, Average Utilization Migration Time (AUMT): based on AUMT, deep learning modified reinforcement learning (DLMRL) chooses a VM with a low average CPU utilization and a short migration time. The DLMRL-VMC energy-efficient resource allocation strategy is evaluated on CloudSim VM traces and attains good PUE and CPU utilization. Citation: International Journal of Cooperative Information Systems PubDate: 2024-03-15T07:00:00Z DOI: 10.1142/S0218843024500059
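A minimal sketch of the AUMT selection rule the abstract describes: from an overloaded host, pick the VM combining low average CPU utilization with short migration time. The scoring formula and migration-time estimate are assumptions; the paper's exact AUMT definition is not given in the abstract:

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    avg_cpu_util: float   # fraction of host CPU, 0..1
    ram_mb: int
    bandwidth_mbps: float

    @property
    def migration_time_s(self) -> float:
        # Rough estimate: time to copy the VM's RAM over the migration link.
        return self.ram_mb * 8 / self.bandwidth_mbps

def aumt_score(vm: VM) -> float:
    # Lower is better: combine utilization and migration time (assumed form).
    return vm.avg_cpu_util * vm.migration_time_s

vms = [VM("vm-a", 0.15, 2048, 1000),
       VM("vm-b", 0.70, 1024, 1000),
       VM("vm-c", 0.10, 8192, 1000)]

candidate = min(vms, key=aumt_score)   # VM chosen for migration
print("migrate:", candidate.name, "score:", round(aumt_score(candidate), 2))
```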
Authors: Amani K. Samha, Ghalib H. Alshammri, Niroj Kumar Pani, Yogesh Misra, Venkata Ratnam Kolluru Abstract: International Journal of Cooperative Information Systems, Ahead of Print. Wireless sensor networks (WSNs) are a powerful support system for the fundamental infrastructure required to monitor physiological and activity parameters. Wearable devices, referred to in the literature as wireless nodes, are used to measure one or more of the user's vital signs. Each wireless node is a tiny device equipped with sufficient storage, power, and transmission capability. Data packets may be lost during transmission over a wireless medium for several reasons, including interference, improper deployment conditions, distance, and inadequate signal strength. This study focuses on monitoring a user's physiological information and postural activity in applications such as home care and hospital care. The WSN is demonstrated with locally developed wireless sensor nodes, which are used to analyze network properties such as received signal strength, transmission offset, packet delivery ratio (PDR), and signal-to-noise interference. The work significantly extends conventional WSN capabilities by implementing alternative communication approaches such as network-coded cooperative communication (NC-CC) and cooperative communication (CC). The presented system can localize the user's approximate position in an indoor setting via triangulation, without any camera network connections. A hospital sensor network is demonstrated that monitors a patient's postural activity and general health in real time, so that the patient receives timely and adequate assistance. NC-CC enables effective sharing of real-time data among the group of on-duty nurses while minimizing network traffic, latency, and throughput overhead. Experimental findings show that the proposed communication method, dynamic retransmit/rebroadcast decision control, is significantly more effective than the network coding approaches currently in use. Citation: International Journal of Cooperative Information Systems PubDate: 2024-03-13T07:00:00Z DOI: 10.1142/S0218843024500060
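A minimal sketch of camera-free indoor localization from received signal strength, as in the abstract: convert RSSI to distance with a log-distance path-loss model, then triangulate against anchor nodes by least squares. The path-loss parameters and anchor layout are illustrative assumptions, not the paper's setup:

```python
import numpy as np

TX_POWER_DBM = -40.0   # assumed RSSI at the 1 m reference distance
PATH_LOSS_N = 2.2      # assumed indoor path-loss exponent

def rssi_to_distance(rssi_dbm: float) -> float:
    # Log-distance path-loss model inverted for distance.
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_N))

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rssi = np.array([-57.2, -55.0, -61.0])          # readings from the 3 anchors
d = np.array([rssi_to_distance(r) for r in rssi])

# Linearize the circle equations by subtracting the first anchor's equation:
# 2 (p_i - p_0) . p = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2
A = 2 * (anchors[1:] - anchors[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", pos.round(2))      # approximate user location
```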
Authors: Tian Xia Abstract: International Journal of Cooperative Information Systems, Ahead of Print. Background noise can influence the outcome of target recognition in the visual communication design of weak-target images, lowering the visual communication effect. To address this, this paper proposes a visual communication design method for weak-target images based on spatiotemporal domain filtering. Guided filtering is used to smooth the image and raise the gray level of weak target points; the background baseline of the target points in the image sequence is then obtained through a partial differential equation, completing the image's spatiotemporal background suppression and yielding the best visual communication design result. The experimental findings demonstrate that the proposed approach has a good visual communication effect and that the communication speed is not constrained by image size. Citation: International Journal of Cooperative Information Systems PubDate: 2024-03-13T07:00:00Z DOI: 10.1142/S0218843024500102
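A minimal sketch of the guided-filter smoothing step described in the abstract, assuming opencv-contrib-python (which provides cv2.ximgproc) is installed; the radius and eps parameters are illustrative, and the PDE-based background-baseline step is not reproduced:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Self-guided filtering: the image serves as its own guide, smoothing the
# background while preserving edges around weak target points.
smoothed = cv2.ximgproc.guidedFilter(guide=img, src=img, radius=8, eps=1e-3)

# Subtracting the smoothed background emphasizes small bright residuals,
# a simple stand-in for the paper's spatiotemporal background suppression.
residual = np.clip(img - smoothed, 0, 1)
cv2.imwrite("residual.png", (residual * 255).astype(np.uint8))
```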
Authors: Jinna Zhang Abstract: International Journal of Cooperative Information Systems, Ahead of Print. With recent progress in information technology, edge computing (EC) for mobile communication systems has been widely used across industries, but the traditional EC mode of mobile communication systems has security problems such as privacy disclosure, malicious tampering, and virus attacks. Computer algorithms have brought new vitality to EC in mobile communication systems. This paper analyzes the application of computer algorithms in the EC mode of mobile communication systems, selecting 20 users as research subjects. It compares the traditional computing mode (such as cloud computing) with a computer algorithm-based EC security design for mobile communication systems, examining the effects of the two modes on security performance, data transmission efficiency, energy consumption, cost savings, and user satisfaction. The experimental results show that the EC mode of mobile communication systems based on computer algorithms achieved an average security level of 84% and an average data transmission time of 4.8 s, with energy consumption of 40%, cost savings of 432,000 yuan, and a user satisfaction score of 13 points, all superior to the traditional EC mode. The EC mode of mobile communication systems using computer algorithms can significantly improve communication security, data transmission speed, cost savings, and user satisfaction while reducing energy consumption; this model has real significance and value for social development. Citation: International Journal of Cooperative Information Systems PubDate: 2024-03-09T08:00:00Z DOI: 10.1142/S0218843024500023
Authors: Emna Benmohamed, Adel Thaljaoui, Salim El Khediri, Suliman Aladhadh, Mansor Alohali Abstract: International Journal of Cooperative Information Systems, Ahead of Print. With the growth in services supplied over the internet, network infrastructure has become more exposed to cyber-attacks, particularly Distributed Denial of Service (DDoS) attacks, which can easily disrupt services. The key factor in fighting these attacks is early separation and detection of the traffic in networks. In this paper, a novel approach named the Half Autoencoder-Stacked DNNs (HAE-SDNN) model is proposed. We suggest using a Stacked Deep Neural Networks (SDNN) model as the deep learning model for detecting DDoS attacks. Our approach selects features from a preprocessed dataset using a Half AutoEncoder (HAE), resulting in a final set of important features. These features are subsequently used to train the DNNs, which are stacked together by applying a softmax layer that combines their outputs. Experiments were performed on a benchmark cybersecurity dataset, named CICDDoS2017, containing various DDoS attack types. The experimental results demonstrate that the introduced model attains an overall accuracy of 99.95%. Moreover, the HAE-SDNN model outperforms existing models, highlighting its superiority in accurately classifying attacks. Citation: International Journal of Cooperative Information Systems PubDate: 2023-10-10T07:00:00Z DOI: 10.1142/S0218843023500259
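A structural sketch of the HAE-SDNN idea under stated assumptions: an autoencoder whose encoder half compresses preprocessed flow features, feeding stacked DNN branches whose outputs are combined through a softmax layer. Layer sizes, the number of branches, and the fusion rule are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

N_FEATURES, N_LATENT, N_CLASSES = 78, 16, 2   # e.g. benign vs DDoS

class HalfAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(),
                                     nn.Linear(32, N_LATENT))
        self.decoder = nn.Sequential(nn.Linear(N_LATENT, 32), nn.ReLU(),
                                     nn.Linear(32, N_FEATURES))
    def forward(self, x):               # trained with MSE reconstruction loss
        return self.decoder(self.encoder(x))

def dnn_branch():
    return nn.Sequential(nn.Linear(N_LATENT, 24), nn.ReLU(),
                         nn.Linear(24, N_CLASSES))

class StackedDNN(nn.Module):
    def __init__(self, n_branches=3):
        super().__init__()
        self.branches = nn.ModuleList(dnn_branch() for _ in range(n_branches))
    def forward(self, z):
        # Average the branch logits, then softmax fuses them into one output.
        logits = torch.stack([b(z) for b in self.branches]).mean(dim=0)
        return torch.softmax(logits, dim=-1)

hae, sdnn = HalfAutoEncoder(), StackedDNN()
x = torch.randn(4, N_FEATURES)          # stand-in for preprocessed flows
with torch.no_grad():
    probs = sdnn(hae.encoder(x))        # encoder half feeds the stacked DNNs
print(probs)                            # rows sum to 1
```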
Authors: Kamal Upreti, Prashant Vats, Aravindan Srinivasan, K. V. Daya Sagar, R. Mahaveerakannan, G. Charles Babu Abstract: International Journal of Cooperative Information Systems, Ahead of Print. When income, assets, sales, and profits are inflated while expenditures, debts, and losses are artificially lowered, the outcome is a set of fraudulent financial statements (FFS). Manual auditing and inspection are time-consuming, inefficient, and expensive ways to spot these false statements, so intelligent methods for analyzing financial declarations are of great assistance to auditors. Victims of financial fraud are now at greater risk than ever, as more individuals conduct their financial transactions on the Internet and frauds grow complex enough to evade the protections banks have put in place. In this paper, we offer a new method for detecting fraud using NLP models: an ensemble model comprising feedforward neural networks (FNNs) and Long Short-Term Memories (LSTMs). The Spotted Hyena Optimizer (SHO), a metaheuristic optimization technique that mimics the group dynamics of spotted hyenas, is used to choose the weights and biases of the LSTM. Mathematical models and discussions of the three fundamental phases of SHO — searching for prey, encircling prey, and attacking prey — are presented. We build a model of the user's spending habits and look for suspicious outliers to identify fraud, using the ensemble mechanism to predict from and make the most of previous transactions. Based on our analysis of real-world data, our model provides superior performance compared to state-of-the-art approaches in a variety of settings, with respect to both precision and recall. Citation: International Journal of Cooperative Information Systems PubDate: 2023-10-06T07:00:00Z DOI: 10.1142/S0218843023500247
Authors: Zeqing Xiao, Hui Ou Abstract: International Journal of Cooperative Information Systems, Ahead of Print. The amount of voltage fault data that can be collected is limited by signal acquisition instruments and simulation software. Generative adversarial networks (GANs) have been successfully applied to data generation tasks, but there is no theoretical basis for selecting the network structure and parameters of their generators and discriminators. Optimal selection is hard to achieve through experience or repeated attempts, making the deployment of GAN computing in practical applications costly and time-consuming. Existing neural network optimization methods are mainly used to compress and accelerate deep neural networks in classification tasks; because the goals and training processes differ, they cannot be directly applied to the data generation task of a GAN. In the data generation scenario considered here, the hidden-layer filter nodes of the initial GAN's generator and discriminator are first grown, and the GAN parameters after this structural adjustment are optimized by particle swarm optimization (PSO); node sensitivity is then analyzed, nodes contributing little to the output are pruned, and the parameters after this second structural adjustment are optimized with PSO once more, yielding a GAN with optimal structure and parameters (GP-PSO-GAN). The results show that GP-PSO-GAN performs well: in simulations generating unidirectional fault data, its generation error is reduced by 70.4% and 15.2% compared with PSO-only parameter optimization (PSO-GAN) and pruning-PSO-GAN (P-PSO-GAN), respectively. The convergence curve shows that GP-PSO-GAN converges well. Citation: International Journal of Cooperative Information Systems PubDate: 2023-09-29T07:00:00Z DOI: 10.1142/S0218843023500235
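A minimal PSO loop of the kind the abstract uses to optimize parameters after growing and pruning; here it minimizes a stand-in quadratic loss over a parameter vector, and plugging in a real GAN's generation error is left as an assumption:

```python
import numpy as np

def loss(theta):                 # placeholder for the GAN's generated-data error
    return np.sum((theta - 0.7) ** 2)

rng = np.random.default_rng(1)
n_particles, dim, iters = 20, 8, 100
w, c1, c2 = 0.72, 1.49, 1.49     # common inertia/acceleration constants

pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Velocity update: inertia + pull toward personal and global bests.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best loss:", float(pbest_val.min()))   # approaches 0 as theta -> 0.7
```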
Authors: Tao Wang, Tianbang Song Abstract: International Journal of Cooperative Information Systems, Ahead of Print. At present, the financial situation of China's supply chain finance remains relatively unstable, and problems persist between supply chain enterprises and banks, such as information asymmetry, insufficient model innovation and high operational risk. On this basis, this paper proposes and constructs a risk control model for financial big data analysis based on a collaborative filtering algorithm. The purpose of this study is to integrate the resources of supply chain enterprises and optimize the logistics, financial and information chains through collaborative filtering-based analysis of financial big data, providing quality services for supply chain enterprises and solid support for solving the financing problems of small and medium-sized enterprises. To verify the feasibility of the model, an experimental analysis is carried out. The experimental results show that the model is scalable and operable, as is the underlying algorithm. Empirical analysis further verifies that the proposed design offers a good recommendation effect in terms of matching degree and user satisfaction, and that it is more practical and feasible than other risk control models. This research has practical significance for the financial management of supply chain enterprises. Citation: International Journal of Cooperative Information Systems PubDate: 2023-09-27T07:00:00Z DOI: 10.1142/S0218843023500223
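A minimal sketch of the collaborative filtering core the abstract builds on: user-based CF with cosine similarity over a toy enterprise-service rating matrix, predicting scores for unused financial services. The data and the recommendation target are invented for illustration:

```python
import numpy as np

R = np.array([[5, 3, 0, 1],     # rows: supply chain enterprises
              [4, 0, 0, 1],     # cols: financial services; 0 = unrated
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def cosine_sim(M):
    norm = np.linalg.norm(M, axis=1, keepdims=True)
    norm[norm == 0] = 1.0
    return (M / norm) @ (M / norm).T

S = cosine_sim(R)
np.fill_diagonal(S, 0.0)        # an enterprise should not recommend to itself

# Predicted rating: similarity-weighted average of other enterprises' ratings.
rated = np.where(R > 0, 1.0, 0.0)              # only rated entries contribute
pred = (S @ R) / np.maximum(S @ rated, 1e-9)

user = 1
unrated = np.where(R[user] == 0)[0]
best = unrated[np.argmax(pred[user, unrated])]
print(f"recommend service {best} to enterprise {user} "
      f"(predicted score {pred[user, best]:.2f})")
```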
Authors: Minquan Wang, Siyang Lu, Sizhe Xiao, Dong Dong Wang, Xiang Wei, Ningning Han, Liqiang Wang Abstract: International Journal of Cooperative Information Systems, Ahead of Print. We consider the problem of real-time log anomaly detection for distributed systems with deep neural networks by unsupervised learning. The problem poses two challenges: detection accuracy and analysis efficacy. To tackle both, we propose GLAD, a simple yet effective approach for mining anomalies in distributed systems. To ensure detection accuracy, we exploit the gradient features of a well-calibrated deep neural network and analyze anomalous patterns within log files. To improve analysis efficacy, we further integrate a one-class support vector machine (SVM) into the anomaly analysis, which significantly reduces the cost of delineating the anomaly decision boundary. This integration addresses both accuracy and efficacy in real-time log anomaly detection, and since the anomaly analysis is based on unsupervised learning, it also significantly reduces the extra data labeling cost. We conduct a series of experiments showing that GLAD has the best comprehensive performance, balancing accuracy and efficiency, which implies its advantage in tackling practical problems. The results also reveal that GLAD enables effective anomaly mining and consistently outperforms state-of-the-art methods on both recall and F1 scores. Citation: International Journal of Cooperative Information Systems PubDate: 2023-09-22T07:00:00Z DOI: 10.1142/S0218843023500181
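A minimal sketch of GLAD's second stage as described in the abstract: fit a one-class SVM on gradient-feature vectors from normal logs so the learned boundary flags anomalies. The 8-dimensional "gradient features" here are synthetic stand-ins for features extracted from a calibrated network:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
normal_feats = rng.normal(0.0, 1.0, size=(400, 8))    # gradients on normal logs
anomalous_feats = rng.normal(4.0, 1.5, size=(15, 8))  # shifted gradient pattern

# Unsupervised: the one-class SVM sees only normal data during fitting.
ocsvm = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale")
ocsvm.fit(normal_feats)

pred = ocsvm.predict(np.vstack([normal_feats, anomalous_feats]))  # +1 / -1
n_false = int((pred[:400] == -1).sum())
n_caught = int((pred[400:] == -1).sum())
print(f"caught {n_caught}/15 anomalies, {n_false}/400 false alarms")
```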
Authors: S. Deepa, A. Umamageswari, S. Neelakandan, Hanumanthu Bhukya, I. V. Sai Lakshmi Haritha, Manjula Shanbhog Abstract: International Journal of Cooperative Information Systems, Ahead of Print. Machine learning (ML) is currently a crucial tool in cyber security. By identifying patterns, mapping cybercrime in real time, and executing in-depth penetration tests, ML can counter cyber threats and strengthen security infrastructure. Security in any organization depends on monitoring and analyzing user actions and behaviors, which are much more challenging to detect than traditional malicious network activity because they frequently avoid security precautions and trigger no alerts or flags. ML is an important and rapidly developing field for anomaly detection, and to protect user security and privacy, a wide range of applications, including various social media platforms, have incorporated cutting-edge techniques to detect anomalies. A social network is a platform where social groups can interact, express themselves, and share pertinent content; it can also encourage deviant behavior through propaganda, unwelcome messages, false information, fake news, rumours, and harmful links. In this research, we introduce a Deep Belief Network (DBN) with Triple DES, a hybrid approach to anomaly detection under unbalanced classification. The results show that the DBN-TDES model can typically detect anomalous user behaviors that other anomaly detection models cannot. Citation: International Journal of Cooperative Information Systems PubDate: 2023-09-20T07:00:00Z DOI: 10.1142/S0218843023500168
Authors: Mingxing Liu Abstract: International Journal of Cooperative Information Systems, Ahead of Print. With the rapid development of computer technology, distributed systems have become an indispensable part of information storage and management. In large-scale data processing, compressing or replaying files while ensuring their integrity is an important issue, and the distributed file system emerged as a new system structure to meet large-scale computing and data storage needs. The purpose of this paper is to study the key technologies of the distributed memory file system for high-performance computers, so as to improve the capability and efficiency of distributed systems. The paper mainly uses experimental and comparative methods to analyze these key technologies. Experimental results show that the maximum bandwidth of DFMS in file memory processing can reach more than 2000, and the value becomes more stable as file size increases. Citation: International Journal of Cooperative Information Systems PubDate: 2023-08-24T07:00:00Z DOI: 10.1142/S0218843023500193
Authors: Swathy Vodithala, Raghuram Bhukya Abstract: International Journal of Cooperative Information Systems, Ahead of Print. In today's digital environment, advances in business intelligence make it difficult to stay competitive and up to date on business trends, and decision-making in the financial industry is increasingly powered by big data and machine learning. A decision-making process is any sequence of steps an individual goes through to select the option or course of action best suited to their needs. Anticipating the onset of a financial crisis is a significant economic problem: a nation's economic development and strength can be gauged by its capacity to accurately assess how many firms fail and how frequently they fail. Recent global crises, such as the COVID-19 pandemic and other environmental, financial, and economic disasters, have ravaged the world's economies and marginalized efforts to build a sustainable economy and society. Historically, numerous strategies have been proposed for constructing an effective financial crisis prediction (FCP) method, but their classification performance, forecast accuracy, and validity are insufficient for practical use; many of the suggested methods work only for particular problems and do not extend beyond a specific dataset. An effective FCP method therefore needs a good prediction model that adapts to several datasets and selects the right features; ML models can likewise be used to classify a company's financial health. This research presents political optimizer-based feature selection (POFS) with an optimal cascaded deep forest (OCDF) for FCP in big data environments. Hadoop MapReduce handles the huge datasets, and POFS reduces computational complexity by handling feature selection; the invention of the POFS algorithm for FCP exemplifies the work's originality. Sunflower optimization (SFO) is used to tune the parameters of the cascaded deep forest (CDF) model. A thorough simulation study on benchmark datasets was performed to evaluate the classification efficiency of the POFS-OCDF technique. The results confirm its superiority over state-of-the-art approaches: the proposed POFS-OCDF technique achieved a sensitivity of 0.912, specificity of 0.953, accuracy of 0.944, F-score of 0.930, and Matthews correlation coefficient (MCC) of 0.912, outperforming other recently developed strategies on a variety of criteria. Citation: International Journal of Cooperative Information Systems PubDate: 2023-08-22T07:00:00Z DOI: 10.1142/S021884302350020X
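A worked check of the evaluation metrics the abstract reports (sensitivity, specificity, accuracy, F-score, MCC), computed from a confusion matrix; the counts below are invented purely to illustrate the formulas, not the paper's results:

```python
import math

tp, fn, tn, fp = 90, 10, 95, 5   # hypothetical confusion-matrix counts

sensitivity = tp / (tp + fn)     # recall / true positive rate
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
f_score = 2 * precision * sensitivity / (precision + sensitivity)
mcc = ((tp * tn - fp * fn) /
       math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))

for name, val in [("sensitivity", sensitivity), ("specificity", specificity),
                  ("accuracy", accuracy), ("F-score", f_score), ("MCC", mcc)]:
    print(f"{name}: {val:.3f}")
```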
Authors: T. Priya, M. Prasanna Abstract: International Journal of Cooperative Information Systems, Ahead of Print. Developing test cases is the most challenging and crucial step in the software testing process. Because there are many testing scenarios and testing effectiveness is often poor, the initial test data must be optimized with a strong optimization technique. Test prioritization is essential for testing developed software products in a production line with a restricted budget of time and money, and prioritizing test-case scenarios for one or more software products requires a good understanding of the trade-off between costs (e.g. time and resources needed) and efficiency (e.g. component coverage). This paper therefore proposes efficient Multi-objective Test Case Generation and Prioritization using an Improved Genetic Algorithm (MTCGP-IGA) in Component-based Software Development (CSD). A random-search-based method for creating and prioritizing multi-objective tests is employed, using multiple cost and efficacy criteria. Specifically, the multi-objective optimization comprises maximizing the Prioritized Range of test cases (PR), Pairwise Coverage of Characteristics (PCC), and Fault-Finding Capability (FFC), while minimizing Total Implementation Cost (TIC). A unique fitness function with cost-effectiveness metrics is constructed for this test prioritization problem. IGA is a robust search technique with excellent benefits and significant efficacy on challenging problems, including large-space, multi-peak, stochastic, and global optimization. Relying on IGA, the paper classifies candidates, computes the objective function, introduces the Nondominated Sorting Genetic Algorithm-II (NSGA-II) method, evaluates each branch's proximity on the handling route, and orders the path set to obtain the best solution. The outcomes demonstrate that the proposed MTCGP-IGA with NSGA-II performed best among the baseline algorithms in prioritizing the test cases (mean value of 195.2), PCC (mean score of 0.7828), and FFC (mean score of 0.8136). Citation: International Journal of Cooperative Information Systems PubDate: 2023-08-17T07:00:00Z DOI: 10.1142/S021884302350017X
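A minimal sketch of the Pareto (non-dominated) sorting at the heart of NSGA-II, applied to test-case prioritization over the abstract's four objectives; the candidate orderings and their scores are invented, with PR, PCC, FFC maximized and TIC minimized:

```python
candidates = {                    # name: (PR, PCC, FFC, TIC)
    "order-1": (190.0, 0.78, 0.80, 120.0),
    "order-2": (195.2, 0.78, 0.81, 140.0),
    "order-3": (180.0, 0.70, 0.75, 100.0),
    "order-4": (170.0, 0.65, 0.70, 150.0),   # dominated by order-1
}

def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    ge = [a[0] >= b[0], a[1] >= b[1], a[2] >= b[2], a[3] <= b[3]]
    gt = [a[0] > b[0], a[1] > b[1], a[2] > b[2], a[3] < b[3]]
    return all(ge) and any(gt)

# The first non-dominated front: orderings no other candidate dominates.
front = [n for n, s in candidates.items()
         if not any(dominates(t, s) for m, t in candidates.items() if m != n)]
print("Pareto-optimal orderings:", front)   # -> order-1, order-2, order-3
```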
Authors: Mustafa Musa Jaber, Salman Yussof, Mohammed Hassan Ali, Sura Khalil Abd, Mustafa Mohammed Jassim, Ahmed Alkhayyat, H. Mubarak Abstract: International Journal of Cooperative Information Systems, Ahead of Print. Nowadays, green IoT-based agriculture plays an essential role in improving farm yields. IoT devices embedded in farming equipment help enhance irrigation and yield at minimum cost. Data security and privacy are major challenges in green IoT-related agriculture, so a secure system should be created to maintain data confidentiality, authentication, integrity, availability, and privacy. The proposed system uses privacy-preserving data aggregation (PPDA) with a fair access framework (FAF) to manage data security. Data aggregation protects the green IoT data from false data injection, while the FAF uses blockchain to grant, obtain, revoke and delegate user access. The developed security system can be adapted to green IoT-based agriculture and provides confidentiality through an enhanced ciphertext access control mechanism. The system resolves the security and privacy issues involved in green IoT-based agriculture, and its effectiveness is evaluated through implementation results. Citation: International Journal of Cooperative Information Systems PubDate: 2023-07-25T07:00:00Z DOI: 10.1142/S0218843022500071
Authors: P. Anbumani, R. Dhanapal, G. K. D. Prasanna Venkatesan Abstract: International Journal of Cooperative Information Systems, Ahead of Print. In today's world, cloud computing, the use of computational resources on demand over the Internet, is widely used in various applications and services. Despite the many advancements in cloud services and applications, various security threats remain, mainly because data are outsourced to third-party-controlled data centers. In this context, this paper introduces a new model called the Attribute-based Advanced Security Model (AASM) for reliable data sharing in the cloud. The model combines an Advanced Encryption Technique (AET) with Attribute-Based Signatures (ABS) to ensure secure data sharing in the cloud while efficiently controlling data access. It enables encrypted access control on the data owner's side with advanced access privileges, and ensures user privacy through an anonymous authentication model using ABS. With these measures, the model provides security for cloud providers and users while safeguarding against malicious attacks. The effectiveness of the proposed model is evaluated in terms of time complexity, security and accountability. Citation: International Journal of Cooperative Information Systems PubDate: 2023-06-30T07:00:00Z DOI: 10.1142/S0218843023500089
Authors: Yanlan Liu Abstract: International Journal of Cooperative Information Systems, Ahead of Print. The main obstacles migrant workers face when returning to their hometowns to start businesses are their own lack of financial literacy and aptitude, insufficient venture capital, and inadequate financing options. This paper analyzes the opportunities and challenges of the rural revitalization environment in conjunction with Internet of Things technology, on which basis a migrant-worker entrepreneurship analysis platform is built. In the data collection module, data crawling technology captures internet data resources related to innovation and entrepreneurship, and the entrepreneurship data is processed with IoT technology. The research suggests that the proposed approach has benefits for analyzing the opportunities and difficulties migrant workers encounter in a rural revitalization context, and it provides some theoretical support for related research. Citation: International Journal of Cooperative Information Systems PubDate: 2023-06-21T07:00:00Z DOI: 10.1142/S021884302350003X
Authors: Tamizharasi Thirugnanam, Mohammad Gouse Galety, Manas Ranjan Pradhan, Ruchi Agrawal, A. Shobanadevi, Saman M. Almufti, R. Lakshmana Kumar Abstract: International Journal of Cooperative Information Systems, Ahead of Print. Maintaining the data of medical cancer rehabilitation healthcare centers is a global challenge with increased mortality risk. Internet of Things (IoT)-based healthcare applications are implemented through sensors and various connected devices. The main problem with this approach is data privacy, the biggest challenge with IoT: all the connected devices transfer data in real time, the integration of multiple protocols can be hacked over the end-to-end connection, and handling such massive data in real time raises further security issues. Recent studies have shown that a more structured risk assessment is needed to secure the data maintenance of medical cancer rehabilitation healthcare centers. Accordingly, a collaborative learning framework, Deep Federated Collaborative Learning (DFCL), is implemented for studying IoT-based data maintenance in such centers, combined with smart short-term Bayesian convolutional network systems for data analysis. The DFCL approach is preferred in this context because it strengthens privacy by allowing sensitive data to be retained locally. Experiments on benchmark datasets demonstrate that the federated model balances fairness, privacy, and accuracy. This paper analyzes administrative data counts by medical stage from 2016 to 2022; the administrative data include data for routine operations and are frequently used for assessment, achieving an accuracy range of 19.8%. The leading diagnoses, taken from patient cost and stay counts, identify a disease, illness, or problem by examining unusual combinations of symptoms, making diagnosis 26% more efficient than the leading diagnosis alone. The hospital dictionary analysis, based on dictionary analysis counts and data visualization summaries, is 50% more accurate than the existing data visualization summary. Compared with the hospital dictionary, home health care analysis shows a 44.5% efficient analysis rate for patient data maintenance, and the adult day-care centers show an 88.6% efficient analysis rate with 750 patients. Citation: International Journal of Cooperative Information Systems PubDate: 2023-06-06T07:00:00Z DOI: 10.1142/S0218843023500053
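A minimal federated-averaging sketch in the spirit of DFCL: each center trains locally and shares only parameter updates, never raw patient records. A linear model with gradient descent stands in for the paper's Bayesian convolutional networks; all data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
true_w = np.array([0.5, -1.2, 2.0])

def local_data(n):
    # Private per-center data; in federated learning this never leaves the center.
    X = rng.normal(size=(n, 3))
    return X, X @ true_w + rng.normal(0, 0.1, n)

centers = [local_data(n) for n in (80, 120, 60)]
global_w = np.zeros(3)

for _ in range(20):                  # communication rounds
    updates, sizes = [], []
    for X, y in centers:
        w = global_w.copy()
        for _ in range(5):           # local epochs of gradient descent
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.1 * grad
        updates.append(w)
        sizes.append(len(y))
    # FedAvg: aggregate by a data-size-weighted mean of the local models.
    global_w = np.average(updates, axis=0, weights=sizes)

print("learned:", global_w.round(3), "target:", true_w)
```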