IEEE Transactions on Information Forensics and Security
Journal Prestige (SJR): 1.274 | Citation Impact (CiteScore): 7 | Number of Followers: 27 | ISSN (Print): 1556-6013 | Published by IEEE
- Gait Attribute Recognition: A New Benchmark for Learning Richer Attributes From Human Gait Patterns
Authors: Xu Song;Saihui Hou;Yan Huang;Chunshui Cao;Xu Liu;Yongzhen Huang;Caifeng Shan;
Pages: 1 - 14
Abstract: Compared to gait recognition, Gait Attribute Recognition (GAR) is a seldom-investigated problem. Yet because gait attributes provide richer and finer semantic descriptions, GAR is an indispensable part of building intelligent gait analysis systems. Nonetheless, the types of attributes considered in existing datasets are very limited. This paper contributes a new benchmark dataset for gait attribute recognition named Multi-Attribute Gait (MA-Gait). MA-Gait contains 95 subjects recorded from 12 camera views, resulting in more than 13,000 sequences labeled with 16 attributes, including six attributes that have never been considered in the literature. Moreover, we propose a Multi-Scale Motion Encoder (MSME) to extract robust motion features and an Attribute-Guided Feature Selection Module (AGFSM) to adaptively capture, for each attribute, the most discriminative features from static appearance features and dynamic motion features. Our method achieves the best GAR accuracy on the new dataset, and comprehensive experiments demonstrate its effectiveness through both quantitative and qualitative evaluations.
PubDate: MON, 25 SEP 2023 09:20:10 -04
Issue No: Vol. 19 (2023)
- Steganography Embedding Cost Learning With Generative Multi-Adversarial Network
Authors: Dongxia Huang;Weiqi Luo;Minglin Liu;Weixuan Tang;Jiwu Huang;
Pages: 15 - 29
Abstract: Since the generative adversarial network (GAN) was proposed by Ian Goodfellow et al. in 2014, it has been widely used in various fields. However, only a few works have applied it to image steganography so far. Existing GAN-based steganographic methods mainly focus on the design of the generator and assign a relatively weak steganalyzer as the discriminator, which inevitably limits the performance of their models. In this paper, we propose a novel steganographic method based on a Generative Multi-Adversarial Network (Steg-GMAN) to enhance steganographic security. Specifically, we first employ multiple steganalyzers, rather than the single steganalyzer used in existing methods, to strengthen the discriminator. Furthermore, to balance the capabilities of the generator and the discriminator during training, we propose an adaptive way to update the parameters of the proposed GAN according to the discriminative ability of the different steganalyzers. In each iteration, we update only the weakest steganalyzer in the discriminator, while updating the generator with gradients derived from the strongest one. In this way, the performance of the generator and the discriminator can be improved gradually, avoiding training failure caused by vanishing gradients. Extensive comparative results show that the proposed method achieves state-of-the-art results compared with traditional steganography and modern GAN-based steganographic methods. In addition, a large number of ablation experiments verify the rationality of the proposed model. (An illustrative code sketch follows this entry.)
PubDate: WED, 27 SEP 2023 09:17:37 -04
Issue No: Vol. 19 (2023)
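The adaptive update rule described in the entry above (train only the weakest steganalyzer each iteration, and update the generator against the strongest) can be pictured with a minimal PyTorch sketch. The toy networks, the embedding simulator, and the loss choices below are assumptions for illustration only, not the Steg-GMAN architecture.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the embedding-cost generator and the steganalyzers.
generator = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
steganalyzers = [nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                               nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                               nn.Linear(4, 2)) for _ in range(3)]
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opts = [torch.optim.Adam(d.parameters(), lr=1e-4) for d in steganalyzers]
ce = nn.CrossEntropyLoss()

def embed(cover, costs):
    # Placeholder for cost-based embedding: perturb the cover according to costs.
    return cover + 0.1 * (costs - 0.5)

for step in range(100):
    cover = torch.rand(8, 1, 32, 32)                       # toy cover batch
    stego = embed(cover, generator(cover))
    x = torch.cat([cover, stego]).detach()
    y = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])

    # Rank steganalyzers by their current detection loss.
    with torch.no_grad():
        losses = [ce(d(x), y) for d in steganalyzers]
    weakest = max(range(3), key=lambda i: losses[i])       # highest loss = poorest detector
    strongest = min(range(3), key=lambda i: losses[i])

    # Update only the weakest steganalyzer in the discriminator ...
    d_opts[weakest].zero_grad()
    ce(steganalyzers[weakest](x), y).backward()
    d_opts[weakest].step()

    # ... and update the generator with gradients from the strongest one.
    g_opt.zero_grad()
    stego = embed(cover, generator(cover))
    adv_loss = -ce(steganalyzers[strongest](torch.cat([cover, stego])), y)
    adv_loss.backward()
    g_opt.step()
```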
- WF-Transformer: Learning Temporal Features for Accurate Anonymous Traffic Identification by Using Transformer Networks
Authors: Qiang Zhou;Liangmin Wang;Huijuan Zhu;Tong Lu;Victor S. Sheng;
Pages: 30 - 43
Abstract: Website Fingerprinting (WF) is a network traffic mining technique for anonymous traffic identification, which enables a local adversary to identify the target website that an anonymous network user is browsing. WF attacks based on deep convolutional neural networks (CNNs) achieve state-of-the-art anonymous traffic classification performance. However, due to the locality of the CNN architecture when extracting features from sequence data, these methods neglect temporal feature extraction in anonymous traffic analysis. In this paper, we present the Website Fingerprinting Transformer (WF-Transformer), a novel anonymous network traffic analysis method that leverages Transformer networks for temporal feature extraction from traffic traces and improves the classification performance on Tor encrypted traffic. The architecture of WF-Transformer is specially designed for traffic trace processing and can classify anonymous traffic effectively. We evaluate the performance of WF-Transformer in both closed-world and open-world scenarios. In the closed-world scenario, WF-Transformer attains 99.1% accuracy on Tor traffic without defenses, better than state-of-the-art attacks, and achieves 92.1% accuracy on traces defended by the WTF-PAD method. In the open-world scenario, WF-Transformer has better precision and recall on both defended and non-defended traces. Furthermore, WF-Transformer with a short input length (2000 cells) outperforms the DF method with a long input length (5000 cells). (An illustrative code sketch follows this entry.)
PubDate: MON, 25 SEP 2023 09:20:10 -04
Issue No: Vol. 19 (2023)
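As a rough illustration of the preceding entry's idea of modeling a trace's temporal structure with self-attention, the sketch below classifies a sequence of Tor cell directions with a small Transformer encoder. The layer sizes, pooling, and direction embedding are assumptions and do not reproduce the actual WF-Transformer architecture.

```python
import torch
import torch.nn as nn

class TinyWFTransformer(nn.Module):
    """Minimal illustration: classify a trace of +/-1 cell directions with a
    Transformer encoder (hyperparameters are placeholders, not the paper's)."""
    def __init__(self, seq_len=2000, d_model=64, n_classes=95):
        super().__init__()
        self.embed = nn.Embedding(3, d_model)            # tokens: 0=incoming, 1=pad, 2=outgoing
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, directions):                        # (B, seq_len) in {-1, 0, +1}
        tokens = (directions + 1).long()                  # map to {0, 1, 2}
        h = self.encoder(self.embed(tokens) + self.pos)   # temporal feature modeling
        return self.head(h.mean(dim=1))                   # pooled sequence representation

logits = TinyWFTransformer()(torch.randint(-1, 2, (4, 2000)))
```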
- BG: A Modular Treatment of BFT Consensus Toward a Unified Theory of BFT Replication
Authors: Xiao Sui;Sisi Duan;Haibin Zhang;
Pages: 44 - 58
Abstract: We provide an expressive framework that allows analyzing and generating provably secure, state-of-the-art Byzantine fault-tolerant (BFT) protocols over a graph of nodes, a notion formalized in the HotStuff protocol. Our framework is hierarchical, comprising three layers. The top layer models the message pattern and abstracts the core functions on which BFT algorithms can be built. The intermediate layer provides the core functions with high-level properties sufficient to prove the security of the top-layer algorithms. The bottom layer presents operational realizations of the core functions. Using our framework, designing a BFT protocol is reduced to instantiating two core functions together with their specific properties. Unlike prior BFT frameworks, our framework can analyze and recast BFT protocols in an exceedingly fine-grained manner. More importantly, our framework can readily generate new BFT protocols. In this paper, we show that the framework allows us to fully specify and formally prove the security of a family of BFT protocols, including known protocols such as HotStuff, Fast-HotStuff, and SBFT. Additionally, we show that our framework can generate four new protocols outperforming existing ones, including 1) two protocols with $5f+1$ replicas achieving optimal message complexity; 2) the first BFT protocol achieving optimal message complexity with $4f+1$ replicas; and 3) a two-phase protocol with $3f+1$ replicas achieving linear authenticator complexity in the fast path.
PubDate: MON, 25 SEP 2023 09:20:07 -04
Issue No: Vol. 19 (2023)
- A Causality-Aligned Structure Rationalization Scheme Against Adversarial Biased Perturbations for Graph Neural Networks
Authors: Ju Jia;Siqi Ma;Yang Liu;Lina Wang;Robert H. Deng;
Pages: 59 - 73
Abstract: Graph neural networks (GNNs) are susceptible to adversarial perturbations and distribution biases, which pose potential security concerns for real-world applications. Current endeavors mainly focus on graph matching, while the subtle relationships between the nodes and structures of graph-structured data remain under-explored. Accordingly, two fundamental challenges arise: 1) the intricate connections among nodes may induce distribution shift of graph samples even under the same scenario, and 2) perturbations of inherent graph-structured representations can introduce spurious shortcuts, leading GNN models to rely on biased data and make unstable predictions. To address these problems, we propose a novel causality-aligned structure rationalization (CASR) scheme that constructs invariant rationales by probing coherent and causal patterns, which helps GNN models make stable and reliable predictions under adversarial biased perturbations. Specifically, the initial graph samples across domains are leveraged to boost the diversity of the datasets and perceive the interaction between shortcuts. Subsequently, causal invariant rationales can be obtained during the interventions. This allows the GNN model to extrapolate risk variations from a single observed environment to multiple unknown environments. Moreover, a query feedback mechanism progressively promotes consistency-driven optimal rationalization by reinforcing real essences and eliminating spurious shortcuts. Extensive experiments demonstrate the effectiveness of our scheme against adversarial biased perturbations from data manipulation attacks and out-of-distribution (OOD) shifts on various graph-structured datasets. Notably, we reveal that capturing distinctive rationales greatly reduces the dependence on shortcut cues and improves the robustness of OOD generalization.
PubDate: MON, 25 SEP 2023 09:20:10 -04
Issue No: Vol. 19 (2023)
- Differentially Private Federated Learning With an Adaptive Noise Mechanism
Authors: Rui Xue;Kaiping Xue;Bin Zhu;Xinyi Luo;Tianwei Zhang;Qibin Sun;Jun Lu;
Pages: 74 - 87
Abstract: Federated Learning (FL) enables multiple distributed clients to collaboratively train a model on their own datasets. To mitigate the potential privacy threats in FL, researchers have proposed the DP-FL strategy, which utilizes differential privacy (DP) to add elaborate noise to the exchanged parameters to hide private information. DP-FL guarantees the privacy of FL at the cost of model performance degradation. To balance the trade-off between model accuracy and security, we propose a differentially private federated learning scheme with an adaptive noise mechanism. This is challenging because the distributed nature of FL makes it difficult to appropriately estimate sensitivity, the concept in DP that determines the scale of the noise. To resolve this, we design a generic method for sensitivity estimation based on local and global historical information. We also provide instantiations on four commonly used optimizers to verify its effectiveness. Experiments on MNIST, FMNIST and CIFAR-10 show that our proposed scheme achieves higher accuracy while keeping a high level of privacy protection compared to prior works. (An illustrative code sketch follows this entry.)
PubDate: MON, 25 SEP 2023 09:20:10 -04
Issue No: Vol. 19 (2023)
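The entry above hinges on estimating DP sensitivity from local and global historical information. The sketch below shows one hedged way such an adaptive Gaussian mechanism could look on the client side; the blending rule, clipping, and constants are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def adaptive_sensitivity(local_norms, global_norms, alpha=0.5):
    # Hypothetical estimator: blend the client's own update-norm history with
    # the history of aggregated global updates (the paper's exact rule differs).
    return alpha * np.median(local_norms) + (1 - alpha) * np.median(global_norms)

def dp_client_update(update, local_norms, global_norms, epsilon, delta):
    s = adaptive_sensitivity(local_norms, global_norms)
    # Clip the update to the estimated sensitivity, then apply the Gaussian
    # mechanism calibrated to that sensitivity.
    clipped = update * min(1.0, s / (np.linalg.norm(update) + 1e-12))
    sigma = s * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

# Toy usage: one client perturbs a flattened model update before uploading.
update = np.random.randn(1000) * 0.01
noisy = dp_client_update(update, local_norms=[0.9, 1.1, 1.0],
                         global_norms=[1.2, 1.0, 0.8], epsilon=1.0, delta=1e-5)
```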
- DeCloak: Enable Secure and Cheap Multi-Party Transactions on Legacy Blockchains by a Minimally Trusted TEE Network
Authors: Qian Ren;Yue Li;Yingjun Wu;Yuchen Wu;Hong Lei;Lei Wang;Bangdao Chen;
Pages: 88 - 103
Abstract: The crucial demand for blockchain privacy and scalability has driven off-chain contract execution frameworks for years. Some have recently extended their capabilities to transition blockchain states by off-chain multi-party computation while ensuring public verifiability, a capability referred to as a Multi-Party Transaction (MPT). However, existing MPT solutions lack at least one of the following properties crucially valued by communities: data availability, financial fairness, delivery fairness, and delivery atomicity. This paper proposes a novel MPT-enabled off-chain contract execution framework, DeCloak. Using TEEs, DeCloak achieves the identified properties with lower gas costs and a weaker assumption. Notably, DeCloak is the first to achieve data availability and the first to achieve all of the above properties simultaneously, while tolerating all-but-one Byzantine parties and TEE executors. Evaluating 10 MPTs in different businesses, DeCloak reduces the gas cost of the state of the art, Cloak, by 65.6%. This efficiency advantage further amplifies as the number of MPT parties increases. Consequently, DeCloak establishes an elevated level of secure and cheap MPT and is the first to demonstrate the feasibility of achieving gas costs comparable to Ethereum transactions while evaluating MPTs.
PubDate: MON, 25 SEP 2023 09:20:07 -04
Issue No: Vol. 19 (2023)
- NTD: Non-Transferability Enabled Deep Learning Backdoor Detection
Authors: Yinshan Li;Hua Ma;Zhi Zhang;Yansong Gao;Alsharif Abuadbba;Minhui Xue;Anmin Fu;Yifeng Zheng;Said F. Al-Sarawi;Derek Abbott;
Pages: 104 - 119
Abstract: To mitigate recent insidious backdoor attacks on deep learning models, advances have been made by the research community. Nonetheless, state-of-the-art defenses are either limited to specific backdoor attacks (i.e., source-agnostic attacks) or non-user-friendly in that machine learning expertise and/or expensive computing resources are required. This work observes that all existing backdoor attacks have an inadvertent and inevitable intrinsic weakness, termed non-transferability: a trigger input hijacks a backdoored model but is not effective in another model that has not been implanted with the same backdoor. With this key observation, we propose non-transferability enabled backdoor detection (NTD) to identify trigger inputs for a model-under-test at run-time. Specifically, our detection allows a potentially backdoored model-under-test to predict a label for an input. It then leverages a feature extractor to extract feature vectors for the input and for a group of samples randomly picked from the predicted class, and compares the similarity between the input and the samples in the feature extractor’s latent space to determine whether the input is a trigger input or a benign one. The feature extractor can be provided by a reputable party or be a free pre-trained model privately obtained from an open platform (e.g., ModelZoo, GitHub, Kaggle) by the user, so our detection requires neither machine learning expertise nor costly computation from the user. Extensive experimental evaluations on four common tasks affirm that our detection scheme has high effectiveness (low false acceptance rate) and usability (low false rejection rate) with low detection latency against different types of backdoor attacks. (An illustrative code sketch follows this entry.)
PubDate: THU, 07 SEP 2023 10:02:19 -04
Issue No: Vol. 19 (2023)
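A minimal sketch of the latent-space similarity test described in the preceding entry, assuming PyTorch models for the classifier and the feature extractor; the cosine-similarity measure and the threshold are illustrative choices rather than the paper's exact statistic.

```python
import torch
import torch.nn.functional as F

def ntd_check(model_under_test, feature_extractor, x, class_samples, tau=0.5):
    """Flag x as a likely trigger input if its latent-space similarity to samples
    of its predicted class is low (the non-transferability heuristic).
    class_samples: mapping label -> tensor of benign samples of that class.
    tau is an arbitrary placeholder threshold."""
    with torch.no_grad():
        pred = model_under_test(x.unsqueeze(0)).argmax(dim=1).item()
        f_x = feature_extractor(x.unsqueeze(0))            # (1, d)
        f_ref = feature_extractor(class_samples[pred])     # (k, d)
        sim = F.cosine_similarity(f_x, f_ref).mean().item()
    return sim < tau   # True -> reject the input as a suspected trigger
```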
- Region Generation and Assessment Network for Occluded Person Re-Identification
Authors: Shuting He;Weihua Chen;Kai Wang;Hao Luo;Fan Wang;Wei Jiang;Henghui Ding;
Pages: 120 - 132
Abstract: Person re-identification (ReID) has played an increasingly crucial role in recent years, with a wide range of applications. Existing ReID methods suffer from the challenges of misalignment and occlusion, which degrade performance dramatically. Most methods tackle such challenges by utilizing external tools to locate body parts or by exploiting matching strategies. Nevertheless, the inevitable domain gap between the datasets used to train the external tools and the ReID datasets, together with the complicated matching process, makes these methods unreliable and sensitive to noise. In this paper, we propose a Region Generation and Assessment Network (RGANet) to effectively and efficiently detect human body regions and highlight the important ones. In the proposed RGANet, we first devise a Region Generation Module (RGM) which utilizes the pre-trained CLIP to locate human body regions using semantic prototypes extracted from text descriptions. A learnable prompt is designed to eliminate the domain gap between CLIP datasets and ReID datasets. Then, to measure the importance of each generated region, we introduce a Region Assessment Module (RAM) that assigns confidence scores to different regions and reduces the negative impact of occluded regions through lower scores. The RAM consists of a discrimination-aware indicator and an invariance-aware indicator, where the former indicates the capability to distinguish between different identities and the latter represents consistency among images of the same class of human body regions. Extensive experimental results on six widely used benchmarks covering three tasks (occluded, partial, and holistic) demonstrate the superiority of RGANet against state-of-the-art methods.
PubDate: MON, 25 SEP 2023 09:20:09 -04
Issue No: Vol. 19 (2023)
- MCRe: A Unified Framework for Handling Malicious Traffic With Noise Labels Based on Multidimensional Constraint Representation
Authors: Qingjun Yuan;Gaopeng Gou;Yanbei Zhu;Yuefei Zhu;Gang Xiong;Yongjuan Wang;
Pages: 133 - 147
Abstract: Due to the limitations of existing annotation methods, label noise is prevalent in realistic malicious traffic datasets, which has a significant impact on the training and evaluation of deep learning-based intrusion detection models. Recently, various methods have been proposed to deal with noisily labeled datasets; they can be roughly divided into two categories: data cleaning and robust training. However, their different processing ideas lead these two types of methods to ignore the information in different components of the dataset, resulting in a cliff-like drop in performance under high-noise conditions. To this end, this study proposes MCRe, a unified framework for handling noisy malicious traffic based on multidimensional constrained representations, which unifies data cleaning and robust training as the approximation of an ideal representation function. According to the properties of the ideal representation function, information-integrity constraints, cluster-separability constraints and core-proximity constraints are defined to drive MCRe toward the ideal representation during iteration. These constraints lead MCRe to learn distributed knowledge at the individual, intra-class, and global levels, thus avoiding irrational domain-knowledge extraction and ensuring strong label-noise robustness of the representation network. We validated MCRe on a dataset that includes 22 types of realistic malicious traffic. Experimental results show that MCRe outperforms state-of-the-art methods in both data cleaning and robust training downstream tasks, achieving an 85% pure-sample rate and 82% classification accuracy even with up to 90% noisy labels. In addition, the generalizability of MCRe was verified on several public datasets. Finally, MCRe also extends well to enhancing other data cleaning and robust training approaches.
PubDate: MON, 25 SEP 2023 09:20:10 -04
Issue No: Vol. 19 (2023)
- Robust Camera Model Identification Over Online Social Network Shared Images via Multi-Scenario Learning
Authors: Haiwei Wu;Jiantao Zhou;Xinyu Zhang;Jinyu Tian;Weiwei Sun;
Pages: 148 - 162
Abstract: Camera model identification (CMI) is widely used in image forensics for tasks such as authenticity determination, copyright protection, and forgery detection. Meanwhile, with the vigorous development of the Internet, online social networks (OSNs) have become the dominant channels for image sharing and transmission. However, the inevitable lossy operations on OSNs, such as compression and post-processing, pose great challenges to existing CMI schemes, as they severely destroy the camera traces left in the images under investigation. In this work, we propose a novel CMI method that is robust against the lossy operations of various OSN platforms. Specifically, we observe that a camera trace extractor can easily be trained on a single degradation scenario (e.g., one specific OSN platform), but with much more difficulty on mixed degradation scenarios (e.g., multiple OSN platforms). Inspired by this observation, we design a new multi-scenario learning (MSL) strategy, enabling us to extract robust camera traces across different OSNs. Furthermore, noticing that smooth image regions incur less distortion from OSN processing and less interference from the image content itself, we propose a SmooThness-Aware Trace Extractor (STATE) that can adaptively extract camera traces according to the smoothness of the input image. The superiority of our method is verified by comparative experiments with four state-of-the-art methods, especially under various OSN transmission scenarios. In particular, for the open-set camera model verification task we surpass the second-best method by 15.30% in AUC on the FODB dataset, while for the closed-set camera model classification task we lead the second-best method by 34.51% in F1 on the SIHDR dataset. The code of our proposed method is available at https://github.com/HighwayWu/CameraTraceOSN.
PubDate: MON, 25 SEP 2023 09:20:09 -04
Issue No: Vol. 19 (2023)
- To Act or Not to Act: An Adversarial Game for Securing Vehicle Platoons
Authors: Guoxin Sun;Tansu Alpcan;Benjamin I. P. Rubinstein;Seyit Camtepe;
Pages: 163 - 177
Abstract: Vehicle platooning systems are vulnerable to malicious attacks that exploit vehicle-to-vehicle (V2V) communication, causing potential instability and increased collision risks. Conventional machine learning (ML) detection methods show promise but can be circumvented by intelligent adversaries. In this paper, we present a novel, end-to-end attack detection and mitigation approach that uniquely combines advances in (adversarial) machine learning, control theory, and game theory. We employ a non-cooperative security game with imperfect information to model complex attack/defense interactions. This aids in making informed decisions regarding detector deployment and attack mitigation, even amidst possibly misleading attack detection reports. We model our control-system reconfiguration attack mitigation approach as a switched system and provide an in-depth stability analysis. Simulations conducted in a sophisticated simulator demonstrate our approach’s potential for real-world online deployment. Our game-based defense formulation significantly improves inter-vehicle distance and defense utilities against both cyber-physical and adversarially-masked attacks, while reducing the distance disturbance caused by ambient traffic by up to 87% compared to baseline defense approaches.
PubDate: THU, 28 SEP 2023 09:17:20 -04
Issue No: Vol. 19 (2023)
- Two-User Gaussian Broadcast Wiretap Channel With Common Message and Feedback: Revisit
Authors: Haoheng Yuan;Yanghe Feng;Chuanchuan Yang;Zhuojun Zhuang;Bin Dai;
Pages: 178 - 193
Abstract: The two-user Gaussian broadcast wiretap channel with common message and feedback (GBC-WTC-CM-F) is revisited. Traditionally, an achievable secrecy rate for this model is obtained by combining Marton’s coding scheme for the two-user broadcast channel (BC) with a secret-key based feedback scheme, where both feedback links are used to transmit secret keys shared between the transceivers. Recently, it has been shown that for the Gaussian wiretap channel with feedback, the Schalkwijk-Kailath (SK) feedback scheme achieves the secrecy capacity. It is then natural to ask: can we do better by applying SK-type schemes to the GBC-WTC-CM-F? In this paper, we answer this question by proposing two kinds of SK-type schemes. First, we propose a hybrid scheme in which one feedback link is used to transmit a secret key and the other is used for SK-type coding; we show that this hybrid scheme may perform better than the existing one in some cases. Next, we show that Ozarow’s extended SK scheme for the two-user Gaussian BC with feedback, where both feedback links are used for SK-type coding, is self-secure (satisfying the perfect weak secrecy constraint by itself) and may perform the best. We further show that Ozarow’s scheme is in fact a secure finite-blocklength coding scheme, and we extend it to the static fading SISO and SIMO cases. Finally, the results of this paper are further illustrated by numerical examples.
PubDate: MON, 25 SEP 2023 09:20:10 -04
Issue No: Vol. 19 (2023)
- Dr. Defender: Proactive Detection of Autopilot Drones Based on CSI
Authors: Jiangyi Deng;Xiaoyu Ji;Beibei Wang;Bin Wang;Wenyuan Xu;
Pages: 194 - 206
Abstract: The market for consumer drones is growing, and drones are becoming ever more pervasive in our lives. However, drones have also brought about severe privacy violations and even safety issues; in particular, drones with cameras can snap pictures or record private videos. Researchers have designed drone detection mechanisms that passively inspect the radio frequency (RF) signal in the communication channel between a drone and its controller. However, passive detection solutions fail when drones are in autopilot mode and receive no control signals from their controllers. In this paper, we seek to detect autopilot drones that transmit no RF signals by developing a proactive detection system named Dr. Defender. To this end, we resort to the Wi-Fi signals prevalent in homes and propose a proactive drone detection mechanism. To facilitate the detection of drones with Wi-Fi, we first study the motion characteristics of drones, including the shifting, moving, and spinning of propellers, which can uniquely represent a drone. Then we investigate the physical-layer information of Wi-Fi signals, i.e., the channel state information (CSI), to reveal specific motions of a drone. Finally, we implement our CSI-based proactive drone detection system, which requires no signal transmission from a drone or its controller. We extensively validate the feasibility and performance of our solution under different distances and directions of drones relative to a window. Results show that Dr. Defender can accurately detect drones 10 meters away.
PubDate: MON, 04 SEP 2023 10:04:21 -04
Issue No: Vol. 19 (2023)
- An Efficient and Dynamic Privacy-Preserving Federated Learning System for Edge Computing
Authors: Xinyu Tang;Cheng Guo;Kim-Kwang Raymond Choo;Yining Liu;
Pages: 207 - 220
Abstract: Federated learning (FL) has been used to enhance privacy protection in edge computing systems. However, attacks on uploaded model gradients may lead to private data leakage, and edge devices frequently joining and leaving will impact system operation. In this paper, we propose a dynamic and flexible federated edge learning (FEL) scheme that can defend against malicious edge servers and edge devices attempting to recover sensitive data, while efficiently managing edge devices. A heterogeneity-aware scheduling strategy is designed to account for the different impacts of heterogeneous edge devices on global model performance. The strategy determines the order of device participation in each round based on the relative contribution level of each online edge device’s model, selecting the device with the highest contribution level first. Numerical experiments show that our system improves test accuracy and running time, and the security analyses show that our scheme meets the security requirements.
PubDate: THU, 28 SEP 2023 09:17:20 -04
Issue No: Vol. 19 (2023)
- Encrypted and Compressed Key-Value Store With Pattern-Analysis Security in Cloud Systems
Authors: Chen Zhang;Yulong Ming;Mingyue Wang;Yu Guo;Xiaohua Jia;
Pages: 221 - 234
Abstract: With increasing concern about data privacy and data explosion, several encrypted and compressed key-value (KV) stores have been proposed. A notable way to combine encryption and compression is to pack KV pairs into packs, and then compress and encrypt each pack separately. Recent research has shown that even if the data is encrypted, adversaries can still use leaked information about data length and access frequency to launch pattern-analysis attacks. To address this problem, some schemes have been proposed to protect the length and frequency distribution of packs. However, existing solutions protect such information at the cost of high storage and bandwidth overhead. In this paper, we propose an encrypted and compressed KV store with pattern-analysis security, which can resist pattern-analysis attacks with minimal overhead. We first devise a secure KV-pair packing scheme, which guarantees pack-length security with bounded storage overhead. Then we propose a $K$-indistinguishable pack frequency smoothing scheme, which protects the distribution of pack frequencies with minimal bandwidth overhead. We formally analyze the security of our design and implement our proposed secure KV storage system on Redis and RocksDB. Performance evaluation results demonstrate that our design minimizes the overhead of achieving pattern-analysis security.
PubDate: THU, 28 SEP 2023 09:17:20 -04
Issue No: Vol. 19 (2023)
- CGFormer: ViT-Based Network for Identifying Computer-Generated Images With Token Labeling
Authors: Weize Quan;Pengfei Deng;Kai Wang;Dong-Ming Yan;
Pages: 235 - 250
Abstract: Advanced graphics rendering techniques and image generation algorithms have significantly improved the visual quality of computer-generated (CG) images, making it more challenging for a forensic detector to distinguish between CG images and natural images (NIs). To identify CG images, human beings often need to inspect and evaluate both the entire image and its local regions. In addition, we observe that the distributions of both near and far patch-wise correlation differ between CG images and NIs. Current mainstream methods adopt CNN-based architectures with the classical cross-entropy loss; however, they have several limitations: 1) weak modeling of long-distance relationships in the image content due to the local receptive field of CNNs; 2) pixel sensitivity due to the convolutional computation; and 3) insufficient supervision due to training losses computed only on the whole image. In this paper, we propose a novel vision transformer (ViT)-based network with token labeling for CG image identification. Our network, called CGFormer, consists of patch embedding, feature modeling, and token prediction. We apply patch embedding to sequence the input image and weaken the pixel sensitivity. Stacked multi-head attention-based transformer blocks are utilized to model the patch-wise relationships and introduce a certain level of adaptability. Besides the conventional classification loss on the class token of the whole image, we additionally introduce a soft cross-entropy loss on the patch tokens to comprehensively exploit the supervision information from local patches. Extensive experiments demonstrate that our method achieves state-of-the-art forensic performance on six publicly available datasets in terms of classification accuracy, generalization, and robustness. Code is available at https://github.com/feipiefei/CGFormer. (An illustrative code sketch follows this entry.)
PubDate: WED, 04 OCT 2023 09:19:10 -04
Issue No: Vol. 19 (2023)
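The token-labeling supervision mentioned in the entry above (a classification loss on the class token plus a soft cross-entropy loss on patch tokens) can be sketched as follows; the loss weighting and the source of the per-patch soft labels are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def token_labeling_loss(cls_logits, patch_logits, image_label, patch_soft_labels, beta=0.5):
    """Combine an image-level loss on the class token with a soft cross-entropy
    loss on patch tokens.
    cls_logits:        (B, 2)     logits from the class token
    patch_logits:      (B, N, 2)  logits from the N patch tokens
    image_label:       (B,)       0 = natural image, 1 = computer-generated
    patch_soft_labels: (B, N, 2)  per-patch soft targets (assumed given)"""
    cls_loss = F.cross_entropy(cls_logits, image_label)
    log_p = F.log_softmax(patch_logits, dim=-1)
    patch_loss = -(patch_soft_labels * log_p).sum(dim=-1).mean()
    return cls_loss + beta * patch_loss

# Toy usage with random tensors standing in for CGFormer outputs.
loss = token_labeling_loss(torch.randn(4, 2), torch.randn(4, 196, 2),
                           torch.randint(0, 2, (4,)),
                           torch.softmax(torch.randn(4, 196, 2), dim=-1))
```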
- A Lightweight Certificateless Multi-User Matchmaking Encryption for Mobile Devices: Enhancing Security and Performance
Authors: Ningbin Yang;Chunming Tang;Debiao He;
Pages: 251 - 264
Abstract: The technology for securely sharing data has grown extensively in recent years, and many users are willing to share their lightweight mobile device data via social networks or the cloud. A novel matchmaking encryption primitive, with strong potential for privacy protection and secure data sharing, was proposed at CRYPTO’19. However, matchmaking encryption still struggles to flexibly realize critical functions, such as one-to-many non-interactive scenarios, freedom from key escrow, stronger security, and lightweight computation and low communication overhead for mobile devices, which impedes its widespread application. To achieve these functions, we present a lightweight certificateless multi-user matchmaking encryption (LC-MUME) scheme for mobile devices, which flexibly enhances security and performance based on standard hardness assumptions and low-cost pairing-free techniques, while avoiding one-by-one encryption for each user. The proposed LC-MUME scheme incurs low computation and communication overhead in a one-to-many non-interactive certificateless cryptosystem. We prove that our scheme achieves indistinguishability under chosen-ciphertext attack (IND-CCA), existential unforgeability under chosen-message attack (EU-CMA), and anonymity-CCA security in the random oracle model. Performance comparison and analysis demonstrate that our LC-MUME scheme outperforms state-of-the-art schemes in efficiency and flexibility, and is therefore a practical solution for resource-constrained mobile devices.
PubDate: WED, 04 OCT 2023 09:19:10 -04
Issue No: Vol. 19 (2023)
- Masked Face Transformer
Authors: Weisong Zhao;Xiangyu Zhu;Kaiwen Guo;Haichao Shi;Xiao-Yu Zhang;Zhen Lei;
Pages: 265 - 279
Abstract: The COVID-19 pandemic made wearing masks mandatory. Existing CNN-based face recognition (FR) systems suffer severe performance degradation because masks occlude vital facial regions. Recently, Vision Transformers have shown promising performance in various vision tasks, at quadratic computation cost. The Swin Transformer first proposed a successive window attention mechanism that allows cross-window connections with greater computational efficiency. Despite its potential, deploying the Swin Transformer for masked face recognition encounters two challenges: 1) the attention range is insufficient to capture locally compatible face regions; 2) masked face recognition can be defined as an occlusion-robust classification task with a known occlusion position, i.e., the position of the mask varies only slightly, which is overlooked yet effective for improving recognition accuracy. To alleviate these problems, we propose a Masked Face Transformer (MFT) with Masked Face-compatible Attention (MFA). The proposed MFA 1) introduces two additional window partition configurations, i.e., row shift and column shift, to enlarge the attention range in Swin with unchanged computation cost, and 2) suppresses the interaction between the masked and non-masked regions to retain their discrepancies. Additionally, as mask occlusion leads to a separation between the masked and non-masked samples of the same identity, we propose to explore the relationship between them with a ClassFormer module to enhance intra-class aggregation. Extensive experiments show that MFT outperforms state-of-the-art masked face recognition methods on both simulated and real masked face testing datasets.
PubDate: FRI, 06 OCT 2023 09:17:50 -04
Issue No: Vol. 19 (2023)
- Backdoor Attack on Deep Learning-Based Medical Image Encryption and Decryption Network
Authors: Yi Ding;Zi Wang;Zhen Qin;Erqiang Zhou;Guobin Zhu;Zhiguang Qin;Kim-Kwang Raymond Choo;
Pages: 280 - 292
Abstract: Medical images often contain sensitive information, and one typical security measure is to encrypt them prior to storage and analysis. A number of solutions, including deep learning-based ones, have been proposed for medical image encryption and decryption. However, our research shows that deep learning-based encryption models can be vulnerable to backdoor attacks. In this paper, a backdoor attack paradigm for encryption and decryption networks is proposed, and corresponding attacks are designed for the encryption and decryption scenarios respectively. To attack the encryption model, a backdoor discriminator is adopted, which is randomly trained together with the normal discriminator to confuse the encryption process. In the decryption scenario, a number of subnetwork parameters are replaced, and the subnetwork is activated when the trigger embedded in the input (encrypted image) is detected, degrading the decryption performance. Considering the model performance degradation due to parameter replacement, model pruning is also adopted to further strengthen the attack. Furthermore, image steganography is adopted to generate an invisible trigger for each image, thereby improving the stealthiness of the backdoor attacks. Our research on designing backdoor attacks for encryption and decryption networks can serve as an attack mode for such networks and provides another research direction for improving the security of such models. This is also one of the earliest works to realize backdoor attacks on deep learning-based medical image encryption and decryption networks in order to evaluate their security. Extensive experimental results show that the proposed method can effectively threaten the security of both the encryption and the decryption networks.
PubDate: THU, 05 OCT 2023 09:16:56 -04
Issue No: Vol. 19 (2023)
- AutoPwn: Artifact-Assisted Heap Exploit Generation for CTF PWN Competitions
Authors: Dandan Xu;Kai Chen;Miaoqian Lin;Chaoyang Lin;Xiaofeng Wang;
Pages: 293 - 306
Abstract: Capture-the-flag (CTF) competitions have become highly successful in security education, and heap corruption is considered one of the most difficult and rewarding challenges due to its complexity and real-world impact. However, developing a heap exploit is a challenging task that often requires significant human involvement to manipulate memory layouts and bypass security checks. To facilitate the exploitation of heap corruption, existing solutions develop automated systems that rely on manually crafted patterns to generate exploits. Such manual patterns tend to be specific, which limits their flexibility to cope with the evolving exploit techniques. To address this limitation, we explore the problem of the automatic summarization of exploit patterns. We leverage an observation that public attack artifacts provide key insights into heap exploits. Based upon this observation, we develop AutoPwn, the first artifact-assisted AEG system that automatically summarizes exploit patterns from artifacts of known heap exploits and uses them to guide the exploitation of new programs. Considering the diversity of programs and exploits, we propose to use a novel Exploitation State Machine (ESM), with generic states and transitions to model the exploit patterns, and then efficiently construct it through combining the dynamic monitoring of exploits and the semantic analysis of their text descriptions. We implement a prototype of AutoPwn and evaluate it on 96 testing CTF binaries. The results show that AutoPwn produces 22 successful exploits and 13 partial exploits, preliminarily demonstrating its efficacy.
PubDate: THU, 05 OCT 2023 09:16:56 -04
Issue No: Vol. 19 (2023)
- Invisible Backdoor Attack With Dynamic Triggers Against Person Re-Identification
Authors: Wenli Sun;Xinyang Jiang;Shuguang Dou;Dongsheng Li;Duoqian Miao;Cheng Deng;Cairong Zhao;
Pages: 307 - 319
Abstract: In recent years, person re-identification (ReID) has progressed rapidly, with wide real-world applications, but it is also susceptible to various forms of attack, including proven vulnerability to adversarial attacks. In this paper, we focus on backdoor attacks on deep ReID models. Existing backdoor attack methods follow an all-to-one or all-to-all attack scenario, where all the target classes in the test set have already been seen in the training set. However, ReID is a much more complex fine-grained open-set recognition problem, where the identities in the test set are not contained in the training set. Thus, previous backdoor attack methods for classification are not applicable to ReID. To address this issue, we propose a novel backdoor attack on deep ReID under a new all-to-unknown scenario, called Dynamic Triggers Invisible Backdoor Attack (DT-IBA). Instead of learning fixed triggers for target classes from the training set, DT-IBA can dynamically generate new triggers for any unknown identity. Specifically, an identity hashing network is proposed to first extract target identity information from a reference image, which is then injected into benign images by image steganography. We extensively validate the effectiveness and stealthiness of the proposed attack on benchmark datasets and evaluate the effectiveness of several defense methods against our attack.
PubDate: FRI, 13 OCT 2023 09:16:48 -04
Issue No: Vol. 19 (2023)
- A Perspective of Using Frequency-Mixing as Entropy in Random Number Generation for Portable Hardware Cybersecurity IP
Authors: Xiangye Wei;Liming Xiu;Yimao Cai;
Pages: 320 - 333
Abstract: A true random number generator (TRNG) is a crucial component in security. In typical TRNGs, entropy comes directly from device noise. In this work, an improved method that uses frequency-mixing as a means of enriching entropy is implemented. A group of electromagnetic waves is mixed to create an irregular waveform that is then sampled to generate a random bitstream. Part of the bitstream is fed back into the system to influence the future frequencies of the source waves, making it a chaotic system. The circuit-level support for this TRNG is the TAF-DPS (Time-Average-Frequency Direct Period Synthesis) technology. It can be implemented digitally, making the TRNG a portable IP. The merits of this TRNG include no need for special devices, no post-processing, freedom from bias, programmable throughput, and a hard-to-recognize spectrum. These features make the TRNG suitable for a large array of applications, particularly for security in cyberspace. The TRNG is validated on a silicon chip in a 180 nm process and also on an FPGA. (An illustrative code sketch follows this entry.)
PubDate: FRI, 06 OCT 2023 09:17:50 -04
Issue No: Vol. 19 (2023)
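The frequency-mixing loop of the preceding entry (mix several waves, sample the mixture, feed the sampled bits back to perturb the source frequencies) is easiest to see in a toy software model. The frequencies, feedback step, and noise level below are arbitrary assumptions; the actual design is a TAF-DPS hardware circuit, not a simulation.

```python
import numpy as np

def frequency_mixing_trng(n_bits, seed_freqs=(1.1e6, 2.3e6, 3.7e6), fs=10e6, seed=0):
    """Toy model: sum several waves, threshold-sample the mixture, and feed the
    sampled bits back to nudge the source frequencies (chaotic feedback)."""
    rng = np.random.default_rng(seed)
    freqs = np.array(seed_freqs, dtype=float)
    bits, t = [], 0.0
    for _ in range(n_bits):
        t += 1.0 / fs
        # Mix the waves, plus a little jitter standing in for device noise.
        mixed = np.sum(np.sin(2 * np.pi * freqs * t)) + 0.01 * rng.standard_normal()
        bit = int(mixed > 0.0)
        bits.append(bit)
        # Feedback: the sampled bit perturbs the future source frequencies.
        freqs *= 1.0 + (0.001 if bit else -0.001)
    return bits

print(frequency_mixing_trng(16))
```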
- Fuzzy Linguistic Knowledge Reasoning-Based Secure Control for Connected Nonlinear Servosystem
Authors: Meng Li;Zheng Pei;Yong Chen;Zhenhai Miao;
Pages: 334 - 343
Abstract: In this paper, the problem of tracking control for connected servosystems with coupling input and false data injection (FDI) attacks is studied, and a fuzzy linguistic knowledge reasoning-based secure control scheme is proposed. Firstly, the dynamic model of connected nonlinear servosystems subject to coupling input and FDI attacks is established. Then, a fuzzy linguistic estimator based on experimental observation and human knowledge is proposed to approximate the nonlinear function. Furthermore, an observer based on the fuzzy linguistic estimator is designed to estimate the system state. Thirdly, to achieve tracking control of connected nonlinear servosystems, a fuzzy linguistic knowledge reasoning-based secure control algorithm is presented. Finally, simulation and experiment results demonstrate the effectiveness of the algorithm.
PubDate: FRI, 13 OCT 2023 09:16:48 -04
Issue No: Vol. 19 (2023)
- Domain Generalization via Aggregation and Separation for Audio Deepfake Detection
Authors: Yuankun Xie;Haonan Cheng;Yutian Wang;Long Ye;
Pages: 344 - 358
Abstract: In this paper, we propose an Aggregation and Separation Domain Generalization (ASDG) method for Audio DeepFake Detection (ADD). Fake speech generated by different methods exhibits varied amplitude and frequency distributions, unlike genuine speech. In addition, the spoofing attacks in training sets may not keep pace with the evolving diversity of real-world deepfake distributions. In light of this, we attempt to learn an ideal feature space that aggregates real speech and separates fake speech, so as to achieve better generalizability when detecting unseen target domains. Specifically, we first propose a feature generator based on Lightweight Convolutional Neural Networks (LCNN), which generates a feature space and categorizes the features as real or fake. Meanwhile, single-side domain adversarial learning is leveraged to make only the real speech from different domains indistinguishable, so that the distribution of real speech is aggregated in the feature space. Furthermore, a triplet loss is adopted to separate the distribution of fake speech while aggregating the distribution of real speech. Finally, to test the generalizability of the model, we train it on three different English datasets and evaluate it under harsh conditions: cross-language and noisy datasets. Extensive experiments show that ASDG outperforms the baseline models in cross-domain tasks and decreases the Equal Error Rate (EER) by up to 39.24% compared to RawNet2, demonstrating that the proposed Aggregation and Separation Domain Generalization method is an effective strategy for improving model generalizability. (An illustrative code sketch follows this entry.)
PubDate: MON, 16 OCT 2023 09:18:10 -04
Issue No: Vol. 19 (2023)
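The two training signals named in the entry above, single-side domain adversarial learning on real speech only and a triplet loss that aggregates real speech while separating fake speech, can be sketched as follows. The anchor/positive/negative pairing, the margin, and the gradient-reversal assumption are illustrative choices, not the exact ASDG objective.

```python
import torch
import torch.nn.functional as F

def asdg_style_losses(feat_real_a, feat_real_b, feat_fake,
                      domain_logits_real, domain_labels_real, margin=1.0):
    """(1) Domain-classification loss computed on real speech only; placed after
    a gradient-reversal layer this acts adversarially, so real features from
    different domains become indistinguishable (aggregation).
    (2) Triplet loss with real utterances as anchor/positive and fake speech as
    the negative (separation)."""
    domain_loss = F.cross_entropy(domain_logits_real, domain_labels_real)
    triplet = F.triplet_margin_loss(feat_real_a, feat_real_b, feat_fake, margin=margin)
    return domain_loss, triplet

# Toy usage with random embeddings and three source domains.
d_loss, t_loss = asdg_style_losses(torch.randn(8, 128), torch.randn(8, 128),
                                   torch.randn(8, 128), torch.randn(8, 3),
                                   torch.randint(0, 3, (8,)))
```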
- Output Positioning to Derive Maximum Entropy From Physical Unclonable Functions
Authors: Saeed Abdolinezhad;Lukas Zimmermann;Axel Sikora;
Pages: 359 - 371
Abstract: Physical unclonable functions (PUFs) are attracting increasing attention in the field of hardware-based security for the Internet of Things (IoT). A PUF, as its name implies, is a physical element with a special and unique inherent characteristic and can act as the security anchor for authentication and cryptographic applications. Since PUF outputs are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In this work, the PUF output positioning (POP) method is proposed, a novel method for grouping PUF outputs in order to maximize the extracted entropy. To achieve this, offset data are introduced as helper data, which relax the constraints on the grouping of PUF outputs, deriving more entropy while reducing the number of secret-key error bits. To implement the method, the key enrollment and key generation algorithms are presented. Based on a theoretical analysis of the achieved entropy, it is proven that POP maximizes the achieved entropy while respecting the constraints imposed to guarantee the reliability of the secret key. Moreover, a detailed security analysis is presented, which shows the resilience of the method against cyber-security attacks. The findings of this work are evaluated by applying the method to a hybrid printed PUF, where it is shown in practice that the proposed method outperforms other existing group-based PUF key generation methods.
PubDate: THU, 28 SEP 2023 09:17:20 -04
Issue No: Vol. 19 (2023)
- Confidence-Calibrated Face and Kinship Verification
Authors: Min Xu;Ximiao Zhang;Xiuzhuang Zhou;
Pages: 372 - 384
Abstract: In this paper, we investigate the problem of prediction confidence in face and kinship verification. Most existing face and kinship verification methods focus on accuracy performance while ignoring confidence estimation for their prediction results. However, confidence estimation is essential for modeling reliability and trustworthiness in such high-risk tasks. To address this, we introduce an effective confidence measure that allows verification models to convert a similarity score into a confidence score for any given face pair. We further propose a confidence-calibrated approach, termed Angular Scaling Calibration (ASC). ASC is easy to implement and can be readily applied to existing verification models without model modifications, yielding accuracy-preserving and confidence-calibrated probabilistic verification models. In addition, we introduce the uncertainty in the calibrated confidence to boost the reliability and trustworthiness of the verification models in the presence of noisy data. To the best of our knowledge, our work presents the first comprehensive confidence-calibrated solution for modern face and kinship verification tasks. We conduct extensive experiments on four widely used face and kinship verification datasets, and the results demonstrate the effectiveness of our proposed approach. Code and models are available at https://github.com/cnulab/ASC.
PubDate: MON, 25 SEP 2023 09:20:10 -04
Issue No: Vol. 19 (2023)
- An Adaptive Method for Camera Attribution Under Complex Radial Distortion Corrections
Authors: Andrea Montibeller;Fernando Pérez-González;
Pages: 385 - 400
Abstract: Radial distortion correction, applied by in-camera or out-camera software/firmware, alters the supporting grid of the image so as to hamper PRNU-based camera attribution. Existing solutions to this problem try to invert or estimate the correction using radial transformations parameterized with few variables in order to restrain the computational load; however, with ever more prevalent complex distortion corrections, their performance is unsatisfactory. In this paper we propose an adaptive algorithm that, by dividing the image into concentric annuli, is able to deal with sophisticated corrections like those applied out-camera by third-party software such as Adobe Lightroom, Photoshop, Gimp and PT-Lens. We also introduce a statistic called the cumulative peak of correlation energy (CPCE) that allows for an efficient early-stopping strategy. Experiments on a large dataset of in-camera and out-camera radially corrected images and on an in-the-wild dataset of smartphone images show that our solution improves the state of the art in terms of both accuracy and computational cost. (An illustrative code sketch follows this entry.)
PubDate: WED, 27 SEP 2023 09:17:37 -04
Issue No: Vol. 19 (2023)
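A hedged sketch of the annulus-by-annulus matching idea from the preceding entry: correlate the PRNU noise residual with the camera fingerprint inside concentric annuli and stop early once enough correlation energy has accumulated. The plain correlation-energy accumulation below merely stands in for the paper's CPCE statistic, the threshold is a placeholder, and the per-annulus search over inverse-distortion parameters is omitted.

```python
import numpy as np

def annulus_mask(shape, r_in, r_out):
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    return (r >= r_in) & (r < r_out)

def annular_prnu_match(residual, fingerprint, n_annuli=8, threshold=50.0):
    """Accumulate a correlation-energy proxy over concentric annuli and stop
    early once it exceeds the (arbitrary) threshold."""
    h, w = residual.shape
    edges = np.linspace(0.0, np.hypot(h, w) / 2.0, n_annuli + 1)
    cumulative = 0.0
    for r_in, r_out in zip(edges[:-1], edges[1:]):
        m = annulus_mask((h, w), r_in, r_out)
        if m.sum() == 0:
            continue
        a, b = residual[m], fingerprint[m]
        rho = np.dot(a - a.mean(), b - b.mean()) / (a.std() * b.std() * a.size + 1e-12)
        cumulative += a.size * rho ** 2
        if cumulative > threshold:            # early stopping
            return True, cumulative
    return False, cumulative
```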
- Constructing New Backbone Networks via Space-Frequency Interactive Convolution for Deepfake Detection
Authors: Zhiqing Guo;Zhenhong Jia;Liejun Wang;Dewang Wang;Gaobo Yang;Nikola Kasabov;
Pages: 401 - 413
Abstract: The serious concerns over the negative impacts of Deepfakes have attracted wide attention in the community of multimedia forensics. Existing detection works achieve deepfake detection by improving traditional backbone networks to capture subtle manipulation traces. However, there has been no attempt to construct new backbone networks with different structures for Deepfake detection by improving the internal feature representation of convolution. In this work, we propose a novel Space-Frequency Interactive Convolution (SFIConv) to efficiently model the manipulation clues left by Deepfakes. To obtain high-frequency features from tampering traces, a Multichannel Constrained Separable Convolution (MCSConv) is designed as a component of the proposed SFIConv, which learns space-frequency features via three stages, namely generation, interaction, and fusion. In addition, SFIConv can replace the vanilla convolution in any backbone network without changing the network structure. Extensive experimental results show that seamlessly equipping SFIConv into the backbone network greatly improves the accuracy of Deepfake detection. In addition, the space-frequency interaction mechanism benefits the capture of common artifact features, thus achieving better results in cross-dataset evaluation. Our code will be available at https://github.com/EricGzq/SFIConv.
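The general idea of pairing a spatial convolution branch with a frequency-domain branch can be sketched in PyTorch as below. This is only an illustration of the space/frequency interaction concept, not the actual SFIConv/MCSConv design; all layer shapes are assumptions.

```python
import torch
import torch.nn as nn

class SpaceFreqConv(nn.Module):
    """Sketch of a two-branch block: a depthwise-separable spatial branch plus a
    branch on the FFT magnitude, fused by a 1x1 convolution (illustrative only)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in),  # depthwise
            nn.Conv2d(c_in, c_out, 1))                         # pointwise
        self.freq = nn.Conv2d(c_in, c_out, 1)
        self.fuse = nn.Conv2d(2 * c_out, c_out, 1)

    def forward(self, x):
        s = self.spatial(x)
        f = self.freq(torch.fft.fft2(x).abs())   # magnitude spectrum as frequency cue
        return self.fuse(torch.cat([s, f], dim=1))

y = SpaceFreqConv(3, 16)(torch.randn(2, 3, 64, 64))   # -> shape (2, 16, 64, 64)
```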
PubDate: MON, 16 OCT 2023 09:18:10 -04
Issue No: Vol. 19, No. null (2023)
-
- Heavy Hitter Identification Over Large-Domain Set-Valued Data With Local
Differential Privacy-
Authors: Youwen Zhu;Yiran Cao;Qiao Xue;Qihui Wu;Yushu Zhang;
Pages: 414 - 426
Abstract: Set-valued data are widely used to represent information in the real world, such as individual daily behaviors, items in shopping carts, and web browsing history. By collecting set-valued data and identifying heavy hitters, service providers (i.e., the collector) can learn the usage preferences of customers (i.e., users), and improve the quality of their services with the learned information. However, the collection of raw data would bring privacy risks to users. Recently, local differential privacy (LDP) has emerged as a rigorous privacy framework for user private data collection. At the same time, many LDP schemes have been designed to identify heavy hitters, but most of them are limited by the large data domain due to the huge computation cost. In this paper, we propose an LDP framework, PemSet, to efficiently identify heavy hitters from set-valued data with a large domain. In PemSet, users mainly focus on the prefix of each item (i.e., the first few bits of the binary expression of each item), and only perturb and report prefixes to reduce computation cost. Sometimes the prefixes of different items are the same, so the reported set-valued data could be a multiset, i.e., a set that may contain multiple copies of the same item. As such, we design four LDP protocols, MOLH, MOLH-S, MPCKV, and MWheel, to estimate the frequencies of items in the multiset setting, and compare their performance under the PemSet framework by experiments. Experimental results demonstrate that MOLH performs the best in the high privacy region, i.e., $\epsilon < 1$ , while MWheel obtains the highest utility when the privacy budget is large, i.e., $\epsilon \geqslant 1$ .
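To make the prefix-reporting idea concrete, the sketch below perturbs only the b-bit prefix of an item with generalized randomized response and lets the collector compute unbiased frequency estimates. This is textbook k-RR over the prefix domain, not the paper's MOLH/MWheel protocols; the bit lengths and epsilon are assumptions.

```python
import math
import random
from collections import Counter

B, b = 20, 4                 # items are B-bit ids; users report only b-bit prefixes
DOMAIN = 2 ** b              # prefix domain size

def prefix(item):            # first b bits of a B-bit item id
    return item >> (B - b)

def krr_perturb(value, eps):
    """Generalized randomized response over the prefix domain (satisfies eps-LDP)."""
    p = math.exp(eps) / (math.exp(eps) + DOMAIN - 1)
    if random.random() < p:
        return value
    other = random.randrange(DOMAIN - 1)
    return other if other < value else other + 1

def estimate(reports, eps):
    """Unbiased frequency estimates for every prefix value."""
    n = len(reports)
    p = math.exp(eps) / (math.exp(eps) + DOMAIN - 1)
    q = 1.0 / (math.exp(eps) + DOMAIN - 1)
    c = Counter(reports)
    return {v: (c.get(v, 0) / n - q) / (p - q) for v in range(DOMAIN)}

# Each user reports the perturbed prefix of one item sampled from its set.
users = [[random.randrange(2 ** B) for _ in range(5)] for _ in range(10000)]
reports = [krr_perturb(prefix(random.choice(s)), eps=1.0) for s in users]
heavy = sorted(estimate(reports, eps=1.0).items(), key=lambda kv: -kv[1])[:5]
print(heavy)
```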
PubDate: MON, 16 OCT 2023 09:18:10 -04
Issue No: Vol. 19, No. null (2023)
-
- Gradient-Leaks: Enabling Black-Box Membership Inference Attacks Against
Machine Learning Models-
Authors: Gaoyang Liu;Tianlong Xu;Rui Zhang;Zixiong Wang;Chen Wang;Ling Liu;
Pages: 427 - 440
Abstract: Machine Learning (ML) techniques have been applied to many real-world applications to perform a wide range of tasks. In practice, ML models are typically deployed as black-box APIs to protect the model owner’s benefits and/or defend against various privacy attacks. In this paper, we present Gradient-Leaks as the first evidence showcasing the possibility of performing membership inference attacks (MIAs) with mere black-box access, which aim to determine whether a data record was utilized to train a given target ML model or not. The key idea of Gradient-Leaks is to construct a local ML model around the given record which locally approximates the target model’s prediction behavior. By extracting the membership information of the given record from the gradient of the substitute local model using an intentionally modified autoencoder, Gradient-Leaks can thus breach the membership privacy of the target model’s training data in an unsupervised manner, without any prior knowledge about the target model’s internals or its training data. Extensive experiments on different types of ML models with real-world datasets have shown that Gradient-Leaks can achieve better performance compared with state-of-the-art attacks.
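The first step of such an attack, fitting a local surrogate around a record with black-box queries and taking its gradient as a membership feature, might look like the sketch below. The autoencoder-based membership decision is not shown; the query interface, neighborhood size, and noise scale are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def local_gradient_feature(query_api, x, n_neighbors=200, sigma=0.05):
    """Fit a local surrogate around record x using only black-box predictions
    and return the surrogate's gradient at x as a membership feature."""
    rng = np.random.default_rng(0)
    X = x + sigma * rng.standard_normal((n_neighbors, x.size))
    y = (query_api(X)[:, 1] > 0.5).astype(int)      # hard labels from the target API
    if y.min() == y.max():                          # degenerate neighbourhood
        return np.zeros_like(x)
    surrogate = LogisticRegression(max_iter=1000).fit(X, y)
    p = surrogate.predict_proba(x[None])[0, 1]
    return surrogate.coef_[0] * p * (1 - p)         # d p / d x for a logistic surrogate

# Hypothetical black-box target: a dummy model standing in for the real API.
rng = np.random.default_rng(1)
w_true = rng.standard_normal(20)
def query_api(X):
    p = 1.0 / (1.0 + np.exp(-X @ w_true))
    return np.column_stack([1 - p, p])

feature = local_gradient_feature(query_api, rng.standard_normal(20))
```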
PubDate: MON, 16 OCT 2023 09:18:10 -04
Issue No: Vol. 19, No. null (2023)
-
- Finger-Knuckle Assisted Slap Fingerprint Identification System for Higher
Security and Convenience-
Authors: Zhenyu Zhou;Ajay Kumar;
Pages: 441 - 454
Abstract: Every day, billions of fingerprint images are captured worldwide through the extensive deployment of slap-fingerprint acquisition devices, serving e-governance programs and bolstering national border security. Several studies from national ID programs, like UIDAI and NIST, have indicated that about 2% of the user population may lack usable fingerprints. Finger knuckle patterns are inherently present during such slap-fingerprint acquisition and can be simultaneously acquired without imposing any additional inconvenience on the users. Leveraging these finger knuckle patterns can not only enable a significant improvement in identification accuracy but also enhance overall protection and facilitate smoother traffic flow. This paper develops the first such finger-knuckle-assisted fingerprint identification system for real-world applications. We systematically develop automated finger knuckle detection and segmentation algorithms, for multiple knuckles and under complex illumination, for such contactless images from the deployed slap fingerprint devices. Currently available algorithms offer limited performance for such images, and therefore this paper proposes a new approach to more accurately match such knuckle images. Our experimental results illustrate a significant performance improvement over existing knuckle matching algorithms, with further gains obtained by incorporating dynamic fusion capabilities. This paper also introduces the first joint finger-knuckle and fingerprint database, from 120 different subjects, in the public domain to advance further research and development efforts needed in this area.
PubDate: MON, 25 SEP 2023 09:20:09 -04
Issue No: Vol. 19, No. null (2023)
-
- Toward a Critical Evaluation of Robustness for Deep Learning Backdoor
Countermeasures-
Authors: Huming Qiu;Hua Ma;Zhi Zhang;Alsharif Abuadbba;Wei Kang;Anmin Fu;Yansong Gao;
Pages: 455 - 468
Abstract: Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed with certain assumptions defined in their respective threat models. However, their robustness is currently inadvertently ignored, which can introduce severe consequences, e.g., a countermeasure can be misused and result in a false implication of backdoor detection. For the first time, we critically examine the robustness of existing backdoor countermeasures. As an initial study, we first identify five potential non-robust failure factors including binary classification, poison rate, model complexity, single-model justification, and hyperparameter sensitivity. As exhaustively examining defenses is infeasible, we instead focus on influential backdoor detection-based countermeasures consisting of model-inspection ones including Neural Cleanse (S&P’19), ABS (CCS’19), and MNTD (S&P’21), and data-inspection ones including SCAn (USENIX SECURITY’21) to examine their failure cases under one or more of these factors. Although these investigated countermeasures claim that they work well under their respective threat models, they have inherent unexplored non-robust cases, which do not even stem from carefully crafted adaptive attacks. We demonstrate how to trivially bypass them, while remaining aligned with their respective threat models, by simply varying the aforementioned factors. In particular, for each defense, formal proofs or empirical studies are used to reveal its non-robust cases where it is not as robust as it claims or expects. This work highlights the necessity of thoroughly evaluating the robustness of backdoor countermeasures to avoid their misleading security implications in unknown non-robust cases.
PubDate: FRI, 13 OCT 2023 09:16:48 -04
Issue No: Vol. 19, No. null (2023)
-
- Hardening Password-Based Credential Databases
-
Authors: Yaqing Song;Chunxiang Xu;Yuan Zhang;Shiyu Li;
Pages: 469 - 484
Abstract: We propose a protection mechanism for password-based credential databases maintained by service providers against leakage, dubbed PCDL. In PCDL, each authentication credential is derived from a user’s password and a salt, where a service provider employs a set of key servers to share the salt in a threshold way. With PCDL, an external adversary cannot derive any information about the underlying passwords from a compromised credential database, even if he can compromise some of the key servers. The most prominent manifestation of PCDL is transparency: integrating PCDL with existing password-based authentication schemes does not require users to perform any additional operation (and thereby does not change users’ interaction patterns), yet enhances the security guarantee significantly. PCDL serves as an independent component only deployed on the service provider side to harden the credential database. As such, PCDL is well compatible with existing password-based authentication schemes. We analyze the security of PCDL and conduct a performance evaluation, which shows that PCDL is secure and efficient.
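A toy sketch of the core idea is shown below: the credential is derived from the password and a salt, and the salt is shared among key servers with Shamir threshold sharing. The field size, hash choice, and credential format are assumptions for illustration, not the paper's concrete protocol.

```python
import hashlib
import secrets

PRIME = 2 ** 127 - 1   # toy field for Shamir sharing; real systems use vetted parameters

def share_salt(salt_int, n, t):
    """Split the salt into n shares, any t of which reconstruct it."""
    coeffs = [salt_int] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over the prime field."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

def credential(password, salt_int):
    """Credential stored by the service provider (hash choice is an assumption)."""
    return hashlib.sha256(password.encode() + salt_int.to_bytes(16, "big")).hexdigest()

shares = share_salt(secrets.randbelow(PRIME), n=5, t=3)   # one share per key server
salt = reconstruct(shares[:3])                            # any 3 servers suffice
print(credential("correct horse battery staple", salt))
```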
PubDate: FRI, 13 OCT 2023 09:16:48 -04
Issue No: Vol. 19, No. null (2023)
-
- BR-HIDF: An Anti-Sparsity and Effective Host Intrusion Detection Framework
Based on Multi-Granularity Feature Extraction-
Authors: Junjiang He;Cong Tang;Wenshan Li;Tao Li;Li Chen;Xiaolong Lan;
Pages: 485 - 499
Abstract: Host-based intrusion detection systems (HIDS) have been widely acknowledged as an effective approach for detecting and mitigating malicious activities. Among various data sources utilized in HIDS, system call traces have gained significant popularity due to their inherent advantage of providing fine-grained information. Nevertheless, conventional feature extraction techniques relying on system calls tend to overlook the issue of high-dimensional sparse feature space. In this paper, we conduct a theoretical analysis to investigate the underlying causes of the sparsity problem. Subsequently, we propose an anti-sparse theory (anti-ST) as a solution to address this issue. Then, we design a multi-granularity feature extraction method (MGFE), which also meets the prerequisite mathematical conditions of the anti-ST. By applying this method, we effectively reduce the size of the feature space and minimize the number of generated features, thus mitigating sparsity. Furthermore, leveraging this approach, we propose a robust and anti-sparsity host intrusion detection framework, known as the MGFE-based Host Intrusion Detection Framework (BR-HIDF). A series of experiments were conducted to evaluate the proposed framework and compare it with the state-of-the-art method. The results demonstrate that our framework achieves impressive accuracy (97.26%), precision (97.62%), recall (96.85%), and F1 score (97.23%) in the intrusion detection task, surpassing existing frameworks. Moreover, the proposed framework significantly reduces the time overhead by 38.80%, exhibiting the highest AUC value of 0.992. Furthermore, we enhance the robustness of the detection system by integrating host-based and network-based detection, which provides greater flexibility in identifying various types of attacks.
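To give a flavor of multi-granularity feature extraction over system-call traces, the sketch below counts n-grams at several window sizes; the paper's MGFE differs in detail and the grain sizes here are assumptions.

```python
from collections import Counter

def multi_granularity_features(trace, grain_sizes=(1, 2, 3)):
    """Count system-call n-grams at several granularities (illustrative only;
    the paper's MGFE and its anti-sparsity conditions are not reproduced)."""
    feats = Counter()
    for n in grain_sizes:
        for i in range(len(trace) - n + 1):
            feats[(n, tuple(trace[i:i + n]))] += 1
    return feats

trace = ["open", "read", "read", "write", "close"]
print(multi_granularity_features(trace))
```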
PubDate: FRI, 13 OCT 2023 09:16:48 -04
Issue No: Vol. 19, No. null (2023)
-
- Overcoming Data Limitations: A Few-Shot Specific Emitter Identification
Method Using Self-Supervised Learning and Adversarial Augmentation-
Authors: Chao Liu;Xue Fu;Yu Wang;Lantu Guo;Yuchao Liu;Yun Lin;Haitao Zhao;Guan Gui;
Pages: 500 - 513
Abstract: Specific emitter identification (SEI) based on radio frequency fingerprinting (RFF) is a physical layer authentication method in the field of wireless network security. RFFs are unique features embedded in the electromagnetic waves, which come from hardware imperfections in the wireless devices. Deep learning has been applied to many SEI tasks due to its powerful feature extraction capabilities. However, the success of most methods hinges on massive and labeled samples, and few methods focus on a realistic scenario, where few samples are available and labeled. In this paper, to overcome data limitations, we propose a few-shot SEI (FS-SEI) method based on self-supervised learning and adversarial augmentation (SA2SEI). Specifically, to overcome the limitation of label dependence for the auxiliary dataset, a novel adversarial augmentation (Adv-Aug)-powered self-supervised learning scheme is designed to pre-train an RFF extractor using an unlabeled auxiliary dataset. Subsequently, to overcome the limitation of sample dependence, knowledge transfer is introduced to fine-tune the extractor and a classifier with a target dataset including few samples (5-30 samples per emitter in this paper) and the corresponding labels. In addition, the auxiliary dataset and the target dataset come from different emitters. An open-source large-scale real-world automatic-dependent surveillance-broadcast (ADS-B) dataset and a Wi-Fi dataset are used to evaluate the proposed SA2SEI method. The simulation results show that the proposed method can extract more discriminative RFF features and obtain higher identification performance in FS-SEI. Specifically, when there are only 5 samples per Wi-Fi device, it can achieve $83.40\%$ identification accuracy, of which a $38.63\%$ identification accuracy improvement comes from the Adv-Aug pre-training process. The codes are available at https://github.com/LIUC-000/SA2SEI.
PubDate: FRI, 13 OCT 2023 09:16:48 -04
Issue No: Vol. 19, No. null (2023)
-
- On the Instability of Softmax Attention-Based Deep Learning Models in
Side-Channel Analysis-
Authors: Suvadeep Hajra;Manaar Alam;Sayandeep Saha;Stjepan Picek;Debdeep Mukhopadhyay;
Pages: 514 - 528
Abstract: In side-channel analysis (SCA), Points-of-Interest (PoIs), i.e., the informative sample points remain sparsely scattered across the whole side-channel trace. Several works in the SCA literature have demonstrated that the attack efficacy could be significantly improved by combining information from the sparsely occurring PoIs. In Deep Learning (DL), a common approach for combining the information from the sparsely occurring PoIs is softmax attention. This work studies the training instability of the softmax attention-based CNN models on long traces. We show that the softmax attention-based CNN model incurs an unstable training problem when applied to longer traces (e.g., traces having a length greater than $10K$ sample points). We also explore the use of batch normalization and multi-head softmax attention to make the CNN models stable. Our results show that the use of a large number of batch normalization layers and/or multi-head softmax attention (replacing the vanilla softmax attention) can make the models significantly more stable, resulting in better attack efficacy. Moreover, we found our models to achieve similar or better results (up to 85% reduction in the minimum number of the required traces to reach the guessing entropy 1) than the state-of-the-art results on several synchronized and desynchronized datasets. Finally, by plotting the loss surface of the DL models, we demonstrate that using multi-head softmax attention instead of vanilla softmax attention in the CNN models can make the loss surface significantly smoother.
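To make the multi-head softmax attention idea concrete, here is a small PyTorch module that pools per-sample features of a long trace with several attention heads. The layer sizes and head count are assumptions and this is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiHeadAttentionPool(nn.Module):
    """Pools a long trace of per-sample features into one vector using several
    softmax attention heads (a sketch, not the paper's model)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.score = nn.Linear(dim, heads)        # one attention logit per head
        self.out = nn.Linear(dim * heads, dim)

    def forward(self, x):                         # x: (batch, length, dim)
        w = torch.softmax(self.score(x), dim=1)   # attention weights over the length axis
        pooled = torch.einsum("blh,bld->bhd", w, x)
        return self.out(pooled.flatten(1))

feats = torch.randn(8, 20000, 32)                 # features of a 20k-sample trace
print(MultiHeadAttentionPool(32)(feats).shape)    # -> torch.Size([8, 32])
```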
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- Reversible Contrast Enhancement by Histogram Specification and Very Low
Distortion Data Hiding-
Authors: Dinu Coltuc;Henri George Coanda;
Pages: 529 - 539
Abstract: This paper deals with reversible contrast enhancement (RCE). Image enhancement is achieved by histogram specification, the most popular contrast enhancement technique. A low-bitrate procedure for inverting histogram specification is developed. The data for original image recovery is reversibly embedded into the contrast-enhanced version. Very low distortion RDH schemes that exploit the sparse histogram of the contrast-enhanced images are also proposed. Their major novelties are: embedding only into the histogram bins that provide room, without any pixel shifting or overflow/underflow control, and encoding the embedded data sequence with fewer “1” bits. The newly proposed RDH can exceed 1 bpp at average square errors per embedded bit between 0.375 and 0.5. It should be stressed that reversibility is obtained at the distortion level of least-significant-bit substitution algorithms. Experimental results are presented. With the proposed RCE-HS, reversibility is obtained without any visual distortion.
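The forward step of histogram specification amounts to a monotone grey-level look-up table built from the source and target CDFs, as in the minimal numpy sketch below; the reversible embedding of the recovery data is not shown.

```python
import numpy as np

def histogram_specification(img, target_hist):
    """Map grey levels of `img` so its histogram approximates `target_hist`
    (256 bins, need not be normalised). Returns the enhanced image and the
    look-up table, which an inversion procedure would need to store."""
    src_hist = np.bincount(img.ravel(), minlength=256).astype(float)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    tgt_cdf = np.cumsum(np.asarray(target_hist, dtype=float))
    tgt_cdf /= tgt_cdf[-1]
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[img], lut

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
enhanced, lut = histogram_specification(img, np.ones(256))  # flat target = equalisation
```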
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- GAZETA: GAme-Theoretic ZEro-Trust Authentication for Defense Against
Lateral Movement in 5G IoT Networks-
Authors: Yunfei Ge;Quanyan Zhu;
Pages: 540 - 554
Abstract: The increasing connectivity in 5G Internet of Things networks has enlarged the attack surface and made traditional security defenses inadequate against sophisticated attackers, who can move laterally from node to node with stored credentials once they build a foothold in the network. There is a need to shift from perimeter-based defense to a zero-trust security framework that focuses on agent-centric trust evaluation and access policies to identify malicious attackers, and proactively delay their lateral movement while ensuring system performance. In this work, we propose a GAme-theoretic ZEro-Trust Authentication framework, known as GAZETA, to design interdependent trust evaluation and authentication policies using dynamic game models. The stealthy and dynamic behaviors of the agent are captured by a Markov game with one-sided incomplete information. We provide a quantitative trust evaluation mechanism for the agent and update the trust score continuously based on observations. The analysis of the equilibrium not only provides a way to quantitatively assess the security posture of the network but also enables a formal method to design zero-trust authentication policies. We propose a moving-horizon computational method to enable online decisions and rapid responses to environmental changes. This online computation also enables a dynamic trust evaluation that integrates multiple sources of security evidence. We use a case study to illustrate the resilience, robustness, and efficiency of the proposed zero-trust approach.
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- Dynamic Analysis of Multiplex Networks With Hybrid Maintenance Strategies
-
Authors: Baihao Peng;Junfeng Liu;Jun Zeng;
Pages: 555 - 570
Abstract: The advent of smart terminals and the IoT era has prompted the emergence of multiplex network systems. With the rapid information transmission in multiplex networks, security incidents caused by malicious attackers occur frequently. In light of the foregoing, this paper concentrates on modeling and analyzing the propagation process in multiplex networks. In particular, this paper proposes an epidemic-based RCMO (Running-Confined-Malfunctioned-Overhauled) model after considering the operating state of equipment and a class of visibility-aware malware. After that, the stability analysis of the equilibria is performed to verify the effectiveness of the propagation threshold obtained in RCMO. Then, we introduce static controls, including unique control, target control, acquaintance control, and pulse control, as well as dynamic controls in continuous-time and pulsed forms, into the suppression strategies for preventing the spread of malware. Furthermore, the control strategies are cross-combined into $6\times 6$ hybrid maintenance strategies (HMS), and the simulation analysis is performed from three perspectives: evolution of state variables, accumulation of revenue, and change of controls. We discovered through experimental results that the optimal HMS to inhibit the propagation of malware and the HMS with the highest revenue are always different under various network topologies, but they are all hybrid combinations of continuous-time and pulse controls. To some extent, dynamic control can reduce discrepancies between the HMS and generate approximate returns. Finally, we propose a few cyber defense recommendations for network administrators.
PubDate: FRI, 13 OCT 2023 09:16:48 -04
Issue No: Vol. 19, No. null (2023)
-
- PRIMϵ: Novel Privacy-Preservation Model With Pattern Mining and
Genetic Algorithm-
Authors: Sheema Madhusudhanan;Arun Cyril Jose;Jayakrushna Sahoo;Reza Malekian;
Pages: 571 - 585
Abstract: This paper proposes a novel agglomerated privacy-preservation model integrated with data mining and an evolutionary Genetic Algorithm (GA). Privacy-pReservIng with Minimum Epsilon (PRIM $\epsilon $ ) delivers a minimum privacy budget ( $\epsilon $ ) value to protect personal or sensitive data during data mining and publication. In this work, the proposed Pattern identification in the Locale of Users with Mining (PLUM) algorithm identifies frequent patterns from datasets containing users’ sensitive data. $\epsilon $ -allocation by Differential Privacy (DP) is achieved in PRIM $\epsilon $ with GA $_{\textbf {PRIM$\epsilon $}}$ , yielding a quantitative measure of privacy loss ( $\epsilon $ ) ranging from 0.0001 to 0.045. The proposed model maintains the trade-off between privacy and data utility with an average relative error of 0.109 on numerical data and an Earth Mover’s Distance (EMD) metric in the range between [0.2,1.3] on textual data. The PRIM $\epsilon $ model is verified with Probabilistic Computational Tree Logic (PCTL) and proved to accept DP data only when $\epsilon \le 0.5$ . The work demonstrates the resilience of the model against background knowledge, membership inference, reconstruction, and privacy budget attacks. PRIM $\epsilon $ is compared with existing DP techniques and is found to be linearly scalable with a worst-case time complexity of $\mathcal {O}$ (n log n).
PubDate: MON, 16 OCT 2023 09:18:10 -04
Issue No: Vol. 19, No. null (2023)
-
- Optimizing Linear Correctors: A Tight Output Min-Entropy Bound and
Selection Technique-
Authors: Miloš Grujić;Ingrid Verbauwhede;
Pages: 586 - 600
Abstract: Post-processing of the raw bits produced by a true random number generator (TRNG) is always necessary when the entropy per bit is insufficient for security applications. In this paper, we derive a tight bound on the output min-entropy of the algorithmic post-processing module based on linear codes, known as linear correctors. Our bound is based on the codes’ weight distributions, and we prove that it holds even for the real-world noise sources that produce independent but not identically distributed bits. Additionally, we present a method for identifying the optimal linear corrector for a given input min-entropy rate that maximizes the throughput of the post-processed bits while simultaneously achieving the needed security level. Our findings show that for an output min-entropy rate of 0.999, the extraction efficiency of the linear correctors with the new bound can be up to $\mathbf {130.56\, \%}$ higher when compared to the old bound, with an average improvement of $\mathbf {41.2\, \%}$ over the entire input min-entropy range. On the other hand, the required min-entropy of the raw bits for the individual correctors can be reduced by up to $\mathbf {61.62\, \%}$ .
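Applying a linear corrector is just a GF(2) matrix multiplication on blocks of raw bits, as sketched below; the toy generator matrix is an assumption, and the min-entropy bound from the code's weight distribution is not computed here.

```python
import numpy as np

def apply_linear_corrector(raw_bits, G):
    """Compress raw TRNG bits with a linear corrector given by the k x n binary
    matrix G: each block of n raw bits maps to k output bits (XOR combinations)."""
    k, n = G.shape
    raw = np.asarray(raw_bits, dtype=np.uint8)
    blocks = raw[: len(raw) // n * n].reshape(-1, n)
    return (blocks @ G.T % 2).astype(np.uint8).ravel()

# Toy [n=3 -> k=1] corrector: XOR of three raw bits (rate 1/3), chosen for illustration.
G = np.array([[1, 1, 1]], dtype=np.uint8)
out = apply_linear_corrector(np.random.randint(0, 2, 999), G)
```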
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- IMap: Toward a Fast, Scalable and Reconfigurable In-Network Scanner With
Programmable Switches-
Authors: Menghao Zhang;Guanyu Li;Cheng Guo;Han Bao;Mingwei Xu;Hongxin Hu;Fenghua Li;
Pages: 601 - 615
Abstract: Network scanning has been a standard measurement technique to understand a network’s security situation, e.g., revealing security vulnerabilities and monitoring service deployments. However, probing a large-scale scanning space with existing network scanners is both difficult and slow, since they are all implemented on commodity servers and deployed at the network edge. To address this, we introduce IMap, a fast, scalable and reconfigurable in-network scanner based on programmable switches. In designing IMap, we overcome key restrictions posed by the computation models and memory resources of programmable switches, and devise numerous techniques and optimizations, including an address-random and rate-adaptive probe packet generation mechanism, and a correct and efficient response packet processing scheme, to turn a switch into a practical runtime-reconfigurable high-speed network scanner. We implement an open-source prototype of IMap, and evaluate it with extensive testbed experiments and real-world deployments in our campus network. Evaluation results show that even with one switch port enabled, IMap can survey all ports of our campus network (i.e., a total of up to 25 billion scanning space) in 8 minutes. This demonstrates a nearly 4 times faster scanning speed and 1.5 times higher scanning accuracy than the state of the art, which shows that IMap has great potential to be the next-generation terabit network scanner with all switch ports enabled. Besides, our experiments also show that IMap supports the reconfiguration of scanning tasks at runtime, without incurring switch downtime. Leveraging IMap, we also discover several potential security threats in our campus network, and responsibly report them to our network administrators.
PubDate: WED, 25 OCT 2023 09:16:53 -04
Issue No: Vol. 19, No. null (2023)
-
- Breaking the Anonymity of Ethereum Mixing Services Using Graph Feature
Learning-
Authors: Hanbiao Du;Zheng Che;Meng Shen;Liehuang Zhu;Jiankun Hu;
Pages: 616 - 631
Abstract: With the property of helping users further enhance the anonymity of transactions, mixing services in blockchain have gained wide popularity in recent years. However, the strong untraceability offered by mixing services has led to their abuse by criminals for money laundering and committing fraud. These illegal actions pose significant threats to the blockchain ecosystem and financial order. In this paper, we focus on the problem of correlating the addresses of mixing transactions in Tornado Cash, a widely-used mixing service on Ethereum. We propose a graph neural network framework named MixBroker, which aims to break the anonymity of Tornado Cash by correlating mixing addresses from the perspective of node-pair link prediction. Specifically, we construct a Mixing Interaction Graph (MIG) using raw Ethereum mixing transaction data that can be used for subsequent analysis. To better represent the properties of mixing account nodes, we extract features from account nodes in the MIG from multiple perspectives. Furthermore, we design a GNN-based link prediction mechanism to serve as the backbone of MixBroker. This mechanism captures the interconnected nature of nodes within the MIG and calculates the probability of correlation between account nodes through node embeddings. In addition, to address the lack of ground truth, we collect a large number of real Ethereum mixing transactions in Tornado Cash and construct a ground-truth dataset based on the principles of the Ethereum Name Service (ENS). We conduct extensive experiments on the datasets, and the results demonstrate that MixBroker has superior performance over other state-of-the-art methods on the address correlation problem in Ethereum mixing transactions.
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- Linked Fault Analysis
-
Authors: Ali Asghar Beigizad;Hadi Soleimany;Sara Zarei;Hamed Ramzanipour;
Pages: 632 - 645
Abstract: Numerous fault models with distinct characteristics and effects have been developed. The costs, repeatability, and practicability of these models should be assessed. Moreover, there must be effective ways to use the injected fault to retrieve the secret key, particularly if the implementation includes any countermeasures. In this paper, we introduce a new fault analysis called “linked fault analysis” (LFA), a more powerful technique than other well-known fault attacks against implementations of symmetric primitives, especially in software implementations. For known fault analysis, the basis for the fault model is either the bias over the faulty value or the relationship between the correct value and the faulty one. In the LFA, however, a single fault involves two intermediate values. The faulty target variable, $u'$ , is linked to a second variable, $v$ , such that a particular relation holds: $u'=l(v)$ . LFA lets the attacker perform fault attacks without the input control, using far fewer data than previously introduced fault attacks in the same class. We show the utilization of LFA in the presence or absence of typical redundancy-based countermeasures by introducing “Linked Differential Fault Analysis” (LDFA) and “Linked Ineffective Fault Analysis” (LIFA). We also demonstrate that, under specific circumstances, LFA is still effective even when masking protections are in place. We have performed our attacks against the public implementation of AES and PRESENT in ATMEGA328p to show the feasibility of LFA in the real world. The practical results and simulations validate our theoretical models as well.
PubDate: WED, 25 OCT 2023 09:16:53 -04
Issue No: Vol. 19, No. null (2023)
-
- Federated Supervised Principal Component Analysis
-
Authors: William Briguglio;Waleed A. Yousef;Issa Traore;Mohammad Mamun;
Pages: 646 - 660
Abstract: In federated learning, standard machine learning (ML) techniques are modified so they can be applied to data held by separate participants without the need for exchanging said data and while preserving privacy. Other data modelling techniques, such as singular value decomposition, have been similarly federated, enabling federated principal component analysis (PCA), which is a popular preprocessing step for ML tasks. Supervised PCA improves on standard PCA by using labeled data to retain more relevant information for supervised ML problems. However, a federated version of supervised PCA does not exist in the literature. In this paper, we propose a federated version of supervised PCA and its dual and kernel variations, called FeS-PCA, dual FeS-PCA, and FeSK-PCA, respectively. We used random orthogonal matrix masking to keep FeS-PCA and dual FeS-PCA private, while FeSK-PCA was kept private using an approximation of the standard approach. We tested our proposed approaches by recreating visualization, classification, and regression experiments from the original unfederated supervised PCA paper. We further added a real-world federated dataset to test the scalability and fidelity of our approach. Our analysis and results indicate that FeS-PCA and dual FeS-PCA are faithful, lossless, and private versions of their unfederated counterparts. Furthermore, despite being an approximation, FeSK-PCA achieves nearly identical performance to standard kernel SPCA in many cases. This is in addition to the added benefit of a reduced runtime and smaller memory footprint.
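A small numpy sketch of why random orthogonal masking can hide the raw samples while preserving what PCA-style methods need: multiplying a participant's data block by a secret orthogonal matrix leaves the feature-space Gram matrix, and hence the principal directions, unchanged. This only illustrates the masking principle, not the full FeS-PCA protocol.

```python
import numpy as np

def random_orthogonal(n, rng):
    """Draw an n x n orthogonal matrix (sign-fixed QR of a Gaussian matrix)."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))     # one participant's (samples x features) block
Q = random_orthogonal(50, rng)        # secret mask applied along the sample axis
masked = Q @ X                        # what leaves the participant

# The feature-space Gram matrix, and hence the PCA directions, are unchanged.
assert np.allclose(X.T @ X, masked.T @ masked)
```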
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- IdentifierIDS: A Practical Voltage-Based Intrusion Detection System for
Real In-Vehicle Networks-
Authors: Zhouyan Deng;Jiajia Liu;Yijie Xun;Junman Qin;
Pages: 661 - 676
Abstract: As innovative technologies such as autonomous driving, over-the-air technology, and vehicle-to-everything are widely applied to intelligent connected vehicles, people can gain a more convenient and safer driving experience. Although the application of these technologies facilitates our lives, they also bring a series of vulnerable interfaces (such as 5G, Bluetooth, and WiFi), which pose a significant security threat to existing in-vehicle networks. To address these threats, researchers have proposed two mainstream schemes, including message authentication and intrusion detection system (IDS), where the scheme of message authentication needs to occupy the limited bandwidth of controller area network (CAN) bus. Furthermore, most IDSs either cannot locate the sender of the attack, fail to detect aperiodic malicious frames, or require prior knowledge of which CAN identifiers (IDs) belong to which electronic control units (ECUs). To address these weaknesses, we propose a practical voltage-based IDS named IdentifierIDS for real in-vehicle networks. To the best of our knowledge, it is the first scheme to detect intrusions by establishing a voltage fingerprint for each ID without the need for prior knowledge. This allows IdentifierIDS to detect both periodic and aperiodic malicious frames without occupying the limited bandwidth of the CAN bus. As a self-learning IDS, it can adapt to different in-vehicle networks without the need for customization for them. Experiments on three real vehicles demonstrate the robustness of our scheme in different in-vehicle networks.
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- PRIDN: A Privacy Preserving Data Sharing on Named Data Networking
-
Authors: Qi Xia;Isaac Amankona Obiri;Jianbin Gao;Hu Xia;Xiaosong Zhang;Kwame Omono Asamoah;Sandro Amofa;
Pages: 677 - 692
Abstract: The Named Data Networking (NDN) architecture is a futuristic internet infrastructure that aims to deliver content efficiently. However, NDN is faced with the challenge of ensuring the privacy of both content and names. Traditional solutions have focused on encrypting and signing content before injecting the resultant ciphertext into the NDN platform to provide confidentiality and integrity. However, these solutions fail to protect content name privacy in critical applications such as the military and healthcare. To address this challenge, we propose Privacy-Preserving Data Sharing on Named Data Networking (PRIDN), which employs a combination of proxy re-encryption and symmetric mechanisms to secure both content and names. PRIDN offers several advantages over existing solutions. Firstly, it eliminates the need for subscribers to communicate with content publishers for decryption keys, reducing communication overhead and ensuring that content publishers do not need to be online all the time to respond to key generation requests. Second, the proxy re-encryption mechanism prevents replication of ciphertexts, thus avoiding multiple instances of the same content in the network. Lastly, PRIDN also protects sensitive information in content names, preventing user profiling and censorship. Simulation results from ndnSIM and MIRACL libraries demonstrate that PRIDN reduces content retrieval time on NDN. A crypto-verification tool, Verifpal, shows that the proposed protocols are secure for real-world deployment.
PubDate: WED, 25 OCT 2023 09:16:52 -04
Issue No: Vol. 19, No. null (2023)
-
- Privacy-Enhancing and Robust Backdoor Defense for Federated Learning on
Heterogeneous Data-
Authors: Zekai Chen;Shengxing Yu;Mingyuan Fan;Ximeng Liu;Robert H. Deng;
Pages: 693 - 707
Abstract: Federated learning (FL) allows multiple clients to train deep learning models collaboratively while protecting sensitive local datasets. However, in practical application scenarios, FL is highly susceptible to security threats from federated backdoor attacks (FBA), which inject triggers, and to privacy threats from potential data leakage of uploaded models. Existing FBA defense strategies consider specific and limited attacker models, and a sufficient amount of injected noise can only mitigate rather than eliminate the attack. To address these deficiencies, we introduce a Robust Federated Backdoor Defense Scheme (RFBDS) and a Privacy-preserving RFBDS (PrivRFBDS) to ensure the elimination of adversarial backdoors. Our RFBDS for overcoming FBA consists of amplified magnitude sparsification, adaptive OPTICS clustering, and adaptive clipping. The experimental evaluation of RFBDS is conducted on three benchmark datasets and an extensive comparison is made with state-of-the-art studies. The results demonstrate the promising defense performance of RFBDS, with improvements of 31.75% ~ 73.75% over clustering defense methods, and of up to 0.03% ~ 56.90% in Non-IID settings, for the average FBA success rate over MNIST, FMNIST, and CIFAR10. Besides, our privacy-preserving shuffling in PrivRFBDS is $7.83e^{-5}\,\,\sim \,\,0.42\times $ that of state-of-the-art works.
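One way to realize the "cluster, then clip" portion of such a defense is sketched below with scikit-learn's OPTICS; the sparsification step and the adaptive parameter choices of RFBDS are omitted, and min_samples and clip_norm are assumptions.

```python
import numpy as np
from sklearn.cluster import OPTICS

def filter_and_clip_updates(updates, clip_norm=1.0, min_samples=3):
    """Cluster flattened client updates, keep the largest cluster as 'benign',
    clip each kept update, and return their mean as the aggregate."""
    U = np.stack(updates)
    labels = OPTICS(min_samples=min_samples).fit_predict(U)
    if not (labels >= 0).any():                       # no cluster found: keep everything
        kept = U
    else:
        keep_label = np.bincount(labels[labels >= 0]).argmax()
        kept = U[labels == keep_label]
    norms = np.linalg.norm(kept, axis=1, keepdims=True)
    kept = kept * np.minimum(1.0, clip_norm / (norms + 1e-12))
    return kept.mean(axis=0)

rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, 100) for _ in range(18)]
poisoned = [rng.normal(3.0, 0.1, 100) for _ in range(2)]   # outlying backdoored updates
agg = filter_and_clip_updates(benign + poisoned)
```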
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- Verifying in the Dark: Verifiable Machine Unlearning by Using Invisible
Backdoor Triggers-
Authors: Yu Guo;Yu Zhao;Saihui Hou;Cong Wang;Xiaohua Jia;
Pages: 708 - 721
Abstract: Machine unlearning, as a fundamental requirement in Machine-Learning-as-a-Service (MLaaS), has been extensively studied with increasing concerns about data privacy. It requires MLaaS providers to delete training data upon user requests. Unfortunately, none of the existing studies can efficiently achieve machine unlearning validation while preserving the retraining efficiency and the service quality after data deletion. Besides, how to craft the validation scheme to prevent providers from spoofing validation by forging proofs remains under-explored. In this paper, we introduce a backdoor-assisted validation scheme for machine unlearning. The proposed design is built from the ingenious combination of backdoor triggers and incremental learning to assist users in verifying proofs of machine unlearning without compromising performance and service quality. We propose to embed invisible markers based on backdoor triggers into privacy-sensitive data to prevent MLaaS providers from distinguishing poisoned data for validation spoofing. Users can use prediction results to determine whether providers comply with data deletion requests. Besides, we incorporate our validation scheme into an efficient incremental learning approach via our index structure to further facilitate the performance of retraining after data deletion. Evaluation results on real-world datasets confirm the efficiency and effectiveness of our proposed verifiable machine unlearning scheme.
PubDate: FRI, 27 OCT 2023 09:17:19 -04
Issue No: Vol. 19, No. null (2023)
-
- A Credential Usage Study: Flow-Aware Leakage Detection in Open-Source
Projects-
Authors: Ruidong Han;Huihui Gong;Siqi Ma;Juanru Li;Chang Xu;Elisa Bertino;Surya Nepal;Zhuo Ma;Jianfeng Ma;
Pages: 722 - 734
Abstract: Authentication and cryptography are critical security functions and, thus, are very often included as part of code. These functions require using credentials, such as passwords, security tokens, and cryptographic keys. However, developers often incorrectly implement/use credentials in their code because of a lack of secure coding skills. This paper analyzes open-source projects concerning the correct use of security credentials. We developed a semantic-rich, language-independent analysis approach for analyzing many projects automatically. We implemented a detection tool, SEAGULL, to automatically check open-source projects based on string literal and code structure information. Instead of analyzing the entire project code, which might result in path explosion when constructing data and control dependencies, SEAGULL pinpoints all literal constants to identify credential candidates and then analyzes the code snippets correlated to these candidates. SEAGULL accurately identifies the leaked credentials by obtaining semantic and syntax information about the code. We applied SEAGULL to 377 open-source projects. SEAGULL successfully reported 19 real-world credential leakages out of those projects. Our analysis shows that some developers protected or erased the credentials in the current project versions, but previously used credentials can still be extracted from the project’s historical versions. Although the implementations of credential leakages seem to be fixed in the current projects, attackers could successfully log into accounts if developers keep using the same credentials as before. Additionally, we found that such credential leakages still affect some projects. By exploiting leaked credentials, attackers can log into particular accounts.
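For contrast with SEAGULL's flow-aware analysis, a purely pattern-based baseline scanner for hard-coded credentials can be written in a few lines; the regular expressions below are illustrative assumptions and would miss the semantics-dependent cases the paper targets.

```python
import re

CRED_PATTERNS = [
    re.compile(r'(?i)(password|passwd|pwd)\s*[:=]\s*["\'][^"\']{4,}["\']'),
    re.compile(r'(?i)(api[_-]?key|secret|token)\s*[:=]\s*["\'][A-Za-z0-9_\-]{8,}["\']'),
    re.compile(r'AKIA[0-9A-Z]{16}'),   # commonly cited shape of AWS access key IDs
]

def scan_source(path):
    """Flag string literals that look like hard-coded credentials (pattern-only)."""
    hits = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            for pat in CRED_PATTERNS:
                if pat.search(line):
                    hits.append((path, lineno, line.strip()))
    return hits
```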
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- Universal Heterogeneous Face Analysis via Multi-Domain Feature
Disentanglement-
Authors: Decheng Liu;Xinbo Gao;Chunlei Peng;Nannan Wang;Jie Li;
Pages: 735 - 747
Abstract: Heterogeneous face analysis is an important and challenging problem in the face recognition community, because of the large modality discrepancy between heterogeneous face images. Existing methods either focus on transforming heterogeneous faces into the same style via a face synthesis process, or intend to directly recognize heterogeneous faces via modality-invariant descriptors. However, the tasks of cross-modality face synthesis and face recognition share a common purpose, which is to disentangle an inherent explainable representation. To this end, we propose a novel universal heterogeneous face analysis method via multi-domain feature disentanglement, which does not need any face domain label. The proposed method explores disentangling the factors of variation of cross-modality faces in an unsupervised manner. We can then translate cross-modality faces by modifying semantic factors, while the extracted inherent explainable representation remains discriminative for heterogeneous face recognition. Experimental results on multiple cross-modality face databases demonstrate the effectiveness of the proposed method. These experimental results also suggest that the unsupervised disentanglement module could help analyze the interpretability of heterogeneous face representations.
PubDate: WED, 25 OCT 2023 09:16:53 -04
Issue No: Vol. 19, No. null (2023)
-
- Backdoor Attack Against Split Neural Network-Based Vertical Federated
Learning-
Authors: Ying He;Zhili Shen;Jingyu Hua;Qixuan Dong;Jiacheng Niu;Wei Tong;Xu Huang;Chen Li;Sheng Zhong;
Pages: 748 - 763
Abstract: Vertical federated learning (VFL) is being used more and more widely in industry. One of its most common application scenarios is a two-party setting: a participant (i.e., the host), who exclusively owns the labels but possesses an insufficient number of features, wants to improve its model performance by combining features from another participant (i.e., the client) of a different business group. The deep ML architecture considered best suited to this scenario is the Split Neural Network (SplitNN), in which each participant runs a self-defined bottom model to learn the hidden representations (i.e., the local embeddings) of its local data and then forwards them to the host, who runs a top model to aggregate both local embeddings and produce the final predictions. In this paper, we assume the client is malicious and demonstrate that she/he could inject a stealthy backdoor into the top model during training to misclassify any sample into a pre-selected target class with a high probability by just replacing its local embedding with a special trigger vector, regardless of the host-side embedding. This task is non-trivial because existing data poisoning attacks for backdoor injection in traditional models usually require modifying the labels of a set of trigger-tagged samples of non-target classes, which is impossible here as the client has no rights to access or modify the labels exclusively owned by the host. Targeting this challenge, we propose a SplitNN-dedicated data poisoning attack which does not require modifying any labels but just replaces the local embeddings of a very small number of target-class samples with a carefully constructed trigger vector during training. The experiments on four datasets show that our attack can achieve an attack rate as high as 94%, while bringing negligible side-effects to the model accuracy. Moreover, it is stealthy enough to resist various anomaly detection methods.
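The core manipulation, replacing a few client-side embeddings of target-class samples with a fixed trigger vector before they are sent to the host, can be sketched as follows. How the attacker identifies target-class samples and constructs the trigger is glossed over; the names and sizes here are hypothetical.

```python
import torch

def poison_local_embeddings(embeddings, target_mask, trigger, rate=0.01):
    """Replace the embeddings of a small fraction of (attacker-identified)
    target-class samples in this batch with a fixed trigger vector.

    embeddings: (batch, dim) bottom-model outputs the client is about to send
    target_mask: boolean tensor, True where the attacker believes the sample
                 belongs to the target class (an assumption of this sketch)
    """
    out = embeddings.clone()
    idx = torch.nonzero(target_mask).flatten()
    n_poison = min(len(idx), max(1, int(rate * embeddings.size(0))))
    if len(idx):
        out[idx[:n_poison]] = trigger
    return out

trigger = torch.full((64,), 3.0)                       # hypothetical trigger vector
emb = torch.randn(128, 64)                             # a batch of local embeddings
mask = torch.zeros(128, dtype=torch.bool); mask[:5] = True
poisoned = poison_local_embeddings(emb, mask, trigger)
```

At inference time, the client would simply send the trigger vector in place of the local embedding for any sample it wants pushed into the target class.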
PubDate: THU, 26 OCT 2023 09:19:54 -04
Issue No: Vol. 19, No. null (2023)
-
- Toward Automated Field Semantics Inference for Binary Protocol Reverse
Engineering-
Authors: Mengqi Zhan;Yang Li;Bo Li;Jinchao Zhang;Chuanrong Li;Weiping Wang;
Pages: 764 - 776
Abstract: Network protocol reverse engineering is the basis for many security applications. A common class of protocol reverse engineering methods is based on the analysis of network message traces. After performing message field identification by segmenting messages into multiple fields, a key task is to infer the semantics of the fields. One of the limitations of existing field semantics inference methods is that they usually infer semantics for only a few fields and often require a lot of manual effort. In this paper, we propose an automated field semantics inference method for binary protocol reverse engineering (FSIBP). FSIBP aims to automatically learn semantics inference knowledge from known protocols and use it to infer the semantics of any field of an unknown protocol. To achieve this goal, we design a feature extraction method that can extract features of the field itself and of the field context. We also propose a semantic category aggregation method that abstracts the fine-grained semantics of all fields of known protocols into aggregated semantic categories. Moreover, we make FSIBP infer semantics based on the similarity of fields to semantic categories. The above design enables FSIBP to utilize the semantic knowledge of all fields of known protocols and infer the semantics of any fields of unknown protocols. The whole process of FSIBP does not require any expert knowledge or manual parameter setting. We conduct extensive experiments to demonstrate the effectiveness of FSIBP. Moreover, we find a utility for FSIBP besides field semantics inference: its output can help to detect the mis-segmented fields generated during message field identification.
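The final inference step, assigning a field to the most similar aggregated semantic category, can be sketched as a nearest-centroid lookup under cosine similarity; the feature dimensions and category centroids below are hypothetical, and the paper's feature extraction is not reproduced.

```python
import numpy as np

def infer_semantics(field_vec, category_centroids):
    """Assign a field to the semantic category whose centroid is most similar
    (cosine similarity) to the field's feature vector."""
    best, best_sim = None, -1.0
    f = field_vec / (np.linalg.norm(field_vec) + 1e-12)
    for name, c in category_centroids.items():
        sim = float(f @ (c / (np.linalg.norm(c) + 1e-12)))
        if sim > best_sim:
            best, best_sim = name, sim
    return best, best_sim

# Hypothetical centroids aggregated from labelled fields of known protocols.
centroids = {"length": np.array([0.9, 0.1, 0.0]),
             "checksum": np.array([0.1, 0.8, 0.3]),
             "timestamp": np.array([0.0, 0.2, 0.9])}
print(infer_semantics(np.array([0.8, 0.2, 0.1]), centroids))
```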
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- FedComm: A Privacy-Enhanced and Efficient Authentication Protocol for
Federated Learning in Vehicular Ad-Hoc Networks-
Authors: Xiaohan Yuan;Jiqiang Liu;Bin Wang;Wei Wang;Bin Wang;Tao Li;Xiaobo Ma;Witold Pedrycz;
Pages: 777 - 792
Abstract: In vehicular ad-hoc networks (VANET), federated learning enables vehicles to collaboratively train a global model for intelligent transportation without sharing their local data. However, due to dynamic network structure and unreliable wireless communication of VANET, various potential risks (e.g., identity privacy leakage, data privacy inference, model integrity compromise, and data manipulation) undermine the trustworthiness of intermediate model parameters necessary for building the global model. While existing cryptography techniques and differential privacy provide provable security paradigms, the practicality of secure federated learning in VANET is hindered in terms of training efficiency and model performance. Therefore, developing a secure and efficient federated learning in VANET remains a challenge. In this work, we propose a privacy-enhanced and efficient authentication protocol for federated learning in VANET, called FedComm. Unlike existing solutions, FedComm addresses the above challenge through user anonymity. First, FedComm enables vehicles to participate in training with unlinkable pseudonyms, ensuring both privacy preservation and efficient collaboration. Second, FedComm incorporates an efficient authentication protocol to guarantee the authenticity and integrity of model parameters originated from anonymous vehicles. Finally, FedComm accurately identifies and completely eliminates malicious vehicles in anonymous communication. Security analysis and verification with ProVerif demonstrate that FedComm enhances privacy and reliability of intermediate model parameters. Experimental results show that FedComm reduces the overhead of proof generation and verification by 67.38% and 67.39%, respectively, compared with the state-of-the-art authentication protocols used in federated learning.
PubDate: MON, 16 OCT 2023 09:18:10 -04
Issue No: Vol. 19, No. null (2023)
-
- Robust Domain Misinformation Detection via Multi-Modal Feature Alignment
-
Authors: Hui Liu;Wenya Wang;Hao Sun;Anderson Rocha;Haoliang Li;
Pages: 793 - 806
Abstract: Social media misinformation harms individuals and societies and is amplified by fast-growing multi-modal content (i.e., texts and images), which accounts for higher “credibility” than text-only news pieces. Although existing supervised misinformation detection methods have obtained acceptable performances in key setups, they may require large amounts of labeled data from various events, which can be time-consuming and tedious. In turn, directly training a model by leveraging a publicly available dataset may fail to generalize due to domain shifts between the training data (a.k.a. source domains) and the data from target domains. Most prior work on domain shift focuses on a single modality (e.g., text modality) and ignores the scenario where sufficient unlabeled target domain data may not be readily available in an early stage. The lack of data often happens due to the dynamic propagation trend (i.e., the number of posts related to fake news increases slowly before catching public attention). We propose a novel robust domain and cross-modal approach (RDCM) for multi-modal misinformation detection. It reduces the domain shift by aligning the joint distribution of textual and visual modalities through an inter-domain alignment module and bridges the semantic gap between both modalities through a cross-modality alignment module. We also propose a framework that simultaneously considers application scenarios of domain generalization (in which the target domain data is unavailable) and domain adaptation (in which unlabeled target domain data is available). Evaluation results on two public multi-modal misinformation detection datasets (Pheme and Twitter Datasets) evince the superiority of the proposed model.
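Inter-domain alignment of fused text and image features is often instantiated with a kernel two-sample statistic such as MMD; the PyTorch sketch below shows one such loss, although the paper's alignment modules may use a different criterion and the kernel bandwidth is an assumption.

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Squared MMD with an RBF kernel between two batches of fused
    text+image features, a simple choice of inter-domain alignment loss."""
    def k(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

src = torch.randn(32, 256)   # source-domain fused features
tgt = torch.randn(32, 256)   # target-domain fused features
loss = rbf_mmd(src, tgt)     # add to the classification loss during training
```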
PubDate: FRI, 20 OCT 2023 09:18:55 -04
Issue No: Vol. 19, No. null (2023)
-
- Efficient Coded Multi-Party Computation at Edge Networks
-
Authors: Elahe Vedadi;Yasaman Keshtkarjahromi;Hulya Seferoglu;
Pages: 807 - 820
Abstract: Multi-party computation (MPC) is promising for designing privacy-preserving machine learning algorithms at edge networks. An emerging approach is coded-MPC (CMPC), which advocates the use of coded computation to improve the performance of MPC in terms of the required number of workers involved in computations. The current approach for designing CMPC algorithms is to merely combine efficient coded computation constructions with MPC. We show that this approach falls short of being efficient; e.g., entangled polynomial codes are not necessarily better than PolyDot codes in the MPC setting, while they are always better for coded computation. Motivated by this observation, we propose a new construction: Adaptive Gap Entangled (AGE) polynomial codes for MPC. We show through analysis and simulations that MPC with AGE codes always performs better than existing CMPC algorithms in terms of the required number of workers as well as computation, storage, and communication overhead.
PubDate: MON, 23 OCT 2023 09:21:52 -04
Issue No: Vol. 19, No. null (2023)
-
- Dynamic Trust-Based Redactable Blockchain Supporting Update and
Traceability-
Authors: Yuqing Zhang;Zhaofeng Ma;Shoushan Luo;Pengfei Duan;
Pages: 821 - 834
Abstract: Blockchain, as an emerging technology, is constantly evolving due to its remarkable advantages but is also constrained by its unalterability, which leads to the misuse of blockchain storage and causes adverse effects. Hence, the redactable blockchain was proposed, which can alleviate the above issues in a controlled manner. However, a situation exists in which the modifiers specified by customized identities or attributes in the existing schemes may be malicious, which can easily lead to malicious modification events. Evaluating, filtering, and limiting malicious modifiers in advance may be a feasible solution to this situation. Hence, we propose an efficient dynamic trust-based redactable blockchain supporting update and traceability, which offers full-process security with pre-modification evaluation, modification privilege restrictions, and post-modification traceability. First, we consider the user’s various behaviors and multiple factors and customize a dynamic trust evaluation model for the redactable blockchain to comprehensively evaluate the reliability of the user. Second, we combine the user’s trustworthiness, dynamic proactive secret sharing ($\mathcal {DPSS}$), chameleon hash ($\mathcal {CH}$), and digital signature ($\mathcal {DS}$) to design a dynamic trust-based chameleon hash supporting update and traceability, called $\mathcal {DTCH}$, to realize full-process security, and prove its security. Third, we construct the $\mathcal {DTCH}$-based redactable blockchain supporting update and traceability, demonstrate its security, and further apply it to consortium blockchain. Finally, we evaluate the performance of the constructed model and scheme, and the evaluation results show that they are not only effective but also offer better performance.
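As background on the chameleon hash ($\mathcal {CH}$) primitive that $\mathcal {DTCH}$ builds on, the following is a minimal sketch of a textbook discrete-log chameleon hash with toy, insecure parameters: the trapdoor holder can open the same digest to a new message, which is what makes controlled redaction of a block possible. This is generic background, not the paper's $\mathcal {DTCH}$ construction.

```python
import random

# Toy discrete-log chameleon hash: CH(m, r) = g^m * h^r mod p with h = g^x.
# The parameters are tiny and insecure; they only illustrate the mechanism.
p, q, g = 467, 233, 4            # q | p - 1 and g has order q modulo p
x = random.randrange(1, q)       # trapdoor held by the authorised modifier
h = pow(g, x, p)                 # public hash key

def ch(m: int, r: int) -> int:
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def collide(m: int, r: int, m_new: int) -> int:
    """With the trapdoor, find r_new so that CH(m_new, r_new) == CH(m, r)."""
    # Need m + x*r == m_new + x*r_new (mod q).
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 42, random.randrange(q)
digest = ch(m, r)                # digest stored in the block header
m_new = 99                       # redacted block content
r_new = collide(m, r, m_new)
assert ch(m_new, r_new) == digest   # hash (and hence chain) unchanged after redaction
```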
PubDate: FRI, 20 OCT 2023 09:18:56 -04
Issue No: Vol. 19, No. null (2023)
-
- Optimal Selfish Mining-Based Denial-of-Service Attack
-
Authors: Qiuhua Wang;Chengyu Li;Tianyu Xia;Yizhi Ren;Dong Wang;Guoyan Zhang;Kim-Kwang Raymond Choo;
Pages: 835 - 850
Abstract: In recent years, Bitcoin has become one of the most popular cryptocurrencies. The most significant mechanism of Bitcoin is PoW (Proof-of-Work), but it also brings opportunities for mining attacks. In our last study, we proposed a Selfish Mining-based Denial-of-Service Attack (SDoS), which can cause serious threats to the Bitcoin system. On this basis, we further put forward three greedier SDoS attack strategies, namely a competitive greedy strategy (ESDoS), a trail greedy strategy (TSDoS), and a hybrid greedy strategy (ETSDoS), as well as a more public SDoS attack strategy (PSDoS). Besides, we also study the adversary’s optimal strategies under different conditions. The experimental results show that if the adversary adopts the optimal SDoS strategy, his revenue increase rate will be further improved and significantly higher than that of the other existing mining attacks. If the adversary masters 14% of the total mining power, he has a chance to improve his revenue (compared with 25% in Selfish Mining and 19.6% in SDoS), and if the adversary masters 15% of the total mining power, he is capable of launching a 51% attack.
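For readers unfamiliar with the baseline that SDoS extends, the sketch below simulates plain Eyal-Sirer selfish mining and estimates the attacker's relative revenue for a given mining power share; the SDoS, ESDoS, TSDoS, ETSDoS, and PSDoS strategies themselves are not reproduced, and the tie-breaking parameter and block count are illustrative assumptions.

```python
import random

def selfish_mining_revenue(alpha: float, gamma: float = 0.5, n_blocks: int = 200_000) -> float:
    """Relative revenue of a selfish miner with mining power `alpha`
    (classic Eyal-Sirer strategy; gamma = share of honest miners that extend
    the attacker's branch during a tie).  Returns attacker_blocks / total_blocks."""
    random.seed(0)
    lead, tie = 0, False
    a_blocks = h_blocks = 0
    for _ in range(n_blocks):
        if random.random() < alpha:            # attacker finds the next block
            if tie:                            # attacker wins the tie race
                a_blocks += 2
                tie = False
            else:
                lead += 1                      # keep the block private
        else:                                  # honest network finds a block
            if tie:
                if random.random() < gamma:    # honest miners extended attacker's block
                    a_blocks += 1
                    h_blocks += 1
                else:
                    h_blocks += 2
                tie = False
            elif lead == 0:
                h_blocks += 1
            elif lead == 1:
                tie = True                     # attacker publishes -> tie race
                lead = 0
            elif lead == 2:
                a_blocks += 2                  # publish both, orphan the honest block
                lead = 0
            else:
                a_blocks += 1                  # release one old private block
                lead -= 1
    return a_blocks / (a_blocks + h_blocks)

print(selfish_mining_revenue(0.33))            # noticeably above 0.33 for gamma = 0.5
```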
PubDate: FRI, 20 OCT 2023 09:18:55 -04
Issue No: Vol. 19, No. null (2023)
-
- Dual Sampling Based Causal Intervention for Face Anti-Spoofing With
Identity Debiasing-
Authors: Xingming Long;Jie Zhang;Shuzhe Wu;Xin Jin;Shiguang Shan;
Pages: 851 - 862
Abstract: Improving generalization to unseen scenarios is one of the greatest challenges in Face Anti-spoofing (FAS). Most previous FAS works focus on domain debiasing to eliminate the distribution discrepancy between training and test data. However, a crucial but usually neglected bias factor is the face identity. Generally, the identity distribution varies across the FAS datasets as the participants in these datasets are from different regions, which will lead to serious identity bias in the cross-dataset FAS tasks. In this work, we resort to causal learning and propose Dual Sampling based Causal Intervention (DSCI) for face anti-spoofing, which improves the generalization of the FAS model by eliminating the identity bias. DSCI treats the bias as a confounder and applies the backdoor adjustment through the proposed dual sampling on the face identity and the FAS feature. Specifically, we first sample the data uniformly on the identity distribution that is obtained by a pretrained face recognition model. By feeding the sampled data into a network, we can get an estimated FAS feature distribution and sample the FAS feature on it. Sampling the FAS feature from a complete estimated distribution can include potential counterfactual features in the training, which effectively expands the training data. The dual sampling process helps the model learn the real causality between the FAS feature and the input liveness, allowing the model to perform more stably across various identity distributions. Extensive experiments demonstrate our proposed method outperforms the state-of-the-art methods on both intra- and cross-dataset evaluations.
PubDate: FRI, 20 OCT 2023 09:18:55 -04
Issue No: Vol. 19, No. null (2023)
-
- Realistic Fingerprint Presentation Attacks Based on an Adversarial
Approach-
Authors: Roberto Casula;Giulia Orrù;Stefano Marrone;Umberto Gagliardini;Gian Luca Marcialis;Carlo Sansone;
Pages: 863 - 877
Abstract: Modern Fingerprint Presentation Attack Detection (FPAD) modules have been particularly successful in avoiding attacks exploiting artificial fingerprint replicas against Automated Fingerprint Identification Systems (AFISs). As for several other domains, Machine and Deep Learning strongly contributed to this success, with all recent state-of-the-art detectors leveraging learning-based approaches. An insidious flip side is represented by adversarial attacks, namely, procedures intended to mislead a target detector. Indeed, although this type of attack has been considered unrealistic, as it presupposes access to the communication channel between the sensor and the detector, in a recent work we highlighted the possibility of transferring a fingerprint adversarial attack from the digital domain to the physical one. In this work, we take a step further by introducing a new procedure designed to make the physical adversarial presentation attack i) more robust to the physical crafting of the PAI by exploiting explainability techniques, ii) easier to adapt to different fingerprint scanners and adversarial algorithms, and iii) usable in a black-box scenario. To quantify the impact of this novel family of adversarial presentation attacks, designed to be robust to the physical crafting process, we assess the performance of both standalone state-of-the-art PAD modules and integrated AFISs. Results highlight the approach’s feasibility, opening a new series of threats in the context of fingerprint PAD.
PubDate: WED, 25 OCT 2023 09:16:53 -04
Issue No: Vol. 19, No. null (2023)
-
- Privacy Preserving Palmprint Recognition via Federated Metric Learning
-
Authors: Huikai Shao;Chengcheng Liu;Xiaojiang Li;Dexing Zhong;
Pages: 878 - 891
Abstract: Deep learning-based palmprint recognition methods have made good progress and obtained promising performance. However, most of them mainly focus on continuously improving recognition accuracy while ignoring privacy preservation, which is also extremely significant. In this paper, we propose a novel Federated Metric Learning (FedML) method to address the issue of data privacy and data islands in palmprint recognition. There are several clients with different structures deployed in communities, which cannot access the private data of others. The key is to improve the accuracy of each client by generating understandable knowledge and transferring it among clients without explicitly sharing their private data or model architectures. A public dataset is introduced and several effective communication losses are constructed at both the instance level and the relation level to help clients learn from each other. Furthermore, transfer learning is applied to close the gap between private and public data. Extensive experiments are conducted on eighteen constrained and unconstrained palmprint benchmark datasets. The results demonstrate that FedML can outperform other methods by a large margin and obtain promising performance.
PubDate: WED, 25 OCT 2023 09:16:53 -04
Issue No: Vol. 19, No. null (2023)
-
- CSRA: Robust Incentive Mechanism Design for Differentially Private
Federated Learning-
Authors: Yunchao Yang;Miao Hu;Yipeng Zhou;Xuezheng Liu;Di Wu;
Pages: 892 - 906
Abstract: The differentially private federated learning (DPFL) paradigm emerges to firmly preserve data privacy from two perspectives. First, decentralized clients merely exchange model updates rather than raw data with a parameter server (PS) over multiple communication rounds for model training. Secondly, model updates to be exposed to the PS will be distorted by clients with differentially private (DP) noises. To incentivize clients to participate in DPFL, various incentive mechanisms have been proposed by existing works which reward participating clients based on their data quality and DP noise scales assuming that all clients are honest and genuinely report their DP noise scales. However, the PS cannot directly measure or observe DP noise scales leaving the vulnerability that clients can boost their rewards and lower DPFL utility by dishonestly reporting their DP noise scales. Through a quantitative study, we validate the adverse influence of dishonest clients in DPFL. To overcome this deficiency, we propose a robust incentive mechanism called client selection with reverse auction (CSRA) for DPFL. We prove that CSRA satisfies the properties of truthfulness, individual rationality, budget feasibility and computational efficiency. Besides, CSRA can prevent dishonest clients with two steps in each communication round. First, CSRA compares the variance of exposed model updates and claimed DP noise scale for each individual to identify suspicious clients. Second, suspicious clients will be further clustered based on their model updates to finally identify dishonest clients. Once dishonest clients are identified, CSRA will not only remove them from the current round but also lower their probability of being selected in subsequent rounds. Extensive experimental results demonstrate that CSRA can provide robust incentive against dishonest clients in DPFL and significantly outperform other baselines on three real public datasets.
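A minimal sketch of the first screening step described in the abstract, flagging clients whose exposed (noised) updates look inconsistent with the DP noise scale they claim; the coordinate-wise median reference, the log-ratio threshold, and the synthetic data are illustrative assumptions, not CSRA's actual test.

```python
import numpy as np

def flag_suspicious(updates: np.ndarray, claimed_sigmas: np.ndarray, tol: float = 0.5):
    """updates: (n_clients, dim) noised model updates; claimed_sigmas: (n_clients,)."""
    reference = np.median(updates, axis=0)               # proxy for the noise-free update
    residual_var = np.var(updates - reference, axis=1)   # per-client empirical variance
    ratio = residual_var / (claimed_sigmas ** 2 + 1e-12)
    # A client is suspicious if its residual variance is far from what its
    # claimed Gaussian noise scale would produce (ratio far from 1).
    return np.where(np.abs(np.log(ratio)) > tol)[0]

rng = np.random.default_rng(0)
true_update = rng.normal(size=1000)
sigmas = np.full(8, 0.5)                                  # every client claims sigma = 0.5
updates = true_update + rng.normal(scale=0.5, size=(8, 1000))
updates[3] = true_update + rng.normal(scale=0.05, size=1000)  # claims 0.5 but adds 0.05
print(flag_suspicious(updates, sigmas))                   # -> [3]
```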
PubDate: WED, 01 NOV 2023 09:18:15 -04
Issue No: Vol. 19, No. null (2023)
-
- Toward Adversarially Robust Recommendation From Adaptive Fraudster
Detection-
Authors: Yuni Lai;Yulin Zhu;Wenqi Fan;Xiaoge Zhang;Kai Zhou;
Pages: 907 - 919
Abstract: The robustness of recommender systems under node injection attacks has garnered significant attention. Recently, GraphRfi, a Graph-Neural-Network-based (GNN-based) recommender system, was proposed and shown to effectively mitigate the impact of injected fake users. However, we demonstrate that GraphRfi remains vulnerable to attacks due to the supervised nature of its fraudster detection component, where obtaining clean labels is challenging in practice. In particular, we propose a powerful poisoning attack, MetaC, against both GNN-based and Matrix-Factorization-based recommender systems. Furthermore, we analyze why GraphRfi fails under such an attack. Then, based on our insights obtained from vulnerability analysis, we design an adaptive fraudster detection module that explicitly considers label uncertainty. This module can serve as a plug-in for different recommender systems, resulting in a robust framework named Posterior-Detection Recommender (PDR). Comprehensive experiments show that our defense approach outperforms other benchmark methods under attacks. Overall, our research presents an effective framework for integrating fraudster detection into recommendation systems to achieve adversarial robustness.
PubDate: THU, 26 OCT 2023 09:19:54 -04
Issue No: Vol. 19, No. null (2023)
-
- Adaptive Domain-Invariant Feature Extraction for Cross-Domain Linguistic
Steganalysis-
Authors: Yiming Xue;Jiaxuan Wu;Ronghua Ji;Ping Zhong;Juan Wen;Wanli Peng;
Pages: 920 - 933
Abstract: Existing linguistic steganalysis methods require the training and testing datasets to be independent and identically distributed (i.i.d.). However, in real-world scenarios, various types of text and steganographic algorithms are employed to generate steganographic text, making it challenging to fulfill the requirement of independent and identical distribution between training and test datasets. This issue, known as the domain mismatch problem, significantly diminishes the detection performance. Thus, it is reasonable to consider domain adaptation by reducing the distribution discrepancy of different domains. However, how to measure and minimize this discrepancy for linguistic steganalysis remains a big challenge. In this paper, we put forward a cross-domain linguistic steganalysis architecture based on a new domain distance metric and an adaptive weight selection network. Concretely, a novel steganographic domain distance metric (SDDM) is first proposed, which can effectively characterize the overall distribution discrepancy and capture the weak noise introduced by the information embedding process. Additionally, an adaptive weight selection network with a switching-path structure is designed to calculate domain-specific attention weights, facilitating the model’s adaptation to various discrepancy scenarios and enhancing its domain-invariant feature representation capability. Extensive experiments show that the proposed method achieves state-of-the-art performance for cross-domain linguistic steganalysis.
PubDate: MON, 30 OCT 2023 09:19:14 -04
Issue No: Vol. 19, No. null (2023)
-
- Large Pose Face Recognition via Facial Representation Learning
-
Authors: Jingwei Xin;Zikai Wei;Nannan Wang;Jie Li;Xinbo Gao;
Pages: 934 - 946
Abstract: Overcoming image acquisition perspectives and face pose variations is a key problem in unconstrained face recognition tasks. One practical approach is to reconstruct a face with an extreme pose into a version that is more easily recognized by the discriminator, such as a frontal face. Often, existing methods attempt to balance the accuracy of downstream tasks with human visual perception, but ignore the differences in propensity between the two. Besides, large-scale datasets of profile-frontal paired face images are absent, which further hinders the training of models. In this work, we investigate a variety of face reconstruction approaches and propose a very simple but very effective method to match face images across different scenes, named facial representation learning (FRL). The core idea of FRL is to introduce a representation generator in front of a pre-trained face recognition model, which can extract face representations from arbitrary faces that are more suitable for recognition model discrimination. In particular, the representation generator reconstructs the facial representation by minimising identity differences from the frontal face and adds pixel-level and adversarial constraints to cater for discriminator preferences. Extensive benchmark experiments show that the proposed method not only achieves better performance than state-of-the-art methods, but also can further squeeze the inference potential of existing face recognition models.
PubDate: THU, 02 NOV 2023 09:17:11 -04
Issue No: Vol. 19, No. null (2023)
-
- An Accurate and Efficient Algorithm to Identify Malicious Nodes of a Graph
-
Authors: Rossano Gaeta;
Pages: 947 - 958
Abstract: The identification of misbehaving elements in a distributed system is an important task in many diverse settings that can be represented as graphs; this problem can be cast as the computation of a subset of the graph nodes by exploiting a pre-determined detection mechanism. In this paper we propose a simple yet accurate algorithm, based on the so-called comparison detection model, to compute the set of nodes of a graph suspected to be malicious. In this framework, a node can play the role of the comparator for two of its neighbors and can provide a boolean result based on the actual status of both. The algorithm we propose has low computational complexity and linear space complexity; furthermore, it only requires one parameter to trade accuracy against computational cost. We also show that it outperforms the state of the art and performs equally well on both synthetic and real-world graphs.
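A minimal sketch of the comparison detection model named in the abstract: a comparator node tests two of its neighbors and reports a boolean match/mismatch, and nodes that appear in many mismatches become suspects. The scoring heuristic and threshold below are illustrative assumptions, not the algorithm proposed in the paper.

```python
import random
from collections import defaultdict

def suspect_nodes(comparisons, threshold=0.5):
    """comparisons: list of (comparator, u, v, outcome) with outcome in {0, 1}."""
    mismatches, appearances = defaultdict(int), defaultdict(int)
    for _, u, v, outcome in comparisons:
        for node in (u, v):
            appearances[node] += 1
            mismatches[node] += outcome
    return {n for n in appearances if mismatches[n] / appearances[n] > threshold}

random.seed(1)
faulty = {2, 7}
nodes = range(10)
comparisons = []
for _ in range(300):
    c, u, v = random.sample(nodes, 3)
    if c in faulty:
        outcome = random.randint(0, 1)              # unreliable comparator
    else:
        outcome = 1 if (u in faulty or v in faulty) else 0
    comparisons.append((c, u, v, outcome))
print(suspect_nodes(comparisons))                   # typically {2, 7}
```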
PubDate: FRI, 27 OCT 2023 09:17:19 -04
Issue No: Vol. 19, No. null (2023)
-
- Joint Discriminative Analysis With Low-Rank Projection for Finger Vein
Feature Extraction-
Authors: Shuyi Li;Ruijun Ma;Jianhang Zhou;Bob Zhang;Lifang Wu;
Pages: 959 - 969
Abstract: Over the last decades, finger vein biometric recognition has generated increasing attention because of its high security, accuracy, and natural anti-counterfeiting. However, most of the existing finger vein recognition approaches rely on image enhancement or require much prior knowledge, which limits their generalization ability to different databases and different scenarios. Additionally, these methods rarely take into account the interference of noise elements in feature representation, which is detrimental to the final recognition results. To tackle these problems, we propose a novel jointly embedding model, called Joint Discriminative Analysis with Low-Rank Projection (JDA-LRP), to simultaneously extract noise component and salient information from the raw image pixels. Specifically, JDA-LRP decomposes the input image into noise and clean components via low-rank representation and transforms the clean data into a subspace to adaptively learn salient features. To further extract the most representative features, the proposed JDA-LRP enforces the discriminative class-induced constraint of the training samples as well as the sparse constraint of the embedding matrix to aggregate the embedded data of each class in their respective subspace. In this way, the discriminant ability of the jointly embedding model is greatly improved, such that JDA-LRP can be adapted to multiple scenarios. Comprehensive experiments conducted on three commonly used finger vein databases and four palm-based biometric databases illustrate the superiority of our proposed model in recognition accuracy, computational efficiency, and domain adaptation.
PubDate: FRI, 20 OCT 2023 09:18:56 -04
Issue No: Vol. 19, No. null (2023)
-
- Universal Detection of Backdoor Attacks via Density-Based Clustering and
Centroids Analysis-
Authors: Wei Guo;Benedetta Tondi;Mauro Barni;
Pages: 970 - 984
Abstract: We propose a Universal Defence against backdoor attacks based on Clustering and Centroids Analysis (CCA-UD). The goal of the defence is to reveal whether a Deep Neural Network model is subject to a backdoor attack by inspecting the training dataset. CCA-UD first clusters the samples of the training set by means of density-based clustering. Then, it applies a novel strategy to detect the presence of poisoned clusters. The proposed strategy is based on a general misclassification behaviour observed when the features of a representative example of the analysed cluster are added to benign samples. The capability of inducing a misclassification error is a general characteristic of poisoned samples, hence the proposed defence is attack-agnostic. This marks a significant difference with respect to existing defences, which either can defend against only some types of backdoor attacks or are effective only when some conditions on the poisoning ratio or the kind of triggering signal used by the attacker are satisfied. Experiments carried out on several classification tasks and network architectures, considering different types of backdoor attacks (with either clean or corrupted labels) and triggering signals, including both global and local triggering signals, as well as sample-specific and source-specific triggers, reveal that the proposed method is very effective in defending against backdoor attacks in all cases, always outperforming the state-of-the-art techniques.
PubDate: WED, 01 NOV 2023 09:18:15 -04
Issue No: Vol. 19, No. null (2023)
-
- Physical-Layer Secret and Private Key Generation in Wireless Relay
Networks With Correlated Eavesdropping Channels-
Authors: Peng Xu;Jun Yang;Gaojie Chen;Zheng Yang;Yong Li;Moe Z. Win;
Pages: 985 - 1000
Abstract: This paper investigates the performance of key generation between two nodes assisted by a relay in the presence of correlated eavesdropping channels. A cooperative jamming scheme is utilized to impose superimposed channel measurements on the relay and eavesdropper. Both lower and upper bounds on key capacities for both secret key (SK) and private key (PK) generation are evaluated, where the lower bounds are derived by using minimum mean square error and zero forcing methods for channel estimation, and the upper bounds are derived by formulating several enhanced discrete memoryless source (DMS) models. The analytical expressions are further simplified in the high signal-to-noise ratio (SNR) regime. We discover that one of the two legitimate channels should specialize in playing a role of jamming the relay or eavesdropper. We also demonstrate that the derived lower and upper bounds are tight when the eavesdropping channels are lowly or highly correlated. When the eavesdropping channels are uncorrelated, the SK and PK capacities can be determined since the corresponding upper and lower bounds are equal. Moreover, at high SNRs, a constant gap exists between the SK/PK upper and lower bounds as the correlation coefficient becomes one.
PubDate: THU, 02 NOV 2023 09:17:11 -04
Issue No: Vol. 19, No. null (2023)
-
- Locally Verifiable Batch Authentication in IoMT
-
Authors: Wenze Mao;Peng Jiang;Liehuang Zhu;
Pages: 1001 - 1014
Abstract: Internet of Medical Things (IoMT) technology has gained a lot of attention. In the IoMT system, IoT devices collect and upload patient data through sensors, and doctors treat patients remotely. In this process, doctors need to verify the correctness of the patients’ body data in bulk, a function that requires aggregated signature technology to achieve. However, due to the limitation that traditional verification of aggregated signatures requires access to all messages, doctors cannot efficiently verify the correctness of a particular piece of data. This results in a significant additional verification overhead. Therefore, the efficiency problem of data batch authentication remains unsolved but imperative. In this paper, we propose an identity-based locally verifiable aggregated signature (ID-LVEAS) that enables efficient local verification of patient data in IoMT. Building on top of both the hybrid hash and inverse framework, we instantiate an ID-LVEAS scheme and prove its security in the EU-CMA security model. Based on ID-LVEAS, we further propose an ID-based locally verifiable signcryption (ID-LVASC) that supports both confidentiality and integrity. We also present an instantiation and an application to IoMT of the proposed ID-LVASC. We conduct extensive experiments on the ID-LVEAS and ID-LVASC designs. The results show that both incur a constant verification cost of 0.01 s, independent of the number of messages, which means our research has great promise for application in IoMT.
PubDate: MON, 06 NOV 2023 09:19:32 -04
Issue No: Vol. 19, No. null (2023)
-
- LDS-FL: Loss Differential Strategy Based Federated Learning for Privacy
Preserving-
Authors: Taiyu Wang;Qinglin Yang;Kaiming Zhu;Junbo Wang;Chunhua Su;Kento Sato;
Pages: 1015 - 1030
Abstract: Federated Learning (FL) has attracted extraordinary attention from the industry and academia due to its advantages in privacy protection and collaborative training on isolated datasets. Since machine learning algorithms usually try to find an optimal hypothesis to fit the training data, attackers can also exploit the shared models and reversely analyze users’ private information. However, there is still no good solution to the privacy-accuracy trade-off that makes information leakage more difficult while guaranteeing the convergence of learning. In this work, we propose a Loss Differential Strategy (LDS) for parameter replacement in FL. The key idea of our strategy is to preserve the performance of the private model through parameter replacement with multi-user participation, while significantly reducing the efficiency of privacy attacks on the model. To evaluate the proposed method, we have conducted comprehensive experiments on four typical machine learning datasets to defend against membership inference attacks. For example, the accuracy on MNIST is near 99%, while the method reduces the attack accuracy by 10.1% compared with FedAvg. Compared with other traditional privacy protection mechanisms, our method also outperforms them in terms of accuracy and privacy preservation.
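A hedged sketch of the parameter-replacement idea the abstract describes: before sharing, a client swaps a fraction of its locally trained parameters with entries from another available model so the exposed update no longer matches its private model. The donor model, the replacement ratio, and the random index selection are illustrative assumptions rather than the paper's loss-differential selection rule.

```python
import numpy as np

def loss_differential_share(local_params: np.ndarray,
                            donor_params: np.ndarray,
                            replace_ratio: float = 0.3,
                            seed: int = 0) -> np.ndarray:
    """Return the parameter vector a client exposes to the server."""
    rng = np.random.default_rng(seed)
    shared = local_params.copy()
    idx = rng.choice(shared.size, size=int(replace_ratio * shared.size), replace=False)
    shared[idx] = donor_params[idx]          # replaced entries reveal nothing local
    return shared

local = np.random.default_rng(1).normal(size=10_000)        # stand-in for a model vector
global_prev = np.random.default_rng(2).normal(size=10_000)  # e.g. the last global model
shared = loss_differential_share(local, global_prev)
print(f"{np.mean(shared != local):.0%} of exposed parameters differ from the private model")
```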
PubDate: THU, 05 OCT 2023 09:16:56 -04
Issue No: Vol. 19, No. null (2023)
-
- A Comprehensive Data Retrieval and Correction Approach From 40-nm Flash
Memory With Selective Chemical Engraving-
Authors: Xiao Mei Zeng;Qing Liu;Chee Lip Gan;
Pages: 1031 - 1040
Abstract: Floating gate-based flash memory is a widely used storage medium for sensitive data that may be relevant to forensic investigations. For various data extraction techniques, the accuracy of the recovered data is critical to ensuring the integrity of information as forensic evidence. In cases where the devices are physically or digitally damaged, invasive data extraction techniques serve as a last resort, and can directly extract binary from individual memory cells. Here we introduce a new invasive data extraction technique called selective chemical engraving. This electrochemical-based approach could systematically imprint the data of ‘0’ and ‘1’ as cavities on memory surfaces, which can subsequently be imaged with an optical microscope and SEM. This technique is capable of extracting data stored in embedded flash memory in microcontrollers of 40 nm technology node with a high accuracy of 99.66%. The error correction code (ECC) stored in the flash memory was also extracted together with the data. By analysing the extracted ECC, we were able to accurately derive the error correction algorithm of single error correction-double error detection (SEC-DED). The reconstructed SEC-DED code was then used to correct all 0.34% of errors. The high data retrieval accuracy (99.66%) together with the error correction capability led to a 100% accuracy of recovered data. This selective chemical engraving approach offers a comprehensive solution for the lowest-level data retrieval and correction from 40 nm technology flash memory, providing a new avenue for forensic data extraction.
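As background on the SEC-DED (single error correction, double error detection) family the abstract says was reconstructed from the extracted ECC, the sketch below implements a textbook extended Hamming (8,4) code; the flash memory in the paper uses its own, longer code, so this is illustrative only.

```python
def encode(d):                      # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    word = [p1, p2, d1, p3, d2, d3, d4]          # Hamming(7,4), positions 1..7
    overall = sum(word) % 2                      # extra parity bit for double-error detection
    return word + [overall]

def decode(w):
    """Return (data, status) with status in {'ok', 'corrected', 'double-error'}."""
    w = list(w)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]               # parity check over positions 1,3,5,7
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]               # positions 2,3,6,7
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]               # positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3
    parity_ok = (sum(w) % 2 == 0)
    if syndrome == 0 and parity_ok:
        status = "ok"
    elif syndrome != 0 and not parity_ok:        # single error: flip the indicated bit
        w[syndrome - 1] ^= 1
        status = "corrected"
    elif syndrome == 0 and not parity_ok:        # error hit the overall parity bit itself
        status = "corrected"
    else:                                        # syndrome set but overall parity even
        status = "double-error"
    return [w[2], w[4], w[5], w[6]], status

code = encode([1, 0, 1, 1])
code[5] ^= 1                                     # inject a single bit error
print(decode(code))                              # ([1, 0, 1, 1], 'corrected')
```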
PubDate: THU, 26 OCT 2023 09:19:54 -04
Issue No: Vol. 19, No. null (2023)
-
- Multi-Surface Multi-Technique (MUST) Latent Fingerprint Database
-
Authors: Aakarsh Malhotra;Mayank Vatsa;Richa Singh;Keith B. Morris;Afzel Noore;
Pages: 1041 - 1055
Abstract: Latent fingerprint recognition involves the acquisition and comparison of latent fingerprints with an exemplar gallery of fingerprints. The diversity in the type of surface leads to different procedures to recover the latent fingerprint. The appearance of latent fingerprints varies significantly due to the development techniques, leading to large intra-class variation. Due to the lack of large datasets acquired using multiple mechanisms and surfaces, existing algorithms for latent fingerprint enhancement and comparison may perform poorly. In this study, we propose a Multi-Surface Multi-Technique (MUST) Latent Fingerprint Database. The database consists of more than 16,000 latent fingerprint impressions from 120 unique classes (120 fingers from 12 participants). Including corresponding exemplar fingerprints (livescan and rolled) and an extended gallery, the dataset has nearly 21,000 impressions. It has latent fingerprints acquired under 35 different scenarios and an additional four subsets of exemplar prints captured using a live scan sensor and inked-rolled prints. With 39 different subsets, the database illustrates intra-class variations in latent fingerprints. The database has potential usage towards building robust algorithms for latent fingerprint enhancement, segmentation, comparison, and multi-task learning. We also provide annotations for manually marked minutiae, acquisition Pixels Per Inch (PPI), and semantic segmentation masks. We also present the experimental protocol and the baseline results for the proposed dataset. The availability of the proposed database can encourage research in handling intra-class variation in latent fingerprint recognition.
PubDate: MON, 29 MAY 2023 10:03:17 -04
Issue No: Vol. 19, No. null (2023)
-
- Lightweight Radio Frequency Fingerprint Identification Scheme for V2X
Based on Temporal Correlation-
Authors: Xinyu Qi;Aiqun Hu;Tianshu Chen;
Pages: 1056 - 1070
Abstract: Radio frequency fingerprint identification (RFFI) is a promising physical layer authentication technique based on the inherent hardware defects of transmitters, yet there are bottlenecks in its application to vehicular networks. In this paper, we focus on the concerns of data dependency, channel effects, signal representation, and model efficiency to propose a lightweight RFFI scheme for vehicle-to-everything (V2X) communication based on temporal correlation. Specifically, a modified Gramian angular field (MAGF) and a Markov probability transition matrix with temporal dependency (MTTD) are proposed for signal representation to mine the temporal information related to device identity in terms of angular variation trajectories and first-order Markov transition probabilities, respectively. Due to the superiority of the proposed signal representation, paired with the customized pre-processing design, a lightweight feature extractor can achieve satisfactory RFFI performance in a very short time. We performed a comprehensive complexity analysis of existing models in the field and validated the proposed scheme using thirteen V2X devices in real wireless environments. In addition, the generalizability of the proposed pre-processing and representation method is demonstrated by testing on different deep learning models.
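As background on the two representations named in the abstract, the sketch below computes a standard Gramian angular summation field and a first-order Markov transition matrix over quantile bins; the paper's MAGF and MTTD are modified variants whose exact definitions are not reproduced here, and the bin count and test signal are illustrative.

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1       # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))                    # polar (angular) encoding
    return np.cos(phi[:, None] + phi[None, :])            # GASF "image"

def markov_transition_matrix(x: np.ndarray, n_bins: int = 8) -> np.ndarray:
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    states = np.digitize(x, edges)                        # quantile-bin state per sample
    m = np.zeros((n_bins, n_bins))
    for s, t in zip(states[:-1], states[1:]):
        m[s, t] += 1                                      # count first-order transitions
    row_sums = m.sum(axis=1, keepdims=True)
    return np.divide(m, row_sums, out=np.zeros_like(m), where=row_sums > 0)

signal = np.sin(np.linspace(0, 6 * np.pi, 256)) + 0.05 * np.random.default_rng(0).normal(size=256)
print(gramian_angular_field(signal).shape)       # (256, 256)
print(markov_transition_matrix(signal).round(2)) # rows sum to 1 (or 0 for unused states)
```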
PubDate: THU, 02 NOV 2023 09:17:11 -04
Issue No: Vol. 19, No. null (2023)
-
- Verifiable Arbitrary Queries With Zero Knowledge Confidentiality in
Decentralized Storage-
Authors: Haodi Wang;Yu Guo;Rongfang Bie;Xiaohua Jia;
Pages: 1071 - 1085
Abstract: Blockchain-based data storage has become an emerging paradigm, providing a fair and transparent data platform for decentralized applications. However, how to achieve secure on-chain verification for arbitrary SQL queries in such a decentralized storage remains under-explored. Due to the limitations of authenticated data structure (ADS), existing works either do not consider arbitrary query verification issue or fail to achieve practical gas consumption efficiency. In this paper, we present a novel arbitrary query verification scheme for decentralized storage. The proposed scheme, named $\mathsf {zkQuery}$ , enables efficient public verification for arbitrary queries with zero-knowledge confidentiality. $\mathsf {zkQuery}$ is built from the ingenious synergy of techniques from both zero-knowledge proof and smart contract technology. The core idea is to delegate smart contracts to fairly execute results verification and utilize our tailored zero-knowledge proof protocol to facilitate arbitrary computation in a privacy-preserving manner. The verification protocols of $\mathsf {zkQuery}$ are highly customized for decentralized storage, where the complexity of on-chain verification can be completed in logarithmic time, significantly decreasing gas consumption. We rigorously provide security analysis and complete the prototype implementation. The extensive experiments over the NEAR blockchain show that $\mathsf {zkQuery}$ can gain at least $2\times $ better performance than the baseline approach on all metrics.
PubDate: MON, 06 NOV 2023 09:19:32 -04
Issue No: Vol. 19, No. null (2023)
-
- Secret Key Generation for IRS-Assisted Multi-Antenna Systems: A Machine
Learning-Based Approach-
Authors: Chen Chen;Junqing Zhang;Tianyu Lu;Magnus Sandell;Liquan Chen;
Pages: 1086 - 1098
Abstract: Physical-layer key generation (PKG) based on wireless channels is a lightweight technique to establish secure keys between legitimate communication nodes. Recently, intelligent reflecting surfaces (IRSs) have been leveraged to enhance the performance of PKG in terms of secret key rate (SKR), as it can reconfigure the wireless propagation environment and introduce more channel randomness. In this paper, we investigate an IRS-assisted PKG system, taking into account the channel spatial correlation at both the base station (BS) and the IRS. Based on the considered system model, the closed-form expression of SKR is derived analytically considering correlated eavesdropping channels. Aiming to maximize the SKR, a joint design problem of the BS’s precoding matrix and the IRS’s phase shift vector is formulated. To address this high-dimensional non-convex optimization problem, we propose a novel unsupervised deep neural network (DNN)-based algorithm with a simple structure. Different from most previous works that adopt iterative optimization to solve the problem, the proposed DNN-based algorithm directly obtains the BS precoding and IRS phase shifts as the output of the DNN. Simulation results reveal that the proposed DNN-based algorithm outperforms the benchmark methods with regard to SKR.
PubDate: FRI, 10 NOV 2023 09:16:50 -04
Issue No: Vol. 19, No. null (2023)
-
- Machine Unlearning via Representation Forgetting With Parameter
Self-Sharing-
Authors: Weiqi Wang;Chenhan Zhang;Zhiyi Tian;Shui Yu;
Pages: 1099 - 1111
Abstract: Machine unlearning enables data owners to remove the contribution of their specified samples from trained models. However, existing methods fail to strike an optimal balance between erasure effectiveness and model utility preservation. Previous studies focused on removing the impact of user-specified data from the model as much as possible to implement unlearning. These methods usually result in significant model utility degradation, commonly called catastrophic unlearning. To address the issue, we systematically consider machine unlearning and formulate it as a two-objective optimization problem that involves forgetting the erased data and retaining the previously learned knowledge, highlighting accuracy preservation during the unlearning process. We propose an unlearning method called representation-forgetting unlearning with parameter self-sharing (RFU-SS) to achieve the two-objective unlearning goal. Firstly, we design a representation-forgetting unlearning (RFU) method that aims to remove the contribution of specified samples from a trained representation by minimizing the mutual information between the representation and the erased data. The representation is learned using the information bottleneck (IB) method. RFU is tailored to the IB structure models for ease of introduction. Secondly, we customize a parameter self-sharing structural optimization method for RFU (i.e., RFU-SS) to simultaneously optimize the forgetting and retention objectives to find the optimal balance. Extensive experimental results demonstrate a significant effectiveness improvement of RFU-SS over the state-of-the-art methods. RFU-SS almost eliminates catastrophic unlearning, reducing model accuracy degradation from over 6% to less than 0.2% on the MNIST dataset with an even better removal effect. The source code is available at https://github.com/wwq5-code/RFU-SS.git.
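A hedged sketch of the two-objective unlearning update the abstract describes: one term retains performance on the remaining data, the other forgets the erased data. As a stand-in for the paper's mutual-information objective on the IB representation, the forgetting term below simply pushes predictions on erased samples towards the uniform distribution; the tiny MLP and the weighting factor are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lambda_forget = 0.5                                   # balances retention vs. forgetting

def unlearning_step(x_retain, y_retain, x_erase):
    retain_loss = F.cross_entropy(model(x_retain), y_retain)     # keep learned knowledge
    log_probs_e = F.log_softmax(model(x_erase), dim=1)
    uniform = torch.full_like(log_probs_e, 1.0 / log_probs_e.size(1))
    forget_loss = F.kl_div(log_probs_e, uniform, reduction="batchmean")  # erase influence
    loss = retain_loss + lambda_forget * forget_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return retain_loss.item(), forget_loss.item()

x_retain, y_retain = torch.randn(64, 20), torch.randint(0, 10, (64,))
x_erase = torch.randn(16, 20)
print(unlearning_step(x_retain, y_retain, x_erase))
```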
PubDate: WED, 08 NOV 2023 09:16:49 -04
Issue No: Vol. 19, No. null (2023)
-
- Robust and Generalized Physical Adversarial Attacks via Meta-GAN
-
Authors: Weiwei Feng;Nanqing Xu;Tianzhu Zhang;Baoyuan Wu;Yongdong Zhang;
Pages: 1112 - 1125
Abstract: Deep neural networks are known to be vulnerable to adversarial examples, where adding carefully crafted adversarial perturbations to the inputs can mislead the DNN model. However, it is challenging to generate effective adversarial examples in the physical world due to many uncontrollable physical dynamics, which pose security and safety threats in the real world. Current physical attack methods aim to generate robust physical adversarial examples by simulating all possible physical dynamics. If attacking a new image or a new DNN model, they require expensive manual efforts for simulating physical dynamics or considerable time for iteratively optimizing. To tackle these limitations, we propose a robust and generalized physical adversarial attack method with Meta-GAN (Meta-GAN Attack), which is able to not only generate robust physical adversarial examples, but also generalize to attacking novel images and novel DNN models by accessing a few digital and physical images. First, we propose to craft robust physical adversarial examples with a generative attack model via simulating color and shape distortions. Second, we formulate the physical attack as a few-shot learning problem and design a novel class-agnostic and model-agnostic meta-learning algorithm to solve this problem. Extensive experiments on two benchmark datasets with four challenging experimental settings verify the superior robustness and generalization of our method by comparing to state-of-the-art physical attack methods. The source code is released at github.
PubDate: WED, 21 JUN 2023 10:01:41 -04
Issue No: Vol. 19, No. null (2023)
-
- Systematically Quantifying Cryptanalytic Nonlinearities in Strong PUFs
-
Authors: Durba Chatterjee;Kuheli Pratihar;Aritra Hazra;Ulrich Rührmair;Debdeep Mukhopadhyay;
Pages: 1126 - 1141
Abstract: Physically Unclonable Functions (PUFs) with large challenge space (also called Strong PUFs) are promoted for usage in authentications and various other cryptographic and security applications. In order to qualify for these cryptographic applications, the Boolean functions realized by PUFs need to possess a high nonlinearity (NL). However, with a large challenge space (usually $\geq 64$ bits), measuring NL by classical techniques like the Walsh transformation is computationally infeasible. In this paper, we propose the usage of a heuristic-based measure called the non-homomorphicity test which estimates the cryptographic NL of Boolean functions with high accuracy in spite of not needing access to the entire challenge-response set. We also combine our analysis with a technique used in linear cryptanalysis, called Piling-up lemma, to measure the NL of popular PUF compositions. As a demonstration to justify the soundness of the metric, we perform extensive experimentation by first estimating the NL of constituent Arbiter/Bistable Ring PUFs using the non-homomorphicity test, and then applying them to quantify the same for their XOR compositions namely XOR Arbiter PUFs and XOR Bistable Ring PUF. Our findings show that the metric explains the impact of various parameter choices of these PUF compositions on the NL obtained and thus promises to be used as an important objective criterion for future efforts to evaluate PUF designs. While the framework is not representative of the machine learning robustness of PUFs, it can be a useful complementary tool to analyze the cryptanalytic strengths of PUF primitives.
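A minimal sketch of the two ingredients named in the abstract, run on toy Boolean functions rather than a real PUF: a non-homomorphicity test that estimates how often f(x) ^ f(y) ^ f(z) differs from f(x ^ y ^ z), which grows with nonlinearity, and the Piling-up lemma used to combine the biases of independent components as in an XOR composition. Sample counts and the example biases are illustrative.

```python
import random

def non_homomorphicity_rate(f, n_bits, samples=20_000, seed=0):
    """Fraction of random triples for which f is not 'homomorphic' under XOR."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(samples):
        x, y, z = (rng.getrandbits(n_bits) for _ in range(3))
        if f(x) ^ f(y) ^ f(z) != f(x ^ y ^ z):
            fails += 1
    return fails / samples

def piling_up(biases):
    """Bias of the XOR of independent approximations, each with bias eps_i
    (bias = Pr[approximation holds] - 1/2): eps = 2^(n-1) * prod(eps_i)."""
    eps = 2 ** (len(biases) - 1)
    for b in biases:
        eps *= b
    return eps

linear_f = lambda x: bin(x & 0b1011).count("1") % 2                  # linear: never fails
quadratic_f = lambda x: ((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1)  # has a quadratic term
print(non_homomorphicity_rate(linear_f, 8))     # ~0.0
print(non_homomorphicity_rate(quadratic_f, 8))  # clearly > 0
print(piling_up([0.25, 0.25, 0.25]))            # 2^2 * 0.25^3 = 0.0625
```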
PubDate: WED, 01 NOV 2023 09:18:15 -04
Issue No: Vol. 19, No. null (2023)
-
- ResNeXt+: Attention Mechanisms Based on ResNeXt for Malware Detection and
Classification-
Authors: Yuewang He;Xiangui Kang;Qiben Yan;Enping Li;
Pages: 1142 - 1155
Abstract: Malware detection and classification are crucial for protecting digital devices and information systems. Accurate identification of malware enables researchers and incident responders to take prompt measures against malware and mitigate its damage. With the development of attention mechanisms in the field of computer vision, attention mechanism-based malware detection techniques are also rapidly evolving. The essence of the attention mechanism is to focus on the information of interest and suppress the useless information. In this paper, we develop different plug-and-play attention mechanisms based on the ResNeXt tagging model, where the designed model is trained to focus on malware features by capturing the channel-level perception field of malware images, and is also able to provide more helpful and flexible information than other methods. We have named this designed neural network ResNeXt+, and its core modules are built with different plug-and-play attention mechanisms. Extensive experimental results show that ResNeXt+ is effective and efficient in malware detection and classification, with high classification accuracy. The proposed methods outperform state-of-the-art techniques on seven benchmark datasets. Cross-dataset experiments are conducted on the Windows and Android datasets, achieving an accuracy of 90.64% for cross-dataset detection on Android. Ablation experiments are also conducted on seven datasets, which demonstrate that attention mechanisms can improve malware detection and classification accuracy.
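The abstract does not disclose the exact attention modules; as one concrete example of a plug-and-play channel attention block of the kind ResNeXt+ attaches to its backbone, the sketch below shows a standard squeeze-and-excitation module. The channel count and reduction ratio are illustrative assumptions, and the paper's own modules may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                 # squeeze: global channel context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                                  # re-weight informative channels

# The block can be dropped in after any stage of a ResNeXt backbone; a random
# feature map stands in here for a malware-image feature.
features = torch.randn(4, 256, 14, 14)
attended = ChannelAttention(256)(features)
print(attended.shape)        # torch.Size([4, 256, 14, 14])
```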
PubDate: MON, 30 OCT 2023 09:19:14 -04
Issue No: Vol. 19, No. null (2023)
-
- TMG-GAN: Generative Adversarial Networks-Based Imbalanced Learning for
Network Intrusion Detection-
Authors: Hongwei Ding;Yu Sun;Nana Huang;Zhidong Shen;Xiaohui Cui;
Pages: 1156 - 1167
Abstract: Internet of Things (IoT) devices are large in number, widely distributed, weak in protection ability, and vulnerable to various malicious attacks. Intrusion detection technology can provide good protection for network equipment. However, the normal traffic and abnormal traffic in the network are usually imbalanced. Imbalanced samples will seriously affect the performance of machine learning detection algorithms. Therefore, this paper proposes an intrusion detection method based on data augmentation, namely TMG-IDS. We name the proposed data augmentation model TMG-GAN, which is a data augmentation method based on generative adversarial networks (GAN). First, TMG-GAN has a multi-generator structure, which can be used to generate different types of attack data simultaneously. Second, we add a classifier to the architecture, which allows the generator and discriminator to be optimized more efficiently based on the classification loss. Third, we calculate the cosine similarity between the generated samples and both the original samples and other classes’ generated samples as a generator loss, which can further improve the quality of the generated samples and reduce the class overlap between the distributions of the various generated samples. We conduct extensive experiments on two intrusion detection datasets, CICIDS2017 and UNSW-NB15. The experimental results show that, compared with advanced oversampling algorithms and the latest intrusion detection algorithms, the proposed TMG-IDS method achieves good detection performance under the three indicators of Precision, Recall, and F1-score.
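A hedged sketch of the cosine-similarity generator loss described in the abstract: samples generated for one attack class are pulled towards real samples of that class and pushed away from samples produced by the other generators. Averaging features before taking the cosine, and the equal weighting of the two terms, are illustrative simplifications rather than the exact TMG-GAN loss.

```python
import torch
import torch.nn.functional as F

def tmg_cosine_loss(gen_c, real_c, gen_other):
    """gen_c: (n, d) samples generated for class c; real_c: (m, d) real class-c
    samples; gen_other: (k, d) samples produced by the other generators."""
    pull = 1 - F.cosine_similarity(gen_c.mean(0, keepdim=True),
                                   real_c.mean(0, keepdim=True)).mean()
    push = F.cosine_similarity(gen_c.mean(0, keepdim=True),
                               gen_other.mean(0, keepdim=True)).mean()
    # minimise: stay similar to the own class, dissimilar to other generated classes
    return pull + push

print(tmg_cosine_loss(torch.randn(32, 64), torch.randn(128, 64), torch.randn(96, 64)))
```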
PubDate: WED, 08 NOV 2023 09:16:49 -04
Issue No: Vol. 19, No. null (2023)
-
- Beyond the Prior Forgery Knowledge: Mining Critical Clues for General Face
Forgery Detection-
Authors: Anwei Luo;Chenqi Kong;Jiwu Huang;Yongjian Hu;Xiangui Kang;Alex C. Kot;
Pages: 1168 - 1182
Abstract: Face forgery detection is essential in combating malicious digital face attacks. Previous methods mainly rely on prior expert knowledge to capture specific forgery clues, such as noise patterns, blending boundaries, and frequency artifacts. However, these methods tend to get trapped in local optima, resulting in limited robustness and generalization capability. To address these issues, we propose a novel Critical Forgery Mining (CFM) framework, which can be flexibly assembled with various backbones to boost their generalization and robustness performance. Specifically, we first build a fine-grained triplet and suppress specific forgery traces through prior knowledge-agnostic data augmentation. Subsequently, we propose a fine-grained relation learning prototype to mine critical information in forgeries through instance and local similarity-aware losses. Moreover, we design a novel progressive learning controller to guide the model to focus on principal feature components, enabling it to learn critical forgery features in a coarse-to-fine manner. The proposed method achieves state-of-the-art forgery detection performance under various challenging evaluation settings. The source code is available at: https://github.com/LoveSiameseCat/CFM.
PubDate: MON, 13 NOV 2023 09:18:16 -04
Issue No: Vol. 19, No. null (2023)
-
- MalPatch: Evading DNN-Based Malware Detection With Adversarial Patches
-
Authors: Dazhi Zhan;Yexin Duan;Yue Hu;Weili Li;Shize Guo;Zhisong Pan;
Pages: 1183 - 1198
Abstract: Static analysis is a crucial protection layer that enables modern antivirus systems to address the rampant proliferation of malware. These systems increasingly rely on deep neural networks (DNNs) to automatically extract reliable features and achieve outstanding detection accuracy. Since DNNs are known to be vulnerable to adversarial examples, several studies have proposed practical evasion attacks to generate adversarial perturbations that can evade malware detectors. These attacks, however, require specific designs for the given input sample, prohibiting them from large-scale deployment. Therefore, it is more practical to generate sample-agnostic perturbations that do not involve recalculations regardless of the input malware sample. To this end, we leverage an adversarial patch attack, which is a special type of adversarial attack that does not require knowledge of the sample being modified during the attack construction process. In particular, we propose a new adversarial attack against malware detection systems called MalPatch. It locates the nonfunctional part of malware for adversarial patch injection to protect its executability while generating adversarial examples based on different strategies. The generated patch can be injected into any malware sample, fooling the detector into classifying it as benign. Experimental results demonstrate that MalPatch is effective under different attack settings. In the white-box setting, MalPatch achieves 69%-78% success rates against DNN detectors based on raw byte features and 47%-96% success rates against four grayscale detectors based on image features. In the black-box setting, the success rates of MalPatch against the same models reach 54%-74% and 27%-42%, respectively. We conclude by discussing several of its potential countermeasures and the generality of our approach.
PubDate: THU, 16 NOV 2023 09:20:13 -04
Issue No: Vol. 19, No. null (2023)
-
- Attention-Based API Locating for Malware Techniques
-
Authors: Guo-Wei Wong;Yi-Ting Huang;Ying-Ren Guo;Yeali Sun;Meng Chang Chen;
Pages: 1199 - 1212
Abstract: This paper presents APILI, an innovative approach to behavior-based malware analysis that utilizes deep learning to locate the API calls corresponding to discovered malware techniques in dynamic execution traces. APILI defines multiple attentions between API calls, resources, and techniques, incorporating the MITRE ATT&CK framework (adversary tactics, techniques, and procedures) through a neural network. We employ a fine-tuned BERT for argument/resource embedding, SVD for technique representation, and several design enhancements, including the layer structure and noise addition, to improve the locating performance. To the best of our knowledge, this is the first attempt to locate low-level API calls that correspond to high-level malicious behaviors (that is, techniques). Our evaluation demonstrates that APILI outperforms other traditional and machine learning techniques in both technique discovery and API locating. These results indicate the promising performance of APILI, allowing it to reduce the analysis workload.
PubDate: MON, 06 NOV 2023 09:19:32 -04
Issue No: Vol. 19, No. null (2023)
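As a rough illustration of the locating step, the sketch below builds low-rank technique vectors with an SVD and ranks the API calls of a trace by softmax attention weights. The co-occurrence matrix, the shared two-dimensional space, and the function names are invented for the example; APILI itself relies on fine-tuned BERT embeddings and a full multi-attention network.

```python
# Illustrative only: dot-product attention between API-call vectors and a
# technique vector derived via truncated SVD.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical technique-by-API co-occurrence counts (3 techniques, 6 APIs).
M = rng.integers(0, 5, size=(3, 6)).astype(float)

# Low-rank technique representations via truncated SVD (rank 2).
U, S, Vt = np.linalg.svd(M, full_matrices=False)
technique_vecs = U[:, :2] * S[:2]          # one row per technique

# API representations projected into the same 2-d space for the sketch.
api_vecs = Vt[:2].T                        # one row per API

def locate_api_calls(technique_id, top_k=2):
    """Rank APIs by softmax attention weight with respect to one technique."""
    scores = api_vecs @ technique_vecs[technique_id]
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return np.argsort(weights)[::-1][:top_k], weights

top, w = locate_api_calls(technique_id=0)
print("most relevant API indices:", top)
```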
-
- Cross-User Leakage Mitigation for Authorized Multi-User Encrypted Data Sharing
Authors: Mingyue Wang;Zizhuo Chen;Yinbin Miao;Hejiao Huang;Cong Wang;Xiaohua Jia;
Pages: 1213 - 1226
Abstract: Cloud computing has been a research focus in both academic and industrial communities for decades. Along with this trend, Searchable Encryption (SE) technology has emerged and developed as data privacy concerns increased. Many schemes have been proposed to solve the privacy-preserving data-sharing problem in multi-user scenarios. Most existing solutions are based on the assumption that all users are trusted. However, cross-user leakage arises when some users are malicious or compromised, because of the inherent linkability of authorization information and search results when multiple users request data from the same database. To this end, we propose a cross-user leakage mitigation scheme for authorized encrypted data sharing in a two-server model. We utilize a blinding factor to delink authorizations based on Symmetric Multi-Key Searchable Encryption (SMKSE). To break the linkability of query results, we combine the zero-sum garbled Bloom filter with the oblivious transfer technique, so that each of the two servers can only learn partial information. We devise a group-based Bloom filter structure in the indices to improve efficiency. We perform a formal security analysis and demonstrate the scheme’s efficiency through comparative experiments.
PubDate: WED, 15 NOV 2023 09:16:56 -04
Issue No: Vol. 19, No. null (2023)
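The two-server splitting idea can be pictured with a toy example: every Bloom-filter cell is held as two XOR shares so that neither server alone learns the filter, and a membership query recombines only the queried positions. This is merely the share-splitting intuition; it omits the paper's zero-sum garbled Bloom filter, oblivious transfer, and group-based index structure.

```python
# Toy sketch of splitting Bloom-filter cells into two XOR shares held by
# two non-colluding servers; parameters and hash choice are illustrative.
import hashlib
import secrets

M, K = 64, 3   # filter size and number of hash functions

def positions(item: bytes):
    return [int.from_bytes(hashlib.sha256(bytes([i]) + item).digest(), "big") % M
            for i in range(K)]

def build_shared_filter(items):
    bf = [0] * M
    for it in items:
        for p in positions(it):
            bf[p] = 1
    share_a = [secrets.randbits(1) for _ in range(M)]
    share_b = [a ^ b for a, b in zip(share_a, bf)]   # share_a XOR share_b == bf
    return share_a, share_b

def query(share_a, share_b, item):
    return all((share_a[p] ^ share_b[p]) == 1 for p in positions(item))

sa, sb = build_shared_filter([b"alice-doc-1", b"bob-doc-7"])
print(query(sa, sb, b"alice-doc-1"), query(sa, sb, b"mallory-doc"))
```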
-
- Pluggable Attack for Visual Object Tracking
Authors: Shaochuan Zhao;Tianyang Xu;Xiao-Jun Wu;Josef Kittler;
Pages: 1227 - 1240
Abstract: Adversarial attacks on a visual tracker aim to drift the apparent target to the background by adding malicious perturbations to the source images. Although they convincingly demonstrate the ability to decrease accuracy, existing tracking attackers mislead the target predictions at the decision level, which is specific to the tracker design and narrows their applicability to other tracking approaches. In contrast, we advocate that attacks be performed by corrupting feature-level clues, i.e., the feature representations extracted by deep networks. The proposed approach provides a general attacking framework for backbone-head tracking architectures. Motivated by the knowledge that the quality of intermediate-level features strongly influences the decision making, four intermediate-level attack methods are proposed to maximise the difference between the feature distributions of natural and adversarial samples, thus decoupling the attack strategies from the form of the output of specific victim trackers. Interestingly, our intermediate-level attacks are compatible with existing decision-level attacks, so a joint optimisation of these two kinds of adversarial objective functions has the potential to achieve better attacking performance. Hence, the proposed adversarial attack methodology can be used in conjunction with several mainstream tracking paradigms (discriminative correlation filters, Siamese networks, and Transformer trackers), demonstrating its pluggability. The experimental results on four popular benchmarks, namely OTB100, UAV123, LaSOT, and TLP, verify that our method produces consistent and substantial accuracy degradation.
PubDate: MON, 13 NOV 2023 09:18:15 -04
Issue No: Vol. 19, No. null (2023)
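A minimal sketch of a feature-level attack in the spirit described above: projected gradient steps that maximise the distance between backbone features of the clean and the perturbed frame, without touching any tracker head. The torchvision backbone, loss, and hyper-parameters are stand-ins, not the paper's four attack variants.

```python
# Feature-level (intermediate-level) attack sketch: push backbone features
# of the perturbed frame away from those of the clean frame.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()           # keep only the feature extractor
backbone.eval()

def feature_attack(x, eps=8 / 255, alpha=2 / 255, steps=10):
    with torch.no_grad():
        clean_feat = backbone(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        adv_feat = backbone((x + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(adv_feat, clean_feat)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()    # ascend: push features apart
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

frame = torch.rand(1, 3, 224, 224)            # placeholder search region
adv_frame = feature_attack(frame)
```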
-
- AgrAmplifier: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification
Authors: Zirui Gong;Liyue Shen;Yanjun Zhang;Leo Yu Zhang;Jingwei Wang;Guangdong Bai;Yong Xiang;
Pages: 1241 - 1250
Abstract: The collaborative nature of federated learning (FL) poses a major threat in the form of manipulation of local training data and local updates, known as the Byzantine poisoning attack. To address this issue, many Byzantine-robust aggregation rules (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants. This paper introduces a novel approach called AGRAMPLIFIER, aiming to simultaneously improve robustness, fidelity, and efficiency of the existing AGRs. The core idea of AGRAMPLIFIER is to amplify the “morality” of local updates by identifying the most repressive features of each gradient update, which provides a clearer distinction between malicious and benign updates, consequently improving the detection effect. To achieve this objective, two approaches, namely AGRMP and AGRXAI, are proposed. AGRMP organizes local updates into patches and extracts the largest value from each patch, while AGRXAI leverages explainable AI methods to extract the gradient of the most activated features. By equipping AGRAMPLIFIER with the existing Byzantine-robust mechanisms, we successfully enhance the model robustness, maintaining its fidelity and improving overall efficiency. AGRAMPLIFIER is universally compatible with the existing Byzantine-robust mechanisms. The paper demonstrates its effectiveness by integrating it with all mainstream AGR mechanisms. Extensive evaluations conducted on seven datasets from diverse domains against seven representative poisoning attacks consistently show enhancements in robustness, fidelity, and efficiency, with average gains of 40.08%, 39.18%, and 10.68%, respectively.
PubDate: THU, 16 NOV 2023 09:20:13 -04
Issue No: Vol. 19, No. null (2023)
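The AGRMP amplification step, as described, reduces to a patch-wise maximum over a flattened local update; the sketch below shows that operation with an illustrative patch size and padding policy (both are assumptions for the example).

```python
# Minimal sketch of patch-wise amplification of a local update: split the
# flattened gradient into patches and keep the largest value of each patch.
import numpy as np

def agrmp_amplify(update, patch_size=4):
    flat = np.asarray(update, dtype=float).ravel()
    pad = (-len(flat)) % patch_size            # zero-pad to a multiple of patch_size
    flat = np.pad(flat, (0, pad))
    return flat.reshape(-1, patch_size).max(axis=1)

grad = np.random.randn(1000)                   # placeholder local update
amplified = agrmp_amplify(grad)
print(grad.shape, "->", amplified.shape)
```

The compressed vector would then be fed to an existing Byzantine-robust aggregation rule in place of the raw update.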
-
- URadar: Discovering Unrestricted File Upload Vulnerabilities via Adaptive Dynamic Testing
Authors: Yuanchao Chen;Yuwei Li;Zulie Pan;Yuliang Lu;Juxing Chen;Shouling Ji;
Pages: 1251 - 1266
Abstract: Unrestricted file upload (UFU) vulnerabilities, especially unrestricted executable file upload (UEFU) vulnerabilities, pose severe security risks to web servers. For instance, attackers can leverage such vulnerabilities to execute arbitrary code and gain control of an entire web server. Therefore, it is important to develop effective and efficient methods to detect UFU and UEFU vulnerabilities. Most state-of-the-art methods are designed based on dynamic testing. Nevertheless, they still suffer from two critical limitations. 1) They heavily rely on manual efforts, which are error-prone and have poor adaptability. 2) They seldom leverage effective information to guide the testing, resulting in a large number of invalid test cases. Such limitations severely hinder the performance of UFU vulnerability detection. In this paper, we propose URadar, an adaptive dynamic testing-based method for detecting UFU and UEFU vulnerabilities. URadar has three core designs, namely file upload interface identification, file type restriction inference, and invalid mutation combination filtration, which effectively address the two limitations of existing methods. To evaluate the performance of URadar, we conduct extensive experiments and compare URadar with state-of-the-art methods (e.g., FUSE, RIPS). In testing 18 web applications, URadar discovers 26 UEFU vulnerabilities, of which 8 are new and 6 have been assigned new CVE/CNNVD IDs. By contrast, FUSE and RIPS find 14 and 2 UEFU vulnerabilities, respectively. To discover the same number of UFU vulnerabilities, FUSE needs to send 73,261 request packets with a time cost of 2,791.1 s on average, 23.43 and 20.53 times the corresponding costs of URadar. These results demonstrate that URadar significantly outperforms the state-of-the-art methods. In addition, we have open-sourced URadar to facilitate future research on UFU vulnerability detection.
PubDate: TUE, 21 NOV 2023 09:17:31 -04
Issue No: Vol. 19, No. null (2023)
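The mutation-combination idea can be sketched as enumerating upload test cases over a few mutation dimensions and skipping combinations already inferred to be rejected. The dimensions, payloads, and filtering rule below are hypothetical placeholders, not URadar's interface identification or restriction inference logic.

```python
# Hypothetical sketch: enumerate upload test cases as combinations of
# mutation dimensions, skipping combinations known to be rejected.
import itertools

extensions    = ["php", "php5", "phtml", "png.php"]
content_types = ["image/png", "application/octet-stream", "text/plain"]
magic_bytes   = [b"", b"\x89PNG\r\n\x1a\n", b"GIF89a"]

invalid_combos = set()              # filled in as the target rejects cases

def test_cases(payload: bytes):
    for ext, ctype, magic in itertools.product(extensions, content_types, magic_bytes):
        if (ext, ctype) in invalid_combos:
            continue                # filtration: skip known-bad combinations
        yield {
            "filename": f"probe.{ext}",
            "content_type": ctype,
            "body": magic + payload,
        }

for case in test_cases(b"<?php echo 1; ?>"):
    pass  # a real harness would upload `case` and record the server response
```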
-
- Imperceptible and Robust Backdoor Attack in 3D Point Cloud
Authors: Kuofeng Gao;Jiawang Bai;Baoyuan Wu;Mengxi Ya;Shu-Tao Xia;
Pages: 1267 - 1282
Abstract: With the thriving of deep learning in processing point cloud data, recent works show that backdoor attacks pose a severe security threat to 3D vision applications. The attacker injects the backdoor into the 3D model by poisoning a few training samples with a trigger, such that the backdoored model performs well on clean samples but behaves maliciously when the trigger pattern appears. Existing attacks often insert additional points into the point cloud as the trigger, or utilize a linear transformation (e.g., rotation) to construct the poisoned point cloud. However, the effects of these poisoned samples are likely to be weakened or even eliminated by commonly used pre-processing techniques for 3D point clouds, e.g., outlier removal or rotation augmentation. In this paper, we propose a novel imperceptible and robust backdoor attack (IRBA) to tackle this challenge. We utilize a nonlinear and local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations. As there are several hyper-parameters and randomness in WLT, it is difficult to produce two similar transformations. Consequently, poisoned samples with unique transformations are likely to be resistant to the aforementioned pre-processing techniques. Besides, the distortion caused by a fixed WLT is both controllable and smooth, so the generated poisoned samples are imperceptible to human inspection. Extensive experiments on three benchmark datasets and four models show that IRBA achieves an attack success rate (ASR) above 80% in most cases even with pre-processing techniques, which is significantly higher than previous state-of-the-art attacks. Our code is available at https://github.com/KuofengGao/IRBA.
PubDate: THU, 16 NOV 2023 09:20:13 -04
Issue No: Vol. 19, No. null (2023)
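A rough sketch of a weighted local transformation on a point cloud: a few random anchors each apply a small rotation whose effect decays with distance, so the distortion stays local and smooth. The anchor count, kernel width, and rotation axis are illustrative choices, not the paper's exact WLT.

```python
# Illustrative weighted local transformation: blend small per-anchor
# rotations into the cloud with Gaussian locality weights.
import numpy as np

def weighted_local_transform(points, n_anchors=8, sigma=0.2, max_angle=0.3, seed=0):
    rng = np.random.default_rng(seed)
    out = points.copy()
    anchors = points[rng.choice(len(points), n_anchors, replace=False)]
    for anchor in anchors:
        theta = rng.uniform(-max_angle, max_angle)       # small z-axis rotation
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        d = np.linalg.norm(points - anchor, axis=1)
        w = np.exp(-(d ** 2) / (2 * sigma ** 2))[:, None]  # locality weights
        rotated = (points - anchor) @ R.T + anchor
        out += w * (rotated - points)                      # fade with distance
    return out

cloud = np.random.rand(1024, 3)                            # placeholder sample
poisoned = weighted_local_transform(cloud)
```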
-
- Distortion Model-Based Spectral Augmentation for Generalized Recaptured Document Detection
Authors: Changsheng Chen;Bokang Li;Rizhao Cai;Jishen Zeng;Jiwu Huang;
Pages: 1283 - 1298
Abstract: Document recapturing is a presentation attack that covers up the forensic traces in the digital domain. Document presentation attack detection (DPAD) is an important step in the document authentication pipeline. Existing DPAD methods suffer from low generalization performance in cross-domain scenarios with different types of documents. Data augmentation is a de facto technique to reduce the risk of overfitting the training data and improve the generalizability of a trained model. In this work, we improve the generalization performance of DPAD approaches by addressing two important limitations of existing frequency domain augmentation (FDA) methods. First, contrary to existing FDA methods that treat different spectral bands equally, we establish a band-of-interest localization (BOIL) method that locates the spectral band-of-interest (BOI) related to the recapturing operation using domain knowledge from theoretical distortion models. Second, we propose a frequency-domain halftoning augmentation (FHAG) strategy that enhances the halftoning features in the BOI while accounting for different halftoning distortions. To evaluate the generalization performance of our FHAG with BOIL method on different types of document images, we have constructed a diverse recaptured document image dataset with 162 types of documents (RDID162), consisting of 5346 samples. The proposed method has been evaluated on generic deep learning models and a state-of-the-art DPAD approach under both cross-device and cross-domain protocols for the DPAD task. Compared to existing FDA methods, our method improves models with a ResNet50 backbone by reducing EERs by more than 25% (relative) or 5 percentage points (absolute). The source code and data in this work are available at https://github.com/chenlewis/FHAG-with-BOIL.
PubDate: TUE, 28 NOV 2023 09:21:39 -04
Issue No: Vol. 19, No. null (2023)
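The band-of-interest idea can be illustrated by perturbing FFT magnitudes only inside an annulus of normalised frequencies; the band limits and scaling range below are arbitrary stand-ins for the model-derived BOI and the halftoning-aware augmentation.

```python
# Illustrative frequency-band augmentation: scale FFT coefficients inside
# an annular band of normalised frequencies, leave the rest untouched.
import numpy as np

def band_augment(img, low=0.2, high=0.45, scale_range=(0.7, 1.3), seed=0):
    rng = np.random.default_rng(seed)
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2) / 0.5              # normalised radius
    band = (r >= low) & (r <= high)                   # the band of interest
    F[band] *= rng.uniform(*scale_range, size=band.sum())
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

doc = np.random.rand(256, 256)                        # placeholder grayscale patch
aug = band_augment(doc)
```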
-
- SLIM: A Secure and Lightweight Multi-Authority Attribute-Based Signcryption Scheme for IoT
Authors: Bei Gong;Chong Guo;Chong Guo;Chen Guo;Yao Sun;Muhammad Waqas;Sheng Chen;
Pages: 1299 - 1312
Abstract: Although attribute-based signcryption (ABSC) offers a promising technology to ensure the security of IoT data sharing, it faces a two-fold challenge in practical implementation, namely linearly increasing computation and communication costs and the heavy load of single-authority key management. To this end, we propose a Secure and Lightweight Multi-authority ABSC scheme called SLIM in this paper. The signcryption and de-signcryption costs of devices are reduced to a small constant by offloading most of the computation to the edge server. To minimize communication and storage costs, a short, constant-size ciphertext is designed. Moreover, we adopt a hierarchical multi-authority architecture, setting up multiple attribute authorities that manage keys independently to avoid a single-authority bottleneck. Rigorous security analysis proves that the SLIM scheme can resist adaptive chosen ciphertext attacks and adaptive chosen message attacks under the standard model. Simulation experiments demonstrate the correctness of our theoretical derivations and the cost reduction of the SLIM scheme in computation, communication, and storage.
PubDate: WED, 08 NOV 2023 09:16:49 -04
Issue No: Vol. 19, No. null (2023)
-
- Video-Based Visible-Infrared Person Re-Identification With Auxiliary Samples
Authors: Yunhao Du;Cheng Lei;Zhicheng Zhao;Yuan Dong;Fei Su;
Pages: 1313 - 1325
Abstract: Visible-infrared person re-identification (VI-ReID) aims to match persons captured by visible and infrared cameras, allowing person retrieval and tracking in 24-hour surveillance systems. Previous methods focus on learning from cross-modality person images captured by different cameras, while temporal information and single-camera samples tend to be neglected. To address this, we first contribute a large-scale VI-ReID dataset named BUPTCampus. Different from most existing VI-ReID datasets, it 1) collects tracklets instead of images to introduce rich temporal information, 2) contains pixel-aligned cross-modality sample pairs for better modality-invariant learning, and 3) provides an auxiliary set to help enhance the optimization, in which each identity appears in only a single camera. Based on the constructed dataset, we present a two-stream framework as a baseline and apply a Generative Adversarial Network (GAN) to narrow the gap between the two modalities. To exploit the advantages introduced by the auxiliary set, we propose a curriculum learning based strategy to jointly learn from both the primary and auxiliary sets. Moreover, we design a novel temporal k-reciprocal re-ranking method to refine the ranking list with fine-grained temporal correlation cues. Experimental results demonstrate the effectiveness of the proposed methods. We also reproduce 9 state-of-the-art image-based and video-based VI-ReID methods on BUPTCampus, and our methods show substantial superiority over them. The code and dataset are available at https://github.com/dyhBUPT/BUPTCampus.
PubDate: THU, 30 NOV 2023 09:18:20 -04
Issue No: Vol. 19, No. null (2023)
-
- Similarity Distance Learning on SPD Manifold for Writer Independent Offline Signature Verification
Authors: Elias N. Zois;Dimitrios Tsourounis;Dimitrios Kalivas;
Pages: 1342 - 1356
Abstract: Verifying the presence or approval of a person in past and present-day activities by means of a handwritten signature is a captivating biometric challenge. Several engineering branches, such as computer vision, pattern recognition and, more recently, data-driven machine learning, are combined in multi-disciplinary signature verification frameworks in order to deliver efficient automated assistance for tasks that usually demand expert knowledge and skills. In this work, we propose, for the first time in the offline signature verification literature, the use of a learnable Symmetric Positive Definite (SPD) manifold distance framework to build a global writer-independent signature verification classifier. The key building block of the framework is the use of regional covariance matrices of handwritten signature images as visual descriptors, which maps the images onto the Symmetric Positive Definite manifold. The learning and verification protocol explores both blind intra- and blind inter-transfer learning frameworks with the use of four popular signature datasets of Western and Asian origin. Experiments strongly indicate that the learnable SPD manifold similarity distance can be highly effective for offline writer-independent signature verification.
PubDate: THU, 16 NOV 2023 09:20:13 -04
Issue No: Vol. 19, No. null (2023)
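The descriptor/metric pair can be sketched in a few lines: a regional covariance matrix as the SPD descriptor and a log-Euclidean distance between two such matrices. The pixel features, regularisation, and the fixed (non-learned) metric below are simplifications of the paper's learnable similarity distance.

```python
# Sketch of an SPD descriptor and a log-Euclidean distance between two
# covariance descriptors; feature choice and regularisation are illustrative.
import numpy as np
from scipy.linalg import logm

def region_covariance(patch):
    """Covariance of simple per-pixel features (x, y, intensity, |dx|, |dy|)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(dx).ravel(), np.abs(dy).ravel()])
    C = np.cov(feats)
    return C + 1e-6 * np.eye(C.shape[0])       # keep it strictly positive definite

def log_euclidean_distance(A, B):
    return np.linalg.norm(logm(A) - logm(B), "fro")

sig1 = np.random.rand(64, 96)                   # placeholder signature patches
sig2 = np.random.rand(64, 96)
d = log_euclidean_distance(region_covariance(sig1), region_covariance(sig2))
print(d)
```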
-
- WFDefProxy: Real World Implementation and Evaluation of Website Fingerprinting Defenses
Authors: Jiajun Gong;Wuqi Zhang;Charles Zhang;Tao Wang;
Pages: 1357 - 1371
Abstract: Tor, an onion-routing anonymity network, can be attacked by Website Fingerprinting (WF), which de-anonymizes encrypted web browsing traffic by analyzing its unique sequence characteristics. Although many defenses have been proposed, few have been implemented and tested in the real world; most state-of-the-art defenses were only simulated. Simulations fail to capture the real performance of these defenses as they make simplifying assumptions about the protocol stack and network conditions. To allow WF defenses to be analyzed as real implementations, we create WFDefProxy, the first general platform for WF defense implementation on Tor as pluggable transports. We implement three state-of-the-art WF defenses: FRONT, Tamaraw, and RegulaTor. We evaluate each defense extensively by directly collecting defended datasets under WFDefProxy. Our results show that simulation can be inaccurate in many cases. Specifically, Tamaraw’s time overhead was underestimated by 22% in one setting and overestimated by 24% in another. RegulaTor’s time overhead was underestimated by 30–40%. We find that a major source of simulation inaccuracy is that they cannot incorporate how packets depend on each other. We also find that adverse network conditions (which are ignored in simulation), especially congestion, can affect the evaluated overhead of defenses. These results show that it is important to evaluate defenses as implementations instead of only simulations to avoid errors in evaluation.
PubDate: WED, 25 OCT 2023 09:16:53 -04
Issue No: Vol. 19, No. null (2023)
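For context, a trace-level sketch of the kind of padding FRONT performs (dummy packet times drawn from a Rayleigh distribution) is shown below, with the resulting bandwidth overhead measured directly. The parameters are illustrative, and the sketch is precisely the kind of simulation whose limits the paper documents: it ignores packet dependencies and network conditions, which WFDefProxy captures by running the defenses as Tor pluggable transports.

```python
# Trace-level padding sketch: merge Rayleigh-timed dummy packets into a
# real trace and compute the bandwidth overhead. Illustrative only.
import numpy as np

def pad_trace(real_times, n_dummies=300, scale=2.5, seed=0):
    rng = np.random.default_rng(seed)
    dummy_times = rng.rayleigh(scale, size=n_dummies)
    times = np.concatenate([real_times, dummy_times])
    is_dummy = np.concatenate([np.zeros(len(real_times), bool),
                               np.ones(n_dummies, bool)])
    order = np.argsort(times)                   # interleave by timestamp
    return times[order], is_dummy[order]

real = np.sort(np.random.uniform(0, 10, size=1000))   # placeholder packet times
times, is_dummy = pad_trace(real)
bandwidth_overhead = is_dummy.sum() / (~is_dummy).sum()
print(f"bandwidth overhead: {bandwidth_overhead:.2f}")
```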
-
- SepBIN: Binary Feature Separation for Better Semantic Comparison and Authorship Verification
Authors: Qige Song;Yafei Sang;Yongzheng Zhang;Shuhao Li;Xiaolin Xu;
Pages: 1372 - 1387
Abstract: Binary semantic comparison and authorship verification are critical in many security applications. They respectively focus on the functional semantic features and the developers’ programming style features of binary code, which are usually mixed without clear demarcation. Recently, researchers have proposed learning-based approaches for intelligent binary analysis. These generally address single tasks with hand-crafted feature sets or neural binary encoders, which suffer performance bottlenecks due to the noise in mixed features. This paper proposes SepBIN, a novel neural network framework that exploits the intrinsic correlation of the binary semantic comparison and authorship verification tasks and automatically separates semantic and stylistic binary features. We first construct a strong backbone binary encoder, then utilize preliminary decomposition subnets and a flexible gating-based feature fusion mechanism to distill pure semantic-related and style-related binary representations, and further improve their quality with a feature reconstruction module. The overall SepBIN model is optimized by a multi-objective joint optimization strategy. We conduct extensive experiments on Google Code Jam (GCJ) datasets of different languages and scales. Results show that SepBIN simultaneously benefits the binary semantic comparison and authorship verification tasks through the effective binary semantic-style feature separation mechanism, and provides multi-perspective interpretability for the performance gains. For state-of-the-art approaches with different binary encoders, SepBIN can adaptively improve them with the designed separation modules. Furthermore, we adopt a pretraining-finetuning strategy to effectively transfer SepBIN’s separation capability to real-world applications, including APT malware homology detection and binary semantic comparison against code obfuscation.
PubDate: THU, 09 NOV 2023 09:16:41 -04
Issue No: Vol. 19, No. null (2023)
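A minimal sketch of a gating-based split of a shared embedding into semantic and style parts, roughly in the spirit of the fusion mechanism described; the layer sizes and the complementary 1 - gate split are assumptions, not SepBIN's exact design.

```python
# Illustrative gated separation of a shared binary embedding into a
# "semantic" part and a "style" part.
import torch
import torch.nn as nn

class GatedSeparator(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, h):
        g = self.gate(h)                  # per-dimension mixing weights
        semantic = g * h                  # fed to the semantic-comparison head
        style = (1.0 - g) * h             # fed to the authorship head
        return semantic, style

encoder_out = torch.randn(8, 256)          # placeholder backbone embeddings
semantic, style = GatedSeparator()(encoder_out)
```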
-
- Optimality of the Proper Gaussian Signal in Complex MIMO Wiretap Channels
Authors: Yong Dong;Yinfei Xu;Tong Zhang;Yili Xia;
Pages: 1401 - 1414
Abstract: The multiple-input multiple-output (MIMO) wiretap channel (WTC) serves as a fundamental model for exploring information-theoretic secrecy in wireless communication systems, involving a transmitter, a legitimate user, and an eavesdropper. This paper investigates the optimality of proper complex signals in complex WTCs. Our primary contribution lies in the derivation of a determinant inequality, which establishes that the secrecy rate of degraded complex MIMO WTCs is maximized when the signal is proper, meaning that its pseudo-covariance matrix is a zero matrix. Remarkably, we extend this result beyond the degraded scenario to the general complex WTC by leveraging a min-max reformulation of the secrecy capacity. Thus, we demonstrate that focusing on proper signals is sufficient when examining the secrecy capacity of the complex WTC. Overall, this work highlights the significance of the determinant inequality we derive and its implications for optimizing secrecy rates in the complex WTC.
PubDate: TUE, 28 NOV 2023 09:21:39 -04
Issue No: Vol. 19, No. null (2023)
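For reference, the standard Gaussian MIMO wiretap secrecy-rate expression and the properness condition discussed above can be written as follows; the notation is chosen for illustration, and the paper's contribution is the determinant inequality and the min-max argument, not these definitions.

```latex
% Secrecy rate of a Gaussian MIMO wiretap channel with input covariance Q;
% H_B and H_E denote the legitimate and eavesdropper channels, P the power budget.
\begin{align}
  R_s(\mathbf{Q}) &= \log\det\!\left(\mathbf{I} + \mathbf{H}_B \mathbf{Q} \mathbf{H}_B^{H}\right)
                   - \log\det\!\left(\mathbf{I} + \mathbf{H}_E \mathbf{Q} \mathbf{H}_E^{H}\right), \\
  C_s &= \max_{\mathbf{Q} \succeq 0,\ \operatorname{tr}(\mathbf{Q}) \le P} R_s(\mathbf{Q}).
\end{align}
% A complex input x is proper when its pseudo-covariance vanishes,
% E[x x^T] = 0, while the ordinary covariance E[x x^H] = Q is unrestricted.
```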
-
- Information Embedding With Stegotext Reconstruction
Authors: Yinfei Xu;Jian Lu;Xuan Guang;Wei Xu;
Pages: 1415 - 1428
Abstract: In this paper, we consider the stegotext reconstruction problem in information embedding. By adding the requirement of restoring the stegotext under a certain fidelity criterion, we generalize the concept of reversible/irreversible information embedding. We focus on stegotext reconstruction in a discrete memoryless host-dependent attack channel, which can be regarded as a generalized Gel’fand-Pinsker problem with an input reconstruction constraint. For this problem, we prove an upper bound and a lower bound on its embedding capacity-distortion function, which is defined to describe the tradeoff between the embedding information rate, host composition loss, and stegotext reconstruction distortion. In particular, the upper and lower bounds match for the binary XOR attack channel with Hamming distortion and Costa’s additive Gaussian attack channel with quadratic loss. We further consider a variant of this problem in which the host signal is available at the encoder causally. For this case, we completely characterize its capacity-distortion function.
PubDate: THU, 30 NOV 2023 09:18:20 -04
Issue No: Vol. 19, No. null (2023)
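For context, the classical Gel’fand-Pinsker capacity with non-causal state knowledge at the encoder, which the setting above generalizes with a reconstruction constraint, reads:

```latex
% Classical Gel'fand-Pinsker capacity with state S known non-causally at the
% encoder; U is an auxiliary random variable and x = f(u, s) the channel input.
\begin{equation}
  C = \max_{p(u \mid s),\; x = f(u,s)} \bigl[\, I(U;Y) - I(U;S) \,\bigr].
\end{equation}
```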
-
- HODA: Hardness-Oriented Detection of Model Extraction Attacks
Authors: Amir Mahdi Sadeghzadeh;Amir Mohammad Sobhanian;Faezeh Dehghan;Rasool Jalili;
Pages: 1429 - 1439
Abstract: Model extraction attacks exploit the target model’s prediction API to create a surrogate model, allowing the adversary to steal or reconnoiter the functionality of the target model in the black-box setting. Several recent studies have shown that data-limited adversaries, with no or limited access to samples from the target model’s training data distribution, can employ synthesized or semantically similar samples to conduct model extraction attacks. In this paper, we introduce the concept of hardness degree to characterize sample difficulty based on learning speed. The hardness degree of a sample depends on the epoch number at which the predicted label for that sample converges. We investigate the hardness degree of samples and demonstrate that the hardness degree histogram of a data-limited adversary’s sample sequence differs significantly from that of benign users’ sample sequences. We propose the Hardness-Oriented Detection Approach (HODA) to detect the sample sequences of model extraction attacks. Our results indicate that HODA can effectively detect model extraction attack sequences with a high success rate, using only 100 monitored samples. It outperforms all previously proposed methods for model extraction detection.
PubDate: THU, 28 SEP 2023 09:17:17 -04
Issue No: Vol. 19, No. null (2023)
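The hardness-degree statistic, as described, can be sketched directly: the hardness of a sample is the epoch after which its predicted label stops changing, and a user's query sequence is summarised by the histogram of these values. The convergence rule and histogram binning below are illustrative simplifications.

```python
# Sketch of hardness degrees from per-epoch predicted labels and the
# resulting histogram for a sequence of monitored samples.
import numpy as np

def hardness_degree(pred_labels_per_epoch):
    """pred_labels_per_epoch: array of shape (epochs, n_samples)."""
    P = np.asarray(pred_labels_per_epoch)
    epochs, n = P.shape
    hardness = np.zeros(n, dtype=int)
    for i in range(n):
        final = P[-1, i]
        changed = np.nonzero(P[:, i] != final)[0]      # epochs that disagree
        hardness[i] = changed[-1] + 1 if changed.size else 0
    return hardness

# Placeholder: 20 epochs of predicted labels for 100 monitored samples.
preds = np.random.randint(0, 10, size=(20, 100))
hist, _ = np.histogram(hardness_degree(preds), bins=np.arange(22))
print(hist)
```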
-
- STDA: Secure Time Series Data Analytics With Practical Efficiency in Wide-Area Network
Authors: Xiaoguo Li;Zixi Huang;Bowen Zhao;Guomin Yang;Tao Xiang;Robert H. Deng;
Pages: 1440 - 1454
Abstract: Time series data analytics significantly benefits modern scientific research, especially in fields such as medical health, financial investment, and transportation. Unfortunately, privacy issues hinder people from handing over the data to a third party for analytical tasks, because the data may reveal sensitive individual information, e.g., disease information from medical data, investment tendencies from financial data, or daily trajectories from transportation data. To break down this barrier, secure computation approaches have shown their importance in processing sensitive data and have attracted much attention from industry and research communities. However, for secure time-series data analytics (e.g., DTW similarity), we are still far from achieving high efficiency due to the high round complexity in communication or the expensive computational complexity. We observe that DTW involves a large number of comparison operations, and existing approaches to secure comparison incur high communication costs. To this end, this paper studies secure DTW-based analytics with practical efficiency over time series data. Specifically, we propose the framework of secure time series data analytics (STDA) and formulate the top-$k$ query problem for outsourced time series data. Based on threshold Paillier encryption, we present a top-$k$ query protocol utilizing the DTW distance as the metric, together with its security analysis, optimizations, and performance evaluation. The experimental results demonstrate that in a wide-area network with a 10 ms latency, our top-$k$ approach outperforms the state of the art by 3x and DTW calculation by 9x; correspondingly, the optimized $\mathcal{F}_{\text{DTW}}$ is 17x better and the optimized top-$k$ is 4-10x better.
PubDate: THU, 23 NOV 2023 09:17:11 -04
Issue No: Vol. 19, No. null (2023)
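As a plaintext reference for the metric being protected, classic DTW between two series is shown below; STDA evaluates this distance under threshold Paillier encryption across two servers, which the sketch does not attempt to reproduce.

```python
# Plaintext dynamic time warping (DTW) distance between two series.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

q = np.sin(np.linspace(0, 6, 50))                  # placeholder query series
x = np.sin(np.linspace(0.3, 6.3, 60))              # placeholder stored series
print(dtw_distance(q, x))
```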
-