Advances in Multimedia
[SJR: 0.191] [H-I: 10] [2 followers]
Open Access journal
ISSN (Print) 1687-5680 - ISSN (Online) 1687-5699
Published by Hindawi [333 journals]
- Low Cost Skin Segmentation Scheme in Videos Using Two Alternative Methods
for Dynamic Hand Gesture Detection Method
Abstract: Recent years have witnessed renewed interest in developing skin segmentation approaches. Skin segmentation has been widely employed in computer vision applications, including face detection and hand gesture recognition systems, mostly owing to the attractive characteristics of skin colour and its effectiveness for object segmentation. On the other hand, using human skin colour as a feature to segment dynamic hand gestures faces certain challenges, due to varying illumination conditions, complicated environments, and real-time computation requirements. These challenges have rendered many skin colour segmentation approaches insufficient. To produce simple, effective, and cost-efficient skin segmentation, this paper therefore proposes a skin segmentation scheme comprising two procedures for calculating generic threshold ranges in the Cb-Cr colour space. The first procedure uses threshold values trained online from nose pixels of the face region; the second, known as the offline training procedure, uses thresholds trained from skin samples and a weighted equation. The experimental results show that the proposed scheme achieves good performance in terms of efficiency and computation time.
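As a rough illustration of the scheme's core operation, the sketch below thresholds an image in Cb-Cr space. The conversion follows ITU-R BT.601; the default Cb-Cr ranges are generic values often quoted for skin and merely stand in for the trained thresholds, which the paper derives online (nose pixels) or offline (skin samples).

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 uint8 RGB image to YCbCr (ITU-R BT.601, full range)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =         0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary skin mask from fixed Cb-Cr threshold ranges (illustrative values)."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Because the decision is a pair of range tests per pixel, the method's per-frame cost is low, which is what makes such schemes attractive for real-time gesture detection.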
PubDate: Thu, 02 Mar 2017 08:15:02 +000
- Salient Object Detection Based on Background Feature Clustering
Abstract: Automatic estimation of salient objects without any prior knowledge tends to greatly enhance many computer vision tasks. This paper proposes a novel bottom-up framework for salient object detection that first models the background and then separates salient objects from it. We model the background distribution with a feature clustering algorithm, which fully exploits statistical and structural information of the background. A coarse saliency map is then generated according to the background distribution. To make it more discriminative, the coarse saliency map is enhanced by a two-step refinement composed of edge-preserving element-level filtering and upsampling based on geodesic distance. We provide an extensive evaluation and show that our proposed method performs favorably against other outstanding methods on the two most commonly used datasets. Most importantly, the proposed approach highlights the salient object more uniformly and is robust to background noise.
PubDate: Thu, 09 Feb 2017 00:00:00 +000
- Revealing Traces of Image Resampling and Resampling Antiforensics
Abstract: Image resampling is a common manipulation in image processing. The forensics of resampling plays an important role in image tampering detection, steganography, and steganalysis. In this paper, we propose an effective and secure detector that can simultaneously detect ordinary resampling and forged resampling attacked by antiforensic schemes. We find that the interpolation used in both makes these two kinds of images exhibit statistical behavior different from unaltered images, especially in the high-frequency domain. To reveal the traces left by interpolation, we first apply multidirectional high-pass filters to an image and its residual to create multidirectional differences. Each difference is then fit to an autoregressive (AR) model, and the AR coefficients and normalized histograms of the difference are extracted as features. We assemble the features extracted from each difference image into a comprehensive feature vector and feed it into support vector machines (SVM) to detect resampling and forged resampling. Experiments on a large image database show that the proposed detector is effective and secure. Compared with state-of-the-art works, it achieves significant improvements in detecting downsampling and resampling under JPEG compression.
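A minimal sketch of the feature-extraction idea described above: a first-order high-pass difference exposes interpolation traces, an AR model is fit to it by least squares, and the AR coefficients plus a normalized histogram of the difference form the feature. The filter choice, AR order, and bin count here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def directional_difference(img, axis=1):
    """First-order high-pass residual along one direction (horizontal by default)."""
    return np.diff(img.astype(np.float64), axis=axis)

def ar_coefficients(seq, order=2):
    """Fit an AR(order) model to a 1-D sequence by least squares:
    seq[t] ~ a1*seq[t-1] + ... + a_order*seq[t-order]."""
    seq = np.asarray(seq, dtype=np.float64)
    n = len(seq)
    X = np.column_stack([seq[order - k - 1 : n - k - 1] for k in range(order)])
    y = seq[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def resampling_feature(img, order=2, bins=8):
    """AR coefficients of the difference signal plus its normalized histogram."""
    d = directional_difference(img).ravel()
    hist, _ = np.histogram(d, bins=bins)
    hist = hist / hist.sum()
    return np.concatenate([ar_coefficients(d, order), hist])
```

A full detector would compute such features for several filter directions, concatenate them, and train an SVM on labeled original/resampled images.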
PubDate: Thu, 12 Jan 2017 12:49:33 +000
- Block Compressed Sensing of Images Using Adaptive Granular Reconstruction
Abstract: In the framework of block Compressed Sensing (CS), reconstruction based on the Smoothed Projected Landweber (SPL) iteration can achieve good rate-distortion performance at low computational complexity, especially when Principal Component Analysis (PCA) is used to perform adaptive hard-thresholding shrinkage. However, neglecting the stationary local structural characteristics of the image while learning the PCA matrix degrades the reconstruction performance of the Landweber iteration. To solve this problem, this paper first uses Granular Computing (GrC) to decompose an image into several granules according to the structural features of its patches. We then perform PCA to learn the sparse representation basis corresponding to each granule. Finally, hard-thresholding shrinkage is employed to remove the noise in the patches. Because the patches within a granule share stationary local structural characteristics, our method can effectively improve the performance of hard-thresholding shrinkage. Experimental results indicate that images reconstructed by the proposed algorithm have better objective quality than those of several traditional algorithms, with edge and texture details better preserved and hence better visual quality, while reconstruction still has a low computational complexity.
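The hard-thresholding step can be sketched as follows: a set of patches (assumed here to come from a single granule) is projected onto a PCA basis learned from the patches themselves, small coefficients are zeroed, and the patches are reconstructed. The GrC granule decomposition and the surrounding SPL iteration are omitted.

```python
import numpy as np

def pca_hard_threshold(patches, tau):
    """Hard-thresholding shrinkage in a PCA basis learned from the patches.

    patches : (n, d) array, each row a vectorized image patch (one granule).
    tau     : threshold below which PCA coefficients are zeroed.
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Principal axes via SVD of the centered data (rows of vt).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coeff = centered @ vt.T                 # project onto principal axes
    coeff[np.abs(coeff) < tau] = 0.0        # hard-thresholding shrinkage
    return coeff @ vt + mean                # reconstruct denoised patches
```

When the patches in a granule share structure, their energy concentrates in a few principal components, so zeroing small coefficients removes noise with little signal loss, which is the intuition the abstract appeals to.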
PubDate: Mon, 28 Nov 2016 11:30:19 +000
- Image Encryption Performance Evaluation Based on Poker Test
Abstract: The fast development of image encryption requires performance evaluation metrics. Traditional metrics such as entropy do not consider the correlation between a local pixel and its neighborhood, and they cannot evaluate encryption based on permutation of image pixel coordinates. A novel effectiveness evaluation metric is proposed in this paper to address this issue. The ciphertext image is transformed into a bit stream, and the Poker Test is then applied. The proposed metric accounts for the neighborhood correlations of the image through neighborhood selection and clip scan, and the randomness of the ciphertext image is assessed by computing the chi-square test value. Experimental results verify the efficiency of the proposed metric.
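The Poker Test itself is standard: split the bit stream into non-overlapping m-bit blocks, count the occurrences of each of the 2^m patterns, and compute a chi-square-style statistic. The neighborhood-selection and clip-scan steps the paper adds on top are not reproduced here.

```python
def poker_test(bits, m=4):
    """Chi-square Poker Test statistic over non-overlapping m-bit blocks.

    bits : sequence of 0/1 values (e.g., a ciphertext image as a bit stream).
    Returns X = (2**m / k) * sum(n_i**2) - k, where n_i counts occurrences of
    each m-bit pattern and k is the number of blocks. Smaller X = more uniform.
    """
    k = len(bits) // m
    counts = [0] * (2 ** m)
    for i in range(k):
        block = bits[i * m : (i + 1) * m]
        value = int("".join(str(b) for b in block), 2)
        counts[value] += 1
    return (2 ** m / k) * sum(n * n for n in counts) - k
```

A perfectly uniform stream (every pattern equally frequent) yields X = 0, while a constant stream yields a large X; the statistic is then compared against chi-square bounds.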
PubDate: Tue, 28 Jun 2016 09:31:00 +000
- A Distortion-Free Data Hiding Scheme for Triangular Meshes Based on
Abstract: This study adopts a triangle subdivision scheme to achieve reversible data embedding. The secret message is embedded into newly added vertices, and the topology of each added vertex is constructed by connecting it with the vertices of the triangle in which it is located. To further raise the total embedding capacity, a recursive subdivision mechanism, terminated by a given criterion, is employed. Finally, principal component analysis makes the stego model robust against similarity transformation and vertex/triangle reordering attacks. The proposed algorithm provides a high and adjustable embedding capacity with reversibility, and the experimental results demonstrate its feasibility.
PubDate: Mon, 27 Jun 2016 10:55:27 +000
- A Novel Printable Watermarking Method in Dithering Halftone Images
Abstract: Halftone images are commonly printed in books, newspapers, and magazines, so protecting the copyright of these printed halftone images is an important issue. Digital watermarking provides a solution for copyright protection. In this paper, we propose a novel printable watermarking method for dithering halftone images. Based on downsampling and the properties of a dispersed dithering screen, the method can resist cropping, tampering, and print-and-scan attacks. In addition, compared to Guo et al.'s method, the experimental results show that the proposed method provides higher robustness against the above-mentioned attacks and better visual quality in the high-frequency regions of halftone images.
PubDate: Mon, 30 May 2016 09:29:24 +000
- Classification of Error-Diffused Halftone Images Based on Spectral
Regression Kernel Discriminant Analysis
Abstract: This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Spectral regression kernel discriminant analysis is then used for feature dimension reduction, and the error-diffused halftone images are finally classified using an idea similar to the nearest centroid classifier. As demonstrated by the experimental results, our method is fast and achieves a high classification accuracy, with the added benefit of robustness to noise.
PubDate: Thu, 26 May 2016 07:56:53 +000
- Video Traffic Flow Analysis in Distributed System during Interactive
Abstract: Cost-effective, smooth multimedia streaming to remote customers through a distributed video-on-demand architecture has been among the most challenging research issues of the past decade. A hierarchical system design is used for the distributed network to satisfy more requesting users. The distributed hierarchical network system contains all the local and remote multimedia storage servers and provides continuous availability of the data stream to the requesting customer. In this work, we propose a novel data-stream handling methodology for reducing connection failures and delivering a smooth multimedia stream to the remote customer. The proposed session-based single-user bandwidth requirement model captures the bandwidth required for any interactive operation, such as pause, slow motion, rewind, frame skipping, and fast forward over a constant number of frames. The proposed session-based optimum storage finding algorithm reduces the search hop count towards the remote storage data server. Modeling and simulation results show the improvement over the distributed system architecture. This work thus presents a novel bandwidth requirement model for interactive sessions and quantifies the trade-off between communication and storage costs for different system resource configurations.
PubDate: Sun, 10 Apr 2016 14:47:22 +000
- Fast HEVC Intramode Decision Based on Hybrid Cost Ranking
Abstract: To improve rate-distortion (R-D) performance, high efficiency video coding (HEVC) increases the number of intraprediction modes at a heavy computational load, so intracoding optimization is in high demand for real-time applications. Based on the conditional probabilities of the most probable modes and the correlation of potential candidate subsets, this paper proposes a fast HEVC intramode decision scheme using a hybrid cost ranking that combines Hadamard cost and rate-distortion cost. The proposed scheme utilizes the coded results of the modified rough mode decision and the neighboring prediction units to obtain a potential candidate subset, and then conditionally selects the optimal mode through early likelihood decision and hybrid cost ranking. Following an experiment-driven methodology, the scheme terminates early if the best mode from the candidate subset equals one or two neighboring intramodes. The experimental results demonstrate that the proposed scheme provides about 23.7% encoding speedup on average with just 0.82% BD-rate loss compared with the default fast intramode decision in HM16.0. Compared to other fast intramode decision schemes, it also significantly reduces intracoding time while maintaining similar R-D performance for the all-intra configuration of the HM16.0 Main profile.
PubDate: Wed, 24 Feb 2016 07:09:23 +000
- Enhancement of Video Streaming in Distributed Hybrid Architecture
Abstract: A pure Peer-to-Peer (P2P) network requires enhanced transportation of chunked video objects to the proxy server in the mesh network. The rapid growth of video-on-demand users brings congestion at the proxy server and on the overall network. The situation calls for an efficient content delivery procedure from the distributed storage to the video-on-demand viewer. In the general scenario, even if the proxy server does not possess the required video stream or a chunk of that video, the content can still be smoothly and rapidly streamed to the viewer. This paper shows that a multitier mesh-shaped hybrid architecture composed of P2P and mesh architectures increases the number of requests served in a dynamic environment in comparison with a static environment. Optimized storage path search reduces unnecessary query forwarding and hence increases the amount of content delivered to the desired location.
PubDate: Mon, 22 Feb 2016 09:04:47 +000
- Nonintrusive Method Based on Neural Networks for Video Quality of
Abstract: The measurement and evaluation of QoE (Quality of Experience) have become one of the main focuses in telecommunications for providing services with the quality expected by users. However, factors such as network parameters and codification can affect video quality, limiting the correlation between objective and subjective metrics and increasing the complexity of evaluating the real video quality perceived by users. In this paper, a model based on artificial neural networks, namely BPNNs (Backpropagation Neural Networks) and RNNs (Random Neural Networks), is applied to evaluate the subjective quality metric MOS (Mean Opinion Score) alongside the objective metrics PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index Metric), VQM (Video Quality Metric), and QIBF (Quality Index Based Frame). The proposed model allows establishing QoS (Quality of Service) based on the DiffServ strategy. The metrics were analyzed through Pearson's and Spearman's correlation coefficients, RMSE (Root Mean Square Error), and outlier rate. Correlation values greater than 90% were obtained for all the evaluated metrics.
PubDate: Wed, 27 Jan 2016 13:21:18 +000
- The Harmonic Walk: An Interactive Physical Environment to Learn Tonal
Abstract: The Harmonic Walk is an interactive physical environment designed for learning and practicing the accompaniment of a tonal melody. Employing a highly innovative multimedia system, the application offers the user the possibility of getting in touch with some fundamental tonal music features in a very simple and readily available way. Although tonal music is very common in our lives, unskilled people, music students, and even professionals are scarcely conscious of what these features actually are. Through body movement in space, the Harmonic Walk can provide all these users a live experience of tonal melody structure, chord progressions, melody accompaniment, and improvisation. Enactive knowledge and embodied cognition allow the user to build an inner map of these musical features, which can be enacted by moving across the active surface with a simple step. Thorough assessment tests with musician and nonmusician high school students demonstrated the high communicative power and efficiency of the Harmonic Walk application, both in improving musical knowledge and in accomplishing complex musical tasks.
PubDate: Sun, 10 Jan 2016 08:22:04 +000
- Performances Evaluation of a Novel Hadoop and Spark Based System of Image
Retrieval for Huge Collections
Abstract: A novel system of image retrieval, based on Hadoop and Spark, is presented. Managing and extracting information from Big Data is a challenging and fundamental task; for these reasons, the system is scalable and designed to manage small collections of images as well as huge ones. Hadoop and Spark are both based on the MapReduce framework but have different characteristics, and the proposed system is designed to take advantage of the two technologies. The performance of the proposed system is evaluated and analysed in terms of computational cost in order to understand in which contexts it could be successfully used. The experimental results show that the proposed system is efficient for both small and huge collections.
PubDate: Wed, 16 Dec 2015 06:57:19 +000
- IPTV Service Framework Based on Secure Authentication and Lightweight
Content Encryption for Screen-Migration in Cloud Computing
Abstract: These days, the advancing capabilities of smart devices (e.g., smartphones, tablets, and PCs) and the increase in internet bandwidth enable IPTV service providers to extend their services to smart mobile devices. Users can receive their IPTV service on any smart device by accessing the internet via a wireless network from anywhere at any time, which is convenient for them. However, wireless network communication has well-known critical security threats and vulnerabilities affecting user smart devices and IPTV services, such as user identity theft, replay attacks, and man-in-the-middle attacks. A secure authentication mechanism for user devices and a multimedia protection mechanism are necessary to protect both user devices and IPTV services. As a result, we propose an IPTV service framework based on a secure authentication mechanism and a lightweight content encryption method for screen migration in cloud computing. We use a cryptographic nonce combined with the user ID and password to authenticate the user device at any mobile terminal it passes by. In addition, we use lightweight content encryption to protect content and reduce the decoding overhead at mobile terminals. Our proposed authentication mechanism reduces computational processing by 30% compared to other authentication mechanisms, and our lightweight content encryption reduces encryption delay to 0.259 seconds.
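The abstract does not give the exact protocol; the sketch below shows the general nonce-plus-credentials pattern it describes, in which a fresh server-issued nonce keyed into an HMAC over a password-derived secret defeats straightforward replay. All function and variable names here are illustrative assumptions.

```python
import hashlib
import hmac
import os

def issue_nonce():
    """Server issues a fresh random nonce per login attempt (thwarts replay)."""
    return os.urandom(16)

def client_response(nonce, user_id, password):
    """Client proves knowledge of the password without sending it in the clear."""
    secret = hashlib.sha256((user_id + ":" + password).encode()).digest()
    return hmac.new(secret, nonce, hashlib.sha256).hexdigest()

def server_verify(nonce, stored_secret, response):
    """Server recomputes the HMAC from its stored secret and compares safely."""
    expected = hmac.new(stored_secret, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Because every login uses a fresh nonce, a captured response is useless for a later session; this is the standard property such nonce-based schemes rely on.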
PubDate: Thu, 26 Nov 2015 06:57:21 +000
- Sparsity for Image Denoising with Local and Global Priors
Abstract: We propose a sparsity-based approach to remove additive white Gaussian noise from a given image. To achieve this goal, we combine a local prior and a global prior to recover the noise-free pixel values. The local prior depends on the neighborhood relationships within a search window to help maintain edges and smoothness, while the global prior is generated from a hierarchical sparse representation to help eliminate redundant information and preserve global consistency. In addition, to make the correlations between pixels more meaningful, we adopt Principal Component Analysis to measure similarities, which both reduces the computational complexity and improves accuracy. Experiments on the benchmark image set show that the proposed approach achieves performance superior to state-of-the-art approaches, in both accuracy and perceptual quality, in removing zero-mean additive white Gaussian noise.
PubDate: Wed, 04 Nov 2015 06:28:13 +000
- Personality, Gender, and Age as Predictors of Media Richness Preference
Abstract: Media richness, the degree to which a specific medium transmits information over multiple channels, is an important concept as the number of available multimedia communication methods increases regularly. Individuals differ in their preferences for media richness, which may influence their choice of communication media in a given situation, and these preferences can influence how successful their communication efforts will be. This exploratory study of 299 adults (ages 16–84) with at least a basic ability to use a computer examines the relationship between media richness preference and age, gender, and personality traits. Males, and people with higher levels of extraversion and agreeableness, were found to have a higher preference for media richness. Age was not a significant predictor of media richness preference.
PubDate: Tue, 20 Oct 2015 14:09:18 +000
- Compact Local Directional Texture Pattern for Local Image Description
Abstract: This paper presents an effective local image feature region descriptor, called the CLDTP (Compact Local Directional Texture Pattern) descriptor, and its application to image matching and object recognition. The CLDTP descriptor encodes the directional and contrast information of a local region, so it contains both gradient orientation and gradient magnitude information. As the dimension of the CLDTP histogram is much lower than that of the LDTP histogram, the CLDTP descriptor has higher computational efficiency and is suitable for image matching. Extensive experiments have validated the effectiveness of the designed CLDTP descriptor.
PubDate: Mon, 07 Sep 2015 11:10:15 +000
- Advanced Issues on Topic Detection, Tracking, and Trend Analysis for
PubDate: Tue, 04 Aug 2015 13:57:06 +000
- Supporting Image Search with Tag Clouds: A Preliminary Approach
Abstract: Algorithms and techniques for searching collections of data address a challenging task, since they have to bridge the gap between the ways in which users express their interests, through natural language expressions or keywords, and the ways in which data is represented and indexed. When the collections include images, the task becomes harder, mainly for two reasons. On one side, the user expresses his needs through one medium (text) but obtains results via another (images). On the other side, it can be difficult for a user to understand the retrieved results, that is, why a particular image is part of the result set. In this case, techniques for analyzing the query results and giving users some insight into the retrieved content are needed. In this paper, we propose to address this problem by coupling the image result set with a tag cloud of words describing it. Techniques for building the tag cloud are introduced and two application scenarios are discussed.
PubDate: Tue, 04 Aug 2015 11:33:11 +000
- An Empirical Analysis of Technology Transfer of National R&D Projects
in South Korea
Abstract: This study seeks policy implications for the policy makers of the South Korean government and a direction for supporting R&D institutions in performing R&D activities more efficiently, by analyzing the factors influencing technology transfer of national R&D projects. Data retrieved from NTIS (National Science & Technology Information Service) was used to analyze the results of 575 projects with 1,903 cases of technology transfer, performed under the Ministry of Science, ICT and Future Planning between 2002 and 2012. We found significant differences between government-funded institutions and universities and between basic and applied R&D. We also discovered that government-funded institutions did not necessarily outperform universities in terms of the quantity of technology transfer. Lastly, the applied R&D of the universities was very vulnerable in terms of technology transfer.
PubDate: Tue, 04 Aug 2015 11:10:35 +000
- Development of Ontology and 3D Software for the Diseases of Spine
Abstract: KISTI is carrying out the e-Spine project on spinal diseases to prepare for the aged society, the so-called NAP. The purpose of this study is to build a spine ontology that represents the anatomical structure and disease information and is compatible with the KISTI simulation model. The final uses of the ontology include diagnosis of diseases and setting treatment directions by clinicians. The ontology was represented using 3D software. Twenty diseases were selected for representation after discussions with a spine specialist. Several ontology studies were reviewed, reference books were selected for each disease, and the contents were organized in MS Excel and then reviewed by the specialists. Altova SemanticWorks and Protégé were used to code the spine ontology in the OWL Full model, with links to images from KISTI and sample images of the diseases included. The OWL ontology was reviewed again by the specialists with Protégé. We represented a unidirectional ontology from anatomical structure to disease, images, and treatment. The ontology is human-understandable and would be useful for educating medical students or residents studying diseases of the spine. However, for a computer to understand the ontology, a new model in OWL DL or Lite is needed.
PubDate: Tue, 04 Aug 2015 11:04:16 +000
- Performance Comparison of OpenMP, MPI, and MapReduce in Practical Problems
Abstract: With problem size and complexity increasing, several parallel and distributed programming models and frameworks have been developed to efficiently handle such problems. This paper briefly reviews the parallel computing models and describes three widely recognized parallel programming frameworks: OpenMP, MPI, and MapReduce. OpenMP is the de facto standard for parallel programming on shared memory systems, MPI is the de facto industry standard for distributed memory systems, and the MapReduce framework has become the de facto standard for large-scale data-intensive applications. Qualitative pros and cons of each framework are known, but quantitative performance indexes help form a clear picture of which framework to use for a given application. As benchmark problems to compare the frameworks, two problems are chosen: the all-pairs shortest path problem and a data join problem. This paper presents parallel programs for these problems implemented on the three frameworks, reports experimental results on a cluster of computers, and discusses which is the right tool for the job by analyzing the characteristics and performance of the paradigms.
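For reference, the first benchmark, the all-pairs shortest path problem, is typically solved with the Floyd-Warshall algorithm; a serial sketch is below. The k-loop carries a dependency, but the (i, j) updates inside each k-iteration are independent, which is exactly the structure the OpenMP, MPI, and MapReduce versions parallelize in their own ways.

```python
def floyd_warshall(dist):
    """All-pairs shortest paths; dist is an n x n matrix (float('inf') = no edge).

    Returns a new matrix of shortest-path lengths. For a fixed k, every
    (i, j) cell update is independent, so the two inner loops parallelize.
    """
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            dik = d[i][k]
            for j in range(n):
                if dik + d[k][j] < d[i][j]:
                    d[i][j] = dik + d[k][j]
    return d
```

An OpenMP version would put a parallel-for over i, an MPI version would distribute row blocks and broadcast row k each iteration, and a MapReduce version would express one k-iteration per job, which is why the three frameworks show such different costs on this benchmark.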
PubDate: Tue, 04 Aug 2015 06:40:03 +000
- Coevolution of Artificial Agents Using Evolutionary Computation in
Abstract: Analysis of bargaining games using evolutionary computation is an essential issue in the field of game theory. This paper investigates the interaction and coevolutionary process among heterogeneous artificial agents using evolutionary computation (EC) in the bargaining game. In particular, game performance with regard to payoff through the interaction and coevolution of agents is studied. We present three kinds of EC-based agents (EC-agents) participating in the bargaining game: genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE). The agents' performance under changing conditions is compared. The simulation results show that the PSO-agent is superior to the other agents.
PubDate: Mon, 03 Aug 2015 14:19:11 +000
- Preprocessing Techniques for High-Efficiency Data Compression in Wireless
Multimedia Sensor Networks
Abstract: We propose preprocessing techniques for high-efficiency data compression in wireless multimedia sensor networks, based on an analysis of the characteristics of multimedia data in such networks. The proposed techniques take the characteristics of the sensed multimedia data into account: the first preprocessing stage deletes the low-priority bits that do not affect image quality, and the second stage is then performed on the remaining high-priority bits. Performing these two preprocessing stages greatly reduces the multimedia data size. To show the superiority of our techniques, we simulated an existing multimedia data compression scheme with and without our preprocessing. The experimental results show that the proposed techniques increase the compression ratio while reducing compression operations compared to the existing scheme without preprocessing.
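The first preprocessing stage can be sketched as clearing the least significant bit-planes of each pixel, which barely affect perceived quality but carry most of the noise that hurts compressibility. The choice of two dropped bits here is an illustrative assumption, not the paper's parameter.

```python
import numpy as np

def drop_low_priority_bits(img, n_bits=2):
    """First-stage preprocessing sketch: clear the n_bits least significant
    bits of a uint8 image, shrinking the symbol alphabet before compression."""
    mask = 0xFF & ~((1 << n_bits) - 1)   # e.g., n_bits=2 -> 0b11111100
    return (img & mask).astype(np.uint8)
```

After this step an 8-bit image uses at most 2^(8 - n_bits) distinct values, so any entropy coder applied afterwards sees a smaller, more skewed alphabet and compresses better.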
PubDate: Mon, 03 Aug 2015 14:03:19 +000
- Security Requirements for Multimedia Archives
Abstract: With the explosive growth of multimedia contents, digital archives are used to store those contents. In contrast to traditional storage systems, in which data lifetime is measured in months or years, data lifetime in an archive is measured in decades. This longevity of contents raises new security issues that threaten archive systems. In this paper, we discuss these new security issues in perspective and suggest security requirements for digital archives.
PubDate: Mon, 03 Aug 2015 13:32:11 +000
- Discovering Congested Routes Using Vehicle Trajectories in Road Networks
Abstract: Popular-route recommendation and traffic monitoring over road networks have become important in location-based services. Previous schemes find congested routes by considering the number of vehicles in a road segment; however, they do not consider the features of each road segment, such as width, length, and direction, and they fail to consider the average moving speed of vehicles, so they can report incorrect congested routes. To overcome these problems, we propose a new scheme for discovering congested routes through the analysis of vehicle trajectories in a road network. The proposed scheme divides each road into segments of differing width and length. A congested road segment is then detected through the saturation degree of the segment and the average moving speed of the vehicles in it. Finally, we compute the final congested routes by using a clustering scheme. The experimental results show that the proposed scheme can efficiently discover congested routes in the different directions of the roads.
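A toy sketch of the detection rule described above: a segment is flagged when its saturation degree (vehicle count relative to capacity) is high and its average moving speed is low. The thresholds and the record layout are assumptions for illustration, not values from the paper, and the final trajectory-clustering step is omitted.

```python
def congested_segments(segments, saturation_thresh=0.8, speed_thresh=20.0):
    """Return the ids of road segments that are both saturated and slow.

    segments : list of dicts with keys 'id', 'vehicles', 'capacity', 'avg_speed'
               (capacity reflects the segment's width and length).
    """
    congested = []
    for seg in segments:
        saturation = seg["vehicles"] / seg["capacity"]
        if saturation >= saturation_thresh and seg["avg_speed"] <= speed_thresh:
            congested.append(seg["id"])
    return congested
```

Requiring both conditions is the point of the abstract: a wide arterial full of fast-moving cars is saturated but not congested, and a slow residential lane with few cars is slow but not congested.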
PubDate: Mon, 03 Aug 2015 13:29:55 +000
- Study on Strengthening Plan of Safety Network CCTV Monitoring by
Steganography and User Authentication
Abstract: Recently, as the utilization of CCTV (closed-circuit television) is emerging as an issue, studies on CCTV are receiving much attention. As CCTV systems acquire IP addresses and connect to networks, they become exposed to the many threats of the existing web environment. In this paper, steganography is utilized to detect data masquerading and data modification, and, to further strengthen security, user information is protected based on PKI (public key infrastructure), an SN (serial number), and an R value (random number) assigned at login; a user authentication protocol to block unauthorized access by malicious users in a network CCTV environment is proposed. The proposed approach should be suitable for network CCTV deployments where user-information protection technology has not yet been applied.
PubDate: Mon, 03 Aug 2015 12:52:45 +000
- High-Level Codewords Based on Granger Causality for Video Event Detection
Abstract: Video event detection is a challenging problem in many applications, such as video surveillance and video content analysis. In this paper, we propose a new framework that perceives high-level codewords by analyzing the temporal relationships between different channels of video features. Low-level vocabulary words are first generated from different audio and visual features. A weighted undirected graph is constructed by exploring the Granger causality between low-level words, and a greedy agglomerative graph-partitioning method is then used to discover groups of low-level words that share similar temporal patterns. The high-level codebook representation is obtained by quantization of the low-level word groups. Finally, multiple kernel learning, combined with our high-level codewords, is used to detect video events. Extensive experimental results show that the proposed method achieves preferable results in video event detection.
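The pairwise Granger causality used to weight the graph can be sketched as a comparison of two least-squares regressions: if adding lags of series x reduces the residual error of predicting series y beyond what y's own lags achieve, x Granger-causes y. The lag order and the particular strength measure below are illustrative choices, not the paper's.

```python
import numpy as np

def _lagged(series, lag, n):
    """Columns of lags 1..lag of `series`, aligned to targets t = lag..n-1."""
    return np.column_stack([series[lag - k - 1 : n - k - 1] for k in range(lag)])

def granger_strength(x, y, lag=2):
    """Relative drop in residual sum of squares when lags of x help predict y.
    Returns a value in [0, 1); larger = stronger evidence that x drives y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    target = y[lag:]
    restricted = _lagged(y, lag, n)                      # y's own lags only
    full = np.hstack([restricted, _lagged(x, lag, n)])   # plus lags of x
    def rss(X):
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ coef
        return float(resid @ resid)
    rss_r = rss(restricted)
    return 1.0 - rss(full) / rss_r if rss_r > 0 else 0.0
```

In the framework described above, each low-level word's occurrence counts over time form such a series, and the pairwise strengths become the edge weights of the undirected graph that is then partitioned.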
PubDate: Tue, 23 Jun 2015 06:20:45 +000
- A New Information Hiding Method Based on Improved BPCS Steganography
Abstract: Bit-plane complexity segmentation (BPCS) steganography is advantageous in its capacity and imperceptibility. An important step in BPCS steganography is locating the noisy regions of a cover image exactly. The regular measure, black-and-white border complexity, is simple and easy to compute, but it is not always reliable, especially for periodic patterns. Run-length irregularity and border noisiness are introduced in this paper to work around this problem. Canonical Gray coding (CGC) is also used to replace pure binary coding (PBC), because CGC better matches the characteristics of the human visual system. A conjugation operation is applied to convert simple blocks into complex ones. To resist BPCS steganalysis, the improved BPCS steganography algorithm adopts a different complexity threshold for each bit-plane: the higher the bit-plane, the smaller the complexity. Experiments show that the improved BPCS steganography is superior to standard BPCS steganography.
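For reference, the regular black-and-white border complexity that the abstract starts from is the fraction of adjacent pixel pairs in a bit-plane block that differ; the run-length irregularity and border noisiness measures the paper proposes are refinements of this and are not reproduced here.

```python
import numpy as np

def border_complexity(block):
    """Black-and-white border complexity of a square bit-plane block:
    the fraction of horizontally/vertically adjacent pixel pairs that differ.
    A checkerboard scores 1.0; a uniform block scores 0.0."""
    block = np.asarray(block)
    n = block.shape[0]
    changes = (np.sum(block[:, 1:] != block[:, :-1]) +
               np.sum(block[1:, :] != block[:-1, :]))
    return changes / (2 * n * (n - 1))
```

A periodic pattern such as thin stripes also scores high on this measure despite being visually structured, which is exactly the failure case that motivates the paper's run-length irregularity and border noisiness measures.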
PubDate: Thu, 26 Mar 2015 07:31:30 +000