Results 1–10 of 58
Digital image steganography: Survey and analysis of current methods
 Signal Processing, Volume 90, Issue
"... Steganography is the science that involves communicating secret data in an appropriate multimedia carrier, e.g., image, audio, and video files. It comes under the assumption that if the feature is visible, the point of attack is evident, thus the goal here is always to conceal the very existence of ..."
Abstract

Cited by 59 (0 self)
Steganography is the science of communicating secret data in an appropriate multimedia carrier, e.g., image, audio, and video files. It rests on the assumption that if the hidden feature is visible, the point of attack is evident; the goal is therefore always to conceal the very existence of the embedded data. Steganography has various useful applications. However, like any other science, it can be used for ill intentions. It has been propelled to the forefront of current security techniques by the remarkable growth in computational power, the increase in security awareness by, e.g., individuals, groups, agencies, and governments, and through intellectual pursuit. Steganography’s ultimate objectives, which are undetectability, robustness (resistance to various image processing methods and compression) and capacity of the hidden data, are the main factors that separate it from related techniques such as watermarking and cryptography. This paper provides a state-of-the-art review and analysis of the different existing methods of steganography along with some common standards and guidelines drawn from the literature. This paper concludes with some recommendations and advocates for the object-oriented embedding mechanism. Steganalysis, which is the science of attacking steganography, is not the focus of this survey but will nonetheless be briefly discussed.
Keywords: Digital image steganography; spatial domain; frequency domain; adaptive steganography; security.
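The spatial-domain family surveyed here is easiest to see in the classic least-significant-bit (LSB) scheme. The sketch below is a generic illustration, not any particular method from the survey, and the pixel values are hypothetical.

```python
# Minimal LSB steganography sketch (illustrative only; practical schemes in
# the survey add adaptive pixel selection and coding on top of this idea).

def lsb_embed(pixels, bits):
    """Replace the least-significant bit of each pixel with a message bit."""
    assert len(bits) <= len(pixels), "cover too small for payload"
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b  # clear the LSB, then set it to b
    return stego

def lsb_extract(pixels, n_bits):
    """Read the message back from the first n_bits least-significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [142, 87, 203, 54, 99, 180]   # hypothetical greyscale pixel values
msg = [1, 0, 1, 1]
stego = lsb_embed(cover, msg)
assert lsb_extract(stego, len(msg)) == msg
assert all(abs(c - s) <= 1 for c, s in zip(cover, stego))  # distortion <= 1
```

Each pixel changes by at most 1, which is what makes plain LSB imperceptible yet, as the survey discusses, statistically detectable.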
Minimizing Additive Distortion in Steganography using Syndrome-Trellis Codes
"... This paper proposes a complete practical methodology for minimizing additive distortion in steganography with general (nonbinary) embedding operation. Let every possible value of every stego element be assigned a scalar expressing the distortion of an embedding change done by replacing the cover el ..."
Abstract

Cited by 38 (21 self)
This paper proposes a complete practical methodology for minimizing additive distortion in steganography with a general (non-binary) embedding operation. Let every possible value of every stego element be assigned a scalar expressing the distortion of an embedding change done by replacing the cover element by this value. The total distortion is assumed to be a sum of per-element distortions. Both the payload-limited sender (minimizing the total distortion while embedding a fixed payload) and the distortion-limited sender (maximizing the payload while introducing a fixed total distortion) are considered. Without any loss of performance, the non-binary case is decomposed into several binary cases by replacing individual bits in cover elements. The binary case is approached using a novel syndrome-coding scheme based on dual convolutional codes equipped with the Viterbi algorithm. This fast and very versatile solution achieves state-of-the-art results in steganographic applications while having linear time and space complexity w.r.t. the number of cover elements. We report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel. The practical merit of this approach is validated by constructing and testing adaptive embedding schemes for digital images in raster and transform domains. Most current coding schemes used in steganography (matrix embedding, wet paper codes, etc.) and many new ones can be implemented using this framework.
Perfectly Secure Steganography: Capacity, Error Exponents, and Code Constructions
, 2007
"... An analysis of steganographic systems subject to the following perfect undetectability condition is presented in this paper. Following embedding of the message into the covertext, the resulting stegotext is required to have exactly the same probability distribution as the covertext. Then no statisti ..."
Abstract

Cited by 31 (0 self)
An analysis of steganographic systems subject to the following perfect undetectability condition is presented in this paper. Following embedding of the message into the covertext, the resulting stegotext is required to have exactly the same probability distribution as the covertext. Then no statistical test can reliably detect the presence of the hidden message. We refer to such steganographic schemes as perfectly secure. A few such schemes have been proposed in recent literature, but they have vanishing rate. We prove that communication performance can potentially be vastly improved; specifically, our basic setup assumes independently and identically distributed (i.i.d.) covertext, and we construct perfectly secure steganographic codes from public watermarking codes using binning methods and randomized permutations of the code. The permutation is a secret key shared between encoder and decoder. We derive (positive) capacity and random-coding exponents for perfectly secure steganographic systems. The error exponents provide estimates of the code length required to achieve a target low error probability. In some applications, steganographic communication may be disrupted by an active warden, modelled here by a compound discrete memoryless channel. The transmitter and warden are subject to distortion constraints. We address the potential loss in communication performance due to the perfect-security requirement. This loss is the same as the loss obtained under a weaker order-1 steganographic requirement that would just require matching of first-order statistics.
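A degenerate special case makes the perfect-undetectability condition concrete: if the covertext model is i.i.d. uniform bits, XOR-ing the message with a shared one-time key produces a stegotext with exactly the covertext distribution. This is only a sanity-check sketch; the paper's binning constructions achieve positive rate for general i.i.d. covertext and handle an active warden.

```python
# Perfect security in a trivial setting: with an i.i.d. uniform covertext
# model, message XOR one-time key is again i.i.d. uniform, so the stegotext
# distribution matches the covertext distribution exactly and no statistical
# test can detect the embedding.
import random

random.seed(1)                      # fixed seed so the sketch is repeatable
n = 200_000
key = [random.getrandbits(1) for _ in range(n)]
message = [1] * n                   # worst case: maximally biased message
stego = [m ^ k for m, k in zip(message, key)]

# Empirically uniform, i.e. indistinguishable from the covertext model.
assert abs(sum(stego) / n - 0.5) < 0.01
```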
Kevitt, “Biometric inspired digital image steganography”
 in: Proceedings of the 15th Annual IEEE International Conference and Workshops on the Engineering of Computer-Based Systems (ECBS’08)
, 2008
"... Steganography is defined as the science of hiding or embedding “data ” in a transmission medium. Its ultimate objectives, which are undetectability, robustness (i.e., against image processing and other attacks) and capacity of the hidden data (i.e., how much data we can hide in the carrier file), ar ..."
Abstract

Cited by 13 (0 self)
Steganography is defined as the science of hiding or embedding “data” in a transmission medium. Its ultimate objectives, which are undetectability, robustness (i.e., against image processing and other attacks) and capacity of the hidden data (i.e., how much data we can hide in the carrier file), are the main factors that distinguish it from other “sisters-in-science” techniques, namely watermarking and cryptography. This paper provides an overview of well-known steganography methods. It identifies current research problems in this area and discusses how our current research approach could solve some of these problems. We propose using human skin tone detection in colour images to form an adaptive context for an edge operator, which will provide an excellent secure location for data hiding.
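Skin-tone detection of the kind proposed can be sketched with a well-known RGB rule (commonly attributed to Peer et al.); the authors' actual detector and edge operator may differ, and the thresholds below are the usually quoted ones, not values from this paper.

```python
# A common RGB skin-tone heuristic, shown only to illustrate how a skin map
# could steer embedding toward skin regions; thresholds are the textbook
# ones, not parameters from this paper.
def is_skin(r, g, b):
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

assert is_skin(220, 170, 140)      # typical skin-like colour
assert not is_skin(30, 90, 200)    # sky blue rejected
```

Applying `is_skin` per pixel yields a binary mask whose edges would serve as the adaptive embedding context described above.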
An efficient buyer-seller watermarking protocol based on composite signal representation
 in: Proceedings of the 11th ACM Workshop on Multimedia and Security, Princeton, NJ, 2009
"... Buyerseller watermarking protocols integrate watermarking techniques with cryptography, for copyright protection, piracy tracing, and privacy protection. In this paper, we propose an efficient buyerseller watermarking protocol based on homomorphic publickey cryptosystem and composite signal repr ..."
Abstract

Cited by 9 (0 self)
Buyer-seller watermarking protocols integrate watermarking techniques with cryptography, for copyright protection, piracy tracing, and privacy protection. In this paper, we propose an efficient buyer-seller watermarking protocol based on a homomorphic public-key cryptosystem and composite signal representation in the encrypted domain. A recently proposed composite signal representation allows us to reduce both the computational overhead and the large communication bandwidth that are due to the use of homomorphic public-key encryption schemes. Both complexity analysis and simulation results confirm the efficiency of the proposed solution, suggesting that this technique can be successfully used in practical applications.
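The homomorphic ingredient of such protocols can be sketched with the Paillier cryptosystem, where multiplying ciphertexts adds the plaintexts, letting the seller add a watermark to the buyer's encrypted signal without decrypting it. Textbook toy parameters only; the paper's protocol additionally relies on a packed composite signal representation not shown here.

```python
# Sketch of additively homomorphic encryption (Paillier) as used in
# buyer-seller watermarking: E(a) * E(b) mod n^2 decrypts to a + b.
# Toy primes only; real deployments use keys of 1024 bits or more.
import math, random

p, q = 347, 349
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # valid because the generator is n + 1

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 1234, 4321
assert dec((enc(a) * enc(b)) % n2) == a + b   # homomorphic addition
```

The composite representation in the abstract amortizes these costly modular exponentiations over many signal samples, which is where the claimed efficiency gain comes from.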
Evaluation of classifiers: Practical considerations for security applications
 In AAAI Workshop on Evaluation Methods for Machine Learning
, 2006
"... In recent years several tools based on statistical methods and machine learning have been incorporated in security related tasks involving classification, such as intrusion detection systems (IDSs), fraud detection, spam filters, biometrics and multimedia forensics. Measuring the security perfor ..."
Abstract

Cited by 8 (0 self)
In recent years several tools based on statistical methods and machine learning have been incorporated into security-related tasks involving classification, such as intrusion detection systems (IDSs), fraud detection, spam filters, biometrics and multimedia forensics. Measuring the security performance of these classifiers is an essential part of facilitating decision making, determining the viability of a product, or comparing multiple classifiers. There are, however, relevant considerations for security-related problems that are sometimes ignored by traditional evaluation schemes. In this paper we identify two pervasive problems in security-related applications. The first problem is the usually large class imbalance between normal events and attack events. This problem has been addressed by evaluating classifiers based on cost-sensitive metrics and with the introduction of Bayesian Receiver Operating Characteristic (BROC) curves. The second problem to consider is the fact that the classifier or learning rule will be deployed in an adversarial environment. This implies that good performance on average might not be a good performance measure; rather, we look for good performance under the worst type of adversarial attack. In order to address this notion more precisely we provide a framework to model an adversary and define security notions based on evaluation metrics.
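The class-imbalance point can be made with simple base-rate arithmetic: a detector that looks excellent on a balanced test set can still produce mostly false alarms in deployment. The rates below are hypothetical, chosen only to show the effect.

```python
# Why class imbalance matters in security evaluation: with a 99% detection
# rate and a 1% false-alarm rate, precision is near-perfect on balanced data
# but collapses when only 1 in 10,000 events is an attack.
def precision(tpr, fpr, attack_rate):
    tp = tpr * attack_rate            # expected true-positive mass
    fp = fpr * (1 - attack_rate)      # expected false-positive mass
    return tp / (tp + fp)

balanced = precision(0.99, 0.01, 0.5)          # ~0.99: looks excellent
deployed = precision(0.99, 0.01, 1 / 10_000)   # <1% of alarms are real
assert balanced > 0.98 and deployed < 0.01
```

This base-rate sensitivity is exactly what cost-sensitive metrics and the BROC curves mentioned above are designed to expose.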
Embedding renewable cryptographic keys into continuous noisy data
 In (to appear) 10th International Conference on Information and Communications Security (ICICS), Lecture Notes in Computer Science
, 2008
"... Abstract. Fuzzy extractor is a powerful but theoretical tool to extract uniform strings from discrete noisy data. Before it can be used in practice, many concerns need to be addressed in advance, such as making the extracted strings renewable and dealing with continuous noisy data. We propose a prim ..."
Abstract

Cited by 6 (2 self)
Abstract. A fuzzy extractor is a powerful but theoretical tool to extract uniform strings from discrete noisy data. Before it can be used in practice, many concerns need to be addressed, such as making the extracted strings renewable and dealing with continuous noisy data. We propose a primitive, the fuzzy embedder, as a practical replacement for the fuzzy extractor. A fuzzy embedder naturally supports renewability because it allows a randomly chosen string to be embedded. A fuzzy embedder takes continuous noisy data as input, and its performance is directly linked to the properties of the input data. We give a general construction for the fuzzy embedder based on the technique of Quantization Index Modulation (QIM) and derive its performance in relation to that of the underlying QIM. In addition, we show that quantization in 2-dimensional space is optimal from the perspective of the length of the embedded string. We also present a concrete construction for a fuzzy embedder in 2-dimensional space and compare its performance with that obtained by the 4-square tiling method of Linnartz et al. [13].
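The underlying QIM primitive can be sketched in one dimension: each bit selects a shifted quantization lattice, and decoding snaps a noisy value to the nearest lattice point. The step size below is illustrative, not a parameter from the paper.

```python
# One-dimensional Quantization Index Modulation (QIM): bit 0 is carried by
# the lattice {k*delta} and bit 1 by {k*delta + delta/2}; decoding picks the
# bit whose lattice is nearest, tolerating noise below delta/4.
delta = 4.0   # illustrative step size

def qim_embed(x, bit):
    # quantize x to the sub-lattice {k*delta + bit*delta/2}
    return round((x - bit * delta / 2) / delta) * delta + bit * delta / 2

def qim_decode(y):
    # pick the bit whose sub-lattice point is nearest to y
    return min((0, 1), key=lambda b: abs(y - qim_embed(y, b)))

y = qim_embed(10.3, 1)            # embed bit 1 into a noisy measurement
assert qim_decode(y + 0.8) == 1   # survives noise smaller than delta/4
assert qim_decode(qim_embed(10.3, 0) - 0.7) == 0
```

The fuzzy embedder applies this idea to biometric measurements, with the embedded bits forming the renewable key.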
Controlling leakage of biometric information using dithering
 in Proc. EUSIPCO
"... Fuzzy extractors allow cryptographic keys to be generated from noisy, nonuniform biometric data. Fuzzy extractors can be used to authenticate a user to a server without storing her biometric data directly. However, in the Information Theoretic sense fuzzy extractors will leak information about the ..."
Abstract

Cited by 5 (0 self)
Fuzzy extractors allow cryptographic keys to be generated from noisy, non-uniform biometric data. Fuzzy extractors can be used to authenticate a user to a server without storing her biometric data directly. However, in the information-theoretic sense fuzzy extractors will leak information about the biometric data. We propose as an alternative a fuzzy embedder, which fuses an independently generated cryptographic key with biometric data. Like fuzzy extractors, a fuzzy embedder can be used to authenticate a user without storing her biometric information or the cryptographic key on a server. A fuzzy embedder will leak, in the information-theoretic sense, information about both the biometrics and the cryptographic key. While both types of leakage are important, leakage of the biometric data is critical, since the cryptographic key, as opposed to the biometric data, can be renewed. We show that constructing fuzzy embedders that leak no information about the biometrics is theoretically possible. We present a construction that allows controlling the leakage of biometric information, but which requires a weak secret at the decoder called a dither. If this secret is compromised, the security of the construction degrades gracefully.
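The role of the dither can be sketched by extending one-dimensional QIM with a secret lattice offset d: the bit-dependent lattices are shifted by d, so correct decoding requires knowing the dither. Values of delta and d are illustrative, not taken from the paper.

```python
# Dithered QIM sketch: a secret dither d shifts both quantization lattices,
# so the decoder needs d to locate the lattice carrying the embedded key bit.
delta = 4.0
d = 1.37                          # weak secret shared with the decoder

def lattice_point(y, bit, dither):
    # nearest point of the lattice {k*delta + dither + bit*delta/2}
    offset = dither + bit * delta / 2
    return round((y - offset) / delta) * delta + offset

def embed(x, bit):
    return lattice_point(x, bit, d)

def decode(y, dither):
    return min((0, 1), key=lambda b: abs(y - lattice_point(y, b, dither)))

y = embed(7.9, 1)
assert decode(y + 0.6, d) == 1    # with the right dither, the bit survives
```

Compromising d only reveals the lattice layout, which matches the graceful degradation claimed above.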
Predictive-Coding-Based Steganography and Modification for Enhanced Security
 IJCSNS International Journal of Computer Science and Network Security, vol. 6, no. 3b
, 2006
"... The predictivecodingbased (PCB) steganography can embed a large amount of bits into the code stream of lossless compression with high imperceptibility. However, based on two elaborately chosen statistical features, the proposed steganalytic method can easily find the presence of a secret message w ..."
Abstract

Cited by 4 (0 self)
Predictive-coding-based (PCB) steganography can embed a large number of bits into the code stream of lossless compression with high imperceptibility. However, based on two elaborately chosen statistical features, the proposed steganalytic method can easily detect the presence of a secret message with small error probability. To enhance the scheme’s security, a modified scheme is proposed, which preserves the distribution of the prediction errors by choosing the optimum adjustment parameter. Experimental results show that the modified scheme can provide near-perfect security in Cachin’s definition and defeat the steganalytic method proposed by ourselves.
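The idea of embedding in prediction errors can be sketched with a toy left-neighbour predictor that forces each error's parity to a message bit. This illustrates only the embedding channel; it does not preserve the error distribution, which is precisely the refinement this paper adds.

```python
# Toy prediction-error embedding (illustrative only): predict each pixel
# from its left neighbour, hide a bit in the parity of the prediction error,
# and rebuild the pixel from the adjusted error.
def embed(row, bits):
    out = [row[0]]                      # first pixel carries no bit
    for x, b in zip(row[1:], bits):
        e = x - out[-1]                 # error vs. (already modified) left pixel
        if e % 2 != b:                  # force the error parity to the bit
            e += 1
        out.append(out[-1] + e)
    return out

def extract(row, n):
    # the stego prediction errors' parities are the message bits
    return [(row[i + 1] - row[i]) % 2 for i in range(n)]

row = [100, 104, 99, 101, 97]
bits = [1, 0, 0, 1]
assert extract(embed(row, bits), len(bits)) == bits
```

Always adjusting errors upward skews their distribution, which is the statistical footprint the paper's optimum adjustment parameter is chosen to remove.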
A Novel Steganography Algorithm for Hiding Text in Image using Five Modulus Method
"... The needs for steganographic techniques for hiding secret message inside images have been arise. This paper is to create a practical steganographic implementation to hide text inside grey scale images. The secret message is hidden inside the cover image using Five Modulus Method. The novel algorithm ..."
Abstract

Cited by 3 (1 self)
The need for steganographic techniques that hide a secret message inside images has arisen. This paper presents a practical steganographic implementation that hides text inside greyscale images. The secret message is hidden inside the cover image using the Five Modulus Method (FMM). The novel algorithm is called ST-FMM. FMM consists of transforming all the pixels within the 5×5 window into their corresponding multiples of 5. After that, the secret message is hidden inside the 5×5 window as non-multiples of 5. Since the remainders of non-multiples of 5 are 1, 2, 3 and 4, a pixel whose remainder is one of these values represents a secret character. The secret key that has to be sent is the window size. The main advantage of this novel algorithm is that the size of the cover image stays constant while the secret message grows. The peak signal-to-noise ratio (PSNR) is measured for each of the images tested; the stego images have high PSNR values. Hence this new steganography algorithm is very efficient at hiding data inside an image.
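One plausible reading of the FMM embedding described above can be sketched as follows, under the simplifying assumption that each secret symbol is a value in {1, 2, 3, 4} carried by a pixel's remainder modulo 5; the actual ST-FMM mapping from characters to window positions may differ.

```python
# Five Modulus Method sketch: round a window's pixels to multiples of 5,
# then mark symbols as non-multiples of 5 via their remainders 1..4.
# Pixels are capped at 250 so adding a remainder stays within 0..255.
def embed_window(pixels, symbols):
    assert all(1 <= v <= 4 for v in symbols) and len(symbols) <= len(pixels)
    out = [min(5 * round(p / 5), 250) for p in pixels]  # multiples of 5
    for i, v in enumerate(symbols):
        out[i] += v                  # non-multiple of 5 marks a symbol
    return out

def extract_window(pixels):
    # every non-zero remainder modulo 5 is a hidden symbol
    return [p % 5 for p in pixels if p % 5 != 0]

window = [103, 77, 251, 40, 12]      # hypothetical greyscale values
stego = embed_window(window, [3, 1, 4])
assert extract_window(stego) == [3, 1, 4]
```

Rounding to multiples of 5 changes each pixel by at most 2 before a symbol is added, which is consistent with the high PSNR values reported above.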