Results 1 - 10 of 10
Estimating Steganographic Fisher Information in Real Images
"... Abstract. This paper is concerned with the estimation of steganographic capacity in digital images, using information theoretic bounds and very large-scale experiments to approximate the distributions of genuine covers. The complete distribution cannot be estimated, but with carefullychosen algorith ..."
Abstract
-
Cited by 8 (4 self)
- Add to MetaCart
(Show Context)
This paper is concerned with the estimation of steganographic capacity in digital images, using information theoretic bounds and very large-scale experiments to approximate the distributions of genuine covers. The complete distribution cannot be estimated, but with carefully chosen algorithms and a large corpus we can make local approximations by considering groups of pixels. A simple estimator for the local quadratic term of Kullback-Leibler divergence (Steganographic Fisher Information) is presented, validated on some synthetic images, and computed for a corpus of covers. The results are interesting not so much for their concrete capacity estimates but for the comparisons they provide: between different embedding operations, between the information found in differently sized and shaped pixel groups, and between the results of DC normalization within pixel groups. This work suggests lessons for the future design of spatial-domain steganalysis, and also for the optimization of embedding functions.
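Illustration (not from the paper; notation is ours): the "local quadratic term" named above is the leading coefficient in the standard expansion of KL divergence around zero change rate. Writing P_lambda for the stego distribution at change rate lambda,

    \[
      D_{\mathrm{KL}}(P_0 \,\|\, P_\lambda)
        = \tfrac{1}{2} I(0)\,\lambda^{2} + O(\lambda^{3}),
      \qquad
      I(0) = \sum_{x} \frac{1}{P_0(x)}
             \left( \frac{\partial P_\lambda(x)}{\partial \lambda}
                    \Big|_{\lambda=0} \right)^{2},
    \]

and I(0) is the (Steganographic) Fisher Information that the estimator targets.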
Steganographic strategies for a square distortion function
In: Security, Forensics, Steganography and Watermarking of Multimedia Contents X, Proc. SPIE, 2008
"... Recent results on the information theory of steganography suggest, and under some conditions prove, that the detectability of payload is proportional to the square of the number of changes caused by the embedding. Assuming that result in general, this paper examines the implications for an embedder ..."
Abstract
-
Cited by 7 (6 self)
- Add to MetaCart
(Show Context)
Recent results on the information theory of steganography suggest, and under some conditions prove, that the detectability of payload is proportional to the square of the number of changes caused by the embedding. Assuming that result in general, this paper examines the implications for an embedder when a payload is to be spread amongst multiple cover objects. A number of variants are considered: embedding with and without adaptive source coding, in uniform and nonuniform covers, and embedding both in a fixed number of covers (so-called batch steganography) and in an infinite stream where a covert channel is established (sequential steganography, studied here for the first time). The results show that steganographic capacity is sublinear, and strictly asymptotically greater in the case of a fixed batch than an infinite stream. In the former it is possible to describe optimal embedding strategies; in the latter the situation is much more complex, with a continuum of strategies which approach the unachievable asymptotic optimum.
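Illustration (our reconstruction of the underlying optimization, not the paper's derivation): if cover i receives c_i changes and the warden's total evidence is proportional to the sum of the c_i^2, then maximizing total payload subject to a fixed evidence budget r^2 gives, by Cauchy-Schwarz,

    \[
      \sum_{i=1}^{n} c_i \;\le\; \sqrt{n}\,\sqrt{\sum_{i=1}^{n} c_i^{2}} \;\le\; r\sqrt{n},
    \]

with equality when the changes are spread evenly (c_i = r/\sqrt{n}), so total capacity grows only as \sqrt{n}.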
The Square Root Law Does Not Require a Linear Key
"... Square root laws are theorems about imperfect steganography, embedding which fails to preserve all statistical properties of covers. They show that, in various situations, capacity of covers grows only with the square root of the available cover size. In a paper given at this conference last year [1 ..."
Abstract
-
Cited by 5 (1 self)
- Add to MetaCart
Square root laws are theorems about imperfect steganography: embedding which fails to preserve all statistical properties of covers. They show that, in various situations, the capacity of covers grows only with the square root of the available cover size. In a paper given at this conference last year [14], we showed an important caveat: when the sender's and recipient's shared embedding key determines the embedding path, its length must be at least linear in the size of the hidden payload, to prevent their enemy from exhaustively searching over all possible sets of locations. It was left open to show that a linear key is sufficient. There is no necessity, however, for the recipient to know exactly which locations were changed during the embedding process. In this paper we remove that condition, allowing the embedder to combine more than one cover location to convey one bit of payload. As long as the embedder stays beneath the classic square root law bound, we can do more than prove the sufficiency of a linear key: we can even show that asymptotically perfect steganographic security is possible with no key at all. Furthermore, by computing Steganographic Fisher Information, we can show that the keyless embedding tends to perfect security at least as fast as the "ideal" embedding, which requires an unfeasibly large key to spread payload uniformly at random over the cover. Finally, we show asymptotic perfect security of a simple matrix embedding, which allows payload capacity of order √(n log n) to be achieved.
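A toy sketch of "combining more than one cover location to convey one bit" using plain parity coding (a generic illustration, not the paper's construction; all function names are ours):

    import numpy as np

    def embed_parity(pixels, bits, group_size):
        """Toy parity-group embedding: each group of `group_size` pixels
        carries one payload bit in the parity (XOR of LSBs) of the group.
        At most one LSB per group is changed, and the recipient never
        needs to know WHICH location changed -- only the group parity."""
        stego = pixels.copy()
        rng = np.random.default_rng(0)     # change location need not be keyed
        for i, bit in enumerate(bits):
            group = slice(i * group_size, (i + 1) * group_size)
            parity = np.bitwise_xor.reduce(stego[group] & 1)
            if parity != bit:
                j = i * group_size + rng.integers(group_size)
                stego[j] ^= 1              # flip one LSB anywhere in the group
        return stego

    def extract_parity(pixels, n_bits, group_size):
        return np.array([
            np.bitwise_xor.reduce(pixels[i * group_size:(i + 1) * group_size] & 1)
            for i in range(n_bits)
        ])

    cover = np.random.default_rng(1).integers(0, 256, size=4000, dtype=np.uint8)
    payload = np.random.default_rng(2).integers(0, 2, size=100)
    stego = embed_parity(cover, payload, group_size=40)
    assert (extract_parity(stego, 100, 40) == payload).all()
    print("changes:", int((cover != stego).sum()), "for", len(payload), "bits")

One flip per group conveys one bit no matter which pixel was changed, so the recipient needs no record of the embedding path; syndrome/matrix embedding refines the same idea to convey several bits per change, which is the kind of efficiency behind the √(n log n) rate mentioned above.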
Estimating the Information Theoretic Optimal Stego Noise
To appear in: Proc. 8th International Workshop on Digital Watermarking, 2009
"... Abstract. We recently developed a new benchmark for steganography, underpinned by the square root law of capacity, called Steganographic Fisher Information (SFI). It is related to the multiplicative constant for the square root capacity rate and represents a truly information theoretic measure of as ..."
Abstract
-
Cited by 3 (3 self)
- Add to MetaCart
(Show Context)
We recently developed a new benchmark for steganography, underpinned by the square root law of capacity, called Steganographic Fisher Information (SFI). It is related to the multiplicative constant for the square root capacity rate and represents a truly information theoretic measure of asymptotic evidence. Given a very large corpus of covers from which the joint histograms can be estimated, an estimator for SFI was derived in [1], and certain aspects of embedding and detection were compared using this benchmark. In this paper we concentrate on the evidence presented by various spatial-domain embedding operations. We extend the technology of [1] in two ways, to convex combinations of arbitrary so-called independent embedding functions. We then apply the new techniques to estimate, in genuine sets of cover images, the spatial-domain stego noise shape which optimally trades evidence (in terms of asymptotic KL divergence) for capacity. The results suggest that the smallest embedding changes are optimal for cover images not exhibiting much noise, and also for cover images with significant saturation, but that in noisy images it is superior to embed with more stego noise in fewer locations.
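A numerical sketch of the kind of quantity being estimated (a toy first-order histogram only; the paper works with joint histograms of pixel groups, and the code below is our illustration):

    import numpy as np

    def fisher_info_per_element(p, kernel):
        """Quadratic KL term (per-element Fisher information at change
        rate 0) for 'independent embedding': each element is, with
        probability beta, replaced by itself plus an offset drawn from
        `kernel` {offset: prob}.  Returns I such that
        D_KL(cover || stego) ~ (beta^2 / 2) * I."""
        # derivative of the stego histogram w.r.t. beta at beta = 0
        # (np.roll wraps at the boundary; fine for this toy histogram)
        dq = -p.copy()
        for offset, prob in kernel.items():
            dq += prob * np.roll(p, offset)
        mask = p > 0
        return float(np.sum(dq[mask] ** 2 / p[mask]))

    # toy cover histogram: discretized Gaussian over 256 grey levels
    x = np.arange(256)
    p = np.exp(-0.5 * ((x - 128) / 20.0) ** 2)
    p /= p.sum()

    lsb_matching = {+1: 0.5, -1: 0.5}   # +/-1 noise with equal probability
    pm2 = {+2: 0.5, -2: 0.5}            # +/-2 noise with equal probability
    print("SFI-like term, +/-1:", fisher_info_per_element(p, lsb_matching))
    print("SFI-like term, +/-2:", fisher_info_per_element(p, pm2))

Convex combinations of such kernels (e.g. mixing the +/-1 and +/-2 entries with weights summing to one) can then be compared at matched capacity, which is the trade-off the abstract describes.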
An epistemological approach to steganography
In: Information Hiding, 2009
"... Steganography has been studied extensively in the light of information, complexity, probability and signal processing theory. This paper adds epistemology to the list and argues that Simmon’s seminal prisoner’s problem has an empirical dimension, which cannot be ignored (or defined away) without sim ..."
Abstract
-
Cited by 2 (1 self)
- Add to MetaCart
Steganography has been studied extensively in the light of information, complexity, probability and signal processing theory. This paper adds epistemology to the list and argues that Simmons' seminal prisoners' problem has an empirical dimension, which cannot be ignored (or defined away) without simplifying the problem substantially. An introduction to the epistemological perspective on steganography is given, along with a structured discussion of how the novel perspective fits into the existing body of literature.
Effect of Cover Quantization on Steganographic Fisher Information
"... Abstract—The square-root law of imperfect steganography ties the embedding change rate and the cover length with statistical detectability. In this article, we extend the law to consider the effects of cover quantization. Assuming the individual cover elements are quantized i.i.d. samples drawn from ..."
Abstract
- Add to MetaCart
(Show Context)
The square-root law of imperfect steganography ties the embedding change rate and the cover length to statistical detectability. In this article, we extend the law to consider the effects of cover quantization. Assuming the individual cover elements are quantized i.i.d. samples drawn from an underlying continuous-valued 'precover' distribution, the steganographic Fisher information scales as Δ^s, where Δ is the quantization step and s is determined jointly by the smoothness of the precover distribution and the properties of the embedding function. This extension is relevant for understanding the effects of the pixel color depth and the JPEG quality factor on the length of secure payload.
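A toy numerical check of the claimed scaling (our sketch, assuming a Gaussian precover and +/-1 embedding; the fitted exponent is empirical, not the paper's derived value of s):

    import numpy as np
    from scipy.stats import norm

    def fisher_info(delta, width=20.0):
        """Per-element Fisher information (at change rate 0) of +/-1
        embedding on a Gaussian precover quantized with step `delta`."""
        edges = np.arange(-8 * width, 8 * width + delta, delta)
        p = np.diff(norm.cdf(edges, scale=width))   # quantized cover pmf
        # derivative of the stego pmf w.r.t. the change rate at rate 0
        dq = 0.5 * np.roll(p, 1) + 0.5 * np.roll(p, -1) - p
        mask = p > 1e-300
        return np.sum(dq[mask] ** 2 / p[mask])

    deltas = np.array([1.0, 0.5, 0.25, 0.125])
    infos = np.array([fisher_info(d) for d in deltas])
    # slope of log I against log Delta estimates the exponent s
    s_hat = np.polyfit(np.log(deltas), np.log(infos), 1)[0]
    print("fitted exponent s ~", round(float(s_hat), 2))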
Reliable Steganalysis Using a Minimum Set of Samples and Features
Yoan Miche et al., Research Article, doi:10.1155/2009/901381
"... This paper proposes to determine a sufficient number of images for reliable classification and to use feature selection to select most relevant features for achieving reliable steganalysis. First dimensionality issues in the context of classification are outlined, and the impact of the different par ..."
Abstract
- Add to MetaCart
(Show Context)
This paper proposes to determine a sufficient number of images for reliable classification, and to use feature selection to choose the most relevant features, in order to achieve reliable steganalysis. First, dimensionality issues in the context of classification are outlined, and the impact of the different parameters of a steganalysis scheme (the number of samples, the number of features, the steganography method, and the embedding rate) is studied. On the one hand, it is shown using bootstrap simulations that the standard deviation of the classification results can be very large if the training sets are too small; a minimum of 5000 images is needed in order to perform reliable steganalysis. On the other hand, we show how feature selection using the OP-ELM classifier both reduces the dimensionality of the data and highlights the weaknesses and advantages of the six most popular steganographic algorithms.
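A minimal sketch of the bootstrap methodology (synthetic two-class data standing in for cover/stego feature vectors; the classifier and the sizes are our assumptions, not the paper's):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, dim=20, shift=0.15):
        """Synthetic cover/stego features: two Gaussians a small shift apart."""
        X = rng.normal(size=(n, dim))
        y = rng.integers(0, 2, size=n)
        X[y == 1] += shift
        return X, y

    pool_X, pool_y = make_data(20000)        # stands in for the image corpus
    X_test, y_test = make_data(4000)
    for n_train in (100, 500, 2000, 5000):
        accs = []
        for _ in range(30):                  # bootstrap resamples of the pool
            idx = rng.integers(0, len(pool_y), size=n_train)
            clf = LogisticRegression(max_iter=1000).fit(pool_X[idx], pool_y[idx])
            accs.append(clf.score(X_test, y_test))
        print(n_train, "training images: std of accuracy =",
              round(float(np.std(accs)), 4))

The printed standard deviations shrink as the training set grows, which is the effect the paper quantifies on real steganalysis features.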
Progressive Randomization: Seeing the Unseen
"... In this paper, we introduce the Progressive Randomization (PR): a new image meta-description approach suitable for different image inference applications such as broad class Image Categorization and Steganalysis. The main difference among PR and the state-of-the-art algorithms is that it is based on ..."
Abstract
- Add to MetaCart
(Show Context)
In this paper, we introduce Progressive Randomization (PR): a new image meta-description approach suitable for different image inference applications, such as broad-class image categorization and steganalysis. The main difference between PR and state-of-the-art algorithms is that it is based on progressive perturbations of the pixel values of images. With such perturbations, PR captures image class separability, allowing us to successfully infer high-level information about images. Even when only a limited number of training examples are available, the method still achieves good separability, and its accuracy increases with the size of the training set. We validate the method using two different inference scenarios and four image databases.
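A toy sketch of the progressive-perturbation idea (the published PR descriptor uses specific statistical tests and perturbation rates that are not reproduced here; this only illustrates the mechanism):

    import numpy as np

    def lsb_pair_stat(values):
        """Mean absolute difference between histogram counts of the LSB
        pairs (2k, 2k+1); it shrinks as the LSB plane becomes more random."""
        h = np.bincount(values, minlength=256)
        return float(np.mean(np.abs(h[0::2] - h[1::2])))

    def progressive_descriptor(img, rates=(0.05, 0.1, 0.25, 0.5)):
        """Toy descriptor: flip the LSBs of an increasing fraction of
        pixels and record how the statistic reacts at each level."""
        rng = np.random.default_rng(0)
        flat = img.ravel()
        feats = []
        for rate in rates:
            perturbed = flat.copy()
            idx = rng.random(flat.size) < rate   # pixels perturbed at this level
            perturbed[idx] ^= 1                  # LSB flip
            feats.append(lsb_pair_stat(perturbed))
        return np.array(feats)

    img = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(progressive_descriptor(img))

On a natural image (whose LSB plane is not already random) the statistic decays more sharply across perturbation levels, and that response curve is what serves as the meta-description.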
The Square Root Law of Steganographic Capacity
2008
"... There are a number of recent information theoretic results demonstrating (under certain conditions) a sublinear relationship between the number of cover objects and their total steganographic capacity. In this paper we explain how these results may be adapted to the steganographic capacity of a sing ..."
Abstract
- Add to MetaCart
There are a number of recent information theoretic results demonstrating (under certain conditions) a sublinear relationship between the number of cover objects and their total steganographic capacity. In this paper we explain how these results may be adapted to the steganographic capacity of a single cover object, which under the right conditions should be proportional to the square root of the cover size. Then we perform some experiments using three genuine steganography methods in digital images, covering both spatial and DCT domains. Measuring detectability under four different steganalysis methods, for a variety of payload and cover sizes, we observe close accordance with a square root law.
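In equation form (a standard statement of such laws, paraphrased rather than quoted from the paper): if a cover of n samples receives m embedding changes, then to leading order the warden's KL divergence is

    \[
      D_{\mathrm{KL}} \approx \frac{nI}{2}\Big(\frac{m}{n}\Big)^{2}
                    = \frac{I\,m^{2}}{2n},
    \]

for a per-sample Fisher Information I, so holding detectability constant forces m = O(\sqrt{n}): secure payload grows with the square root of the cover size.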
binghamton.edu
"... binghamton.edu A modern direction in steganography calls for embedding while minimizing a distortion function defined in a sufficiently complex model space. In this paper we show that, quite surprisingly, even a high-dimensional cover model does not automatically guarantee immunity to simple attacks ..."
Abstract
- Add to MetaCart
(Show Context)
A modern direction in steganography calls for embedding while minimizing a distortion function defined in a sufficiently complex model space. In this paper we show that, quite surprisingly, even a high-dimensional cover model does not automatically guarantee immunity to simple attacks. Moreover, the security can be compromised if the distortion is optimized to an incomplete cover model. We demonstrate these pitfalls with two recently proposed steganographic schemes and support our arguments experimentally. Finally, we discuss how the corresponding models might be modified to eliminate the security flaws.
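For context on "embedding while minimizing a distortion function": in the standard additive-distortion framework a payload-limited sender flips element i with probability p_i = e^{-\lambda\rho_i}/(1 + e^{-\lambda\rho_i}), with \lambda chosen so the total entropy of the flips equals the payload. A minimal sketch of that computation (an illustration of the general framework, not of the two schemes analysed in the paper; the cost vector here is synthetic):

    import numpy as np

    def change_probabilities(rho, payload_bits, iters=60):
        """Flip probabilities minimizing expected additive distortion `rho`
        under a payload constraint: p_i = exp(-lam*rho_i)/(1+exp(-lam*rho_i)),
        with lam found by bisection so total flip entropy equals the payload."""
        def entropy_bits(p):
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))
        lo, hi = 1e-6, 1e3                   # assumed bracket for lam
        for _ in range(iters):
            lam = np.sqrt(lo * hi)           # geometric bisection
            t = np.exp(-lam * rho)
            p = t / (1.0 + t)
            if entropy_bits(p) > payload_bits:
                lo = lam                     # too much payload: raise lam
            else:
                hi = lam
        return p

    rho = np.random.default_rng(0).exponential(size=10000)   # synthetic costs
    p = change_probabilities(rho, payload_bits=1000)
    print("expected changes:", round(float(p.sum()), 1),
          "expected distortion:", round(float((p * rho).sum()), 3))

The paper's point is that minimizing such a cost, even one defined over a high-dimensional model, does not by itself confer security when the model is incomplete.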