Results 1 - 10 of 295
Quantization Index Modulation: A Class of Provably Good Methods for Digital Watermarking and Information Embedding
- IEEE TRANS. ON INFORMATION THEORY
, 1999
"... We consider the problem of embedding one signal (e.g., a digital watermark), within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing information-embedding rate, mini ..."
Abstract
-
Cited by 496 (14 self)
We consider the problem of embedding one signal (e.g., a digital watermark) within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing information-embedding rate, minimizing distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is "provably good" against arbitrary bounded and fully informed attacks, which arise in several copyright applications, and in particular, it achieves provably better rate-distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise (AWGN) channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error-constrained attack channels that model private-key watermarking applications.
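A minimal dither-modulation sketch of the QIM idea in Python (a toy illustration, not the authors' code): each message bit selects one of two interleaved uniform quantizers, and the decoder picks whichever quantizer reconstructs closest to the received sample. The step `delta`, the host statistics, and the noise level are all illustrative assumptions.

```python
import numpy as np

def qim_embed(x, bits, delta=1.0):
    # Bit 0 -> dither -delta/4, bit 1 -> +delta/4: two interleaved
    # uniform quantizers whose reconstruction points are delta/2 apart.
    d = np.where(bits == 0, -delta / 4, delta / 4)
    return delta * np.round((x - d) / delta) + d

def qim_decode(y, delta=1.0):
    # Minimum-distance decoding: requantize with both dithers and pick
    # the quantizer whose reconstruction lies closest to each sample.
    q0 = delta * np.round((y + delta / 4) / delta) - delta / 4
    q1 = delta * np.round((y - delta / 4) / delta) + delta / 4
    return (np.abs(y - q1) < np.abs(y - q0)).astype(int)

rng = np.random.default_rng(0)
host = rng.normal(0.0, 4.0, size=1000)           # illustrative host signal
bits = rng.integers(0, 2, size=1000)
stego = qim_embed(host, bits)                    # per-sample distortion <= delta/2
noisy = stego + rng.normal(0.0, 0.1, size=1000)  # mild perturbation
print((qim_decode(noisy) == bits).mean())        # ~0.99: errs only when |noise| > delta/4
```

The embedding distortion is bounded by delta/2 per sample, and decoding survives any perturbation smaller than delta/4: the "provably good against bounded attacks" property in miniature.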
On the (im)possibility of obfuscating programs
- Lecture Notes in Computer Science
, 2001
"... Informally, an obfuscator O is an (efficient, probabilistic) “compiler ” that takes as input a program (or circuit) P and produces a new program O(P) that has the same functionality as P yet is “unintelligible ” in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic an ..."
Abstract
-
Cited by 348 (24 self)
Informally, an obfuscator O is an (efficient, probabilistic) “compiler” that takes as input a program (or circuit) P and produces a new program O(P) that has the same functionality as P yet is “unintelligible” in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice’s theorem. Most of these applications are based on an interpretation of the “unintelligibility” condition in obfuscation as meaning that O(P) is a “virtual black box,” in the sense that anything one can efficiently compute given O(P), one could also efficiently compute given oracle access to P. In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of efficient programs P that are unobfuscatable in the sense that (a) given any efficient program P′ that computes the same function as a program P ∈ P, the “source code” P can be efficiently reconstructed, yet (b) given oracle access to a (randomly selected) program P ∈ P, no efficient algorithm can reconstruct P (or even distinguish a certain bit in the code from random) except with negligible probability. We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the functionality, and (c) only need to work for very restricted models of computation (TC⁰). We also rule out several potential applications of obfuscators, by constructing “unobfuscatable” signature schemes, encryption schemes, and pseudorandom function families.
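A toy Python rendition of the flavor of the unobfuscatable-family construction (point functions with a hardwired check; an illustration of the idea, not the paper's circuit-level proof): having the code of any equivalent program lets the checker confirm the hidden point immediately, while an oracle-only adversary sees zero on essentially every query.

```python
import secrets

ALPHA = secrets.randbits(128)   # hidden point (toy stand-ins for the paper's alpha, beta)
BETA = secrets.randbits(128)

def C(x):
    # C_{alpha,beta}: the point function returning beta only at alpha.
    return BETA if x == ALPHA else 0

def D(prog):
    # D_{alpha,beta}: given any *program* (here a callable) equivalent to C,
    # recover the hidden behaviour by evaluating it at the hardwired point.
    return 1 if prog(ALPHA) == BETA else 0

print(D(C))        # 1: code access to an equivalent program leaks the secret
print(C(123456))   # 0: an oracle query at x != alpha reveals nothing;
                   # guessing alpha succeeds with probability ~2**-128
```

Any compiled form of the pair (C, D) still lets an adversary run the D-part on the C-part, so no obfuscation can hide what oracle access hides; that gap is the impossibility.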
Information-theoretic analysis of information hiding
- IEEE Transactions on Information Theory
, 2003
"... Abstract—An information-theoretic analysis of information hiding is presented in this paper, forming the theoretical basis for design of information-hiding systems. Information hiding is an emerging research area which encompasses applications such as copyright protection for digital media, watermar ..."
Abstract
-
Cited by 265 (19 self)
An information-theoretic analysis of information hiding is presented in this paper, forming the theoretical basis for design of information-hiding systems. Information hiding is an emerging research area which encompasses applications such as copyright protection for digital media, watermarking, fingerprinting, steganography, and data embedding. In these applications, information is hidden within a host data set and is to be reliably communicated to a receiver. The host data set is intentionally corrupted, but in a covert way, designed to be imperceptible to a casual analysis. Next, an attacker may seek to destroy this hidden information, and for this purpose, introduce additional distortion to the data set. Side information (in the form of cryptographic keys and/or information about the host signal) may be available to the information hider and to the decoder. We formalize these notions and evaluate the hiding capacity, which upper-bounds the rates of reliable transmission and quantifies the fundamental tradeoff between three quantities: the achievable information-hiding rates and the allowed distortion levels for the information hider and the attacker. The hiding capacity is the value of a game between the information hider and the attacker. The optimal attack strategy is the solution of a particular rate-distortion problem, and the optimal hiding strategy is the solution to a channel-coding problem. The hiding capacity is derived by extending the Gel’fand–Pinsker theory of communication with side information at the encoder. The extensions include the presence of distortion constraints, side information at the decoder, and unknown communication channel. Explicit formulas for capacity are given in several cases, including Bernoulli and Gaussian problems, as well as the important special case of small distortions. In some cases, including the last two above, the hiding capacity is the same whether or not the decoder knows the host data set. It is shown that many existing information-hiding systems in the literature operate far below capacity.
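For reference, the classical Gel'fand–Pinsker capacity that the analysis extends, and a schematic of the max-min game built on top of it (the schematic omits the paper's precise constraint sets and auxiliary-variable details):

```latex
% Gel'fand--Pinsker: channel p(y|x,s), state S known noncausally at the encoder
C_{\mathrm{GP}} = \max_{p(u|s),\; x(u,s)} \bigl[\, I(U;Y) - I(U;S) \,\bigr]

% Hiding capacity, schematically: a max-min game with distortion budgets
% D_1 (hider) and D_2 (attacker)
C(D_1, D_2) = \max_{\substack{p(u,x|s):\ \mathbb{E}\,d(S,X) \le D_1}}
              \;\min_{\substack{p(y|x):\ \mathbb{E}\,d(X,Y) \le D_2}}
              \bigl[\, I(U;Y) - I(U;S) \,\bigr]
```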
The Gaussian Watermarking Game
, 2000
"... Watermarking models a copyright protection mechanism where an original source sequence or "covertext" is modified before distribution to the public in order to embed some extra information. The embedding should be transparent (i.e., the modified data sequence or "stegotext" shoul ..."
Abstract
-
Cited by 135 (7 self)
Watermarking models a copyright protection mechanism where an original source sequence or "covertext" is modified before distribution to the public in order to embed some extra information. The embedding should be transparent (i.e., the modified data sequence or "stegotext" should be similar to the covertext) and robust (i.e., the extra information should be recoverable even if the stegotext is modified further, possibly by a malicious "attacker"). We compute the coding capacity of the watermarking game for a Gaussian covertext and squared-error distortions. Both the public version of the game (covertext known to neither attacker nor decoder) and the private version of the game (covertext unknown to attacker but known to decoder) are treated. While the capacity of the former cannot, of course, exceed the capacity of the latter, we show that the two are, in fact, identical. These capacities depend critically on whether the distortion constraints are required to be met in expectation or with probability one. In the former case the coding capacity is zero, whereas in the latter it coincides with the value of related zero-sum dynamic mutual information games of complete and perfect information. Parts of this work were presented at the 2000 Conference on Information Sciences and Systems (CISS '00), Princeton University, Princeton, NJ, March 15-17, 2000, and at the 2000 IEEE International Symposium on Information Theory (ISIT '00), Sorrento, Italy, June 25-30, 2000.
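A standard informal argument for why the expectation-constrained capacity collapses to zero (a paraphrase of the usual intuition, not the paper's proof): an attacker constrained only on average can spend its whole distortion budget on rare catastrophic attacks.

```latex
% With probability \epsilon the attacker outputs an all-zero sequence
% (destroying the stegotext, whose per-symbol power is at most P);
% otherwise it passes the stegotext through unchanged. The expected
% distortion satisfies
\epsilon \, P + (1 - \epsilon) \cdot 0 \;\le\; D_2
\quad\text{whenever } \epsilon \le D_2 / P,
% yet every decoder errs with probability about \epsilon, so no positive
% rate is achievable with vanishing error probability.
```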
Detecting Hidden Messages Using Higher-Order Statistical Models
- IN INTERNATIONAL CONFERENCE ON IMAGE PROCESSING
, 2002
"... Techniques for information hiding have become increasingly more sophisticated and widespread. With high-resolution digital images as carriers, detecting hidden messages has become considerably more difficult. This paper describes a new approach to detecting hidden messages in images. The approach u ..."
Abstract
-
Cited by 124 (7 self)
Techniques for information hiding have become increasingly more sophisticated and widespread. With high-resolution digital images as carriers, detecting hidden messages has become considerably more difficult. This paper describes a new approach to detecting hidden messages in images. The approach uses a wavelet-like decomposition to build higher-order statistical models of natural images. A Fisher linear discriminant analysis is then used to discriminate between untouched and adulterated images.
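A condensed Python sketch of the pipeline, with the feature set simplified to the first four moments per subband (the paper's model also includes statistics of cross-subband linear prediction errors); the cover/stego data here is a synthetic stand-in:

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(img, levels=3):
    # Mean, variance, skewness, kurtosis of every detail subband of a
    # multi-level wavelet decomposition of the image.
    feats = []
    for detail in pywt.wavedec2(img, "db4", level=levels)[1:]:
        for band in detail:                  # horizontal, vertical, diagonal
            b = band.ravel()
            feats += [b.mean(), b.var(), skew(b), kurtosis(b)]
    return np.array(feats)

rng = np.random.default_rng(0)
covers = rng.integers(0, 256, (20, 256, 256)).astype(float)
stegos = covers + rng.normal(0, 2, covers.shape)   # stand-in for embedding noise

X = np.array([wavelet_features(im) for im in list(covers) + list(stegos)])
y = np.array([0] * len(covers) + [1] * len(stegos))
clf = LinearDiscriminantAnalysis().fit(X, y)       # Fisher linear discriminant
```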
Detecting hidden messages using higher-order statistics and support vector machines
- In 5th International Workshop on Information Hiding
, 2002
"... www.cs.dartmouth.edu/~{lsw,farid} Abstract. Techniques for information hiding have become increasingly more sophisticated and widespread. With high-resolution digital images as carriers, detecting hidden messages has become considerably more difficult. This paper describes an approach to detecting h ..."
Abstract
-
Cited by 114 (9 self)
Techniques for information hiding have become increasingly more sophisticated and widespread. With high-resolution digital images as carriers, detecting hidden messages has become considerably more difficult. This paper describes an approach to detecting hidden messages in images that uses a wavelet-like decomposition to build higher-order statistical models of natural images. Support vector machines are then used to discriminate between untouched and adulterated images.
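Relative to the previous entry only the classifier changes. Assuming the feature matrix `X` and labels `y` from the sketch above, the swap is essentially one line (kernel and hyperparameters are illustrative):

```python
from sklearn.svm import SVC

# RBF-kernel SVM in place of the Fisher linear discriminant: a non-linear
# decision surface in the wavelet-statistic feature space.
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
print(svm.score(X, y))   # training accuracy; use a held-out split in practice
```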
Anti-Collusion Fingerprinting for Multimedia
- IEEE Transactions on Signal Processing
, 2003
"... Digital fingerprinting is a technique for identifying users who might try to use multimedia content for unintended purposes, such as redistribution. These fingerprints are typically embedded into the content using watermarking techniques that are designed to be robust to a variety of attacks. A cost ..."
Abstract
-
Cited by 106 (28 self)
Digital fingerprinting is a technique for identifying users who might try to use multimedia content for unintended purposes, such as redistribution. These fingerprints are typically embedded into the content using watermarking techniques that are designed to be robust to a variety of attacks. A cost-effective attack against such digital fingerprints is collusion, where several differently marked copies of the same content are combined to disrupt the underlying fingerprints. In this paper, we investigate the problem of designing fingerprints that can withstand collusion and allow for the identification of colluders. We begin by introducing the collusion problem for additive embedding. We then study the effect that averaging collusion has upon orthogonal modulation. We introduce an efficient detection algorithm for identifying the fingerprints associated with K colluders that requires O(K log(n/K)) correlations for a group of n users. We next develop a fingerprinting scheme based upon code modulation that does not require as many basis signals as orthogonal modulation. We propose a new class of codes, called anti-collusion codes (ACC), which have the property that the composition of any subset of K or fewer codevectors is unique. Using this property, we can therefore identify groups of K or fewer colluders. We present a construction of binary-valued ACC under the logical AND operation that uses the theory of combinatorial designs and is suitable for both the on-off keying and antipodal form of binary code modulation. In order to accommodate n users, our code construction requires only O(√n) orthogonal signals for a given number of colluders. We introduce four different detection strategies that can be used with our ACC for identifying a suspect set of colluders. We demonstrate th...
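A runnable toy of the BIBD-based AND-ACC: the (7,3,1) design (the Fano plane) yields 7 codevectors of length 7 that are 2-resilient, and a brute-force search recovers any coalition of at most two colluders from the AND of their codevectors. The detection-by-enumeration is a simplification of the paper's detection strategies.

```python
from itertools import combinations
import numpy as np

# Blocks of the (7,3,1)-BIBD (Fano plane) over points {0,...,6}
blocks = [{0,1,2}, {0,3,4}, {0,5,6}, {1,3,5}, {1,4,6}, {2,3,6}, {2,4,5}]

# Incidence matrix: rows are points, columns are blocks (= users)
M = np.array([[int(p in blk) for blk in blocks] for p in range(7)])

# AND-ACC construction: codevectors are bit complements of the incidence
# columns; for a (v,k,1)-BIBD this is (k-1)-resilient, here K = 2
codes = 1 - M

def collude(users):
    # Under the AND model, a coalition's composite mark is the logical
    # AND of its members' codevectors.
    out = np.ones(7, dtype=int)
    for u in users:
        out &= codes[:, u]
    return out

def identify(composite, K=2, n=7):
    # Brute force: the AND of any K or fewer codevectors is unique, so
    # exactly one coalition reproduces the observed composite.
    for size in range(1, K + 1):
        for T in combinations(range(n), size):
            if np.array_equal(collude(T), composite):
                return T
    return None

print(identify(collude((1, 4))))   # -> (1, 4): both colluders identified
```

In this toy, the AND of two distinct codevectors has 1s exactly at the two points lying outside both blocks, and that pair of points determines the pair of blocks uniquely: the ACC property in miniature.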
Vanish: Increasing Data Privacy with Self-Destructing Data
"... Today’s technical and legal landscape presents formidable challenges to personal data privacy. First, our increasing reliance on Web services causes personal data to be cached, copied, and archived by third parties, often without our knowledge or control. Second, the disclosure of private data has b ..."
Abstract
-
Cited by 98 (12 self)
Today’s technical and legal landscape presents formidable challenges to personal data privacy. First, our increasing reliance on Web services causes personal data to be cached, copied, and archived by third parties, often without our knowledge or control. Second, the disclosure of private data has become commonplace due to carelessness, theft, or legal actions. Our research seeks to protect the privacy of past, archived data — such as copies of emails maintained by an email provider — against accidental, malicious, and legal attacks. Specifically, we wish to ensure that all copies of certain data become unreadable after a user-specified time, without any specific action on the part of a user, and even if an attacker obtains both a cached copy of that data and the user’s cryptographic keys and passwords. This paper presents Vanish, a system that meets this challenge through a novel integration of cryptographic techniques with global-scale, P2P, distributed hash tables (DHTs). We implemented a proof-of-concept Vanish prototype to use both the million-plus-node Vuze BitTorrent DHT and the restricted-membership OpenDHT. We evaluate experimentally and analytically the functionality, security, and performance properties of Vanish, demonstrating that it is practical to use and meets the privacy-preserving goals described above. We also describe two applications that we prototyped on Vanish: a Firefox plugin for Gmail and other Web sites and a Vanishing File application.
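A self-contained sketch of the key lifecycle, with toy Shamir sharing over a Mersenne-prime field and a dict standing in for the DHT (thresholds and the churn model are illustrative, not Vanish's actual parameters):

```python
import random
import secrets

P = 2**127 - 1   # Mersenne prime; toy field for Shamir sharing

def split(secret, n, k):
    # Shamir (k, n) split: random degree-(k-1) polynomial with the secret
    # as constant term, evaluated at x = 1..n.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over GF(P).
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Vanish-style lifecycle (simplified): encrypt data under a random key,
# split the key, scatter shares at pseudorandom DHT indices, discard key.
key = secrets.randbelow(P)
shares = split(key, n=10, k=7)
assert reconstruct(shares[:7]) == key        # any 7 shares recover the key
dht = dict(enumerate(shares))                # a dict stands in for the DHT
# ... time passes; DHT churn expires nodes and the shares they stored ...
for idx in random.sample(sorted(dht), 5):
    del dht[idx]
print(len(dht) >= 7)   # False: fewer than k shares survive, the key is gone
```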
Higher-Order Wavelet Statistics and their Application to Digital Forensics
- in IEEE Workshop on Statistical Analysis in Computer Vision
, 2003
"... We describe a statistical model for natural images that is built upon a multi-scale wavelet decomposition. The model consists of first- and higher-order statistics that capture certain statistical regularities of natural images. We show how this model can be useful in several digital forensic applic ..."
Abstract
-
Cited by 80 (9 self)
We describe a statistical model for natural images that is built upon a multi-scale wavelet decomposition. The model consists of first- and higher-order statistics that capture certain statistical regularities of natural images. We show how this model can be useful in several digital forensic applications, specifically in detecting various types of digital tampering.
Multipurpose Watermarking for Image Authentication and Protection
- IEEE Transactions on Image Processing
, 2001
"... We propose a novel multipurpose watermarking scheme, in which robust and fragile watermarks are simultaneously embedded, for copyright protection and content authentication. By quantizing a host image's wavelet coefficients as masking threshold units (MTUs), two complementary watermarks are emb ..."
Abstract
-
Cited by 79 (7 self)
We propose a novel multipurpose watermarking scheme, in which robust and fragile watermarks are simultaneously embedded, for copyright protection and content authentication. By quantizing a host image's wavelet coefficients as masking threshold units (MTUs), two complementary watermarks are embedded using cocktail watermarking and they can be blindly extracted without access to the host image. For the purpose of image protection, the new scheme guarantees that, no matter what kind of attack is encountered, at least one watermark can survive well. On the other hand, for the purpose of image authentication, our approach can locate the part of the image that has been changed and tolerate some incidental processes that have been executed. Experimental results show that the performance of our multipurpose watermarking scheme is indeed superb in terms of robustness and fragility. Keywords: Robust watermarking, Copyright protection, Fragile watermarking, Authentication.
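A simplified stand-in for the robust/fragile split: quantizing wavelet coefficients with two step sizes via the same dither modulation as in the QIM entry above. The authors' actual scheme uses masking threshold units and cocktail watermarking, which this sketch does not reproduce; it only shows why a coarse step survives perturbation while a fine step breaks, and why extraction is blind.

```python
import numpy as np
import pywt

def dm_embed(c, bits, delta):
    # Dither modulation: bit 0 selects the quantizer dithered by -delta/4,
    # bit 1 the one dithered by +delta/4.
    d = np.where(bits == 0, -delta / 4, delta / 4)
    return delta * np.round((c - d) / delta) + d

def dm_extract(c, delta):
    q0 = delta * np.round((c + delta / 4) / delta) - delta / 4
    q1 = delta * np.round((c - delta / 4) / delta) + delta / 4
    return (np.abs(c - q1) < np.abs(c - q0)).astype(int)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (256, 256)).astype(float)
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
robust = rng.integers(0, 2, cA.size)    # coarse step: survives mild attacks
fragile = rng.integers(0, 2, cD.size)   # fine step: breaks on any tampering
cA = dm_embed(cA.ravel(), robust, 16.0).reshape(cA.shape)
cD = dm_embed(cD.ravel(), fragile, 0.5).reshape(cD.shape)
marked = pywt.idwt2((cA, (cH, cV, cD)), "haar")
# Blind extraction: only the marked image and the step sizes are needed.
cA2, (_, _, cD2) = pywt.dwt2(marked, "haar")
print((dm_extract(cA2.ravel(), 16.0) == robust).mean(),   # 1.0
      (dm_extract(cD2.ravel(), 0.5) == fragile).mean())   # 1.0 until edited
```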