Results 1–10 of 17
Blind Newton Sensitivity Attack
 IEE Proceedings - Information Security 153, 2006
Abstract

Cited by 20 (2 self)
Until now, the sensitivity attack was considered a serious threat to the robustness and security of spread-spectrum-based schemes, since it provides a practical method of removing watermarks with minimum attacking distortion. Nevertheless, it had not been used to remove the watermark from other watermarking algorithms, such as those which use side-information. Furthermore, the sensitivity attack had never been used to obtain falsely watermarked contents, also known as forgeries. In this paper a new version of the sensitivity attack based on a general formulation is proposed; this method requires no knowledge of the detection function nor of any other system parameter, only the binary output of the detector, making it suitable for attacking most known watermarking methods. The new approach is validated with experiments.
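The core primitive an attack of this family needs — locating a point on the detection boundary using only the detector's binary output — can be sketched as below. This is an illustrative toy, not the paper's algorithm (which adds a Newton-style estimation step on top of this primitive); the correlation detector, signals, and tolerance are all invented for the example.

```python
import numpy as np

def boundary_point(detector, x_marked, x_unmarked, tol=1e-6):
    """Bisect the segment between a watermarked signal (detector -> True)
    and an attacked copy (detector -> False), querying only the binary
    output, until the two endpoints pinch a point on the boundary."""
    a, b = x_marked, x_unmarked
    assert detector(a) and not detector(b)
    while np.linalg.norm(b - a) > tol:
        m = 0.5 * (a + b)
        if detector(m):
            a = m          # midpoint still inside the detection region
        else:
            b = m          # midpoint already outside
    return 0.5 * (a + b)

# Toy correlation detector with a secret key w, unknown to the attacker.
rng = np.random.default_rng(0)
w = rng.standard_normal(64)
w /= np.linalg.norm(w)
detector = lambda x: float(x @ w) > 0.5

x_marked = rng.standard_normal(64) * 0.1 + 2.0 * w   # detector says True
x_attacked = np.zeros(64)                            # detector says False
p = boundary_point(detector, x_marked, x_attacked)
print(abs(p @ w - 0.5) < 1e-3)   # prints True: p sits on the boundary
```

Roughly 20 bisection steps suffice here; a full attack would repeat this from many directions to map out (or step along) the boundary.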
A constructive and unifying framework for zero-bit watermarking
 Submitted to IEEE Trans. Information Forensics and Security, 2006
Abstract

Cited by 8 (3 self)
Abstract—In the watermark detection scenario, also known as zero-bit watermarking, a watermark carrying no hidden message is inserted in a piece of content. The watermark detector checks for the presence of this particular weak signal in received contents. The article looks at this problem from a classical detection theory point of view, but with side information enabled at the embedding side: the watermark signal is a function of the host content. Our study is twofold. The first step is to design the best embedding function for a given detection function, and the best detection function for a given embedding function. This yields two conditions, which are mixed into one 'fundamental' partial differential equation. It appears that many famous watermarking schemes are indeed solutions of this 'fundamental' equation. This study thus gives birth to a constructive framework unifying solutions so far perceived as very different. Index Terms—Detection theory, Pitman–Noether theorem, zero-bit watermarking.
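The "best embedding for a given detection function" half of the problem has a simple intuition: spend the embedding distortion pushing the host along the gradient of the detection statistic. A minimal sketch, with an invented plain-correlation detector standing in for any differentiable detection function:

```python
import numpy as np

def embed_gradient(x, grad, dist_budget):
    """Side-informed embedding sketch: move the host x along the gradient
    of the detection statistic at x, spending exactly dist_budget of
    (Euclidean) embedding distortion. Names are illustrative."""
    g = grad(x)
    return x + dist_budget * g / np.linalg.norm(g)

# Example detection function: t(y) = y @ w with a unit-norm secret key w.
rng = np.random.default_rng(1)
w = rng.standard_normal(32)
w /= np.linalg.norm(w)
t = lambda y: float(y @ w)
grad = lambda y: w              # gradient of y @ w is the constant w

x = rng.standard_normal(32)
y = embed_gradient(x, grad, dist_budget=1.0)
print(round(t(y) - t(x), 6))    # prints 1.0: the statistic gains the budget
```

For this linear detector the gradient step is the classical additive spread-spectrum embedder; for nonlinear detectors the same step produces host-dependent watermarks, which is the side-information effect the abstract describes.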
The Return of the Sensitivity Attack
 In Lecture Notes in Computer Science, 2005
Abstract

Cited by 7 (0 self)
The sensitivity attack is considered a serious threat to the security of spread-spectrum-based schemes, since it provides a practical method of removing watermarks with minimum attacking distortion. This paper …
Squin, "Some theoretical aspects of watermarking detection"
 In Proc. Security, Steganography and Watermarking of Multimedia Contents, 2006
Abstract

Cited by 6 (2 self)
This paper considers watermarking detection, also known as zero-bit watermarking. A watermark, carrying no hidden message, is inserted in content. The watermark detector checks for the presence of this particular weak signal in content. The paper looks at this problem from a classical detection theory point of view, but with side information enabled at the embedding side. This means that the watermarking signal is a function of the host content. Our study is twofold. The first issue is to design the best embedding function for a given detection function (a Neyman–Pearson detector structure is assumed). The second issue is to find the best detection function for a given embedding function. This yields two conditions, which are mixed into one 'fundamental' differential equation. Solutions to this equation are optimal in these two senses. Interestingly, there are solutions other than the regular quantization index modulation scheme. The JANIS scheme, for instance, invented in a heuristic manner several years ago, is justified as one of these solutions.
Zero-knowledge watermark detector robust to sensitivity attacks
 In 8th ACM Multimedia and Security Workshop
Abstract

Cited by 4 (0 self)
Current zero-knowledge watermark detectors are based on a linear correlation between the asset features and a given secret sequence. This detection function is susceptible to sensitivity attacks, against which zero-knowledge provides no protection. In this paper a new zero-knowledge watermark detector robust to sensitivity attacks is presented, using the Generalized Gaussian Maximum Likelihood (ML) detector as its basis. The inherent robustness of this detector against sensitivity attacks, together with the security provided by the zero-knowledge protocol, which conceals the keys that could be used to remove the watermark or to produce forged assets, results in a robust and secure protocol. Additionally, two new zero-knowledge proofs for modulus and square-root calculation are presented; they serve as building blocks for the zero-knowledge implementation of the Generalized Gaussian ML detector, and also open new possibilities in the design of high-level protocols.
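The plaintext detection statistic underlying a generalized-Gaussian ML detector can be sketched in a few lines (scale constants dropped; shape parameter c = 2 collapses to the correlation-like Gaussian case, while small c gives the heavy-tailed regime). This shows only the statistic; the paper's contribution, the zero-knowledge protocol that evaluates it without revealing the key, is not reproduced here.

```python
import numpy as np

def gg_ml_statistic(y, w, c):
    """Likelihood-ratio-style statistic for 'watermark w present in y'
    under an i.i.d. generalized-Gaussian host model with shape c.
    Larger values favour the watermarked hypothesis. Sketch only."""
    return float(np.sum(np.abs(y) ** c - np.abs(y - w) ** c))

# Tiny sanity check with an exactly watermarked zero host vs. an empty one.
w = np.array([1.0, -2.0, 0.5])
s_marked = gg_ml_statistic(w, w, c=2.0)            # host 0 plus watermark
s_empty = gg_ml_statistic(np.zeros(3), w, c=2.0)   # no watermark
print(s_marked, s_empty)                           # prints 5.25 -5.25
```

A detector would compare the statistic against a threshold set for a target false-alarm rate; the non-linearity in c is what breaks the flat, easily probed boundary of plain correlation detectors.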
Performance analysis of scalar DC-QIM for watermark detection
 In Proc. of ICASSP'06, 2006
Abstract

Cited by 3 (2 self)
Quantization-based schemes, such as scalar DC-QIM, have demonstrated performance merits for the data-hiding problem, which is mainly a transmission problem. However, a number of applications are stated in terms of a watermark detection problem (also named one-bit watermarking), and this situation has seldom been addressed in the literature for quantization-based techniques. In this context, we carry out a complete performance analysis of uniform-quantizer-based schemes with distortion compensation (DC) under additive white Gaussian noise. Implementing an exact Neyman–Pearson test and using large-deviation theory, performance is evaluated in terms of the Receiver Operating Characteristic (ROC) and the probability of error. Optimal DC values with respect to ROC performance are derived. It is pointed out that false-alarm and missed-detection capabilities are jointly optimized by the same DC value. Performance is then compared with raw quantization schemes (i.e., without DC) and spread-spectrum (SS) watermarking. It is shown that DC-QIM always outperforms QIM and SS for the detection task. The gain provided by the DC reaches several orders of magnitude for cases of interest, that is, for low watermark-to-noise regimes. A short comparison is also provided with respect to the corresponding transmission problem, thus evaluating the loss in performance due to detection.
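Scalar QIM embedding with distortion compensation is compact enough to show directly; a minimal sketch, with illustrative parameter names (alpha is the DC factor the abstract says jointly optimizes false-alarm and missed-detection performance; alpha = 1 recovers plain QIM):

```python
import numpy as np

def dc_qim_embed(x, delta, alpha, key_shift=0.0):
    """Scalar DC-QIM sketch: quantize the host x onto a lattice of step
    delta (optionally shifted by a secret key), then move only a fraction
    alpha of the way toward the lattice point. alpha = 1 is plain QIM;
    alpha < 1 leaves a compensating residual that trades embedding
    distortion against noise robustness."""
    q = delta * np.round((x - key_shift) / delta) + key_shift
    return x + alpha * (q - x)

x = np.array([0.30, 1.26, -0.71])
y = dc_qim_embed(x, delta=1.0, alpha=0.8)
print(y)   # each sample moved 80% of the way to its nearest integer
```

A detector for this scheme looks at how close received samples fall to the secret lattice: watermarked data cluster near the lattice points, unwatermarked data spread uniformly across the cells.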
Asymptotically optimal scalar quantizers for QIM watermark detection
 In 2006 International Conference on Multimedia & Expo (ICME), 2006
Abstract

Cited by 1 (0 self)
This paper investigates asymptotically optimal scalar quantizers to address QIM watermark detection with i.i.d. host data and additive noise. The false-alarm probability of detection is chosen as the cost to be minimized, keeping the embedding distortion and the miss probability upper-bounded. To avoid the intractability of the false-alarm probability, the Kullback distance between watermarked and non-watermarked data is adopted instead. The problem is then to seek the quantizer which maximizes the false-alarm error exponent under a distortion constraint. Using Lagrange-multiplier minimization, a Lloyd-Max-like quantizer-updating procedure is used to solve the optimization. In the experiments, host data and noise are Gaussian. In comparison with uniform or Lloyd-Max quantizers, it turns out that detection performance can be notably enhanced by using the proposed application-optimized quantizers. The gain is effective even for a small number N of samples at the detector input; however, it becomes more substantial as N grows. This also emphasises that quantizers that are good in terms of distortion are not suitable for the detection task.
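For reference, the plain distortion-optimal Lloyd-Max iteration that the paper's procedure resembles (and that it uses as a baseline) alternates two updates: thresholds become midpoints of adjacent codewords, codewords become conditional means of their cells. A minimal empirical sketch, with invented parameters; the paper instead optimizes a Lagrangian mixing distortion and the false-alarm error exponent:

```python
import numpy as np

def lloyd_max(samples, levels, iters=100):
    """Empirical Lloyd-Max iteration: minimizes mean squared quantization
    error over the given samples. This is the distortion-optimal baseline,
    not the detection-optimized quantizer of the paper."""
    c = np.linspace(samples.min(), samples.max(), levels)  # initial codebook
    for _ in range(iters):
        t = 0.5 * (c[:-1] + c[1:])          # thresholds = codeword midpoints
        idx = np.searchsorted(t, samples)   # nearest-codeword cell index
        for k in range(levels):
            cell = samples[idx == k]
            if cell.size:
                c[k] = cell.mean()          # codeword = cell centroid
    return np.sort(c)

rng = np.random.default_rng(3)
c = lloyd_max(rng.standard_normal(20000), levels=2)
print(c)   # the 2-level Lloyd-Max codebook for N(0,1) is near ±0.798
```

The abstract's closing point is visible against this baseline: the codebook that minimizes distortion is generally not the one that maximizes the separation (Kullback distance) between watermarked and non-watermarked statistics.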
Asymptotically Optimum Universal Watermark Embedding and Detection in the High-SNR Regime
Abstract
Abstract—The problem of optimum watermark embedding and detection was addressed in a recent paper by Merhav and Sabbag, where the optimality criterion was the maximum false-negative error exponent subject to a guaranteed false-positive error exponent. In particular, Merhav and Sabbag derived universal asymptotically optimum embedding and detection rules under the assumption that the detector relies solely on second-order joint empirical statistics of the received signal and the watermark. In the case of a Gaussian host signal and a Gaussian attack, however, closed-form expressions for the optimum embedding strategy and the false-negative error exponent were not obtained in that work. In this paper, we derive the false-negative error exponent for any given embedding strategy and use this result to show that in general the optimum embedding rule depends on the variance of the host sequence and the variance of the attack noise. We then focus on the high signal-to-noise ratio (SNR) regime, deriving the optimum embedding strategy for this setup. In that case, a universally optimum embedding rule turns out to exist and to be very simple, with an intuitively appealing geometrical interpretation. The effectiveness of the newly proposed embedding strategy is evaluated numerically. Index Terms—Hypothesis testing, Neyman–Pearson, watermark detection, watermark embedding, watermarking.
Witsenhausen's counterexample and its links with multimedia security problems