Results 1–10 of 34
Hidden Markov processes
IEEE Trans. Inform. Theory, 2002
Abstract
Cited by 264 (5 self)
Abstract—An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood (ML) parameter estimator were proved under some mild conditions. Similar results were established for switching autoregressive processes. These processes generalize HMPs. New algorithms were developed for estimating the state, parameter, and order of an HMP, for universal coding and classification of HMPs, and for universal decoding of hidden Markov channels. These and other related topics are reviewed in this paper. Index Terms—Baum–Petrie algorithm, entropy ergodic theorems, finite-state channels, hidden Markov models, identifiability, Kalman filter, maximum-likelihood (ML) estimation, order estimation, recursive parameter estimation, switching autoregressive processes, Ziv inequality.
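The HMP definition in this abstract (a finite-state Markov chain observed through a memoryless channel) can be made concrete with a minimal sketch; all matrices below are illustrative, not from the paper. Sampling follows the definition directly, and the likelihood is computed by the standard scaled forward recursion:

```python
# Minimal HMP sketch: a two-state homogeneous Markov chain observed
# through a memoryless binary channel. A, B, pi are illustrative.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],   # state transition matrix (homogeneous chain)
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],   # memoryless channel: P(observation | state)
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])   # initial state distribution

def sample_hmp(T):
    """Draw a length-T observation sequence from the HMP."""
    x = rng.choice(2, p=pi)
    ys = []
    for _ in range(T):
        ys.append(rng.choice(2, p=B[x]))   # observe state through channel
        x = rng.choice(2, p=A[x])          # advance the hidden chain
    return ys

def log_likelihood(ys):
    """Scaled forward recursion: log P(y_1..y_T) under the HMP."""
    alpha = pi * B[:, ys[0]]
    c = alpha.sum()
    ll = np.log(c)
    alpha = alpha / c
    for y in ys[1:]:
        alpha = (alpha @ A) * B[:, y]
        c = alpha.sum()
        ll += np.log(c)
        alpha = alpha / c    # rescale to avoid numerical underflow
    return float(ll)
```

Dividing `-log_likelihood(ys)` by the sequence length gives an empirical estimate of the entropy rate, the quantity whose ergodic behavior the surveyed relative-entropy theorems concern.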
Malicious Data Attacks on the Smart Grid
Abstract
Cited by 38 (6 self)
Malicious attacks against power systems are investigated, in which an adversary controls a set of meters and is able to alter the measurements from those meters. Two regimes of attacks are considered. The strong attack regime is where the adversary attacks a sufficient number of meters so that the network state becomes unobservable by the control center. For attacks in this regime, the smallest set of attacked meters capable of causing network unobservability is characterized using a graph-theoretic approach. By casting the problem as one of minimizing a supermodular graph functional, the problem of identifying the smallest set of vulnerable meters is shown to have polynomial complexity. For the weak attack regime where the adversary controls only a small number of meters, the problem is examined from a decision-theoretic perspective for both the control center and the adversary. For the control center, a generalized likelihood ratio detector is proposed that incorporates historical data. For the adversary, the tradeoff between maximizing estimation error at the control center and minimizing detection probability of the launched attack is examined. An optimal attack based on minimum energy leakage is proposed.
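The unobservability idea behind the strong attack regime can be illustrated in a linearized (DC) state-estimation model z = Hx + e; the H matrix, noise level, and residual-based detector below are a hedged sketch, not the paper's detector. Any attack vector of the form a = Hc lies in the column space of H, so it shifts the state estimate while leaving the measurement residual, and hence any residual-based test, unchanged:

```python
# Sketch of why attacks in the column space of H are undetectable by a
# residual test. H and sigma are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
n_meters, n_states = 8, 3
H = rng.standard_normal((n_meters, n_states))   # measurement matrix
sigma = 0.1                                     # meter noise level

def residual_norm(z):
    """Norm of the residual after least-squares state estimation."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return float(np.linalg.norm(z - H @ x_hat))

x = rng.standard_normal(n_states)
z_clean = H @ x + sigma * rng.standard_normal(n_meters)

# Unobservable attack: a = H c changes the estimate but not the residual.
c = rng.standard_normal(n_states)
z_attacked = z_clean + H @ c

r0 = residual_norm(z_clean)
r1 = residual_norm(z_attacked)   # equals r0 up to floating-point error
```

By contrast, an arbitrary corruption of a single meter generically leaves the column space of H and inflates the residual, which is what a detector such as the paper's generalized likelihood ratio test exploits in the weak attack regime.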
A framework for spectrally efficient noncoherent communication
2000
Abstract
Cited by 32 (6 self)
Abstract—This paper considers noncoherent communication over a frequency-nonselective channel in which the time-varying channel gain is unknown a priori, but is approximately constant over a coherence interval. Unless the coherence interval is large, coherent communication, which requires explicit channel estimation and tracking prior to detection, incurs training overhead which may be excessive, especially for multiple-antenna communication. In contrast, noncoherent detection may be viewed as a generalized likelihood ratio test (GLRT) which jointly estimates the channel and the data, and hence does not require separate training. The main results in this paper are as follows. 1) We develop a “signal space” criterion for signal and code design for noncoherent communication, in terms of the distances of signal points from the decision boundaries. 2) The noncoherent metric thus obtained is used to guide the design of signals for noncoherent communication that are based on amplitude/phase constellations. These are significantly more efficient than conventional differential phase-shift keying (PSK), especially at high signal-to-noise ratio (SNR). Also, known results on the high-SNR performance of multiple-symbol demodulation of differential PSK are easily inferred from the noncoherent metric. 3) The GLRT interpretation is used to obtain near-optimal low-complexity implementations of noncoherent block demodulation. In particular, this gives an implementation of multiple-symbol demodulation of differential PSK which is of linear complexity (in the block length) and whose degradation from the exact, exponential-complexity implementation can be made as small as desired. Index Terms—Differential phase-shift keying (PSK), differential quadrature amplitude modulation (QAM), generalized likelihood ratio test (GLRT), noncoherent communication, noncoherent distance.
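The GLRT view of noncoherent detection has a simple closed form for a single unknown complex gain in Gaussian noise: maximizing the likelihood over the gain reduces to scoring each candidate block signal s by |⟨s, y⟩|² / ‖s‖². The toy constellation below is illustrative, not one of the paper's designs:

```python
# Hedged sketch of noncoherent GLRT block detection: the channel gain h
# is unknown, so each candidate signal is scored by the gain-maximized
# likelihood, which reduces to the normalized correlation magnitude.
import numpy as np

rng = np.random.default_rng(2)

def glrt_detect(y, signals):
    """Return the index of the signal maximizing |<s, y>|^2 / ||s||^2."""
    scores = [abs(np.vdot(s, y)) ** 2 / np.vdot(s, s).real for s in signals]
    return int(np.argmax(scores))

# Toy two-signal constellation over a block of length 4 (illustrative).
s0 = np.array([1, 1, 1, 1], dtype=complex)
s1 = np.array([1, 1j, -1, -1j], dtype=complex)

h = 0.7 * np.exp(1j * 1.3)   # unknown gain, constant over the block
y = h * s1 + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
```

Note the metric is invariant to scaling and rotating y by any complex constant, which is exactly why no training or channel tracking is needed.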
Universal Composite Hypothesis Testing: A Competitive Minimax Approach
2001
Abstract
Cited by 30 (10 self)
A novel approach is presented for the long-standing problem of composite hypothesis testing. In composite hypothesis testing, unlike in simple hypothesis testing, the probability function of the observed data given the hypothesis is uncertain, as it depends on the unknown value of some parameter. The proposed approach is to minimize the worst-case ratio between the probability of error of a decision rule that is independent of the unknown parameters and the minimum probability of error attainable given the parameters. The principal solution to this minimax problem is presented and the resulting decision rule is discussed. Since the exact solution is, in general, hard to find, and a fortiori hard to implement, an approximation method that yields an asymptotically minimax decision rule is proposed. Finally, a variety of potential application areas are provided in signal processing and communications, with special emphasis on universal decoding.
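The competitive criterion described above can be illustrated numerically: over a small family of decision rules, pick the one minimizing the worst-case ratio of its error probability to that of the optimal parameter-aware test. The Bernoulli setup, equal priors, and restriction to threshold rules below are illustrative assumptions, not the paper's construction:

```python
# Toy competitive-minimax illustration: choose a single threshold test
# whose error probability is, in the worst case over the unknown
# parameters, closest (in ratio) to the parameter-aware optimum.
from math import comb

n = 20
H0 = [0.2, 0.3]   # possible Bernoulli parameters under hypothesis 0
H1 = [0.7, 0.8]   # possible Bernoulli parameters under hypothesis 1

def binom_pmf(k, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def pe_threshold(t, p0, p1):
    """Average error (equal priors) of the rule 'decide H1 iff sum > t'."""
    miss = sum(binom_pmf(k, p1) for k in range(0, t + 1))
    fa = sum(binom_pmf(k, p0) for k in range(t + 1, n + 1))
    return 0.5 * (miss + fa)

def pe_optimal(p0, p1):
    """Parameter-aware optimum (also a threshold test in this model)."""
    return min(pe_threshold(t, p0, p1) for t in range(-1, n + 1))

def worst_case_ratio(t):
    return max(pe_threshold(t, p0, p1) / pe_optimal(p0, p1)
               for p0 in H0 for p1 in H1)

best_t = min(range(-1, n + 1), key=worst_case_ratio)
```

The ratio is never below 1 (the universal rule cannot beat the parameter-aware optimum), and minimizing its worst case is exactly the competitive objective the abstract describes.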
Optimal error exponents in hidden Markov models order estimation
IEEE Trans. Inf. Theory, 2003
Abstract
Cited by 28 (6 self)
Abstract—We consider the estimation of the number of hidden states (the order) of a discrete-time finite-alphabet hidden Markov model (HMM). The estimators we investigate are related to code-based order estimators: penalized maximum-likelihood (ML) estimators and penalized versions of the mixture estimator introduced by Liu and Narayan. We prove strong consistency of these estimators without assuming any a priori upper bound on the order, and with smaller penalties than in previous works. We prove a version of Stein’s lemma for HMM order estimation and derive an upper bound on underestimation exponents. Then we prove that this upper bound can be achieved by the penalized ML estimator and by the penalized mixture estimator. The proof of the latter result gets around the elusive nature of the ML in HMMs by resorting to large-deviation techniques for empirical processes. Finally, we prove that for any consistent HMM order estimator, for most HMMs, the overestimation exponent is null. Index Terms—Composite hypothesis testing, error exponents, generalized likelihood ratio testing, hidden Markov model (HMM), large deviations, order estimation, Stein’s lemma.
Detection of Hiding in the Least Significant Bit
2003
Abstract
Cited by 20 (2 self)
We consider the problem of detecting hiding in the least significant bit (LSB) of images. Since the hiding rate is not known, this is a composite hypothesis testing problem. We show that under a mild condition on the host probability mass function (PMF), the optimal test for this composite hypothesis testing problem coincides with that of a related simple hypothesis testing problem. We then develop practical tests based on the optimal test and exhibit their superiority over Stegdetect, a popular steganalysis method used in practice.
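A classical statistic exploited by LSB steganalysis (a pairs-of-values, chi-square-style test, not necessarily the optimal test this paper derives) illustrates why LSB hiding is detectable at all: replacing LSBs with message bits equalizes the counts of each value pair (2i, 2i+1), so a cover histogram scores high on within-pair imbalance and a stego histogram scores low. The synthetic cover model below is purely illustrative:

```python
# Hedged sketch of a pairs-of-values LSB detector on synthetic data.
import numpy as np

rng = np.random.default_rng(4)

vals = np.arange(32)
p = 0.8 ** vals
p /= p.sum()
cover = rng.choice(32, size=20000, p=p)   # synthetic "cover" pixel values

def embed_lsb(pixels, rate):
    """Overwrite the LSB of a random fraction `rate` of pixels."""
    out = pixels.copy()
    mask = rng.random(out.size) < rate
    out[mask] = (out[mask] & ~1) | rng.integers(0, 2, mask.sum())
    return out

def pair_imbalance(pixels):
    """Sum over pairs (2i, 2i+1) of (n_even - n_odd)^2 / (n_even + n_odd)."""
    counts = np.bincount(pixels, minlength=32)
    n_even, n_odd = counts[0::2], counts[1::2]
    tot = n_even + n_odd
    nz = tot > 0
    return float((((n_even - n_odd) ** 2)[nz] / tot[nz]).sum())

stego = embed_lsb(cover, rate=1.0)   # full-rate embedding flattens pairs
```

The composite-hypothesis difficulty the abstract addresses is that at partial, unknown rates the pair counts are only partially equalized, so the detection threshold depends on the unknown rate.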
Optimal simultaneous detection and estimation under a false alarm constraint
IEEE Trans. Inform. Theory, 1995
Abstract
Cited by 19 (4 self)
Abstract—This paper addresses the problem of finite-sample simultaneous detection and estimation, which arises when estimation of signal parameters is desired but signal presence is uncertain. In general, a joint detection and estimation algorithm cannot simultaneously achieve optimal detection and optimal estimation performance. In this paper we develop a multihypothesis testing framework for studying the tradeoffs between detection and parameter estimation (classification) for a finite discrete parameter set. Our multihypothesis testing problem is based on the worst-case detection and worst-case classification error probabilities of the class of joint detection and classification algorithms which are subject to a false alarm constraint. This framework leads to the evaluation of greatest lower bounds on the worst-case decision error probabilities and a construction of decision rules which achieve these lower bounds. For illustration, we apply these methods to signal detection, order selection, and signal classification for a multicomponent signal in noise model. For two or fewer signals, an SNR of 3 dB, and a signal space dimension of N = 10, numerical results are obtained which establish the existence of fundamental tradeoffs between three performance criteria: probability of signal detection, probability of correct order selection, and probability of correct classification. Furthermore, based on numerical performance comparisons between our optimal decision rule and other suboptimal penalty function methods, we observe that Rissanen’s order selection penalty method is nearly min-max optimal in some nonasymptotic regimes. Index Terms—Simultaneous decisions, fundamental tradeoffs, min-max criterion, order selection, signal classification, signal
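A common (suboptimal) baseline for the joint problem the abstract studies is: declare a signal present when the best per-class likelihood ratio exceeds a threshold calibrated to the false-alarm constraint, then classify by the maximizing class. The Gaussian model, two-signal dictionary, and empirical calibration below are a hedged sketch of that baseline, not the paper's optimal min-max rule:

```python
# Joint detection and classification under a false-alarm constraint:
# threshold the max per-class log likelihood ratio, then classify.
import numpy as np

rng = np.random.default_rng(7)
dim, sigma = 10, 1.0
signals = np.eye(dim)[:2] * 3.0   # two candidate signals (illustrative)

def score(y):
    """Max over classes of the log likelihood ratio vs. noise-only."""
    llrs = signals @ y / sigma**2 - np.sum(signals**2, axis=1) / (2 * sigma**2)
    return float(llrs.max()), int(llrs.argmax())

# Calibrate the threshold for ~5% false alarm on noise-only samples.
noise_scores = [score(sigma * rng.standard_normal(dim))[0]
                for _ in range(2000)]
tau = float(np.quantile(noise_scores, 0.95))

def detect_classify(y):
    s, k = score(y)
    return (s > tau), k
```

The paper's contribution is precisely to quantify how far rules like this sit from the greatest lower bounds on worst-case detection and classification error, and to construct rules that attain those bounds.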
Universal and composite hypothesis testing via mismatched divergence
IEEE Trans. Inf. Theory
Spatiotemporal network anomaly detection by assessing deviations of empirical measures
IEEE/ACM Trans. Netw., 2009
Abstract
Cited by 7 (2 self)
Abstract—We introduce an Internet traffic anomaly detection mechanism based on large deviations results for empirical measures. Using past traffic traces, we characterize network traffic during various time-of-day intervals, assuming that it is anomaly-free. We present two different approaches to characterize traffic: (i) a model-free approach based on the method of types and Sanov’s theorem, and (ii) a model-based approach modeling traffic using a Markov modulated process. Using these characterizations as a reference, we continuously monitor traffic and employ large deviations and decision theory results to “compare” the empirical measure of the monitored traffic with the corresponding reference characterization, thus identifying traffic anomalies in real time. Our experimental results show that with our methodology even short-lived anomalies are identified within a small number of observations. Throughout, we compare the two approaches, presenting their advantages and disadvantages for identifying and classifying temporal network anomalies. We also demonstrate how our framework can be used to monitor traffic from multiple network elements in order to identify both spatial and temporal anomalies. We validate our techniques by analyzing real traffic traces with timestamped anomalies. Index Terms—Large deviations, Markov processes, method of types, network security, statistical anomaly detection.
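The model-free approach can be sketched in a few lines: form the empirical measure (type) of a monitoring window and flag it when its KL divergence from the anomaly-free reference exceeds a threshold; by Sanov's theorem the false-alarm probability then decays roughly like exp(-n·eta). The three-symbol alphabet, reference distribution, and threshold below are illustrative, not from the paper:

```python
# Model-free anomaly detection via the method of types: compare the
# empirical measure of a window to the reference using KL divergence.
import numpy as np

rng = np.random.default_rng(5)

ref = np.array([0.7, 0.2, 0.1])    # anomaly-free reference distribution
anom = np.array([0.3, 0.3, 0.4])   # illustrative anomalous traffic mix

def kl(p, q):
    """KL divergence D(p || q) in nats (0 log 0 := 0)."""
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))

def detect(window, eta=0.05):
    """Flag the window when D(empirical || ref) exceeds the threshold."""
    emp = np.bincount(window, minlength=len(ref)) / len(window)
    return kl(emp, ref) > eta

normal_win = rng.choice(3, size=500, p=ref)
anom_win = rng.choice(3, size=500, p=anom)
```

The model-based variant in the abstract replaces the i.i.d. empirical measure with an empirical measure over Markov-modulated state transitions, but the thresholded-divergence decision is the same.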
Robust and distributed stochastic localization in sensor networks: Theory and experimental results
 ACM Trans. Sensor Networks
Abstract
Cited by 7 (4 self)
We present a robust localization system allowing wireless sensor networks to determine the physical location of their nodes. The coverage area is partitioned into regions, and we seek to identify the region of a sensor based on observations by stationary cluster-heads. Observations (e.g., signal strength) are assumed random. We pose the localization problem as a composite multihypothesis testing problem, develop the requisite theory, and address the problem of optimally placing cluster-heads. We show that localization decisions can be distributed by appropriate in-network processing. The approach is validated in a testbed, yielding promising results.
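The hypothesis-testing formulation can be sketched concretely: each candidate region induces a distribution of signal strengths at the cluster-heads, and the sensor's region is the hypothesis maximizing the observation likelihood. The equal-variance Gaussian RSSI model and the mean table below are illustrative assumptions, under which the ML decision reduces to nearest-mean classification:

```python
# Region localization as M-ary hypothesis testing under an illustrative
# Gaussian RSSI model (means and sigma are not from the paper).
import numpy as np

rng = np.random.default_rng(6)

# mean RSSI (dBm) at 2 cluster-heads for each of 3 candidate regions
means = np.array([[-40.0, -80.0],
                  [-60.0, -60.0],
                  [-80.0, -40.0]])
sigma = 3.0   # RSSI noise standard deviation

def localize(obs):
    """ML decision: region whose mean RSSI vector is closest to obs
    (equivalent to max likelihood under equal-variance Gaussian noise)."""
    return int(np.argmin(np.sum((means - obs) ** 2, axis=1)))

true_region = 2
obs = means[true_region] + sigma * rng.standard_normal(2)
```

The composite aspect in the paper arises because the observation distributions themselves are uncertain; the robust formulation guards the decision against that uncertainty rather than assuming known means as here.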