Results 1 - 5 of 5
Data Security
 ACM Computing Surveys
, 1979
Abstract

Cited by 611 (3 self)
The rising abuse of computers and increasing threat to personal privacy through data banks have stimulated much interest in the technical safeguards for data. There are four kinds of safeguards, each related to but distinct from the others. Access controls regulate which users may enter the system and subsequently which data sets an active user may read or write. Flow controls regulate the dissemination of values among the data sets accessible to a user. Inference controls protect statistical databases by preventing questioners from deducing confidential information by posing carefully designed sequences of statistical queries and correlating the responses. Statistical data banks are much less secure than most people believe. Data encryption attempts to prevent unauthorized disclosure of confidential information in transit or in storage. This paper describes the general nature of controls of each type, the kinds of problems they can and cannot solve, and their inherent limitations and weaknesses. The paper is intended for a general audience with little background in the area.
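The inference threat the abstract describes can be made concrete with a toy example. The sketch below (hypothetical records and attribute names, not from the paper) shows the simplest such compromise: differencing two overlapping aggregate queries to isolate one individual's confidential value.

```python
# Hypothetical statistical database: each record is one individual.
records = [
    {"name": "A", "dept": "sales", "salary": 50},
    {"name": "B", "dept": "sales", "salary": 60},
    {"name": "C", "dept": "hr", "salary": 70},
]

def query_sum(pred):
    """Answer an aggregate SUM(salary) query over records matching pred."""
    return sum(r["salary"] for r in records if pred(r))

# An attacker who knows C is the only employee in "hr" needs no direct
# query about C: two legitimate aggregate queries suffice.
total = query_sum(lambda r: True)                       # sum over everyone
sales_only = query_sum(lambda r: r["dept"] == "sales")  # everyone but C
leaked = total - sales_only                             # C's exact salary
print(leaked)  # 70
```

Simple inference controls such as minimum query-set sizes do not by themselves stop this kind of differencing, which is one reason the paper argues statistical data banks are less secure than commonly believed.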
Secure statistical database with random sample queries
 ACM Transactions on Database Systems
, 1980
Abstract

Cited by 80 (0 self)
A new inference control, called random sample queries, is proposed for safeguarding confidential data in online statistical databases. The random sample queries control deals directly with the basic principle of compromise by making it impossible for a questioner to control precisely the formation of query sets. Queries for relative frequencies and averages are computed using random samples drawn from the query sets. The sampling strategy permits the release of accurate and timely statistics and can be implemented at very low cost. Analysis shows the relative error in the statistics decreases as the query set size increases; in contrast, the effort required to compromise increases with the query set size due to large absolute errors. Experiments performed on a simulated database support the analysis.
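A minimal sketch of the random-sample-queries idea (the hashing scheme, function names, and parameter p here are illustrative assumptions, not the paper's exact construction): each (record, query) pair is deterministically included in the sample with probability p, so a questioner cannot control precisely which records enter the computation, yet repeating the same query returns the same answer.

```python
import hashlib
import statistics

def in_sample(record_id: str, query_id: str, p: float = 0.9) -> bool:
    """Deterministic pseudo-random inclusion test keyed on (record, query)."""
    digest = hashlib.sha256(f"{record_id}:{query_id}".encode()).digest()
    u = int.from_bytes(digest[:4], "big") / 2**32  # uniform in [0, 1)
    return u < p

def sampled_average(values_by_id: dict, query_id: str, p: float = 0.9):
    """Average over a random sample of the query set, not the full set."""
    sample = [v for rid, v in values_by_id.items()
              if in_sample(rid, query_id, p)]
    return statistics.mean(sample) if sample else None

# Example query set: 100 records with values 0..99 (true mean 49.5).
query_set = {f"rec{i}": i for i in range(100)}
est = sampled_average(query_set, "q1")
```

Because inclusion depends only on the record and the query, the same query always samples the same records, and larger query sets give smaller relative error, matching the analysis summarized in the abstract.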
A data distortion by probability distribution
 ACM Transactions on Database Systems
, 1985
Abstract

Cited by 69 (0 self)
This paper introduces data distortion by probability distribution, a probability distortion that involves three steps. The first step is to identify the underlying density function of the original series and to estimate the parameters of this density function. The second step is to generate a series of data from the estimated density function. The final step is to map the generated series onto the original one and release it in its place. Because the original data set is replaced by the distorted one, probability distortion guards the privacy of an individual belonging to the original data set. At the same time, the probability-distorted series provides asymptotically the same statistical properties as the original series, since both are drawn from the same distribution. Unlike conventional point distortion, probability distortion is difficult to compromise by repeated queries, yet provides maximal exposure of the data for statistical analysis.
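The three steps read directly as an algorithm. A minimal sketch, assuming a normal density for illustration (the method applies to whatever density is identified for the original series):

```python
import random
import statistics

def distort_by_distribution(series, seed=0):
    # Step 1: identify the underlying density (assumed normal here)
    # and estimate its parameters from the original series.
    mu = statistics.mean(series)
    sigma = statistics.stdev(series)
    # Step 2: generate a series of the same length from the
    # estimated density.
    rng = random.Random(seed)
    synthetic = [rng.gauss(mu, sigma) for _ in series]
    # Step 3: release the generated series in place of the original.
    return synthetic

rng0 = random.Random(1)
original = [rng0.gauss(100.0, 15.0) for _ in range(5000)]
released = distort_by_distribution(original)
```

No individual value from `original` appears in `released`, yet the released series has approximately the same mean and variance, so aggregate statistics computed from it remain useful.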
Cryptography and Data Security
 Addison-Wesley Publishing Company
Abstract
Cryptography and data security. Includes bibliographical references and index.
Confidentiality-Preserving Modes of Access to Files and to Interfile Exchange for Useful Statistical Analysis
Abstract
In releasing individual data for statistical analysis by outsiders, deletion of direct personal identifiers is sometimes insufficient to preserve confidentiality. Restrictions on the release of data that is publicly listed elsewhere, or error inoculation of these variables, may be required. Microaggregated release is safe, but statistically costly. In-file capacity to run outsiders' analyses, with randomized rounding of frequency tallies, is best. Interfile linkage of confidential data in statistical analyses is of great potential value for program evaluation and can be achieved without the release of individually identified data from either file by the "mutually insulated file linkage" procedure described. Link-file brokerage is unacceptable on confidentiality grounds, and microaggregation and synthetic linking by matching are unacceptable on statistical grounds. For both types of use, it would be beneficial for governmental program evaluation to fund internal statistical analysis capability in important administrative archives, including those in the private sector such as health and automobile insurance. At present, there is great concern about invasion of privacy and confidentiality and about the threat to individual freedom represented by data banks. Such concerns are currently much stronger ...
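The "randomized rounding of frequency tallies" mentioned in the abstract can be sketched as follows. The base and the exact rounding rule below are illustrative assumptions; the idea is that every released tally is a multiple of some base, chosen so the release is unbiased in expectation, which prevents exact recovery of small true counts.

```python
import random

_rng = random.Random(0)

def randomized_round(count: int, base: int = 5) -> int:
    """Round a frequency tally to a multiple of `base`, rounding up with
    probability proportional to the remainder, so the expected value of
    the released tally equals the true count."""
    lower = (count // base) * base
    remainder = count - lower
    return lower + base if _rng.random() < remainder / base else lower
```

For example, a true tally of 7 with base 5 is released as 5 or 10, but averaged over many independent releases the released value centers on 7, so aggregate analyses remain approximately correct.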