Random Key Predistribution Schemes for Sensor Networks
IEEE Symposium on Security and Privacy, 2003
"... Abstract Efficient key distribution is the basis for providing secure communication, a necessary requirement for many emerging sensor network applications. Many applications require authentic and secret communication among neighboring sensor nodes. However, establishing keys for secure communicatio ..."
Abstract
-
Cited by 832 (12 self)
Efficient key distribution is the basis for providing secure communication, a necessary requirement for many emerging sensor network applications. Many applications require authentic and secret communication among neighboring sensor nodes. However, establishing keys for secure communication among neighboring sensor nodes in a sensor network is a challenging problem, due to the scale of sensor networks, the limited computation and communication resources of sensors, their deployment in hostile environments, and their lack of tamper-resistant hardware. The limited computation resources of sensor nodes rule out traditional key distribution mechanisms such as Diffie-Hellman based approaches. Pre-distribution of secret keys among neighbors is generally not feasible, because we do not know which sensors will be neighbors after deployment. Pre-distribution of secret keys for all pairs of nodes is not viable due to the large number of sensors and the limited memory of sensor nodes. A new key distribution approach was proposed by Eschenauer and Gligor [11] to achieve secrecy for node-to-node communication: sensor nodes receive a random subset of keys from a key pool before deployment. In the field, neighboring nodes exchange information to find one common key within their random subsets and use that key as their shared secret to secure subsequent communication. In this paper, we generalize the Eschenauer-Gligor key distribution approach. First, we propose two new mechanisms, the q-composite random key predistribution scheme and the multi-path key reinforcement scheme, which substantially increase the security of key setup, so that an attacker has to compromise many more nodes to achieve a high probability of compromising communication. Second, we propose a new mechanism, the random-pairwise keys scheme, which enables node-to-node authentication without involving a base station and provides perfect resilience against node capture. We also show how to enable distributed node revocation based on this scheme. To the best of our knowledge, no previous scheme supports efficient node-to-node authentication without involving a base station and distributed node revocation. We give detailed analysis and simulation results for each proposed scheme and show under which situations each scheme should be used to achieve the best security.
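To make the key-setup flow above concrete, here is a minimal Python sketch of pool-based key predistribution with a q-composite shared-key check, assuming each node is preloaded with a random key ring drawn from a global pool; the pool size, ring size, threshold q, and function names are illustrative, not the paper's parameters.

```python
import hashlib
import os
import random

POOL_SIZE = 10000   # size of the global key pool (illustrative value)
RING_SIZE = 100     # keys preloaded into each node (illustrative value)
Q = 2               # q-composite threshold: minimum number of shared keys

# Key pool generated before deployment; key id -> key bytes.
key_pool = {i: os.urandom(16) for i in range(POOL_SIZE)}

def preload_node():
    """Give a node a random subset (key ring) drawn from the pool."""
    ids = random.sample(range(POOL_SIZE), RING_SIZE)
    return {i: key_pool[i] for i in ids}

def establish_link_key(ring_a, ring_b, q=Q):
    """Shared-key discovery: neighbors exchange key ids and, if they share
    at least q keys, hash all shared keys together into one link key."""
    shared_ids = sorted(set(ring_a) & set(ring_b))
    if len(shared_ids) < q:
        return None  # too few shared keys: direct key setup fails for this pair
    h = hashlib.sha256()
    for i in shared_ids:
        h.update(ring_a[i])
    return h.digest()

node_a, node_b = preload_node(), preload_node()
print(establish_link_key(node_a, node_b))
```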
Packet Leashes: A Defense against Wormhole Attacks in Wireless Ad Hoc Networks
2003
"... Abstract — As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has n ..."
Abstract
-
Cited by 703 (15 self)
As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a new, general mechanism, called packet leashes, for detecting and thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes.
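The abstract does not spell out how a leash is checked; the sketch below illustrates the general temporal-leash idea under the assumption of tightly synchronized clocks: the receiver bounds how far a packet could have travelled from its transmission timestamp and rejects packets that appear to have covered more than one radio hop. The constants and field names are illustrative, and this is not the TIK protocol itself.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def within_temporal_leash(send_time, receive_time, max_range,
                          clock_error=1e-6):
    """Temporal-leash check (sketch): reject a packet whose apparent travel
    time implies it covered more distance than the sender's radio range
    allows, which is how a tunnelled (wormholed) packet is caught.
    `clock_error` is an assumed bound on sender/receiver clock skew."""
    travel_time = (receive_time - send_time) + clock_error
    max_distance = travel_time * SPEED_OF_LIGHT
    return max_distance <= max_range

# Example: a packet relayed through a wormhole appears to have travelled far
# more than a claimed single-hop 300 m link allows, so the check fails.
print(within_temporal_leash(send_time=0.0, receive_time=5e-6, max_range=300.0))
```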
FARSITE: Federated, Available, and Reliable Storage for an Incompletely Trusted Environment
In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI), 2002
"... Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cry ..."
Abstract
-
Cited by 487 (13 self)
Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design.
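As a rough illustration of "randomized replicated storage" combined with cryptographic secrecy, the following sketch encrypts file contents locally and places copies on a randomly chosen subset of untrusted machines. The replication factor, the toy cipher, and the in-memory "machines" are assumptions for illustration only; Farsite's actual directory-group, leasing, and Byzantine-fault-tolerant machinery is not shown.

```python
import hashlib
import os
import random

REPLICATION_FACTOR = 3  # illustrative; the real replication degree is a design parameter

def xor_stream(data, key):
    """Toy stream 'cipher' for this sketch only; a real system would use an
    authenticated cipher such as AES-GCM."""
    pad = hashlib.sha256(key).digest()
    while len(pad) < len(data):
        pad += hashlib.sha256(pad).digest()
    return bytes(a ^ b for a, b in zip(data, pad))

def store_file(machines, contents, key):
    """Encrypt the file locally, then place copies on a random subset of the
    (untrusted) member machines so availability survives individual failures."""
    ciphertext = xor_stream(contents, key)
    replicas = random.sample(list(machines), REPLICATION_FACTOR)
    for m in replicas:
        machines[m].append(ciphertext)
    return replicas

machines = {name: [] for name in ("m0", "m1", "m2", "m3", "m4")}
where = store_file(machines, b"quarterly report", key=os.urandom(16))
print(where)
```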
Terra: a virtual machine-based platform for trusted computing
2003
"... We present a flexible architecture for trusted computing, called Terra, that allows applications with a wide range of security requirements to run simultaneously on commodity hardware. Applications on Terra enjoy the semantics of running on a separate, dedicated, tamper-resistant hardware platform, ..."
Abstract
-
Cited by 431 (5 self)
We present a flexible architecture for trusted computing, called Terra, that allows applications with a wide range of security requirements to run simultaneously on commodity hardware. Applications on Terra enjoy the semantics of running on a separate, dedicated, tamper-resistant hardware platform, while retaining the ability to run side-by-side with normal applications on a general-purpose computing platform. Terra achieves this synthesis by use of a trusted virtual machine monitor (TVMM) that partitions a tamper-resistant hardware platform into multiple, isolated virtual machines (VMs), providing the appearance of multiple boxes on a single, general-purpose platform. To each VM, the TVMM provides the semantics of either an “open box,” i.e. a general-purpose hardware platform like today’s PCs and workstations, or a “closed box,” an opaque special-purpose platform that protects the privacy and integrity of its contents like today’s game consoles and cellular phones. The software stack in each VM can be tailored from the hardware interface up to meet the security requirements of its application(s). The hardware and TVMM can act as a trusted party to allow closed-box VMs to cryptographically identify the software they run, i.e. what is in the box, to remote parties. We explore the strengths and limitations of this architecture by describing our prototype implementation and several applications that we developed for it.
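The closed-box attestation idea ("identify the software they run ... to remote parties") can be pictured as hashing each layer of a VM's software stack into a chained measurement that the platform vouches for. The sketch below is only schematic: the layer names are made up, and an HMAC with a platform key stands in for the hardware-rooted signatures and certificates a real TVMM would use.

```python
import hashlib
import hmac

PLATFORM_KEY = b"key-rooted-in-tamper-resistant-hardware"  # illustrative stand-in

def measure_stack(layers):
    """Hash each layer of a closed-box VM's software stack (e.g. firmware,
    TVMM, guest OS, application) into a single chained measurement."""
    digest = b"\x00" * 32
    for layer in layers:
        digest = hashlib.sha256(digest + hashlib.sha256(layer).digest()).digest()
    return digest

def attest(layers):
    """The platform vouches for the measurement; here an HMAC stands in for a
    public-key signature so a remote party can check what is 'in the box'."""
    m = measure_stack(layers)
    return m, hmac.new(PLATFORM_KEY, m, hashlib.sha256).digest()

def verify(measurement, tag, expected_layers):
    ok_sig = hmac.compare_digest(
        tag, hmac.new(PLATFORM_KEY, measurement, hashlib.sha256).digest())
    return ok_sig and measurement == measure_stack(expected_layers)

stack = [b"firmware", b"tvmm", b"guest-os", b"media-player-app"]
m, tag = attest(stack)
print(verify(m, tag, stack))
```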
Venti: A New Approach to Archival Storage
2002
"... This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block's contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of ..."
Abstract
-
Cited by 342 (0 self)
This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block's contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems.
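A minimal sketch of the content-addressed, write-once idea described above: the hash of a block's contents is its identifier, so duplicate blocks coalesce and an existing address can never be silently overwritten. The class and method names are illustrative; SHA-1 is used because that is the hash associated with Venti, but the abstract only requires "a unique hash".

```python
import hashlib

class BlockStore:
    """Sketch of a content-addressed, write-once block store: the hash of a
    block's contents is its identifier, a given address can never be rebound
    to different data, and identical blocks are stored only once."""

    def __init__(self):
        self._blocks = {}

    def write(self, data: bytes) -> str:
        score = hashlib.sha1(data).hexdigest()   # Venti calls this the "score"
        # Duplicate writes coalesce: the block is already stored under its score.
        self._blocks.setdefault(score, data)
        return score

    def read(self, score: str) -> bytes:
        data = self._blocks[score]
        # Defensive check: detect corruption by re-hashing on read.
        assert hashlib.sha1(data).hexdigest() == score
        return data

store = BlockStore()
a = store.write(b"archival block")
b = store.write(b"archival block")   # coalesced with the first copy
print(a == b, store.read(a))
```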
Efficient and Secure Source Authentication for Multicast
In Network and Distributed System Security Symposium (NDSS ’01), 2001
"... One of the main challenges of securing multicast communication is source authentication, or enabling receivers of multicast data to verify that the received data originated with the claimed source and was not modified enroute. The problem becomes more complex in common settings where other receivers ..."
Abstract
-
Cited by 273 (8 self)
One of the main challenges of securing multicast communication is source authentication, or enabling receivers of multicast data to verify that the received data originated with the claimed source and was not modified en route. The problem becomes more complex in common settings where other receivers of the data are not trusted, and where lost packets are not retransmitted.
SIA: Secure Information Aggregation in Sensor Networks
2003
"... Sensor networks promise viable solutions to many monitoring problems. However, the practical deployment of sensor networks faces many challenges imposed by real-world demands. Sensor nodes often have limited computation and communication resources and battery power. Moreover, in many applications se ..."
Abstract
-
Cited by 253 (13 self)
Sensor networks promise viable solutions to many monitoring problems. However, the practical deployment of sensor networks faces many challenges imposed by real-world demands. Sensor nodes often have limited computation and communication resources and battery power. Moreover, in many applications sensors are deployed in open environments, and hence are vulnerable to physical attacks that can potentially compromise a sensor's cryptographic keys. One of the basic and indispensable functionalities of sensor networks is the ability to answer queries over the data acquired by the sensors. The resource constraints and security issues make designing mechanisms for information aggregation in large sensor networks particularly challenging.
Bitcoin: A Peer-to-Peer Electronic Cash System
http://bitcoin.org/bitcoin.pdf, 2008
"... www.bitcoin.org Abstract. A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party ..."
Abstract
-
Cited by 246 (0 self)
A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.
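The timestamping-by-proof-of-work mechanism described above can be sketched in a few lines: each block commits to the previous block's hash, and a nonce is searched until the block hash falls below a target, so rewriting history means redoing all later work. The difficulty target and block fields below are illustrative, not Bitcoin's actual block format.

```python
import hashlib
import json

DIFFICULTY = 2 ** 244  # illustrative fixed target; real difficulty adjusts over time

def block_hash(block):
    return int.from_bytes(
        hashlib.sha256(json.dumps(block, sort_keys=True).encode()).digest(), "big")

def mine_block(prev_hash, transactions):
    """Proof-of-work: search for a nonce that puts the block hash below the
    target. Because the hash commits to prev_hash, altering an earlier block
    would require redoing the work of every later block."""
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
        h = block_hash(block)
        if h < DIFFICULTY:
            return block, h
        nonce += 1

genesis, h0 = mine_block(prev_hash=0, transactions=["coinbase"])
block1, h1 = mine_block(prev_hash=h0, transactions=["alice->bob"])
print(genesis["nonce"], block1["nonce"])
```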
Rushing Attacks and Defense in Wireless Ad Hoc Network Routing Protocols
In ACM Workshop on Wireless Security (WiSe), 2003
"... In an ad hoc network, mobile computers (or nodes) cooperate to forward packets for each other, allowing nodes to communicate beyond their direct wireless transmission range. Many proposed routing protocols for ad hoc networks operate in an on-demand fashion, as on-demand routing protocols have been ..."
Abstract
-
Cited by 216 (4 self)
In an ad hoc network, mobile computers (or nodes) cooperate to forward packets for each other, allowing nodes to communicate beyond their direct wireless transmission range. Many proposed routing protocols for ad hoc networks operate in an on-demand fashion, as on-demand routing protocols have been shown to often have lower overhead and faster reaction time than other types of routing based on periodic (proactive) mechanisms. Significant attention recently has been devoted to developing secure routing protocols for ad hoc networks, including a number of secure on-demand routing protocols that defend against a variety of possible attacks on network routing. In this paper, we present the rushing attack, a new attack that results in denial-of-service when used against all previous on-demand ad hoc network routing protocols. For example, DSR, AODV, and secure protocols based on them, such as Ariadne, ARAN, and SAODV, are unable to discover routes longer than two hops when subject to this attack. This attack is also particularly damaging because it can be performed by a relatively weak attacker. We analyze why previous protocols fail under this attack. We then develop Rushing Attack Prevention (RAP), a generic defense against the rushing attack for on-demand protocols. RAP incurs no cost unless the underlying protocol fails to find a working route, and it provides provable security properties even against the strongest rushing attackers.
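To see why rushing is so damaging, recall that on-demand route discovery typically suppresses duplicate ROUTE REQUESTs: only the first copy heard is forwarded. The hedged sketch below models just that duplicate suppression, showing how an attacker who forwards requests faster than honest nodes captures the discovered route; it is illustrative of the attack, not of any particular protocol's packet handling, and RAP's defense is not shown.

```python
class Node:
    """Minimal on-demand route-discovery node: standard duplicate suppression
    forwards only the first copy of each ROUTE REQUEST it hears. An attacker
    who skips normal forwarding delays ("rushes") therefore gets its copy in
    first, so later legitimate copies of the same request are dropped and the
    discovered route runs through the attacker."""

    def __init__(self, name):
        self.name = name
        self.seen = set()
        self.previous_hop = {}

    def receive_request(self, request_id, from_node):
        if request_id in self.seen:
            return False              # duplicate: suppressed
        self.seen.add(request_id)
        self.previous_hop[request_id] = from_node
        return True                   # first copy wins and would be re-forwarded

target = Node("target")
print(target.receive_request("req-7", from_node="attacker"))    # rushed copy: accepted
print(target.receive_request("req-7", from_node="honest-hop"))  # legitimate copy: dropped
```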
Enabling public verifiability and data dynamics for storage security in cloud computing
In Proceedings of ESORICS ’09, Saint-Malo, France, 2009
"... Abstract. Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about man ..."
Abstract
-
Cited by 177 (10 self)
Cloud Computing has been envisioned as the next-generation architecture of IT enterprise. It moves the application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of the TPA eliminates the involvement of the client in auditing whether the data stored in the cloud is indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lack support for either public verifiability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works, and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the Proof of Retrievability model [1] by manipulating the classic Merkle Hash Tree (MHT) construction for block tag authentication. Extensive security and performance analysis shows that the proposed scheme is highly efficient and provably secure.
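Since the scheme builds on the classic Merkle Hash Tree for block tag authentication, a minimal sketch of MHT construction and inclusion-proof verification may help. The functions below are a generic textbook Merkle tree, assuming the auditor holds an authenticated root; they are not the paper's full protocol.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Build a Merkle Hash Tree over the blocks; returns all levels, leaves first."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Collect sibling hashes from leaf to root for the block at `index`."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], index % 2 == 0))  # (sibling hash, node is left child)
        index //= 2
    return proof

def verify(block, proof, root):
    """The auditor recomputes the path: the block is intact iff the recomputed
    root matches the authenticated root it holds."""
    node = h(block)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

blocks = [b"block-%d" % i for i in range(5)]
levels = build_tree(blocks)
root = levels[-1][0]
print(verify(blocks[3], prove(levels, 3), root))
```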