Results 1 - 10 of 54
The ghost in the browser: Analysis of web-based malware
In USENIX HotBots, 2007
"... As more users are connected to the Internet and conduct their daily activities electronically, computer users have become the target of an underground economy that infects hosts with malware or adware for financial gain. Unfortunately, even a single visit to an infected web site enables the attacker ..."
Cited by 126 (5 self)
Abstract:
As more users are connected to the Internet and conduct their daily activities electronically, computer users have become the target of an underground economy that infects hosts with malware or adware for financial gain. Unfortunately, even a single visit to an infected web site enables the attacker to detect vulnerabilities in the user’s applications and force the download of a multitude of malware binaries. Frequently, this malware allows the adversary to gain full control of the compromised systems, leading to the exfiltration of sensitive information or installation of utilities that facilitate remote control of the host. We believe that such behavior is similar to our traditional understanding of botnets. However, the main difference is that web-based malware infections are pull-based and that the resulting command feedback loop is looser. To characterize the nature of this rising threat, we identify the four prevalent mechanisms used to inject malicious content on popular web sites: web server security, user-contributed content, advertising and third-party widgets. For each of these areas, we present examples of abuse found on the Internet. Our aim is to present the state of malware on the Web and emphasize the importance of this rising threat.
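The injection mechanisms listed above often surface as hidden iframes or script tags pulled from attacker-controlled hosts. As a rough illustration only (not the paper's detection pipeline), the following Python sketch flags those two indicators in fetched HTML; the regular expressions and the notion of "off-site" are simplifying assumptions.

# Toy scanner for two common drive-by injection indicators:
# hidden iframes and script tags pointing at third-party hosts.
# Illustrative heuristic only, not the paper's detection system.
import re
from urllib.parse import urlparse

HIDDEN_IFRAME = re.compile(
    r'<iframe[^>]*(width\s*=\s*["\']?0|height\s*=\s*["\']?0|display\s*:\s*none)',
    re.IGNORECASE)
SCRIPT_SRC = re.compile(r'<script[^>]+src\s*=\s*["\']([^"\']+)', re.IGNORECASE)

def suspicious_elements(html, page_url):
    """Return crude indicators of injected content in a page."""
    site = urlparse(page_url).hostname or ""
    findings = []
    if HIDDEN_IFRAME.search(html):
        findings.append("hidden iframe")
    for src in SCRIPT_SRC.findall(html):
        host = urlparse(src).hostname
        if host and not host.endswith(site):
            findings.append("off-site script: " + host)
    return findings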
An untold story of middleboxes in cellular networks
In Proceedings of ACM SIGCOMM, 2011
"... The use of cellular data networks is increasingly popular as network coverage becomes more ubiquitous and many diverse usercontributed mobile applications become available. The growing cellular traffic demand means that cellular network carriers are facing greater challenges to provide users with go ..."
Cited by 80 (3 self)
Abstract:
The use of cellular data networks is increasingly popular as network coverage becomes more ubiquitous and many diverse user-contributed mobile applications become available. The growing cellular traffic demand means that cellular network carriers are facing greater challenges to provide users with good network performance and energy efficiency, while protecting networks from potential attacks. To better utilize their limited network resources while securing the network and protecting client devices, the carriers have already deployed various network policies that influence traffic behavior. Today, these policies are mostly opaque, though they directly impact application designs and may even introduce network vulnerabilities. We present NetPiculet, the first tool that unveils carriers’ NAT and firewall policies by conducting intelligent measurement. By running NetPiculet on the major U.S. cellular providers as well as deploying it as a smartphone application in the wild covering more than 100 cellular ISPs, we identified the key NAT and firewall policies which have direct implications on performance, energy, and security. For example, NAT boxes and firewalls set timeouts for idle TCP connections, which sometimes cause significant energy waste on mobile devices. Although most carriers today deploy sophisticated firewalls, they are still vulnerable to various attacks such as battery draining and denial of service. These findings can inform developers in optimizing the interaction between mobile applications and cellular networks and also guide carriers in improving their network configurations.
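The idle-timeout finding above lends itself to a simple client-side experiment. The Python sketch below holds a TCP connection idle for increasing periods and checks whether data still flows afterwards; it assumes a cooperating echo server (the host name is a placeholder) and is not the NetPiculet implementation.

# Minimal sketch of probing a middlebox's idle-TCP timeout from a client,
# assuming a cooperating echo server at PROBE_HOST:PROBE_PORT (hypothetical).
import socket, time

PROBE_HOST, PROBE_PORT = "echo.example.net", 7  # assumed test server

def survives_idle(idle_seconds, timeout=10):
    with socket.create_connection((PROBE_HOST, PROBE_PORT), timeout=timeout) as s:
        s.settimeout(timeout)
        time.sleep(idle_seconds)          # let the NAT/firewall mapping age
        try:
            s.sendall(b"ping")
            return s.recv(4) == b"ping"   # mapping still alive
        except OSError:
            return False                  # connection was torn down mid-idle

for idle in (60, 300, 600, 1800):
    print(idle, "s idle ->", "alive" if survives_idle(idle) else "dropped")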
How dynamic are IP addresses?
In Proceedings of ACM SIGCOMM, 2007
"... This paper introduces a novel algorithm, UDmap, to identify dynamically assigned IP addresses and analyze their dynamics pattern. UDmap is fully automatic, and relies only on applicationlevel server logs. We applied UDmap to a month-long Hotmail user-login trace and identified a significant number o ..."
Cited by 79 (8 self)
Abstract:
This paper introduces a novel algorithm, UDmap, to identify dynamically assigned IP addresses and analyze their dynamics pattern. UDmap is fully automatic, and relies only on application-level server logs. We applied UDmap to a month-long Hotmail user-login trace and identified a significant number of dynamic IP addresses – more than 102 million. This suggests that the fraction of IP addresses that are dynamic is by no means negligible. Using this information in combination with a three-month Hotmail email server log, we were able to establish that 95.6% of mail servers set up on the dynamic IP addresses in our trace sent out solely spam emails. Moreover, these mail servers sent out a large amount of spam – amounting to 42.2% of all spam emails received by Hotmail. These results highlight the importance of being able to accurately identify dynamic IP addresses for spam filtering. We expect similar benefits to arise for phishing site identification and botnet detection. To our knowledge, this is the first successful attempt to automatically identify and understand IP address dynamics.
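A toy heuristic can convey the kind of signal a login trace provides, even though it is far simpler than the actual UDmap algorithm: an IP address that many distinct accounts occupy over time is more likely to be dynamically assigned (or a shared proxy, which UDmap distinguishes and this sketch does not). The record format and threshold below are assumptions.

# Toy heuristic in the spirit of the problem above (not the UDmap algorithm):
# from (timestamp, user_id, ip) login records, flag IPs occupied by many
# distinct users, which is suggestive of dynamic assignment.
from collections import defaultdict

def likely_dynamic_ips(login_records, min_users=10):
    """login_records: iterable of (timestamp, user_id, ip) tuples."""
    users_per_ip = defaultdict(set)
    for _ts, user, ip in login_records:
        users_per_ip[ip].add(user)
    # Note: large shared proxies would be false positives here; UDmap's
    # block-level analysis exists precisely to tell those cases apart.
    return {ip for ip, users in users_per_ip.items() if len(users) >= min_users}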
Orbis: Rescaling Degree Correlations to Generate Annotated Internet Topologies
2007
"... Researchers involved in designing network services and protocols rely on results from simulation and emulation environments to evaluate correctness, performance and scalability. To better understand the behavior of these applications and to predict their performance when deployed across the Internet ..."
Cited by 51 (3 self)
Abstract:
Researchers involved in designing network services and protocols rely on results from simulation and emulation environments to evaluate correctness, performance and scalability. To better understand the behavior of these applications and to predict their performance when deployed across the Internet, the generated topologies that serve as input to simulation and emulation environments must closely match real network characteristics, not just in terms of graph structure (node interconnectivity) but also with respect to various node and link annotations. Relevant annotations include link latencies, AS membership and whether a router is a peering or internal router. Finally, it should be possible to rescale a given topology to a variety of sizes while still maintaining its essential characteristics. In this paper, we propose techniques to generate annotated, Internet router graphs of different sizes based on existing observations of Internet characteristics. We find that our generated graphs match a variety of graph properties of observed topologies for a range of target graph sizes. While the best available data on Internet topology currently remains imperfect, the quality of our generated topologies will improve with the fidelity of available measurement techniques or next-generation architectures that make Internet structure more transparent.
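For flavour only, the sketch below generates a random graph matching a target degree sequence and attaches placeholder latency and role annotations; it uses networkx's configuration model and does not implement Orbis's dK-series rescaling. All annotation values are illustrative assumptions.

# Illustrative sketch of "annotated topology generation", assuming networkx.
# Not Orbis: no degree-correlation rescaling is performed here.
import random
import networkx as nx

def annotated_topology(degree_sequence, seed=0):
    # degree_sequence must sum to an even number
    rng = random.Random(seed)
    g = nx.Graph(nx.configuration_model(degree_sequence, seed=seed))  # drop parallel edges
    g.remove_edges_from(list(nx.selfloop_edges(g)))
    for u, v in g.edges():
        g[u][v]["latency_ms"] = round(rng.uniform(1.0, 80.0), 2)  # placeholder link annotation
    for n in g.nodes():
        g.nodes[n]["role"] = rng.choice(["internal", "peering"])  # placeholder node annotation
    return g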
Privacy leakage vs. Protection measures: the growing disconnect
"... Numerous research papers have listed different vectors of personally identifiable information leaking via traditional and mobile Online Social Networks (OSNs) and highlighted the ongoing aggregation of data about users visiting popular Web sites. We argue that the landscape is worsening and existing ..."
Cited by 43 (3 self)
Abstract:
Numerous research papers have listed different vectors of personally identifiable information leaking via traditional and mobile Online Social Networks (OSNs) and highlighted the ongoing aggregation of data about users visiting popular Web sites. We argue that the landscape is worsening and existing proposals (including the recent U.S. Federal Trade Commission’s report) do not address several key issues. We examined over 100 popular non-OSN Web sites across a number of categories where tens of millions of users representing diverse demographics have accounts, to see if these sites leak private information to prominent aggregators. Our results raise considerable concerns: we see leakage in sites for every category we examined; fully 56% of the sites directly leak pieces of private information, with this result growing to 75% if we also include leakage of a site userid. Sensitive search strings sent to healthcare Web sites and travel itineraries on flight reservation sites are leaked in 9 of the top 10 sites studied for each category. The community needs a clear understanding of the shortcomings of existing privacy protection measures and the new proposals. The growing disconnect between the protection measures and increasing leakage and linkage suggests that we need to move beyond the losing battle with aggregators and examine what roles first-party sites can play in protecting privacy of their users.
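A scaled-down version of such a leakage check can be run against a single browsing session. The sketch below scans a HAR capture for third-party requests whose URL or Referer header carries a known account identifier; the aggregator list is a placeholder and the method is far cruder than the study's.

# Rough sketch of a first-party leakage check over a browser HAR capture of
# one logged-in session, given a known account identifier. Field names follow
# the standard HAR format; the aggregator list is a hypothetical placeholder.
import json
from urllib.parse import urlparse

AGGREGATORS = {"doubleclick.net", "google-analytics.com"}  # illustrative only

def leaked_requests(har_path, user_id):
    with open(har_path) as f:
        entries = json.load(f)["log"]["entries"]
    leaks = []
    for e in entries:
        url = e["request"]["url"]
        host = urlparse(url).hostname or ""
        third_party = any(host.endswith(a) for a in AGGREGATORS)
        headers = {h["name"].lower(): h["value"] for h in e["request"]["headers"]}
        carried = user_id in url or user_id in headers.get("referer", "")
        if third_party and carried:
            leaks.append(url)
    return leaks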
Detecting In-Flight Page Changes with Web Tripwires
"... While web pages sent over HTTP have no integrity guarantees, it is commonly assumed that such pages are not modified in transit. In this paper, we provide evidence of surprisingly widespread and diverse changes made to web pages between the server and client. Over 1 % of web clients in our study rec ..."
Cited by 42 (3 self)
Abstract:
While web pages sent over HTTP have no integrity guarantees, it is commonly assumed that such pages are not modified in transit. In this paper, we provide evidence of surprisingly widespread and diverse changes made to web pages between the server and client. Over 1% of web clients in our study received altered pages, and we show that these changes often have undesirable consequences for web publishers or end users. Such changes include popup blocking scripts inserted by client software, advertisements injected by ISPs, and even malicious code likely inserted by malware using ARP poisoning. Additionally, we find that changes introduced by client software can inadvertently cause harm, such as introducing cross-site scripting vulnerabilities into most pages a client visits. To help publishers understand and react appropriately to such changes, we introduce web tripwires: client-side JavaScript code that can detect most in-flight modifications to a web page. We discuss several web tripwire designs intended to provide basic integrity checks for web servers. We show that they are more flexible and less expensive than switching to HTTPS and do not require changes to current browsers.
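The tripwire idea can be sketched from the server side: embed an encoded copy of the page as served, together with a small script that re-fetches the page and compares. The Python below is an illustrative sketch, not the authors' code; it assumes ASCII pages and a server that returns the uninstrumented page when the tripwire query parameter is present.

# Minimal server-side sketch of the tripwire idea (not the paper's code).
import base64, json

# JavaScript template; %s is replaced with a JSON string literal holding the
# base64-encoded copy of the page as it left the server.
TRIPWIRE_JS = """
<script>
(function () {
  var expected = atob(%s);                       // page as served (ASCII assumed)
  var xhr = new XMLHttpRequest();
  // "?tripwire=1" is assumed to make the server return the page
  // without the tripwire script appended.
  xhr.open("GET", window.location.href + "?tripwire=1", true);
  xhr.onload = function () {
    if (xhr.responseText !== expected) {
      console.warn("In-flight page modification detected");
    }
  };
  xhr.send();
})();
</script>
"""

def add_tripwire(html):
    encoded = base64.b64encode(html.encode("ascii")).decode("ascii")
    return html + (TRIPWIRE_JS % json.dumps(encoded))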
Towards understanding modern web traffic (extended abstract)
In Proc. ACM SIGMETRICS, 2011
"... As the nature of Web traffic evolves over time, we must update our understanding of underlying nature of today’s Web, which is necessary to improve response time, understand caching effectiveness, and to design intermediary systems, such as firewalls, security analyzers, and reporting or management ..."
Cited by 42 (4 self)
Abstract:
As the nature of Web traffic evolves over time, we must update our understanding of the underlying nature of today’s Web, which is necessary to improve response time, understand caching effectiveness, and design intermediary systems such as firewalls, security analyzers, and reporting or management systems. In this paper, we analyze five years (2006-2010) of real Web traffic from a globally-distributed proxy system, which captures the browsing behavior of over 70,000 daily users from 187 countries. Using this data set, we examine major changes in Web traffic characteristics during this period, and also investigate the redundancy of this traffic, using both traditional object-level caching as well as content-based approaches.
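The two redundancy notions mentioned in the last sentence can be illustrated with a small sketch: object-level redundancy counts whole responses repeated byte-for-byte, while content-based redundancy also catches repeated chunks inside different responses. The fixed-size chunking below is a simplification; the paper's analysis uses content-defined chunking over a large proxy trace.

# Back-of-the-envelope comparison of object-level vs content-based redundancy.
import hashlib

def redundancy(bodies, chunk_size=4096):
    """bodies: iterable of response payloads as bytes."""
    seen_objects, seen_chunks = set(), set()
    total = dup_obj = dup_chunk = 0
    for body in bodies:
        total += len(body)
        obj_hash = hashlib.sha1(body).digest()
        if obj_hash in seen_objects:
            dup_obj += len(body)                 # whole object seen before
        seen_objects.add(obj_hash)
        for i in range(0, len(body), chunk_size):
            chunk = body[i:i + chunk_size]
            chunk_hash = hashlib.sha1(chunk).digest()
            if chunk_hash in seen_chunks:
                dup_chunk += len(chunk)          # chunk seen anywhere before
            seen_chunks.add(chunk_hash)
    if total == 0:
        return 0.0, 0.0
    return dup_obj / total, dup_chunk / total    # redundant-byte fractions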
The Heisenbot Uncertainty Problem: Challenges in Separating Bots from Chaff
"... In this paper we highlight a number of challenges that arise in using crawling to measure the size, topology, and dynamism of distributed botnets. These challenges include traffic due to unrelated applications, address aliasing, and other active participants on the network such as poisoners. Based u ..."
Cited by 41 (4 self)
Abstract:
In this paper we highlight a number of challenges that arise in using crawling to measure the size, topology, and dynamism of distributed botnets. These challenges include traffic due to unrelated applications, address aliasing, and other active participants on the network such as poisoners. Based upon our experience developing a crawler for the Storm botnet, we describe each of the issues we encountered in practice, our approach for managing the underlying ambiguity, and the kind of errors we believe it introduces into our estimates.
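One of the listed challenges, address aliasing, is easy to picture with a toy report over crawl sightings: the same peer identifier seen behind several IP addresses inflates naive size estimates, while one IP fronting many identifiers hints at NAT. The record format below is hypothetical.

# Toy illustration of address aliasing in botnet crawl data.
from collections import defaultdict

def aliasing_report(sightings):
    """sightings: iterable of (peer_id, ip) pairs from a crawl."""
    ips_per_id = defaultdict(set)
    ids_per_ip = defaultdict(set)
    for peer_id, ip in sightings:
        ips_per_id[peer_id].add(ip)
        ids_per_ip[ip].add(peer_id)
    multi_homed = {p for p, ips in ips_per_id.items() if len(ips) > 1}
    shared_ips = {ip for ip, ids in ids_per_ip.items() if len(ids) > 1}
    return {
        "raw_ip_count": len(ids_per_ip),        # what a naive crawl reports
        "distinct_peer_ids": len(ips_per_id),   # after collapsing aliases
        "ids_behind_multiple_ips": len(multi_homed),
        "ips_shared_by_multiple_ids": len(shared_ips),
    }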
Where’s that phone? Geolocating IP addresses on 3G networks
In Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement (IMC ’09), 2009
"... Cell phones connected to high-speed 3G networks constitute an increasingly important class of clients on the Internet. From the viewpoint of the servers they connect to, such devices are virtually indistinguishable from conventional endhosts. In this study, we examine the IP addresses seen by Intern ..."
Cited by 31 (1 self)
Abstract:
Cell phones connected to high-speed 3G networks constitute an increasingly important class of clients on the Internet. From the viewpoint of the servers they connect to, such devices are virtually indistinguishable from conventional end hosts. In this study, we examine the IP addresses seen by Internet servers for cell phone clients and make two observations. First, individual cell phones can expose different IP addresses to servers within time spans of a few minutes, rendering IP-based user identification and blocking inadequate. Second, cell phone IP addresses do not embed geographical information at reasonable fidelity, reducing the effectiveness of commercial geolocation tools used by websites for fraud detection, server selection and content customization. In addition to these two observations, we show that application-level latencies between cell phones and Internet servers can differ greatly depending on the location of the cell phone, but do not vary much at a given location over short time spans; as a result, they provide fine-grained location information that IPs do not.
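The first observation suggests a simple self-measurement: poll an address-echo service from the phone and log every change of the externally visible IP. The sketch below does exactly that; the echo URL is a placeholder and the polling interval is arbitrary.

# Sketch: track how often a device's public IP changes over time.
import time
import urllib.request

ECHO_URL = "https://ip.example.org/"   # assumed service returning the caller's IP

def track_ip_changes(interval_s=60, duration_s=3600):
    changes, last = [], None
    end = time.time() + duration_s
    while time.time() < end:
        ip = urllib.request.urlopen(ECHO_URL, timeout=10).read().decode().strip()
        if ip != last:
            changes.append((time.strftime("%H:%M:%S"), ip))  # record each new public IP
            last = ip
        time.sleep(interval_s)
    return changes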
Peeking into Spammer Behavior from a Unique Vantage Point
"... Understanding the spammer behavior is a critical step in the long-lasting battle against email spams. Previous studies have focused on setting up honeypots or email sinkholes containing destination mailboxes for spam collection. A spam trace collected this way offers the limited viewpoint from a sin ..."
Cited by 23 (1 self)
Abstract:
Understanding spammer behavior is a critical step in the long-lasting battle against email spam. Previous studies have focused on setting up honeypots or email sinkholes containing destination mailboxes for spam collection. A spam trace collected this way offers only the limited viewpoint of a single organizational domain and hence falls short of reflecting the global behavior of spammers. In this paper, we present a spam analysis study using sinkholes based on open relays. A relay sinkhole offers a unique vantage point in spam collection: it has a broader view of spam originating from multiple spam origins and destined for mailboxes belonging to multiple organizational domains. The trace collected using this methodology opens the door to studying spammer behaviors that were difficult to examine using spam collected from a single organization. Seeing the aggregate behavior of spammers allows us to systematically separate High-Volume Spammers (HVS, e.g. direct spammers) from Low-Volume Spammers (LVS, e.g. low-volume bots in a botnet). Such a separation in turn gives rise to the notion of “spam campaigns”, which reveals how LVS appear to coordinate with each other to share the spamming workload among themselves. A detailed spam campaign analysis holds the promise of finally reverse engineering the workload distribution strategies used by the LVS coordinator.
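The HVS/LVS separation and campaign grouping described above can be caricatured in a few lines: rank sources by message volume, split at a threshold, and cluster the low-volume remainder by a crude content fingerprint. The threshold, record format, and fingerprint below are assumptions, not the paper's methodology.

# Caricature of HVS/LVS separation and campaign grouping from sinkhole records.
import hashlib
from collections import Counter, defaultdict

def split_and_group(messages, hvs_threshold=1000):
    """messages: iterable of (source_ip, subject, body) tuples from the sinkhole."""
    messages = list(messages)
    volume = Counter(ip for ip, _subject, _body in messages)
    hvs = {ip for ip, count in volume.items() if count >= hvs_threshold}
    campaigns = defaultdict(set)                 # fingerprint -> LVS source IPs
    for ip, subject, body in messages:
        if ip in hvs:
            continue
        fingerprint = hashlib.md5((subject.lower() + body[:200]).encode()).hexdigest()
        campaigns[fingerprint].add(ip)
    # keep only fingerprints shared by more than one low-volume source
    return hvs, {fp: ips for fp, ips in campaigns.items() if len(ips) > 1}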