Results 1 - 10 of 83
A Survey of Computational Location Privacy
Personal and Ubiquitous Computing, 2008. Cited by 120 (1 self).
Abstract: This is a literature survey of computational location privacy, meaning computation-based privacy mechanisms that treat location data as geometric information. This definition includes privacy-preserving algorithms like anonymity and obfuscation as well as privacy-breaking algorithms that exploit the geometric nature of the data. The survey omits non-computational techniques like manually inspecting geotagged photos, and it omits techniques like encryption or access control that treat location data as general symbols. The paper reviews studies of people's attitudes about location privacy, computational threats on leaked location data, and computational countermeasures for mitigating these threats.
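As a concrete illustration of the kind of geometric obfuscation mechanism the survey covers, the sketch below shows two generic techniques: spatial cloaking to a coarse grid and random perturbation within a disc. This is not code or an algorithm from the paper; the grid size and perturbation radius are arbitrary assumptions.

```python
import math
import random

def cloak_to_grid(lat, lon, cell_deg=0.01):
    """Report only the center of the grid cell containing the true location
    (spatial cloaking). cell_deg is the cell size in degrees -- an arbitrary
    choice for this sketch."""
    glat = math.floor(lat / cell_deg) * cell_deg + cell_deg / 2
    glon = math.floor(lon / cell_deg) * cell_deg + cell_deg / 2
    return glat, glon

def perturb(lat, lon, radius_m=500):
    """Report a point drawn uniformly from a disc of radius radius_m meters
    around the true location (random perturbation)."""
    r = radius_m * math.sqrt(random.random())      # uniform over the disc
    theta = random.uniform(0.0, 2.0 * math.pi)
    dlat = (r * math.cos(theta)) / 111_320.0       # ~meters per degree of latitude
    dlon = (r * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

print(cloak_to_grid(47.6205, -122.3493))
print(perturb(47.6205, -122.3493))
```

Both variants degrade location precision geometrically, which is the class of mechanism the survey distinguishes from symbolic protections such as encryption or access control.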
Toward community sensing.
In ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), 2008. Cited by 65 (8 self).
Abstract: A great opportunity exists to fuse information from populations of privately held sensors to create useful sensing applications. For example, GPS devices, embedded in cellphones and automobiles, might one day be employed as distributed networks of velocity sensors for traffic monitoring and routing. Unfortunately, privacy and resource considerations limit access to such data streams. We describe principles of community sensing that offer mechanisms for sharing data from privately held sensors. The methods take into account the likely availability of sensors, the context-sensitive value of sensor information, based on models of phenomena and demand, and sensor owners' preferences about privacy and resource usage. We present efficient and well-characterized approximations of optimal sensing policies. We provide details on key principles of community sensing and highlight their use within a case study for road traffic monitoring.
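The abstract does not spell out the sensing policies, but the selection problem it describes (which privately held sensors to probe, given availability, information value, and owner costs) is often approximated greedily. The sketch below is a hypothetical illustration of that flavor of policy with invented sensors and scores; it is not the algorithm from the paper.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    sensor_id: str
    availability: float   # probability the sensor can report right now
    info_value: float     # context-sensitive value of its reading (model-derived)
    owner_cost: float     # owner's privacy/resource cost for being probed

def greedy_sensing_policy(sensors, budget):
    """Pick sensors in order of expected value per unit of owner cost until the
    probing budget is spent -- a simple stand-in for an optimal sensing policy,
    reasonable when values are roughly additive."""
    chosen, spent = [], 0.0
    ranked = sorted(sensors,
                    key=lambda s: s.availability * s.info_value / s.owner_cost,
                    reverse=True)
    for s in ranked:
        if spent + s.owner_cost <= budget:
            chosen.append(s.sensor_id)
            spent += s.owner_cost
    return chosen

# Invented sensors for the example.
sensors = [
    Sensor("gps-cab-17",  availability=0.9, info_value=4.0, owner_cost=1.0),
    Sensor("gps-phone-3", availability=0.5, info_value=6.0, owner_cost=2.0),
    Sensor("gps-bus-8",   availability=0.8, info_value=1.5, owner_cost=0.5),
]
print(greedy_sensing_policy(sensors, budget=2.0))
```

Ranking by value-per-cost is a standard budgeted-selection heuristic; the paper additionally models demand and phenomena, which this toy example folds into a single info_value number.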
Are you close with me? Are you nearby? Investigating social groups, closeness, and willingness to share
"... As ubiquitous computing becomes increasingly mobile and social, personal information sharing will likely increase in frequency, the variety of friends to share with, and range of information that can be shared. Past work has identified that whom you share with is important for choosing whether or no ..."
Abstract
-
Cited by 32 (6 self)
- Add to MetaCart
(Show Context)
Abstract: As ubiquitous computing becomes increasingly mobile and social, personal information sharing will likely increase in frequency, in the variety of friends to share with, and in the range of information that can be shared. Past work has identified that whom you share with is important for choosing whether or not to share, but little work has explored which features of interpersonal relationships influence sharing. We present the results of a study of 42 participants, who self-reported aspects of their relationships with 70 of their friends, including frequency of collocation and communication, closeness, and social group. Participants rated their willingness to share in 21 different scenarios based on information a UbiComp system could provide. Our findings show that (a) self-reported closeness is the strongest indicator of willingness to share, (b) individuals are more likely to share in scenarios with common information (e.g., we are within one mile of each other) than other kinds of scenarios (e.g., my location wherever I am), and (c) frequency of communication predicts both closeness and willingness to share better than frequency of collocation.
Author Keywords: Privacy, social networking, relationships, tie strength
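As a toy illustration of the comparison behind finding (c), one might correlate each self-reported relationship feature with willingness to share. The ratings below are invented, and this is not the study's actual analysis.

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Invented self-report data: each position is one friend, rated 1-7.
closeness   = [7, 6, 2, 5, 1, 4, 3]
comm_freq   = [6, 7, 1, 5, 2, 4, 2]   # how often they communicate
colloc_freq = [3, 2, 4, 5, 1, 6, 2]   # how often they are collocated
willingness = [7, 6, 2, 5, 1, 5, 3]   # willingness to share location

for name, feature in [("closeness", closeness),
                      ("communication", comm_freq),
                      ("collocation", colloc_freq)]:
    print(f"{name:>13}: r = {correlation(feature, willingness):+.2f}")
```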
Access Control for Home Data Sharing: Attitudes, Needs and Practices
2009. Cited by 31 (11 self).
Abstract: As digital content becomes more prevalent in the home, non-technical users are increasingly interested in sharing that content with others and accessing it from multiple devices. Not much is known about how these users think about controlling access to this data. To better understand this, we conducted semi-structured, in-situ interviews with 33 users in 15 households. We found that users create ad-hoc access-control mechanisms that do not always work; that their ideal policies are complex and multi-dimensional; that a priori policy specification is often insufficient; and that people's mental models of access control and security are often misaligned with current systems. We detail these findings and present a set of associated guidelines for designing usable access-control systems for the home environment.
How Users Use Access Control
"... Existing technologies for file sharing differ widely in the granularity of control they give users over who can access their data; achieving finer-grained control generally requires more user effort. We want to understand what level of control users need over their data, by examining what sorts of a ..."
Abstract
-
Cited by 26 (0 self)
- Add to MetaCart
(Show Context)
Abstract: Existing technologies for file sharing differ widely in the granularity of control they give users over who can access their data; achieving finer-grained control generally requires more user effort. We want to understand what level of control users need over their data, by examining what sorts of access policies users actually create in practice. We used automated data mining techniques to examine the real-world use of access control features present in standard document sharing systems in a corporate environment as used over a long (>10-year) time span. We find that while users rarely need to change access policies, the policies they do express are actually quite complex. We also find that users participate in larger numbers of access control and email sharing groups than measured by self-report in previous studies. We hypothesize that much of this complexity might be reduced by considering these policies as examples of simpler access control patterns. From our analysis of what access control features are used and where errors are made, we propose a set of design guidelines for access control systems themselves and the tools used to manage them, intended to increase usability and decrease error.
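The abstract does not describe the mining pipeline, but its closing hypothesis (that many complex policies are instances of a few recurring patterns) can be pictured with a small sketch: canonicalize each file's ACL and count how many distinct patterns appear. The paths, principals, and permissions below are invented for illustration.

```python
from collections import Counter

def acl_pattern(acl):
    """Canonicalize an access-control list into a hashable 'pattern':
    a sorted tuple of (principal, permission) pairs."""
    return tuple(sorted((principal, perm) for principal, perm in acl))

def summarize_policies(file_acls):
    """Count how many distinct ACL patterns cover a collection of files.
    file_acls maps a file path to its ACL, a list of (principal, permission) pairs."""
    counts = Counter(acl_pattern(acl) for acl in file_acls.values())
    return counts.most_common()

# Invented example: two files share one pattern, one file has its own.
file_acls = {
    "/proj/spec.doc":  [("team-eng", "read"), ("alice", "write")],
    "/proj/notes.doc": [("alice", "write"), ("team-eng", "read")],
    "/hr/review.doc":  [("bob", "read")],
}
for pattern, n in summarize_policies(file_acls):
    print(n, pattern)
```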
A utility-theoretic approach to privacy and personalization.
In AAAI, 2008. Cited by 25 (8 self).
Abstract: Online services such as web search, news portals, and e-commerce applications face the challenge of providing high-quality experiences to a large, heterogeneous user base. Recent efforts have highlighted the potential to improve performance by personalizing services based on special knowledge about users. For example, a user's location, demographics, and search and browsing history may be useful in enhancing the results offered in response to web search queries. However, reasonable concerns about privacy among users, providers, and government agencies acting on behalf of citizens may limit access to such information. We introduce and explore an economics of privacy in personalization, where people can opt to share personal information in return for enhancements in the quality of an online service. We focus on the example of web search and formulate realistic objective functions for search efficacy and privacy. We demonstrate how we can identify a near-optimal solution to the utility-privacy tradeoff. We evaluate the methodology on data drawn from a log of the search activity of volunteer participants. We separately assess users' preferences about privacy and utility via a large-scale survey, aimed at eliciting people's willingness to trade the sharing of personal data in return for gains in search efficiency. We show that a significant level of personalization can be achieved using only a small amount of information about users.
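One simple way to picture the utility-privacy tradeoff the abstract describes is as subset selection over personal attributes: maximize assumed search-quality gains minus a weighted privacy cost. The sketch below brute-forces that objective on a tiny, invented attribute set; it is not the paper's formulation, which addresses much larger spaces with near-optimal approximations.

```python
from itertools import combinations

# Hypothetical attributes a user might share, with assumed (utility gain, privacy cost).
ATTRIBUTES = {
    "coarse_location": (0.30, 0.10),
    "exact_location":  (0.35, 0.60),
    "search_history":  (0.50, 0.50),
    "demographics":    (0.15, 0.20),
}

def best_disclosure(attributes, privacy_weight=1.0):
    """Score every subset of attributes and return the one that maximizes
    total utility minus weighted privacy cost. Exhaustive search is feasible
    here only because the attribute set is tiny."""
    names = list(attributes)
    best, best_score = (), float("-inf")
    for k in range(len(names) + 1):
        for subset in combinations(names, k):
            utility = sum(attributes[a][0] for a in subset)
            cost = sum(attributes[a][1] for a in subset)
            score = utility - privacy_weight * cost
            if score > best_score:
                best, best_score = subset, score
    return best, best_score

print(best_disclosure(ATTRIBUTES, privacy_weight=1.0))
```

Raising privacy_weight models a more privacy-sensitive user and shrinks the disclosed set; lowering it favors personalization.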
ReGroup: Interactive Machine Learning for On-Demand Group Creation in Social Networks
In Proceedings of CHI 2012, 2012. Cited by 25 (1 self).
Abstract: We present ReGroup, a novel end-user interactive machine learning system for helping people create custom, on-demand groups in online social networks. As a person adds members to a group, ReGroup iteratively learns a probabilistic model of group membership specific to that group. ReGroup then uses its currently learned model to suggest additional members and group characteristics for filtering. Our evaluation shows that ReGroup is effective for helping people create large and varied groups, whereas traditional methods (searching by name or selecting from an alphabetical list) are better suited for small groups whose members can be easily recalled by name. By facilitating on-demand group creation, ReGroup can enable in-context sharing and potentially encourage better online privacy practices. In addition, applying interactive machine learning to social network group creation introduces several challenges for designing effective end-user interaction with machine learning. We identify these challenges and discuss how we address them in ReGroup.
Author Keywords: Interactive machine learning, social network group creation, access control lists, example and feature-based interaction.
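A minimal sketch of the interaction loop the abstract describes: each time the user adds a member, re-score the remaining friends by how well their profile features match the current group and surface the best candidates. The feature-overlap score and the friend data below are invented stand-ins for ReGroup's learned probabilistic model.

```python
def suggest_members(friends, group, top_k=3):
    """Score each friend not yet in the group by how many of their features
    (e.g. workplace, school, city) also appear among current group members,
    then return the highest-scoring candidates."""
    group_features = [f for name in group for f in friends[name]]

    def score(name):
        return sum(group_features.count(f) for f in friends[name])

    candidates = [n for n in friends if n not in group]
    return sorted(candidates, key=score, reverse=True)[:top_k]

# Invented friends and profile features for the example.
friends = {
    "ana":   {"works:lab", "city:seattle"},
    "bo":    {"works:lab", "school:uw"},
    "carol": {"city:portland", "school:uw"},
    "dee":   {"city:seattle", "works:lab"},
}

group = ["ana"]                          # the user starts a group by adding ana
print(suggest_members(friends, group))   # e.g. ['dee', 'bo', 'carol']
group.append("dee")                      # the user accepts a suggestion
print(suggest_members(friends, group))   # scores update with the new member
```

Each accepted suggestion changes the scores, which is what makes the loop interactive.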
Privacy considerations in awareness systems: designing with privacy in mind
In Awareness Systems, ser. Human-Computer Interaction Series, 2009.
A Utility-theoretic Approach to Privacy in Online Services
2010. Cited by 14 (2 self).
Abstract: Online offerings such as web search, news portals, and e-commerce applications face the challenge of providing high-quality service to a large, heterogeneous user base. Recent efforts have highlighted the potential to improve performance by introducing methods to personalize services based on special knowledge about users and their context. For example, a user's demographics, location, and past search and browsing may be useful in enhancing the results offered in response to web search queries. However, reasonable concerns about privacy among users, providers, and government agencies acting on behalf of citizens may limit access by services to such information. We introduce and explore an economics of privacy in personalization, where people can opt to share personal information, in a standing or on-demand manner, in return for expected enhancements in the quality of an online service. We focus on the example of web search and formulate realistic objective functions for search efficacy and privacy. We demonstrate how we can find a provably near-optimal solution to the utility-privacy tradeoff in an efficient manner. We evaluate our methodology on data drawn from a log of the search activity of volunteer participants. We separately assess users' preferences about privacy and utility via a large-scale survey, aimed at eliciting people's willingness to trade the sharing of personal data in return for gains in search efficiency. We show that a significant level of personalization can be achieved using a relatively small amount of information about users.
The post that wasn't: exploring self-censorship on Facebook
In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW), 2013. Cited by 13 (2 self).
Abstract: Social networking site users must decide what content to share and with whom. Many social networks, including Facebook, provide tools that allow users to selectively share content or block people from viewing content. However, sometimes instead of targeting a particular audience, users will self-censor, or choose not to share. We report the results from an 18-participant user study designed to explore self-censorship behavior as well as the subset of unshared content participants would have potentially shared if they could have specifically targeted desired audiences. We asked participants to report all content they thought about sharing but decided not to share on Facebook, and we interviewed participants about why they made sharing decisions and with whom they would have liked to share or not share. Participants reported that they would have shared approximately half the unshared content if they had been able to exactly target their desired audiences.