What if we all had a personal assistant advising us, learning our privacy preferences, and aiding us in managing control over our data?

The Internet of Things (IoT) and Big Data are making it impractical for people to keep up with the many different ways in which their data can be collected and processed. What is needed is a new, more scalable paradigm that empowers users to regain appropriate control over their data. We envision personalized privacy assistants as intelligent agents capable of learning the privacy preferences of their users over time, semi-automatically configuring many settings, and making many privacy decisions on their behalf. Through targeted interactions, privacy assistants will help their users better appreciate the ramifications of how their data is processed, and empower them to control that processing in an intuitive and effective manner. This includes selectively alerting users about practices they may not feel comfortable with, asking users to confirm privacy settings the assistant is unsure how to configure, refining models of users' preferences over time, and occasionally nudging users to carefully (re)consider the implications of some of their privacy decisions. Ultimately, these assistants will learn our preferences and help us manage our privacy settings effectively across a wide range of devices and environments without the need for frequent interruptions.

Our project combines multiple research strands, each focusing on complementary research questions and elements of functionality. Our work is driven by user-centered design processes that translate personal privacy preference models, transparency mechanisms, and dialog primitives into personalized privacy assistant functionality. Lab experiments and pilot studies help us evaluate and refine this functionality.

Modeling and Learning People’s Privacy Preferences

We are developing user-oriented machine learning techniques to model people's privacy preferences and expectations. These models help users manage an otherwise unmanageable number of privacy decisions, including by recommending or semi-automating the configuration of many privacy settings for individual users.
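
To make this concrete, the sketch below clusters users' permission decisions into a small number of privacy "profiles" and uses a new user's closest profile to recommend their undecided settings. It is a minimal illustration of the profile-based idea, not our actual pipeline: the feature encoding, cluster count, and data are invented for the example.

```python
# Minimal sketch: cluster users into privacy "profiles" from their
# permission decisions, then fill in a new user's undecided settings
# from their nearest profile. Encoding and data are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

# Rows: users. Columns: (app category, permission) pairs.
# +1 = allow, -1 = deny, 0 = no decision recorded yet.
decisions = np.array([
    [+1, -1, -1, +1],   # e.g. (social, location), (social, contacts), ...
    [+1, -1, -1, +1],
    [-1, -1, -1, -1],   # a privacy-conservative user
    [+1, +1, +1, +1],   # a permissive user
])

profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit(decisions)

def recommend(partial_decisions: np.ndarray) -> np.ndarray:
    """Recommend settings for a user from their nearest profile.

    Undecided entries (0) are filled with the sign of the profile
    centroid; entries the user already decided are left untouched.
    """
    profile_id = profiles.predict(partial_decisions.reshape(1, -1))[0]
    centroid = profiles.cluster_centers_[profile_id]
    filled = partial_decisions.astype(float)
    undecided = partial_decisions == 0
    filled[undecided] = np.sign(centroid[undecided])
    return filled

# A new user who has only decided two of the four settings:
print(recommend(np.array([+1, 0, 0, +1])))
```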

Dialogs with Users, including Privacy Nudges

We are exploring the merits of different modes of interaction, interaction primitives, and interaction styles. As we move toward Internet of Things scenarios, personalized privacy assistants will have to be increasingly parsimonious and effective in how they interact with their users. This includes accommodating a wide range of contextual factors that affect the availability and effectiveness of different forms of communication, as well as studying the impact of different solutions on users' privacy decision making and, more generally, on their behavior. What does it take to get a user's attention? How much information is too much? When is the best time to interact with the user? Which mode of interaction is most effective in a given context? How does one nudge users to carefully weigh the privacy-utility tradeoffs associated with their decisions?
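
As a toy illustration of how contextual factors might gate these interactions, the sketch below selects an interaction mode from a handful of context attributes. The attributes, modes, and thresholds are all hypothetical; a deployed assistant would learn and personalize such a policy rather than hard-code it.

```python
# Hypothetical sketch: pick an interaction mode for a privacy nudge
# based on contextual factors. Attributes, modes, and thresholds are
# invented for illustration; real assistants would learn these.
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    FULL_DIALOG = auto()     # interactive screen asking for a decision
    NOTIFICATION = auto()    # one-line alert the user can dismiss
    DIGEST = auto()          # batched into a later summary
    SUPPRESS = auto()        # not worth the interruption

@dataclass
class Context:
    user_is_driving: bool
    screen_active: bool
    nudges_shown_today: int
    severity: float          # 0.0 (benign) .. 1.0 (highly sensitive)

def choose_mode(ctx: Context, daily_budget: int = 3) -> Mode:
    if ctx.user_is_driving:
        return Mode.DIGEST                 # never interrupt while driving
    if ctx.severity < 0.2:
        return Mode.SUPPRESS               # too minor to surface at all
    if ctx.nudges_shown_today >= daily_budget and ctx.severity < 0.8:
        return Mode.DIGEST                 # stay parsimonious: batch it
    if ctx.screen_active and ctx.severity >= 0.8:
        return Mode.FULL_DIALOG            # important and user is engaged
    return Mode.NOTIFICATION

print(choose_mode(Context(False, True, 1, 0.9)))  # Mode.FULL_DIALOG
```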

Transparency Mechanisms for Big Data

We are developing transparency mechanisms for big data systems to inform users about the data use practices of data holders. This includes identifying what data holders can infer from the data they collect and how they use the results of those inferences. This analysis can also be used to help people better appreciate the ramifications of their privacy decisions.
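
One concrete instance is the Quantitative Input Influence work cited below (Datta, Sen, and Zick, 2016). The following simplified sketch captures its core intuition: estimate how much a single input influences a model's decisions by randomizing that input and measuring how often the output changes. The model and data here are synthetic stand-ins, and permutation is used as a simple substitute for QII's intervention distributions.

```python
# Simplified sketch in the spirit of Quantitative Input Influence:
# estimate the influence of one feature by randomizing it and
# measuring how often the model's decision flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # feature 0 dominates
model = LogisticRegression().fit(X, y)

def unary_influence(model, X: np.ndarray, feature: int, rounds: int = 20) -> float:
    """Fraction of decisions that flip when `feature` is randomized."""
    base = model.predict(X)
    flips = 0.0
    for _ in range(rounds):
        Xr = X.copy()
        Xr[:, feature] = rng.permutation(Xr[:, feature])  # break the link
        flips += np.mean(model.predict(Xr) != base)
    return flips / rounds

for f in range(X.shape[1]):
    print(f"feature {f}: influence {unary_influence(model, X, f):.3f}")
```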

Modeling Privacy Policy Constructs and Restrictions

We are developing an architecture and elements of infrastructure to support the deployment of personalized privacy assistants across different mobile and Internet of Things (IoT) scenarios. This includes identifying an extensible collection of privacy constructs that IoT resource owners can use to describe the data collection, use, and sharing practices associated with their resources (e.g., sensors, applications, services) in a machine-readable manner. These constructs can then be interpreted by personalized privacy assistants and selectively communicated to their users.
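
As a rough illustration, the sketch below shows what such a machine-readable description might look like for a single IoT resource, together with a simple check an assistant could run against a user's preferences. The field names, vocabulary of purposes, and preference structure are all hypothetical; a real deployment would rely on a standardized schema.

```python
# Hypothetical machine-readable description of an IoT resource's data
# practices, and a simple assistant-side check against user preferences.
# Field names and the vocabulary of purposes are invented for this example.
camera_registration = {
    "resource_id": "lobby-camera-3",
    "data_collected": ["video", "presence"],
    "purposes": ["security", "analytics"],
    "retention_days": 30,
    "shared_with": ["building_management"],
    "opt_out_supported": True,
}

user_preferences = {
    # Purposes this user has indicated they are comfortable with.
    "acceptable_purposes": {"security"},
    "max_retention_days": 14,
}

def practices_to_flag(registration: dict, prefs: dict) -> list[str]:
    """Return the practices the assistant should surface to the user."""
    flags = []
    unexpected = set(registration["purposes"]) - prefs["acceptable_purposes"]
    if unexpected:
        flags.append(f"data used for: {sorted(unexpected)}")
    if registration["retention_days"] > prefs["max_retention_days"]:
        flags.append(f"retained {registration['retention_days']} days")
    return flags

print(practices_to_flag(camera_registration, user_preferences))
# -> ["data used for: ['analytics']", 'retained 30 days']
```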

Project Publications

B. Liu, M.S. Andersen, F. Schaub, H. Almuhimedi, S. Zhang, N. Sadeh, A. Acquisti, and Y. Agarwal, "Follow My Recommendations: A Personalized Assistant for Mobile App Permissions", Symposium on Usable Privacy and Security (SOUPS '16), Jun 2016, Denver, CO [pdf]

N. Sadeh, "Personalized Privacy Assistants: From Android to the Internet of Things", Presentation at FTC PrivacyCon, Jan 2016 [link]

A. Rao, F. Schaub, N. Sadeh, A. Acquisti, and R. Kang, "Expecting the Unexpected: Understanding Mismatched Privacy Expectations Online", Symposium on Usable Privacy and Security (SOUPS '16), Jun 2016, Denver, CO [pdf]

J. Gluck, F. Schaub, A. Friedman, H. Habib, N. Sadeh, L.F. Cranor, and Y. Agarwal, "How Short is Too Short? Implications of Length and Framing on the Effectiveness of Privacy Notices", Symposium on Usable Privacy and Security (SOUPS '16), Jun 2016, Denver, CO [pdf]

A. Datta, S. Sen, and Y. Zick, "Algorithmic Transparency via Quantitative Input Influence", in Proceedings of the 37th IEEE Symposium on Security and Privacy, May 2016 [pdf]

J. Lin, B. Liu, N. Sadeh, and J.I. Hong, "Modeling Users' Mobile App Privacy Preferences: Restoring Usability in a Sea of Permission Settings", Symposium on Usable Privacy and Security (SOUPS 2014), Jul 2014 [pdf]

B. Liu, J. Lin, and N. Sadeh, "Reconciling Mobile App Privacy and Usability on Smartphones: Could User Privacy Profiles Help?", 23rd International Conference on the World Wide Web (WWW 2014), Apr 2014 [pdf]

H. Almuhimedi, F. Schaub, N. Sadeh, Y. Agarwal, A. Acquisti, I. Adjerid, J. Gluck, and L.F. Cranor, "Your Location Has Been Shared 5398 Times! A Field Study on Mobile Privacy Nudges", in Proc. CHI 2015, Apr 2015 [pdf]

Related Research

  • IoT Expedition
  • Usable and Secure Passwords