AI-driven public services and the privacy paradox: do citizens really care about their privacy?

Paper by: "Based on privacy calculus theory, we derive hypotheses on the role of perceived usefulness and privacy risks of artificial intelligence (AI) in public services. In a representative vignette experiment (n = 1,048), we asked citizens whether they would download a mobile app to interact with an AI-driven public service. Despite general concerns about privacy, we find that citizens are not sensitive to the amount of personal information they must share, nor to a more anthropomorphic interface. Our results confirm the privacy paradox, which we frame within the literature on the government’s role in safeguarding ethical principles, including citizens’ privacy…(More)".