Self-interest and data protection drive the adoption and moral acceptability of big data technologies: A conjoint analysis approach


Paper by Rabia I. Kodapanakkal, Mark J. Brandt, Christoph Kogler, and Ilja van Beest: “Big data technologies have both benefits and costs which can influence their adoption and moral acceptability. Prior studies look at people’s evaluations in isolation without pitting costs and benefits against each other. We address this limitation with a conjoint experiment (N = 979), using six domains (criminal investigations, crime prevention, citizen scores, healthcare, banking, and employment), where we simultaneously test the relative influence of four factors (the status quo, outcome favorability, data sharing, and data protection) on decisions to adopt the technologies and on perceptions of their moral acceptability.

We present two key findings. (1) People adopt technologies more often when data is protected and when outcomes are favorable. They place equal or more importance on data protection in all domains except healthcare, where outcome favorability has the strongest influence. (2) Data protection is the strongest driver of moral acceptability in all domains except healthcare, where the strongest driver is outcome favorability. Additionally, sharing data lowers preference for all technologies, but has a relatively smaller influence. People do not show a status quo bias in the adoption of technologies. When evaluating moral acceptability, people show a status quo bias, but this is driven by the citizen scores domain. Differences across domains arise from differences in the magnitude of the effects, but the effects are in the same direction. Taken together, these results highlight that people are not always primarily driven by self-interest and do place importance on potential privacy violations. They also challenge the assumption that people generally prefer the status quo…(More)”.
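In a conjoint design like this, the relative influence of each attribute is typically estimated from the choice data itself. As a rough illustration only (not the authors' code), the following minimal Python sketch shows how average marginal component effects can be estimated for such attributes; the variable names, simulated effect sizes, and data are hypothetical:

```python
# Minimal sketch: estimating average marginal component effects (AMCEs)
# from conjoint choice data. Variable names and data are hypothetical;
# this is NOT the authors' analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # hypothetical number of profile evaluations

# Simulate dummy-coded attribute levels for each technology profile.
df = pd.DataFrame({
    "respondent": np.repeat(np.arange(n // 4), 4),   # each person rates 4 profiles
    "data_protected": rng.integers(0, 2, n),         # 1 = data is protected
    "data_shared": rng.integers(0, 2, n),            # 1 = data is shared with third parties
    "favorable_outcome": rng.integers(0, 2, n),      # 1 = outcome favors the respondent
    "status_quo": rng.integers(0, 2, n),             # 1 = technology is already in use
})

# Simulate adoption decisions with assumed effect sizes (illustration only).
p = 0.3 + 0.25 * df.data_protected - 0.10 * df.data_shared + 0.20 * df.favorable_outcome
df["adopt"] = rng.binomial(1, p.clip(0, 1))

# Linear probability model; coefficients approximate AMCEs.
# Standard errors are clustered by respondent because each person rates several profiles.
model = smf.ols(
    "adopt ~ data_protected + data_shared + favorable_outcome + status_quo",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent"]})
print(model.params)  # relative influence of each factor on adoption
```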

The Story of Goldilocks and Three Twitter’s APIs: A Pilot Study on Twitter Data Sources and Disclosure


Paper by Yoonsang Kim, Rachel Nordgren and Sherry Emery: “Public health and social science increasingly use Twitter for behavioral and marketing surveillance. However, few studies provide sufficient detail about Twitter data collection to allow direct comparisons between studies or to support replication.

The three primary application programming interfaces (APIs) of Twitter data sources are Streaming, Search, and Firehose. To date, no clear guidance exists about the advantages and limitations of each API, or about the comparability of the amount, content, and user accounts of tweets retrieved from each API. Such information is crucial to the validity, interpretation, and replicability of research findings.

This study examines whether tweets collected using the same search filters over the same time period, but calling different APIs, would retrieve comparable datasets. We collected tweets about anti-smoking, e-cigarettes, and tobacco using the aforementioned APIs. The retrieved tweets largely overlapped among the three APIs, but each also retrieved unique tweets, and the extent of overlap varied over time and by topic, resulting in different trends and potentially supporting diverging inferences. Researchers need to understand how different data sources can influence the amount, content, and user accounts of the data they retrieve from social media, in order to assess the implications of their choice of data source…(More)”.
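To illustrate the kind of comparison the study describes, here is a minimal sketch (not the authors' pipeline) for measuring how much two tweet collections gathered with the same filters overlap from day to day; the file names, field names, and timestamp format are assumptions:

```python
# Minimal sketch: quantifying overlap between tweet sets retrieved from two
# different Twitter data sources (e.g., Streaming vs. Search). File names and
# JSON fields are hypothetical; this is NOT the authors' pipeline.
import json
from collections import defaultdict

def load_tweet_ids(path):
    """Read one JSON object per line and return {date: set of tweet IDs}."""
    ids_by_day = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            tweet = json.loads(line)
            day = tweet["created_at"][:10]   # assumes an ISO-style timestamp
            ids_by_day[day].add(tweet["id"])
    return ids_by_day

def daily_overlap(streaming, search):
    """Per-day counts of unique tweets and Jaccard overlap between two collections."""
    report = {}
    for day in sorted(set(streaming) | set(search)):
        ids_a, ids_b = streaming.get(day, set()), search.get(day, set())
        union = ids_a | ids_b
        report[day] = {
            "streaming_only": len(ids_a - ids_b),
            "search_only": len(ids_b - ids_a),
            "jaccard": len(ids_a & ids_b) / len(union) if union else 0.0,
        }
    return report

# Example usage with hypothetical dump files:
# overlap = daily_overlap(load_tweet_ids("streaming.jsonl"), load_tweet_ids("search.jsonl"))
```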

How to Put the Data Subject's Sovereignty into Practice. Ethical Considerations and Governance Perspectives



Paper by Peter Dabrock: “Ethical considerations and governance approaches of AI are at a crossroads. Either one tries to convey the impression that one can bring back a status quo ante of our given “onlife” era, or one accepts the challenge of becoming responsibly involved in a digital world in which informational self-determination can no longer be safeguarded and fostered through the old-fashioned data protection principles of informed consent, purpose limitation, and data economy. The main focus of the talk is on how, under the given conditions of AI and machine learning, data sovereignty (interpreted as controllability [not control (!)] of the data subject over the use of her data throughout the entire data processing cycle) can be strengthened without hindering the innovation dynamics of the digital economy or the social cohesion of fully digitized societies. In order to put this approach into practice, the talk combines a presentation of the concept of data sovereignty put forward by the German Ethics Council with recent research trends in effectively applying the AI ethics principles of explainability and enforceability…(More)”.

Smart Village Technology


Book by Srikanta Patnaik, Siddhartha Sen and Magdi S. Mahmoud: “This book offers a transdisciplinary perspective on the concept of “smart villages.” Written by an authoritative group of scholars, it discusses various aspects that are essential to fostering the development of successful smart villages. Presenting cutting-edge technologies, such as big data and the Internet of Things, and showing how they have been successfully applied to promote rural development, it also addresses important policy and sustainability issues. As such, this book offers a timely snapshot of the state of the art in smart village research and practice…(More)”.

Realizing the Potential of AI Localism


Stefaan G. Verhulst and Mona Sloane at Project Syndicate: “Every new technology rides a wave from hype to dismay. But even by the usual standards, artificial intelligence has had a turbulent run. Is AI a society-renewing hero or a jobs-destroying villain? As always, the truth is not so categorical.

As a general-purpose technology, AI will be what we make of it, with its ultimate impact determined by the governance frameworks we build. As calls for new AI policies grow louder, there is an opportunity to shape the legal and regulatory infrastructure in ways that maximize AI’s benefits and limit its potential harms.

Until recently, AI governance has been discussed primarily at the national level. But most national AI strategies – particularly China’s – are focused on gaining or maintaining a competitive advantage globally. They are essentially business plans designed to attract investment and boost corporate competitiveness, usually with an added emphasis on enhancing national security.

This singular focus on competition has meant that framing rules and regulations for AI has been ignored. But cities are increasingly stepping into the void, with New York, Toronto, Dubai, Yokohama, and others serving as “laboratories” for governance innovation. Cities are experimenting with a range of policies, from bans on facial-recognition technology and certain other AI applications to the creation of data collaboratives. They are also making major investments in responsible AI research, localized high-potential tech ecosystems, and citizen-led initiatives.

This “AI localism” is in keeping with the broader trend in “New Localism,” as described by public-policy scholars Bruce Katz and the late Jeremy Nowak. Municipal and other local jurisdictions are increasingly taking it upon themselves to address a broad range of environmental, economic, and social challenges, and the domain of technology is no exception.

For example, New York, Seattle, and other cities have embraced what Ira Rubinstein of New York University calls “privacy localism,” by filling significant gaps in federal and state legislation, particularly when it comes to surveillance. Similarly, in the absence of a national or global broadband strategy, many cities have pursued “broadband localism,” by taking steps to bridge the service gap left by private-sector operators.

As a general approach to problem solving, localism offers both immediacy and proximity. Because it is managed within tightly defined geographic regions, it affords policymakers a better understanding of the tradeoffs involved. By calibrating algorithms and AI policies for local conditions, policymakers have a better chance of creating positive feedback loops that will result in greater effectiveness and accountability….(More)”.

Twitter might have a better read on floods than NOAA


Interview by Justine Calma: “Frustrated tweets led scientists to believe that tidal floods along the East Coast and Gulf Coast of the US are more annoying than official tide gauges suggest. Half a million geotagged tweets showed researchers that people were talking about disruptively high waters even when government gauges hadn’t recorded tide levels high enough to be considered a flood.

Capturing these reactions on social media can help authorities better understand and address the more subtle, insidious ways that climate change is playing out in peoples’ daily lives. Coastal flooding is becoming a bigger problem as sea levels rise, but a study published recently in the journal Nature Communications suggests that officials aren’t doing a great job of recording that.

The Verge spoke with Frances Moore, lead author of the new study and a professor at the University of California, Davis. This isn’t the first time that she’s turned to Twitter for her climate research. Her previous research also found that people tend to stop reacting to unusual weather after dealing with it for a while — sometimes in as little as two years. Similar data from Twitter has been used to study how people coped with earthquakes and hurricanes…(More)”.

NGOs embrace GDPR, but will it be used against them?


Report by Vera Franz et al: “When the world’s most comprehensive digital privacy law – the EU General Data Protection Regulation (GDPR) – took effect in May 2018, media and tech experts focused much of their attention on how corporations, who hold massive amounts of data, would be affected by the law.

This focus was understandable, but it left some important questions under-examined – specifically about non-profit organizations that operate in the public’s interest. How would non-governmental organizations (NGOs) be impacted? What does GDPR compliance mean in very practical terms for NGOs? What are the challenges they are facing? Could the GDPR be ‘weaponized’ against NGOs and, if so, how? What good compliance practices can be shared among non-profits?

Ben Hayes and Lucy Hannah from Data Protection Support & Management and I have examined these questions in detail and released our findings in this report.

Our key takeaway: GDPR compliance is an integral part of organisational resilience, and it requires resources and attention from NGO leaders, foundations and regulators to defend their organisations against attempts by governments and corporations to misuse the GDPR against them.

In a political climate where human rights and social justice groups are under increasing pressure, GDPR compliance needs to be given the attention it deserves by NGO leaders and funders. Lack of compliance will attract enforcement action by data protection regulators and create opportunities for retaliation by civil society adversaries.

At the same time, since the law came into force, we recognise that some NGOs have over-complied with the law, possibly diverting scarce resources and hampering operations.

For example, during our research, we discovered a small NGO that undertook an advanced and resource-intensive compliance process (a Data Protection Impact Assessment or DPIA) for all processing operations. DPIAs are only required for large-scale and high-risk processing of personal data. Yet this NGO, which holds very limited personal data and undertakes no marketing or outreach activities, engaged in this complex and time-consuming assessment because the organization was under enormous pressure from their government. They told us they “wanted to do everything possible to avoid attracting attention.”…

Our research also found that private companies, individuals and governments who oppose the work of an organisation have used GDPR to try to keep NGOs from publishing their work. To date, NGOs have successfully fought against this misuse of the law…(More)”.

The many perks of using critical consumer user data for social benefit


Sushant Kumar at LiveMint: “Business models that thrive on user data have created profitable global technology companies. For comparison, the combined market capitalization of just three tech companies, Google (Alphabet), Facebook and Amazon, is higher than the total market capitalization of all listed firms in India. Almost 98% of Facebook’s revenue and 84% of Alphabet’s come from serving targeted advertising powered by data collected from the users. No doubt, these tech companies provide valuable services to consumers. It is also true that profits are concentrated with private corporations, and that societal value for the contributors of data, that is, the users, can be much more significant….

In the existing economic construct, private firms are able to deploy top scientists and sophisticated analytical tools to collect data, derive value and monetize the insights.

Imagine if personalization at this scale was available for more meaningful outcomes, such as for administering personalized treatment for diabetes, recommending crop patterns, optimizing water management and providing access to credit to the unbanked. These socially beneficial applications of data can generate undisputedly massive value.

However, handling critical data with accountability to prevent misuse is a complex and expensive task. What’s more, private-sector players do not have any incentive to share the data they collect. These challenges can be resolved by setting up specialized entities that can manage data—collect, analyse, provide insights, manage consent and access rights. These entities would function as trusted intermediaries with a public purpose, and may be named “data stewards”….(More)”.

See also: http://datastewards.net/ and https://datacollaboratives.org/

Smarter government or data-driven disaster: the algorithms helping control local communities


Release by MuckRock: “What is the chance you, or your neighbor, will commit a crime? Should the government change a child’s bus route? Add more police to a neighborhood or take some away?

Everyday government decisions, from bus routes to policing, used to be based on limited information and human judgment. Governments now use the ability to collect and analyze hundreds of data points every day to automate many of their decisions.

Does handing government decisions over to algorithms save time and money? Can algorithms be fairer or less biased than human decision making? Do they make us safer? Automation and artificial intelligence could improve the notorious inefficiencies of government, or they could exacerbate existing errors in the data being used to power them.

MuckRock and the Rutgers Institute for Information Policy & Law (RIIPL) have compiled a collection of algorithms used in communities across the country to automate government decision-making.

Go right to the database.

We have also compiled policies and other guiding documents local governments use to make room for the future use of algorithms. You can find those as a project on DocumentCloud.

View policies on smart cities and technologies

These collections are a living resource and attempt to communally collect records and known instances of automated decision making in government….(More)”.

An Algorithm That Grants Freedom, or Takes It Away


Cade Metz and Adam Satariano at The New York Times: “…In Philadelphia, an algorithm created by a professor at the University of Pennsylvania has helped dictate the experience of probationers for at least five years.

The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.

Nearly every state in America has turned to this new sort of governance algorithm, according to the Electronic Privacy Information Center, a nonprofit dedicated to digital rights. Algorithm Watch, a watchdog in Berlin, has identified similar programs in at least 16 European countries.

As the practice spreads into new places and new parts of government, United Nations investigators, civil rights lawyers, labor unions and community organizers have been pushing back.

They are angered by a growing dependence on automated systems that are taking humans and transparency out of the process. It is often not clear how the systems are making their decisions. Is gender a factor? Age? ZIP code? It’s hard to say, since many states and countries have few rules requiring that algorithm-makers disclose their formulas.

They also worry that the biases — involving race, class and geography — of the people who create the algorithms are being baked into these systems, as ProPublica has reported. In San Jose, Calif., where an algorithm is used during arraignment hearings, an organization called Silicon Valley De-Bug interviews the family of each defendant, takes this personal information to each hearing and shares it with defenders as a kind of counterbalance to algorithms.

Two community organizers, the Media Mobilizing Project in Philadelphia and MediaJustice in Oakland, Calif., recently compiled a nationwide database of prediction algorithms. And Community Justice Exchange, a national organization that supports community organizers, is distributing a 50-page guide that advises organizers on how to confront the use of algorithms.

The algorithms are supposed to reduce the burden on understaffed agencies, cut government costs and — ideally — remove human bias. Opponents say governments haven’t shown much interest in learning what it means to take humans out of the decision making. A recent United Nations report warned that governments risked “stumbling zombie-like into a digital-welfare dystopia.”…(More)”.