Networked publics: multi-disciplinary perspectives on big policy issues


Special issue of Internet Policy Review edited by William Dutton: “…is the first to bring together the best policy-oriented papers presented at the annual conference of the Association of Internet Researchers (AoIR). This issue is anchored in the 2017 conference in Tartu, Estonia, which was organised around the theme of networked publics. The seven papers span issues concerning whether and how technology and policy are reshaping access to information, perspectives on privacy and security online, and social and legal perspectives on informed consent of internet users. As explained in the editorial to this issue, taken together, the contributions to this issue reflect the rise of new policy, regulatory and governance issues around the internet and social media, an ascendance of disciplinary perspectives in what is arguably an interdisciplinary field, and the value that theoretical perspectives from cultural studies, law and the social sciences can bring to internet policy research.

Editorial: Networked publics: multi-disciplinary perspectives on big policy issues
William H. Dutton, Michigan State University

Political topic-communities and their framing practices in the Dutch Twittersphere
Maranke Wieringa, Daniela van Geenen, Mirko Tobias Schäfer, & Ludo Gorzeman

Big crisis data: generality-singularity tensions
Karolin Eva Kappler

Cryptographic imaginaries and the networked public
Sarah Myers West

Not just one, but many ‘Rights to be Forgotten’
Geert Van Calster, Alejandro Gonzalez Arreaza, & Elsemiek Apers

What kind of cyber security? Theorising cyber security and mapping approaches
Laura Fichtner

Algorithmic governance and the need for consumer empowerment in data-driven markets
Stefan Larsson

Standard form contracts and a smart contract future
Kristin B. Cornelius

…(More)”.

Bringing The Public Back In: Can the Comment Process be Fixed?


Remarks of Commissioner Jessica Rosenworcel, US Federal Communications Commission: “…But what we are facing now does not reflect what has come before.  Because it is apparent the civic infrastructure we have for accepting public comment in the rulemaking process is not built for the digital age.  As the Administrative Conference of the United States acknowledges, while the basic framework for rulemaking from 1946 has stayed the same, “the technological landscape has evolved dramatically.”

Let’s call that an understatement.  Though this problem may seem small in the scheme of things, the impact is big.  Administrative decisions made in Washington affect so much of our day-to-day life.  They involve everything from internet openness to retirement planning to the availability of loans and the energy sources that power our homes and businesses.  So much of the decision making that affects our future takes place in the administrative state.

The American public deserves a fair shot at participating in these decisions.  Expert agencies are duty bound to hear from everyone, not just those who can afford to pay for expert lawyers and lobbyists.  The framework from the Administrative Procedure Act is designed to serve the public—by seeking their input—but increasingly they are getting shut out.  Our agency internet systems are ill-equipped to handle the mass automation and fraud that already is corrupting channels for public comment.  It’s only going to get worse.  The mechanization and weaponization of the comment-filing process has only just begun.

We need to do something about it.  Because ensuring the public has a say in what happens in Washington matters.  Because trust in public institutions matters.  A few months ago Edelman released its annual Trust Barometer and reported that only a third of Americans trust the government—a 14 percentage point decline from last year.

Fixing that decline is worth the effort.  We can start with finding ways that give all Americans—no matter who they are or where they live—a fighting chance at making Washington listen to what they think.

We can’t give in to the easy cynicism that results when our public channels are flooded with comments from dead people, stolen identities, batches of bogus filings, and commentary that originated from Russian e-mail addresses.  We can’t let this deluge of fake filings further delegitimize Washington decisions and erode public trust.

No one said digital age democracy was going to be easy.  But we’ve got to brace ourselves and strengthen our civic infrastructure to withstand what is underway.  This is true at regulatory agencies—and across our political landscape.  Because if you look for them you will find uneasy parallels between the flood of fake comments in regulatory proceedings and the barrage of posts on social media that was part of a conspicuous campaign to influence our last election.  There is a concerted effort to exploit our openness.  It deserves a concerted response….(More)”

Tending the Digital Commons: A Small Ethics toward the Future


Alan Jacobs at the Hedgehog Review: “Facebook is unlikely to shut down tomorrow; nor is Twitter, or Instagram, or any other major social network. But they could. And it would be a good exercise to reflect on the fact that, should any or all of them disappear, no user would have any legal or practical recourse….In the years since I became fully aware of the vulnerability of what the Internet likes to call my “content,” I have made some changes in how I live online. But I have also become increasingly convinced that this vulnerability raises wide-ranging questions that ought to be of general concern. Those of us who live much of our lives online are not faced here simply with matters of intellectual property; we need to confront significant choices about the world we will hand down to those who come after us. The complexities of social media ought to prompt deep reflection on what we all owe to the future, and how we might discharge this debt.

A New Kind of Responsibility

Hans Jonas was a German-born scholar who taught for many years at the New School for Social Research in New York City. He is best known for his 1958 book The Gnostic Religion, a pathbreaking study of Gnosticism that is still very much worth reading. Jonas was a philosopher whose interest in Gnosticism arose from certain questions raised by his mentor Martin Heidegger. Relatively late in his career, though he had repudiated Heidegger many years earlier for his Nazi sympathies, Jonas took up Heidegger’s interest in technology in an intriguing and important book called The Imperative of Responsibility….

What is required of a new ethics adequate to the challenge posed by our own technological powers? Jonas argues that the first priority is an expansion and complication of the notion of responsibility. Unlike our predecessors, we need always to be conscious of the effects of our actions on people we have never met and will never meet, because they are so far removed from us in space and time. Democratically elected governments can to some degree adapt to spatially extended responsibility, because our communications technologies link people who cannot meet face-to-face. But the chasm of time is far more difficult to overcome, and indeed our governments (democratic or otherwise) are all structured in such a way that the whole of their attention goes to the demands of the present, with scarcely a thought to be spared for the future. For Jonas, one of the questions we must face is this: “What force shall represent the future in the present?”

I want to reflect on Jonas’s challenge in relation to our digital technologies. And though this may seem remote from the emphasis on care for the natural world that Jonas came to be associated with, there is actually a common theme concerning our experiences within and responsibility for certain environmental conditions. What forces, not in natural ecology but in media ecology, can best represent the future in the present?…(More)”.

Harnessing the Twittersphere: How using social media can benefit government ethics offices


Ian Stedman in Canadian Public Administration: “Ethics commissioners who strive to be innovative may be able to exploit opportunities that have emerged as a result of growing public interest in issues of government ethics and transparency. This article explores how social media has driven greater public interest in political discourse, and I offer some suggestions for how government ethics offices can effectively harness the power of these social tools. I argue that, by thinking outside the box, ethics commissioners can take advantage of low‐cost opportunities to inform and empower the public in a way that propels government ethics forward without the need for legislative change….(More)”.


200,000 Volunteers Have Become the Fact Checkers of the Internet


Hanna Kozlowska and Heather Timmons, at Quartz/NextGov: “Founded in 2001, Wikipedia is on the verge of adulthood. It’s the world’s fifth-most popular website, with 46 million articles in 300 languages and fewer than 300 full-time employees. What makes it successful is the 200,000 volunteers who create it, said Katherine Maher, the executive director of the Wikimedia Foundation, the parent organization for Wikipedia and its sister sites.

Unlike other tech companies, Wikipedia has avoided accusations of major meddling from malicious actors to subvert elections around the world. Part of this is because of the site’s model, where the creation process is largely transparent, but it’s also thanks to its community of diligent editors who monitor the content…

Somewhat unwittingly, Wikipedia has become the internet’s fact-checker. Recently, both YouTube and Facebook started using the platform to show more context about videos or posts in order to curb the spread of disinformation—even though Wikipedia is crowd-sourced, and can be manipulated as well….

While no evidence of organized, widespread election-related manipulation on the platform has emerged so far, Wikipedia is not free of malicious actors, or people trying to grab control of the narrative. In Croatia, for instance, the local-language Wikipedia was completely taken over by right-wing ideologues several years ago.

The platform has also been battling the problem of “black-hat editing”— done surreptitiously by people who are trying to push a certain view—on the platform for years….

About 200,000 editors contribute to Wikimedia projects every month, and together with AI-powered bots they made a total of 39 million edits in February 2018. In the chart below, group-bots are bots approved by the community, which do routine maintenance on the site, such as looking for instances of vandalism. Name-bots are users who have “bot” in their name.

Like every other tech platform, Wikimedia is looking into how AI could help improve the site. “We are very interested in how AI can help us do things like evaluate the quality of articles, how deep and effective the citations are for a particular article, the relative neutrality of an article, the relative quality of an article,” said Maher. The organization would also like to use it to catch gaps in its content….(More)”.

Privacy’s Blueprint: The Battle to Control the Design of New Technologies


Book by Woodrow Hartzog: “Every day, Internet users interact with technologies designed to undermine their privacy. Social media apps, surveillance technologies, and the Internet of Things are all built in ways that make it hard to guard personal information. And the law says this is okay because it is up to users to protect themselves—even when the odds are deliberately stacked against them.

In Privacy’s Blueprint, Woodrow Hartzog pushes back against this state of affairs, arguing that the law should require software and hardware makers to respect privacy in the design of their products. Current legal doctrine treats technology as though it were value-neutral: only the user decides whether it functions for good or ill. But this is not so. As Hartzog explains, popular digital tools are designed to expose people and manipulate users into disclosing personal information.

Against the often self-serving optimism of Silicon Valley and the inertia of tech evangelism, Hartzog contends that privacy gains will come from better rules for products, not users. The current model of regulating use fosters exploitation. Privacy’s Blueprint aims to correct this by developing the theoretical underpinnings of a new kind of privacy law responsive to the way people actually perceive and use digital technologies. The law can demand encryption. It can prohibit malicious interfaces that deceive users and leave them vulnerable. It can require safeguards against abuses of biometric surveillance. It can, in short, make the technology itself worthy of our trust….(More)”.

The digital economy is disrupting our old models


Diane Coyle at The Financial Times: “One of the many episodes of culture shock I experienced as a British student in the US came when I first visited the university health centre. They gave me my medical notes to take away. Once I was over the surprise, I concluded this was entirely proper. After all, the true data was me, my body. I was reminded of this moment from the early 1980s when reflecting on the debate about Facebook and data, one of the collective conclusions of which seems to be that personal data are personal property so there need to be stronger rights of ownership. If I do not like what Facebook is doing with my data, I should be able to withdraw them. Yet this fix for the problem is not straightforward.

“My” data are inextricably linked with those of other people, who are in my photographs or in my network. Once the patterns and correlations have been extracted from it, withdrawing my underlying data is neither here nor there, for the value lies in the patterns. The social character of information can be seen from the recent example of Strava accidentally publishing maps of secret American military bases because the aggregated route data revealed all the service personnel were running around the edge of their camps. One or two withdrawals of personal data would have made no difference. To put it in economic jargon, we are in the territory of externalities and public goods. Information once shared cannot be unshared.

The digital economy is one of externalities and public goods to a far greater degree than in the past. We have not begun to get to grips with how to analyse it, still less to develop policies for the common good. There are two questions at the heart of the challenge: what norms and laws about property rights over intangibles such as data or ideas or algorithms are going to be needed? And what will the best balance between collective and individual actions be or, to put it another way, between government and market?

Tussles about rights over intangible or intellectual property have been going on for a while: patent trolls on the one hand, open source creators on the other. However, the issue is far from settled. Do we really want to accept, for example, that John Deere, in selling an expensive tractor to a farmer, is only in fact renting it out because it claims property rights over the installed software?

Free digital goods of the open source kind are being cross-subsidised by their creators’ other sources of income. Free digital goods of the social media kind are being funded by various advertising services — and that turns out to be an ugly solution. Yet the network effects are so strong, the benefits they provide so great, that if Facebook and Google were shut down by antitrust action tomorrow, replacement digital groups could well emerge before too long. China seems to be in effect nationalising its big digital platforms, but many in the west will find that even less appealing than a private data market. In short, neither “market” nor “state” looks like the right model for ownership and governance in an information economy pervaded by externalities and public goods. Finding alternative models for the creation and sharing of value in the digital world, when these are inherently collective and non-rival activities, is an urgent challenge….(More)”.

Can mobile phone traces help shed light on the spread of Zika in Colombia?


Daniela Perrotta at UN Global Pulse: “Nowadays, thanks to the continuous growth of transport infrastructure, millions of people travel around the world every day, creating more opportunities than ever before for infectious diseases to spread quickly and on a large scale. Already at the beginning of the last century, between 1918 and 1920, the special circumstances created during World War I, such as overcrowded camps and hospitals and soldiers packed into trenches or in transit every day, allowed the Spanish Flu to kill between 20 and 100 million people, more than the war itself, making it perhaps the most lethal pandemic in the history of humankind.

The question that then arises naturally is the following: what if an equally virulent and deadly virus were to hit today’s highly connected world, where nearly any point can be reached in less than a day’s journey?…

To overcome these limitations, more and more sources of data and innovative techniques are used to detect people’s physical movements over time, such as the digital traces generated by human activities on the Internet (e.g. Twitter, Flickr, Foursquare) or the footprints left by mobile phone users’ activity. In particular, cellular networks implicitly bring a large ensemble of details on human activity, incredibly helpful for capturing mobility patterns and providing a high-level picture of human mobility.

In this context, the Computational Epidemiology Lab at the ISI Foundation in Turin (Italy), in collaboration with UN Global Pulse, an innovation initiative of the United Nations, and Telefonica Research in Madrid (Spain), is currently investigating the human mobility patterns relevant to the epidemic spread of Zika at a local level, in Colombia, mainly focusing on the potential benefits of harnessing mobile phone data as a proxy for human movements. Specifically, mobile phone data are defined as the information elements contained in call detail records (CDRs) created by telecom operators for billing purposes and summarizing mobile subscribers’ activity, i.e. phone calls, text messages and data connections. Such “digital traces” are continuously collected by telecom providers and thus represent a relatively low-cost and endless source for identifying human movements at an unprecedented scale.
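The idea of turning raw CDRs into movement counts, as described above, can be sketched in a few lines. This is a minimal illustration only, not the study's actual pipeline: the record format, the use of a single location label per event, and the rule that any change of location between consecutive events counts as one movement are all simplifying assumptions.

```python
from collections import defaultdict

def trips_from_cdrs(records):
    """Count location-to-location transitions per subscriber from CDR-like events.

    records: iterable of (subscriber_id, timestamp, location) tuples, where
    'location' stands in for the cell tower (or municipality) handling the
    call, text message, or data connection.
    """
    # Group events by anonymized subscriber
    by_user = defaultdict(list)
    for user, ts, loc in records:
        by_user[user].append((ts, loc))

    # A change of location between consecutive events counts as one movement
    flows = defaultdict(int)
    for events in by_user.values():
        events.sort()  # chronological order
        for (_, origin), (_, destination) in zip(events, events[1:]):
            if origin != destination:
                flows[(origin, destination)] += 1
    return dict(flows)
```

Aggregating these per-subscriber transitions across millions of users yields an origin-destination matrix that can then be compared against census data or mobility models.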

In this study, more than two billion encrypted and anonymized calls made by around seven million mobile phone users in Colombia have been used to identify population movements across the country. To assess the value of the human mobility patterns derived from CDRs, the data is evaluated against more traditional methods: census data, which are considered a reference since they ideally represent the entire population of the country and its mobility features, and mobility models, i.e. the gravity model and the radiation model, which are the most commonly used today. In particular, the gravity model assumes that the number of trips increases with population size and decreases with distance, whereas the radiation model assumes that mobility depends on population density….(More)”.
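The two baseline models named in the excerpt have standard textbook forms, which can be sketched as follows. This is a generic illustration, not the calibration used in the study; the parameter values and variable names are assumptions.

```python
def gravity_flow(pop_i, pop_j, distance, k=1.0, alpha=2.0):
    """Gravity model: trips grow with the populations of origin and
    destination and decay with distance (here as a power law with
    exponent alpha; k is a fitted scaling constant)."""
    return k * pop_i * pop_j / distance ** alpha

def radiation_flow(pop_i, pop_j, s_ij, trips_from_i):
    """Radiation model: flow from i to j depends on the population
    density between them.

    s_ij is the total population living within a circle of radius
    d_ij centred on i, excluding the populations of i and j themselves;
    trips_from_i is the total number of trips originating at i.
    """
    return trips_from_i * (pop_i * pop_j) / (
        (pop_i + s_ij) * (pop_i + pop_j + s_ij)
    )
```

A notable design difference: the gravity model has free parameters (k, alpha) that must be fitted to observed flows, whereas the radiation model is parameter-free once populations and trip totals are known, which is one reason the two are commonly compared as baselines.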

Use our personal data for the common good


Hetan Shah at Nature: “Data science brings enormous potential for good — for example, to improve the delivery of public services, and even to track and fight modern slavery. No wonder researchers around the world — including members of my own organization, the Royal Statistical Society in London — have had their heads in their hands over headlines about how Facebook and the data-analytics company Cambridge Analytica might have handled personal data. We know that trustworthiness underpins public support for data innovation, and we have just seen what happens when that trust is lost….But how else might we ensure the use of data for the public good rather than for purely private gain?

Here are two proposals towards this goal.

First, governments should pass legislation to allow national statistical offices to gain anonymized access to large private-sector data sets under openly specified conditions. This provision was part of the United Kingdom’s Digital Economy Act last year and will improve the ability of the UK Office for National Statistics to assess the economy and society for the public interest.

My second proposal is inspired by the legacy of John Sulston, who died earlier this month. Sulston was known for his success in advocating for the Human Genome Project to be openly accessible to the science community, while a competitor sought to sequence the genome first and keep data proprietary.

Like Sulston, we should look for ways of making data available for the common interest. Intellectual-property rights expire after a fixed time period: what if, similarly, technology companies were allowed to use the data that they gather only for a limited period, say, five years? The data could then revert to a national charitable corporation that could provide access to certified researchers, who would both be held to account and be subject to scrutiny that ensures the data are used for the common good.

Technology companies would move from being data owners to becoming data stewards…(More)” (see also http://datacollaboratives.org/).