Stefaan Verhulst
Gov.uk blog: “If Tesco knows day-to-day how poorly the nation is, how can Government access similar insights so it can better plan health services? If Airbnb can give you a tailored service depending on your tastes, how can Government provide people with the right support to help them back into work in a way that is right for them? If companies are routinely using social media data to get feedback from their customers to improve their services, how can Government also use publicly available data to do the same?
Data science allows us to use new types of data and powerful tools to analyse it more quickly and more objectively than any human could. It can put us in the vanguard of policymaking – revealing new insights that lead to better and more tailored interventions. And it can help reduce costs, freeing up resources to spend on more serious cases.
But some of these data uses and machine-learning techniques are new and still relatively untested in Government. Of course, we operate within legal frameworks such as the Data Protection Act and Intellectual Property law. These are flexible but don’t always talk explicitly about the new challenges data science throws up. For example, how are you to explain the decision-making process of a deep-learning black-box algorithm? And if you were able to, how would you do so in plain English and not a row of 0s and 1s?
We want data scientists to feel confident to innovate with data, alongside the policy makers and operational staff who make daily decisions on the data that the analysts provide. That’s why we are creating an ethical framework which brings together the relevant parts of the law and ethical considerations into a simple document that helps Government officials decide what they can do and what they should do. We have a moral responsibility to maximise the use of data – which is never more apparent than after incidents of abuse or crime are left undetected – as well as to pay heed to the potential risks of these new tools. The guidelines are draft and not formal government policy, but we want to share them more widely in order to help iterate and improve them further….
So what’s in the framework? There is more detail in the fuller document, but it is based around six key principles:
- Start with a clear user need and public benefit: this will help you justify the level of data sensitivity and method you use
- Use the minimum level of data necessary to fulfil the public benefit: there are many techniques for doing so, such as de-identification, aggregation or querying against data (a brief illustrative sketch follows this excerpt)
- Build robust data science models: the model is only as good as the data it is built on, and while machines are less biased than humans, they can get it wrong. It’s critical to be clear about the confidence of the model and to think through unintended consequences and biases contained within the data
- Be alert to public perceptions: put simply, what would a normal person on the street think about the project?
- Be as open and accountable as possible: transparency is the antiseptic for unethical behaviour. Aim to be as open as possible (with explanations in plain English), although in certain public protection cases the ability to be transparent will be constrained.
- Keep data safe and secure: this is not restricted to data science projects but we know that the public are most concerned about losing control of their data….(More)”
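To make the data-minimisation principle above more concrete, here is a minimal sketch, assuming a toy pandas workflow, of what de-identification followed by aggregation might look like in practice. The column names, the postcode coarsening and the small-cell suppression threshold are illustrative assumptions, not anything prescribed by the framework itself.

```python
# Minimal sketch of data minimisation: de-identify record-level data,
# then publish only aggregated, suppressed counts.
import pandas as pd

# Toy record-level data: one row per person (fields are assumed for illustration).
records = pd.DataFrame({
    "name":     ["Ann", "Bob", "Cal", "Dee", "Eve"],
    "postcode": ["SW1A 1AA", "SW1A 1AA", "M1 1AE", "M1 1AE", "M1 1AE"],
    "age":      [34, 41, 29, 55, 62],
    "benefit_claimant": [True, False, True, True, True],
})

# De-identification: drop direct identifiers and coarsen quasi-identifiers
# (keep only the outward postcode, band the exact age).
deidentified = (
    records.drop(columns=["name"])
    .assign(
        postcode=records["postcode"].str.split().str[0],
        age_band=pd.cut(records["age"], bins=[0, 40, 60, 120],
                        labels=["<40", "40-59", "60+"]),
    )
    .drop(columns=["age"])
)

# Aggregation: publish counts per area rather than individual rows,
# suppressing small cells (an assumed threshold of 3) to reduce re-identification risk.
counts = (
    deidentified.groupby("postcode")["benefit_claimant"]
    .sum()
    .reset_index(name="claimants")
)
counts["claimants"] = counts["claimants"].mask(counts["claimants"] < 3)  # suppressed cells become NaN

print(counts)
```

Releasing only the suppressed, aggregated counts rather than the record-level table is one plausible way a project could demonstrate it is using the minimum level of data necessary for the stated public benefit.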
Gregory Asmolov at the Policy and Internet Blog: “My interest in the role of crowdsourcing tools and practices in emergency situations was triggered by my personal experience. In 2010 I was one of the co-founders of the Russian “Help Map” project, which facilitated volunteer-based response to wildfires in central Russia. When I was working on this project, I realized that a crowdsourcing platform can bring the participation of the citizen to a new level and transform sporadic initiatives by single citizens and groups into large-scale, relatively well coordinated operations. What was also important was that both the needs and the forms of participation required to address those needs were defined by the users themselves.
To some extent the citizen-based response filled the gap left by the lack of a sufficient response from the traditional institutions.[1] This suggests that the role of ICTs in disaster response should be examined within the political context of the power relationship between members of the public who use digital tools and the traditional institutions. My experience in 2010 was the first time I was able to see that, while we would expect that in a case of natural disaster both the authorities and the citizens would be mostly concerned about the emergency, the actual situation might be different.
Apparently, the emergence of independent, citizen-based collective action in response to a disaster was seen as a threat by the institutional actors. First, it was a threat to the image of these institutions, which didn’t want citizens to be portrayed as the leading responding actors. Second, any type of citizen-based collective action, even if not purely political, may be a particular concern in authoritarian countries. Accordingly, one can argue that, while citizens are struggling against a disaster, in some cases the traditional institutions may make substantial efforts to restrain and contain the action of citizens. In this light, the role of information technologies can include not only enhancing citizen engagement and increasing the efficiency of the response, but also controlling the digital crowd of potential volunteers.
The purpose of this paper was to conceptualize the tension between the role of ICTs in engaging the crowd and its resources, and their role in controlling the resources of the crowd. The research proposes a theoretical and methodological framework that allows us to explore this tension. The paper focuses on an analysis of specific platforms, presenting empirical data about the structure of those platforms along with interviews with their developers and administrators. These data are used to identify how tools of engagement are transformed into tools of control, and what the major differences are between platforms that seek to achieve these two goals. That said, any platform can obviously have properties of control and properties of engagement at the same time; however, the proportion of these two types of elements can differ significantly.
One of the core issues for my research is how traditional actors respond to fast, bottom-up innovation by citizens.[2] On the one hand, the authorities try to restrict the empowerment of citizens by the new tools. On the other hand, the institutional actors also seek to innovate and develop new tools that can restore the balance of power that has been challenged by citizen-based innovation. The tension between using digital tools for the engagement of the crowd and for control of the crowd can be considered one of the aspects of this dynamic.
That doesn’t mean that all state-backed platforms are created solely for the purpose of control. One can argue, however, that the development of digital tools that offer a mechanism of command and control over the resources of the crowd is prevalent among the projects that are supported by the authorities. This can also be approached as a means of using information technologies to include the digital crowd within the “vertical of power”, a top-down strategy of governance. That is why this paper seeks to conceptualize this phenomenon as “vertical crowdsourcing”.
The question of whether using a digital tool as a mechanism of control is intentional is to some extent secondary. What is important is that the analysis of platform structures, relying on activity theory, identifies a number of properties that allow us to argue that these tools are primarily tools of control. The conceptual framework introduced in the paper is used to follow the transformation of tools for the engagement of the crowd into tools of control over the crowd. That said, some of the interviews with the developers and administrators of the platforms may suggest that the development of tools of control was intentional, with crowd engagement a secondary concern….Read the full article: Asmolov, G. (2015) Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations.”
Book by Joanna Thornborrow: “The Discourse of Public Participation Media takes a fresh look at what ‘ordinary’ people are doing on air – what they say, and how and where they get to say it.
Using techniques of discourse analysis to explore the construction of participant identities in a range of different public participation genres, Joanna Thornborrow argues that the role of the ‘ordinary’ person in these media environments is frequently anything but.
Tracing the development of discourses of public participation media, the book focuses particularly on the 1990s onwards, when broadcasting was expanding rapidly: the rise of the TV talk show, increasing formats for public participation in broadcast debate and discussion, and the explosion of reality TV in the first decade of the 21st century. During this period, traditional broadcasting has also had to move with the times and incorporate mobile and web-based communication technologies as new platforms for public access and participation – text and email as well as the telephone – and an audience that moves out of the studio and into the online spaces of chat rooms, comment forums and the ‘twitterverse’.
This original study examines the shifting discourses of public engagement and participation resulting from these new forms of communication, making it an ideal companion for students of communication, media and cultural studies, media discourse, broadcast talk and social interaction….(More)”
Open Knowledge: “….This year’s Index showed impressive gains from non-OECD countries, with Taiwan topping the Index and Colombia and Uruguay breaking into the top ten, at fourth and seventh respectively. Overall, the Index evaluated 122 places and 1586 datasets and determined that only 9%, or 156 datasets, were both technically and legally open.
The Index ranks countries based on the availability and accessibility of data in thirteen key categories, including government spending, election results, procurement, and pollution levels. Over the summer, we held a public consultation, which saw contributions from individuals within the open data community as well as from key civil society organisations across an array of sectors. As a result of this consultation, we expanded the 2015 Index to include public procurement data, water quality data, land ownership data and weather data; we also decided to remove transport timetables due to the difficulties faced when comparing transport system data globally.
Open Knowledge International began to systematically track the release of open data by national governments in 2013, with the objective of measuring whether governments were releasing the key datasets of high social and democratic value as open data. This enables us to better understand the current state of play and in turn work with civil society actors to address the gaps in data release. Over the course of the last three years, the Global Open Data Index has become more than just a benchmark – we noticed that governments began to use the Index as a reference to inform their open data priorities, and civil society actors began to use the Index as an advocacy tool to encourage governments to improve their performance in releasing key datasets.
That said, indices such as the Global Open Data Index are not without their challenges. The Index measures the technical and legal openness of datasets deemed to be of critical democratic and social value – it does not measure the openness of a given government. It should be clear that the release of a few key datasets is not a sufficient measure of the openness of a government. The blurring of lines between open data and open government is nothing new and has been hotly debated by civil society groups and transparency organisations since the sharp rise in popularity of open data policies over the last decade. …Index at http://index.okfn.org/”
Paper by Johann Höchtl et al. in the Journal of Organizational Computing and Electronic Commerce: “Although of high relevance to political science, the interaction between technological change and political change in the era of Big Data remains somewhat of a neglected topic. Most studies focus on the concepts of e-government and e-governance, and on how already existing government activities performed through the bureaucratic body of public administration could be improved by technology. This paper attempts to build a bridge between the field of e-governance and theories of public administration that goes beyond the service delivery approach that dominates a large part of e-government research. Using the policy cycle as a generic model for policy processes and policy development, a new look at how policy decision making could be conducted on the basis of ICT and Big Data is presented in this paper….(More)”
Homero Gil de Zúñiga at Social Science Computer Review: “This special issue of the Social Science Computer Review provides a sample of the latest strategies employing large data sets in social media and political communication research. The proliferation of information communication technologies, social media, and the Internet, alongside the ubiquity of high-performance computing and storage technologies, has ushered in the era of computational social science. However, in no way does the use of “big data” represent a standardized area of inquiry in any field. This article briefly summarizes pressing issues when employing big data for political communication research. Major challenges remain to ensure the validity and generalizability of findings. Strong theoretical arguments are still a central part of conducting meaningful research. In addition, ethical practices concerning how data are collected remain an area of open discussion. The article surveys studies that offer unique and creative ways to combine methods and introduce new tools while at the same time address some solutions to ethical questions….(More)”
Nathan Collins in Pacific Standard: “When you think of meaningful political action, you probably think of the March on Washington for Jobs and Freedom, or perhaps ACT-UP‘s 1990 protests in San Francisco. You probably don’t think of clicking “like” or “share” on Facetwitstagram—though a new study suggests that those likes and shares may be just as important as marching in the streets, singing songs, and carrying signs.
“The efficacy of online networks in disseminating timely information has been praised by many commentators; at the same time, users are often derided as ‘slacktivists’ because of the shallow commitment involved in clicking a forwarding button,” writes a team led by Pablo Barberá, a political scientist at New York University, in the journal PLoS One.
In other words, it’s easy to argue that sharing a post about climate change and whatnot has no value, since it involves no sacrifice—no standoffs with angry police, no going to jail over taxes you didn’t pay because you opposed the Mexican-American War, not even lost shoes.
On the other hand, maybe sacrifice isn’t the point. Maybe it’s getting attention, and, Barberá and colleagues suggest, slacktivism is actually pretty good at that part—a consequence of just how easy it is to spread the word with the click of a mouse.
The team reached that conclusion after analyzing tens of millions of tweets sent by nearly three million users during the May 2013 anti-government protests in Gezi Park, Istanbul. Among other things, the team identified which tweets were originals rather than retweets, who retweeted whom, and how many followers each user had. That meant Barberá and his team could identify not only how information flowed within the network of protesters, but also how many people that information reached.
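For readers curious what that kind of analysis involves, here is a rough sketch, assuming simplified tweet metadata (author, retweet flag, retweeted author, follower count), of how one might separate a “core” of original posters from a retweeting “periphery” and estimate potential reach. It is not the authors’ actual pipeline, and the fields and threshold are assumptions for illustration only.

```python
# Rough sketch: split users into core vs. periphery from tweet metadata
# and estimate how many followers the messages could reach.
from collections import Counter

tweets = [
    # (author, is_retweet, retweeted_author, author_follower_count) -- toy data
    ("core_user_1",   False, None,          12000),
    ("core_user_1",   False, None,          12000),
    ("periph_user_1", True,  "core_user_1",   150),
    ("periph_user_2", True,  "core_user_1",    80),
    ("core_user_2",   False, None,           9000),
    ("periph_user_3", True,  "core_user_2",   200),
]

# Count original tweets and retweets per author.
original_counts = Counter(author for author, is_rt, _, _ in tweets if not is_rt)
retweet_counts  = Counter(author for author, is_rt, _, _ in tweets if is_rt)

# Crude split: users who post more originals than retweets form the "core";
# everyone else falls into the less-active "periphery".
core      = {u for u in original_counts if original_counts[u] > retweet_counts.get(u, 0)}
periphery = {author for author, *_ in tweets} - core

# Potential reach: followers of everyone who posted or retweeted the message.
followers = {author: n for author, _, _, n in tweets}
reach = sum(followers[u] for u in core | periphery)

print(f"core: {sorted(core)}")
print(f"periphery: {sorted(periphery)}")
print(f"estimated potential reach: {reach} followers")
```

Even this toy version shows why the periphery matters: most of the original content comes from a few core accounts, but the retweeting periphery multiplies the audience that content can reach.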
Most original tweets came from a relatively small group of protesters using hashtags such as #gezipark, suggesting that information flowed from a core group of protesters toward a less-active periphery. Geographic data backed that up: Around 18 percent of core tweeters were physically present for the Gezi Park demonstrations, compared to a quarter of a percent of peripheral tweeters…..(More)”
Farhad Manjoo in the New York Times: “Donald J. Trump and Hillary Clinton said this week that we should think about shutting down parts of the Internet to stop terrorist groups from inspiring and recruiting followers in distant lands. Mr. Trump even suggested an expert who’d be perfect for the job: “We have to go see Bill Gates and a lot of different people that really understand what’s happening, and we have to talk to them — maybe, in certain areas, closing that Internet up in some way,” he said on Monday in South Carolina.
Many online responded to Mr. Trump and Mrs. Clinton with jeers, pointing out both constitutional and technical limits to their plans. Mr. Gates, the Microsoft co-founder who now spends much of his time on philanthropy, has as much power to close down the Internet as he does to fix Mr. Trump’s hair.
Yet I had a different reaction to Mr. Trump and Mrs. Clinton’s fantasy of a world in which you could just shut down parts of the Internet that you didn’t like: Sure, it’s impossible, but just imagine if we could do it, just for a bit. Wouldn’t it have been kind of a pleasant dream world, in these overheated last few weeks, to have lived free of social media?
Hear me out. If you’ve logged on to Twitter and Facebook in the waning weeks of 2015, you’ve surely noticed that the Internet now seems to be on constant boil. Your social feed has always been loud, shrill, reflexive and ugly, but this year everything has been turned up to 11. The Islamic State’s use of the Internet is perhaps only the most dangerous manifestation of what, this year, became an inescapable fact of online life: The extremists of all stripes are ascendant, and just about everywhere you look, much of the Internet is terrible.
“The academic in me says that discourse norms have shifted,” said Susan Benesch, a faculty associate at Harvard’s Berkman Center for Internet & Society and the director of the Dangerous Speech Project, an effort to study speech that leads to violence. “It’s become so common to figuratively walk through garbage and violent imagery online that people have accepted it in a way. And it’s become so noisy that you have to shout more loudly, and more shockingly, to be heard.”
You might argue that the angst online is merely a reflection of the news. Terrorism, intractable warfare, mass shootings, a hyperpartisan presidential race, police brutality, institutional racism and the protests over it have dominated the headlines. It’s only natural that the Internet would get a little out of control over that barrage.
But there’s also a way in which social networks seem to be feeding a cycle of action and reaction. In just about every news event, the Internet’s reaction to the situation becomes a follow-on part of the story, so that much of the media establishment becomes trapped in escalating, infinite loops of 140-character, knee-jerk insta-reaction.
“Presidential elections have always been pretty nasty, but these days the mudslinging is omnipresent in a way that’s never been the case before,” said Whitney Phillips, an assistant professor of literary studies and writing at Mercer University, who is the author of “This Is Why We Can’t Have Nice Things,” a study of online “trolling.” “When Donald Trump says something that I would consider insane, it’s not just that it gets reported on by one or two or three outlets, but it becomes this wave of iterative content on top of content on top of content in your feed, taking over everything you see.”
The spiraling feedback loop is exhausting and rarely illuminating. The news brims with instantly produced “hot takes” and a raft of fact-free assertions. Everyone — yours truly included — is always on guard for the next opportunity to meme-ify outrage: What crazy thing did Trump/Obama/The New York Times/The New York Post/Rush Limbaugh/etc. say now, and what clever quip can you fit into a tweet to quickly begin collecting likes?
There is little room for indulging nuance, complexity, or flirting with the middle ground. In every issue, you are either with one aggrieved group or the other, and the more stridently you can express your disdain — short of hurling profanities at the president on TV, which will earn you a brief suspension — the better reaction you’ll get….(More)”
Stacy Gray at the Future of Privacy Forum: “Each year, FPF invites privacy scholars and authors to submit articles and papers to be considered by members of our Advisory Board, with an aim toward showcasing those articles that should inform any conversation about privacy among policymakers in Congress, as well as at the Federal Trade Commission and in other government agencies. For our sixth annual Privacy Papers for Policymakers, we received submissions on topics ranging from mobile app privacy, to location tracking, to drone policy.
Our Advisory Board selected papers that describe the challenges and best practices of designing privacy notices, ways to minimize the risks of re-identification of data by focusing on process-based data release policy and taking a precautionary approach to data release, the relationship between privacy and markets, and bringing the concept of trust more strongly into privacy principles.
- Florian Schaub, Rebecca Balebako, Adam L. Durity, and Lorrie Faith Cranor
- Ira S. Rubinstein and Woodrow Hartzog
- Arvind Narayanan, Joanna Huey, and Edward W. Felten
- Peter Swire (Testimony, Senate Judiciary Committee Hearing, July 8, 2015)
- Joel R. Reidenberg
….(More)”
Dwyer Gunn in Pacific Standard: “In 2012, there were 896 million people around the world—12.7 percent of the global population—living on less than two dollars a day. The World Food Program estimates that 795 million people worldwide don’t have enough food to “lead a healthy life”; 25 percent of people living in Sub-Saharan Africa are undernourished. Over three million children die every year thanks to poor nutrition, and hunger is the leading cause of death worldwide. In 2012, just three preventable diseases (pneumonia, diarrhea, and malaria) killed 4,600 children every day.
Last month, the World Bank announced the launch of the Global Insights Initiative (GINI). The initiative, which follows in the footsteps of so-called “nudge units” in the United Kingdom and United States, is the Bank’s effort to incorporate insights from the field of behavioral science into the design of international development programs; too often, those programs failed to account for how people behave in the real world. Development policy, according to the Bank’s 2015 World Development Report, is overdue for a “redesign based on careful consideration of human factors.” Researchers have applauded the announcement, but it raises an interesting question: What can nudges really accomplish in the face of the developing world’s overwhelming poverty and health-care deficits?
In fact, researchers have found that instituting small program changes, informed by a better understanding of people’s motivations and limitations, can have big effects on everything from savings rates to vaccination rates to risky sexual behavior. Here are five studies that demonstrate the benefits of bringing empirical social science into the developing world….(More)”