Stefaan Verhulst
Caitlin Dewey in the Washington Post: “As the death toll from Saturday’s 7.8-magnitude Nepalese earthquake inches higher, help in finding and identifying missing persons has come from an unusual source: Silicon Valley tech giants.
Both Google and Facebook deployed collaborative, cellphone-based tools over the weekend to help track victims of the earthquake. In the midst of both companies’ big push to bring Internet to the developing world, it’s an important illustration of exactly how powerful that connectivity could be. And yet, in a country like Nepal — where there are only 77 cellphone subscriptions per 100 people versus 96 in the U.S. and 125 in the U.K. — it’s also a reminder of how very far that effort still has to go.
Facebook Safety Check
Facebook’s Safety Check essentially lets users do two things, depending on where they are. Users in an area impacted by a natural disaster can log onto the site and mark themselves as “safe.” Meanwhile, users around the world can log into the site and check if any of their friends are in the impacted area. The tool was built by Japanese engineers in response to the 2011 earthquake and tsunami that devastated coastal Japan.
…
Facebook hasn’t publicized how many people have used the tool, though the network only has 4.4 million users in the country based on estimates by its ad platform. Notably, you must also have a smartphone running the Facebook app to use this feature — and smartphone penetration in Nepal is quite low.
Google Person Finder
Like Safety Check, Google Person Finder is intended to connect people in a disaster area with friends and family around the world. Google’s five-year-old project also operates on a larger scale, however: It basically provides a massive, open platform to collaboratively track missing-persons reports. Previously, Google deployed the tool to help victims in the wake of Typhoon Haiyan and the Boston bombing.
Richard M. Thompson for the Congressional Research Service: “There are two overarching privacy issues implicated by domestic drone use. The first is defining what “privacy” means in the context of aerial surveillance. Privacy is an ambiguous term that can mean different things in different contexts. This becomes readily apparent when attempting to apply traditional privacy concepts such as personal control and secrecy to drone surveillance. Other, more nuanced privacy theories such as personal autonomy and anonymity must be explored to get a fuller understanding of the privacy risks posed by drone surveillance. Moreover, with ever-increasing advances in data storage and manipulation, the subsequent aggregation, use, and retention of drone-obtained data may warrant an additional privacy impact analysis.
The second predominant issue is which entity should be responsible for regulating drones and privacy. As the final arbiter of the Constitution, the courts are naturally looked upon to provide at least the floor of privacy protection from UAS surveillance, but as will be discussed in this report, under current law, this protection may be minimal….(More)”
CDT Press Release: “This paper is the third in a series of three, each of which explores health big data in a different context. The first — on health big data in the government context — is available here, and the second — on health big data in the clinical context — is available here.
Consumers are increasingly using mobile phone apps and wearable devices to generate and share data on health and wellness. They are using personal health record tools to access and copy health records and move them to third party platforms. They are sharing health information on social networking sites. They leave digital health footprints when they conduct online searches for health information. The health data created, accessed, and shared by consumers using these and many other tools can range from detailed clinical information, such as downloads from an implantable device and details about medication regimens, to data about weight, caloric intake, and exercise logged with a smart phone app.
These developments offer a wealth of opportunities for health care and personal wellness. However, privacy questions arise due to the volume and sensitivity of health data generated by consumer-focused apps, devices, and platforms, including the potential analytic uses of such data.
Many of the privacy issues that face traditional health care entities in the big data era also apply to app developers, wearable device manufacturers, and other entities not part of the traditional health care ecosystem. These include questions of data minimization, retention, and secondary use. Notice and consent pose challenges, especially given the limits of presenting notices on mobile device screens, and the fact that consumer devices may be bought and used without consultation with a health care professional. Security is a critical issue as well.
However, the privacy and security provisions of the Health Insurance Portability and Accountability Act (HIPAA) do not apply to most app developers, device manufacturers, or others in the consumer health space. This has benefits to innovation, as innovators would otherwise have to struggle with the complicated HIPAA rules. However, the current vacuum also leaves innovators without clear guidance on how to appropriately and effectively protect consumers’ health data. Given the promise of health apps, consumer devices, and consumer-facing services, and given the sensitivity of the data that they collect and share, it is important to provide such guidance….
As the source of privacy guidelines, we look to the framework provided by the Fair Information Practice Principles (FIPPs) and explore how it could be applied in an age of big data to patient-generated data. The FIPPs have, to varying degrees, influenced most modern data privacy regimes. While some have questioned the continued validity of the FIPPs in the current era of mass data collection and analysis, we consider here how the flexibility and rigor of the FIPPs provide an organizing framework for responsible data governance, promoting innovation, efficiency, and knowledge production while also protecting privacy. Rather than proposing an entirely new framework for big data, which could be years in the making at best, using the FIPPs would seem the best approach in promoting responsible big data practices. Applying the FIPPs could also help synchronize practices between the traditional health sector and emerging consumer products….(More)”
Tom Fox at the Washington Post: “Given the complexity and difficulty of the challenges that government leaders face, encouraging innovation among their workers can pay dividends. Government-wide employee survey data, however, suggest that much more needs to be done to foster this type of culture at many federal organizations.
According to that data, nearly 90 percent of federal employees are looking for ways to be more innovative and effective, but only 54 percent feel encouraged by their leaders to come up with new ways of doing work. To make matters worse, fewer than a third say they believe creativity and innovation are rewarded in their agencies.
It’s worth pausing to examine what sets apart those agencies that do foster innovation. They tend to have developed innovative cultures by providing forums for employees to share and test new ideas, by encouraging responsible risk-taking, and by occasionally bringing in outside talent for rotational assignments to infuse new thinking into the workplace.
The Department of Health and Human Services (HHS) is one example of an agency working at this. In 2010 it created the Idea Lab, with the goal to “remove barriers HHS employees face and promote better ways of working in government.”
It launched an awards program as part of Idea Lab called HHS Innovates to identify promising, new ideas likely to improve effectiveness. And to directly support implementing these ideas, the lab launched HHS Ignites, which provides teams with seed funding of $5,000 and a three-month timeframe to work on approved action plans. When the agency needs a shot of outside inspiration, it has its Entrepreneurs-in-Residence program, which enlists experts from the private and nonprofit sectors to join HHS for one or two years to develop new approaches and improve practices….
While the HHS Idea Lab program is a good concept, it’s the agency’s implementation that distinguishes it from other government efforts. Federal leaders elsewhere would be wise to borrow a few of their tactics.
As a starting point, federal leaders should issue a clear call for innovation that demands a measurable result. Too often, leaders ask for changes without any specificity as to the result they are looking to achieve. If you want your employees to be more innovative, you need to set a concrete, data-driven goal — whether that’s to reduce process steps or process times, improve customer satisfaction or reduce costs.
Secondly, you should help your employees take their ideas to implementation by playing equal parts cheerleader and drill sergeant. That is, you need to boost their confidence while at the same time pushing them to develop concrete action plans, experiments and measurements to show their ideas deliver results….(More)”
Paper by Stephan G. Grimmelikhuijsen and Albert J. Meijer in Public Administration Review: “Social media use has become increasingly popular among police forces. The literature suggests that social media use can increase perceived police legitimacy by enabling transparency and participation. Employing data from a large and representative survey of Dutch citizens (N = 4,492), this article tests whether and how social media use affects perceived legitimacy for a major social media platform, Twitter. A negligible number of citizens engage online with the police, and thus the findings reveal no positive relationship between participation and perceived legitimacy. The article shows that by enhancing transparency, Twitter does increase perceived police legitimacy, albeit to a limited extent. Subsequent analysis of the mechanism shows both an affective and a cognitive path from social media use to legitimacy. Overall, the findings suggest that establishing a direct channel with citizens and using it to communicate successes does help the police strengthen their legitimacy, but only slightly and for a small group of interested citizens….(More)”
Paper by Mark Fenster in the European Journal of Social Theory: “Transparency’s importance as an administrative norm seems self-evident. Prevailing ideals of political theory stipulate that the more visible government is, the more democratic, accountable, and legitimate it appears. The disclosure of state information consistently disappoints, however: there is never enough of it, while it often seems not to produce a truer democracy, a more accountable state, better policies, and a more contented populace. This gap between theory and practice suggests that the theoretical assumptions that provide the basis for transparency are wrong. This article argues that transparency is best understood as a theory of communication that excessively simplifies and thus is blind to the complexities of the contemporary state, government information, and the public. Taking them fully into account, the article argues, should lead us to question the state’s ability to control information, which in turn should make us question not only the improbability of the state making itself visible, but also the improbability of the state keeping itself secret…(More)”
Chatham House Paper by Michael Edelstein and Jussi Sane: “Political, economic and legal obstacles to data sharing in public health will be the most challenging to overcome.
- The interaction between barriers to data sharing in public health is complex, and single solutions to single barriers are unlikely to be successful. Political, economic and legal obstacles will be the most challenging to overcome.
- Public health data sharing occurs extensively as a collection of subregional and regional surveillance networks. These existing networks have often arisen as a consequence of a specific local public health crisis, and should be integrated into any global framework.
- Data sharing in public health is successful when a perceived need is addressed, and the social, political and cultural context is taken into account.
- A global data sharing legal framework is unlikely to be successful. A global data governance or ethical framework, supplemented by local memoranda of understanding that take into account the local context, is more likely to succeed.
- The International Health Regulations (IHR) should be considered as an infrastructure for data sharing. However, their lack of an enforcement mechanism, minimum data sets, and capacity assessment mechanism, as well as the potential impact on trade and travel following data sharing, need to be addressed.
- Optimal data sharing does not equate with open access for public health data….(More)”
Now, that data is being used to predict what parts of cities feel the safest. StreetScore, a collaboration between the MIT Media Lab’s Macro Connections and Camera Culture groups, uses an algorithm to create a super high-resolution map of urban perceptions. The algorithmically generated data could one day be used to research the connection between urban perception and crime, as well as to inform urban design decisions.

The algorithm, created by Nikhil Naik, a Ph.D. student in the Camera Culture lab, breaks an image down into its composite features — such as building texture, colors, and shapes. Based on how Place Pulse volunteers rated similar features, the algorithm assigns the streetscape a perceived safety score between 1 and 10. These scores are visualized as geographic points on a map, designed by MIT rising sophomore Jade Philipoom. Each image available from Google Maps in the two cities is represented by a colored dot: red for the locations that the algorithm tags as unsafe, and dark green for those that appear safest. The site, now limited to New York and Boston, will be expanded to feature Chicago and Detroit later this month, and eventually, with data collected from a new version of Place Pulse, will feature dozens of cities around the world….(More)”
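The two steps the excerpt describes — scoring a streetscape from image features and coloring its map dot from red to dark green — can be sketched roughly as follows. This is an illustrative toy only: StreetScore’s actual model is trained on Place Pulse ratings, and the feature names, weights, and RGB endpoints below are hypothetical.

```python
def perceived_safety(features, weights, bias=5.5):
    """Toy linear scorer over image features (e.g. texture or color
    activations), clamped to the 1-10 scale used by Place Pulse raters.
    The weights stand in for what the real model learns from ratings."""
    raw = bias + sum(weights.get(name, 0.0) * value
                     for name, value in features.items())
    return max(1.0, min(10.0, raw))

def score_to_color(score):
    """Map a perceived-safety score (1-10) to an RGB dot color,
    interpolating from red (unsafe) to dark green (safest)."""
    t = (score - 1) / 9.0                  # normalize to [0, 1]
    red, dark_green = (220, 30, 30), (20, 120, 40)
    return tuple(round(r + t * (g - r)) for r, g in zip(red, dark_green))
```

A map renderer would then call `score_to_color(perceived_safety(...))` once per Google Maps image to place its dot.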
Paper by Frank Mols et al in the European Journal of Political Research: “Policy makers can use four different modes of governance: ‘hierarchy’, ‘markets’, ‘networks’ and ‘persuasion’. In this article, it is argued that ‘nudging’ represents a distinct (fifth) mode of governance. The effectiveness of nudging as a means of bringing about lasting behaviour change is questioned and it is argued that evidence for its success ignores the facts that many successful nudges are not in fact nudges; that there are instances when nudges backfire; and that there may be ethical concerns associated with nudges. Instead, and in contrast to nudging, behaviour change is more likely to be enduring where it involves social identity change and norm internalisation. The article concludes by urging public policy scholars to engage with the social identity literature on ‘social influence’, and the idea that those promoting lasting behaviour change need to engage with people not as individual cognitive misers, but as members of groups whose norms they internalise and enact. …(More)”
New paper by Shoshana Zuboff in the Journal of Information Technology: “This article describes an emergent logic of accumulation in the networked sphere, ‘surveillance capitalism,’ and considers its implications for ‘information civilization.’ Google is to surveillance capitalism what General Motors was to managerial capitalism. Therefore the institutionalizing practices and operational assumptions of Google Inc. are the primary lens for this analysis as they are rendered in two recent articles authored by Google Chief Economist Hal Varian. Varian asserts four uses that follow from computer-mediated transactions: ‘data extraction and analysis,’ ‘new contractual forms due to better monitoring,’ ‘personalization and customization,’ and ‘continuous experiments.’ An examination of the nature and consequences of these uses sheds light on the implicit logic of surveillance capitalism and the global architecture of computer mediation upon which it depends. This architecture produces a distributed and largely uncontested new expression of power that I christen ‘Big Other.’ It is constituted by unexpected and often illegible mechanisms of extraction, commodification, and control that effectively exile persons from their own behavior while producing new markets of behavioral prediction and modification. Surveillance capitalism challenges democratic norms and departs in key ways from the centuries-long evolution of market capitalism….(More)”
