Can we help wildlife adapt by crowdsourcing human responses to climate change?


Project by WWF: “Can data on how people respond to a warming world help us anticipate human impacts on wildlife?”

Not long after Nikhil Advani joined WWF in 2013, he made an intriguing discovery. Advani, whose work focuses on climate change adaptation, was assessing the vulnerability of various species to the changing planet. “I quickly realized,” he says, “that for a lot of the species that WWF works on—like elephants, mountain gorillas, and snow leopards—the biggest climate-driven threats are likely to come from human communities affected by changes in weather and climate.”

His realization led to the launch of Climate Crowd, an online platform for crowdsourcing data on two key topics: how rural and indigenous communities around the world are responding to climate change, and how their responses are affecting biodiversity. (The latter, Advani says, is something we still know very little about.) For example, if community members enter protected areas to collect water during droughts, how will that activity affect the flora and fauna?

Working with partners from a handful of other conservation groups, Advani designed a survey for participants to use when interviewing local community members. Participants transcribe the interviews, mark each topic discussed in a list of categories (such as drought or natural habitat encroachment), and upload the results to the online platform…(Explore the Climate Crowd).

Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda


Gillian Bolsover and Philip Howard in the journal Big Data: “Computational propaganda has recently exploded into public consciousness. The U.S. presidential campaign of 2016 was marred by evidence, which continues to emerge, of targeted political propaganda and the use of bots to distribute political messages on social media. This computational propaganda is both a social and technical phenomenon. Technical knowledge is necessary to work with the massive databases used for audience targeting; it is necessary to create the bots and algorithms that distribute propaganda; it is necessary to monitor and evaluate the results of these efforts in agile campaigning. Thus, technical knowledge comparable to that of those who create and distribute this propaganda is necessary to investigate the phenomenon.

However, viewing computational propaganda only from a technical perspective—as a set of variables, models, codes, and algorithms—plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it. The very act of making something technical and impartial makes it seem inevitable and unbiased. This undermines the opportunities to argue for change in the social value and meaning of this content and the structures in which it exists. Big-data research is necessary to understand the socio-technical issue of computational propaganda and the influence of technology in politics. However, big data researchers must maintain a critical stance toward the data being used and analyzed so as to ensure that we are critiquing as we go about describing, predicting, or recommending changes. If research studies of computational propaganda and political big data do not engage with the forms of power and knowledge that produce it, then the very possibility for improving the role of social-media platforms in public life evaporates.

Definitionally, computational propaganda has two important parts: the technical and the social. Focusing on the technical, Woolley and Howard define computational propaganda as the assemblage of social-media platforms, autonomous agents, and big data tasked with the manipulation of public opinion. In contrast, the social definition of computational propaganda derives from the definition of propaganda—communications that deliberately misrepresent symbols, appealing to emotions and prejudices and bypassing rational thought, to achieve the specific goals of their creators—with computational propaganda understood as propaganda created or disseminated using computational (technical) means…(More) (Full Text HTML | Full Text PDF)

Could Bitcoin technology help science?


Andy Extance at Nature: “…The much-hyped technology behind Bitcoin, known as blockchain, has intoxicated investors around the world and is now making tentative inroads into science, spurred by broad promises that it can transform key elements of the research enterprise. Supporters say that it could enhance reproducibility and the peer review process by creating incorruptible data trails and securely recording publication decisions. But some also argue that the buzz surrounding blockchain often exceeds reality and that introducing the approach into science could prove expensive and introduce ethical problems.

A few collaborations, including Scienceroot and Pluto, are already developing pilot projects for science. Scienceroot aims to raise US$20 million, which will help pay both peer reviewers and authors within its electronic journal and collaboration platform. It plans to raise the funds in early 2018 by exchanging some of the science tokens it uses for payment for another digital currency known as ether. And the developers of Wolfram Mathematica, a computer algebra system widely used by researchers, are currently working towards offering support for an open-source blockchain platform called Multichain. Scientists could use this, for example, to upload data to a shared, open workspace that isn’t controlled by any specific party, according to Multichain….

Claudia Pagliari, who researches digital health-tracking technologies at the University of Edinburgh, UK, says that she recognizes the potential of blockchain but that researchers have yet to properly explore its ethical issues. What happens if a patient withdraws consent for a trial that is immutably recorded on a blockchain? And unscrupulous researchers could still add fake data to a blockchain, even if the process is so open that everyone can see who adds it, says Pagliari. Once added, no one can change that information, although it could later be labelled as retracted….(More)”.
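
As a rough illustration of why blockchain records are tamper-evident but not erasable (the property behind Pagliari's consent and retraction concerns), here is a minimal hash-chain sketch. It is a toy model, not the API of Scienceroot, Multichain, or any real platform; all names and data are invented:

```python
import hashlib
import json

def _hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ResearchLedger:
    """Append-only chain of research records: entries are never deleted,
    but later entries can mark earlier ones as retracted."""

    def __init__(self):
        self.chain = []

    def append(self, author: str, payload: dict) -> str:
        entry = {
            "index": len(self.chain),
            "author": author,  # who added the entry is always visible
            "payload": payload,
            "prev": self.chain[-1]["hash"] if self.chain else None,
        }
        entry["hash"] = _hash({k: v for k, v in entry.items() if k != "hash"})
        self.chain.append(entry)
        return entry["hash"]

    def retract(self, author: str, target_hash: str, reason: str) -> str:
        # A retraction is itself a new entry; the original stays on the chain.
        return self.append(author, {"retracts": target_hash, "reason": reason})

    def verify(self) -> bool:
        """Any tampering with an earlier entry breaks every later hash link."""
        for i, e in enumerate(self.chain):
            if e["hash"] != _hash({k: v for k, v in e.items() if k != "hash"}):
                return False
            if i > 0 and e["prev"] != self.chain[i - 1]["hash"]:
                return False
        return True
```

In this design, editing or silently deleting a record invalidates the chain, which is exactly why a patient's withdrawal of consent cannot simply erase a record; the best available remedy is an appended retraction entry.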

Normative Challenges of Identification in the Internet of Things: Privacy, Profiling, Discrimination, and the GDPR


Paper by Sandra Wachter: “In the Internet of Things (IoT), identification and access control technologies provide essential infrastructure to link data from a user’s devices to unique identities and to provide seamless, linked-up services. At the same time, profiling methods based on linked records can reveal unexpected details about users’ identity and private life, which can conflict with privacy rights and lead to economic, social, and other forms of discriminatory treatment. A balance must be struck between the identification and access control required for the IoT to function and users’ rights to privacy and identity. Striking this balance is not an easy task because of weaknesses in cybersecurity and anonymisation techniques.

The EU General Data Protection Regulation (GDPR), set to come into force in May 2018, may provide essential guidance to achieve a fair balance between the interests of IoT providers and users. Through a review of academic and policy literature, this paper maps the inherent tension between privacy and identifiability in the IoT.

It focuses on four challenges: (1) profiling, inference, and discrimination; (2) control and context-sensitive sharing of identity; (3) consent and uncertainty; and (4) honesty, trust, and transparency. The paper will then examine the extent to which several standards defined in the GDPR will provide meaningful protection for privacy and control over identity for users of IoT. The paper concludes that in order to minimise the privacy impact of the conflicts between data protection principles and identification in the IoT, GDPR standards urgently require further specification and implementation into the design and deployment of IoT technologies….(More)”.

The nation state goes virtual


Tom Symons at Nesta’s Predictions for 2018: “As the world changes, people expect their governments and public services to do so too. When it’s easy to play computer games with someone on the other side of the world, or to set up a company bank account in five minutes, there is an expectation that paying taxes, applying for services or voting should be just as easy…

To add to this, large political upheavals such as Brexit and the election of Donald Trump have left some people feeling alienated from their national identity. Since the UK voted to leave the EU, demand for Irish passports has increased by 50 per cent, a sign that people feel dissatisfied with the constraints of geographically determined citizenship when they can no longer relate to their national identity.

In response, some governments see these changes as an opportunity to reconceptualise what we mean by a nation state.

The e-Residency offer

The primary actor in this disruption is Estonia, which leads the world in digital government. In 2015 it introduced e-Residency, allowing anyone, anywhere in the world, to receive a government-issued digital identity. e-Residency gives people access to digital public services and the ability to register and run online businesses from the country, exactly as someone born in Estonia can. As of November 2017, over 27,000 people had applied to be Estonian e-Residents, and they had established over 4,200 companies. Estonia aims to have ten million virtual residents by 2025….

While Estonia is a sovereign nation using technology to redefine itself, there are movements taking advantage of decentralising technologies in a bid to do away with the nation state altogether. Bitnation is a blockchain-based platform that enables people to create and join virtual nations. This allows people to agree their own social contracts with one another, using smart-contract technology, removing the need for a government as administrator or mediator. Since it began in 2014, it has been offering traditional government services, such as notaries, dispute resolution, marriages and voting systems, without the need for a middleman.

As of November 2017, there are over 10,000 Bitnation citizens. …

As citizens, we may be able to educate our children in Finland, access healthcare from South Korea and run our businesses in New Zealand, all without having to leave the comfort of our homes. Governments may see this as a means of financial sustainability in the longer term, generating income by selling such services to a global population rather than relying on centralised taxation systems levied on a geographic population.

Such a model has been described as ‘nation-as-a-service’, and could mean countries offering different tiers of citizenship, with taxes based on the number of services used, or tier of citizenship chosen. This could also mean multiple citizenships, including of city-states, as well as nations….

This is the moment for governments to start taking the full implications of the digital age seriously. From electronic IDs and data management through to seamless access to services, citizens will only demand better digital services. Countries such as Azerbaijan are already developing their own versions of the e-Residency. Large internet platforms such as Amazon are gearing up to replace entire government functions. If governments don’t grasp the nettle, they may find themselves left behind by technology and by other countries that won’t wait around for them….(More)”.

Proliferation of Open Government Initiatives and Systems


Book edited by Ayse Kok: “As is true in most aspects of daily life, the expansion of government in the modern era has included a move to a technologically-based system. A method of evaluation for such online governing systems is necessary for effective political management worldwide.

Proliferation of Open Government Initiatives and Systems is an essential scholarly publication that analyzes open government data initiatives to evaluate the impact and value of such structures. Featuring coverage on a broad range of topics including collaborative governance, civic responsibility, and public financial management, this publication is geared toward academicians and researchers seeking current, relevant research on the evaluation of open government data initiatives….(More)”.

Civic Technology: Open Data and Citizen Volunteers as a Resource for North Carolina Local Governments


Report by John B. Stephens: “Civic technology is an emergent area of practice where IT experts and citizens without specialized IT skills volunteer their time using government-provided open data to improve government services or otherwise create public benefit. Civic tech, as it is often referred to, draws on longer-standing practices, particularly e-government and civic engagement. It is also a new form of citizen–government co-production, building on the trend of greater government transparency.

This report is designed to help North Carolina local government leaders:

  • Define civic technology practices and describe North Carolina civic tech resources
  • Highlight accomplishments and ongoing projects in civic tech (in North Carolina and beyond)
  • Identify opportunities and challenges for North Carolina local governments in civic tech
  • Provide a set of resources for education and involvement in civic tech….(More)”.

Research reveals de-identified patient data can be re-identified


Vanessa Teague, Chris Culnane and Ben Rubinstein in PhysOrg: “In August 2016, Australia’s federal Department of Health published medical billing records of about 2.9 million Australians online. These records came from the Medicare Benefits Scheme (MBS) and the Pharmaceutical Benefits Scheme (PBS) containing 1 billion lines of historical health data from the records of around 10 per cent of the population.

These longitudinal records were de-identified, a process intended to prevent a person’s identity from being connected with information, and were made public on the government’s open data website as part of its policy on accessible public data.

We found that patients can be re-identified, without decryption, through a process of linking the unencrypted parts of the dataset with known information about the individual.

Our findings replicate those of similar studies of other de-identified datasets:

  • A few mundane facts taken together often suffice to isolate an individual.
  • Some patients can be identified by name from publicly available information.
  • Decreasing the precision of the data, or perturbing it statistically, makes re-identification only gradually harder, at a substantial cost in utility.

The first step is examining a patient’s uniqueness according to medical procedures such as childbirth. Some individuals are unique given public information, and many patients are unique given a few basic facts, such as year of birth or the date a baby was delivered….
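
The linkage idea can be made concrete with a small sketch: count how many records remain unique as quasi-identifiers are combined. The records below are invented for illustration, not drawn from the MBS/PBS release:

```python
from collections import Counter

def uniqueness_rate(records, quasi_identifiers):
    """Share of records that are the only match on a given combination
    of attributes: a unique record is re-identifiable by anyone who
    knows those few facts about the person."""
    key = lambda r: tuple(r[q] for q in quasi_identifiers)
    counts = Counter(key(r) for r in records)
    return sum(1 for r in records if counts[key(r)] == 1) / len(records)

# Hypothetical, made-up records standing in for de-identified billing data.
records = [
    {"birth_year": 1975, "delivery_date": "2014-03-02", "state": "VIC"},
    {"birth_year": 1975, "delivery_date": "2014-03-02", "state": "NSW"},
    {"birth_year": 1982, "delivery_date": "2016-07-19", "state": "QLD"},
    {"birth_year": 1990, "delivery_date": "2015-11-05", "state": "NSW"},
]

# One fact isolates some people; each added fact isolates more.
print(uniqueness_rate(records, ["birth_year"]))                           # 0.5
print(uniqueness_rate(records, ["birth_year", "delivery_date"]))          # 0.5
print(uniqueness_rate(records, ["birth_year", "delivery_date", "state"]))  # 1.0
```

The same computation run against a real longitudinal dataset is what drives the uniqueness rates the authors report: a handful of mundane facts is often enough.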

The second step is examining uniqueness according to the characteristics of commercial datasets we know of but cannot access directly. There are high uniqueness rates that would allow linking with a commercial pharmaceutical dataset, and with the billing data available to a bank. This means that ordinary people, not just the prominent ones, may be easily re-identifiable by their bank or insurance company…

These de-identification methods were bound to fail, because they were trying to achieve two inconsistent aims: the protection of individual privacy and publication of detailed individual records. De-identification is very unlikely to work for other rich datasets in the government’s care, like census data, tax records, mental health records, penal information and Centrelink data.

While the ambition of making more data more easily available to facilitate research, innovation and sound public policy is a good one, there is an important technical and procedural problem to solve: there is no good solution for publishing sensitive complex individual records that protects privacy without substantially degrading the usefulness of the data.

Some data can be safely published online, such as information about government, aggregations of large collections of material, or data that is differentially private. For sensitive, complex data about individuals, a much more controlled release in a secure research environment is a better solution. The Productivity Commission recommends a “trusted user” model, and techniques like dynamic consent also give patients greater control and visibility over their personal information….(More).
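
The differentially private release the authors mention can be illustrated with a minimal sketch of the Laplace mechanism applied to a counting query (function names and data here are illustrative, not from the study):

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-CDF sample from Laplace(0, 1/epsilon).
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical ages: the released figure is noisy, so no single record
# can be inferred from it, yet the aggregate remains useful.
ages = [18, 25, 33, 41, 52]
print(dp_count(ages, lambda a: a > 30, epsilon=1.0))  # true count is 3, plus noise
```

A smaller epsilon means more noise and stronger privacy, which is precisely the privacy-versus-utility trade-off the article describes: aggregates survive it well, while rich individual-level records do not.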

Migration Data Portal


New portal managed and developed by IOM’s Global Migration Data Analysis Centre (GMDAC)“…aims to serve as a unique access point to timely, comprehensive migration statistics and reliable information about migration data globally. The site is designed to help policy makers, national statistics officers, journalists and the general public interested in the field of migration to navigate the increasingly complex landscape of international migration data, currently scattered across different organisations and agencies.

Especially in critical times, such as those faced today, it is essential to ensure that responses to migration are based on sound facts and accurate analysis. By making the evidence about migration issues accessible and easy to understand, the Portal aims to contribute to a more informed public debate….

The five main sections of the Portal are designed to help you quickly and easily find the data and information you need.

  • DATA – Our interactive world map visualizes international, publicly-available and internationally comparable migration data.
  • THEMES – Thematic overviews explain how various aspects of migration are measured, describe the data sources and their strengths and weaknesses, and provide context and analysis of key migration data.
  • TOOLS – Migration data tools are regularly added to help you find the right tools, guidelines and manuals on how to collect, interpret and disseminate migration data.
  • SDGs & GCM – Reviews the migration-related targets in the Sustainable Development Goals (SDGs), how they are defined and measured, and provides information on the new Global Compact on Migration (GCM) and the migration data needed to support its implementation.
  • BLOG – Our blog and the Talking Migration Data video series provide a place for the migration data community to share their opinion on new developments and policy, new data or methods….(More)”.

From Territorial to Functional Sovereignty: The Case of Amazon


Essay by Frank Pasquale: “…Who needs city housing regulators when AirBnB can use data-driven methods to effectively regulate room-letting, then house-letting, and eventually urban planning generally? Why not let Amazon have its own jurisdiction or charter city, or establish special judicial procedures for Foxconn? Some vanguardists of functional sovereignty believe online rating systems could replace state occupational licensure—so rather than having government boards credential workers, a platform like LinkedIn could collect star ratings on them.

In this and later posts, I want to explain how this shift from territorial to functional sovereignty is creating a new digital political economy. Amazon’s rise is instructive. As Lina Khan explains, “the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it.” The “everything store” may seem like just another service in the economy—a virtual mall. But when a firm combines tens of millions of customers with a “marketing platform, a delivery and logistics network, a payment service, a credit lender, an auction house…a hardware manufacturer, and a leading host of cloud server space,” as Khan observes, it’s not just another shopping option.

Digital political economy helps us understand how platforms accumulate power. With online platforms, it’s not a simple narrative of “best service wins.” Network effects have been on the cyberlaw (and digital economics) agenda for over twenty years. Amazon’s dominance has exhibited how network effects can be self-reinforcing. The more merchants there are selling on (or to) Amazon, the better shoppers can be assured that they are searching all possible vendors. The more shoppers there are, the more vendors consider Amazon a “must-have” venue. As crowds build on either side of the platform, the middleman becomes ever more indispensable. Oh, sure, a new platform can enter the market—but until it gets access to the 480 million items Amazon sells (often at deep discounts), why should the median consumer defect to it? If I want garbage bags, do I really want to go over to Target.com to re-enter all my credit card details, create a new log-in, read the small print about shipping, and hope that this retailer can negotiate a better deal with Glad? Or do I, à la Sunstein, want a predictive shopping purveyor that intimately knows my past purchase habits, with satisfaction just a click away?

As artificial intelligence improves, the tracking of shopping into the Amazon groove will tend to become ever more rational for both buyers and sellers. Like a path through a forest trod ever clearer of debris, it becomes the natural default. To examine just one of many centripetal forces sucking money, data, and commerce into online behemoths, play out game theoretically how the possibility of online conflict redounds to Amazon’s favor. If you have a problem with a merchant online, do you want to pursue it as a one-off buyer? Or as someone whose reputation has been established over dozens or hundreds of transactions—and someone who can credibly threaten to deny Amazon hundreds or thousands of dollars of revenue each year? The same goes for merchants: The more tribute they can pay to Amazon, the more likely they are to achieve visibility in search results and attention (and perhaps even favor) when disputes come up. What Bruce Schneier said about security is increasingly true of commerce online: You want to be in the good graces of one of the neo-feudal giants who bring order to a lawless realm. Yet few hesitate to think about exactly how the digital lords might use their data advantages against those they ostensibly protect.
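
The self-reinforcing dynamic described above can be sketched with a toy preferential-attachment model, in which newcomers pick a platform with probability proportional to the square of its current size (a crude stand-in for superlinear network value; every number here is illustrative, not an empirical claim about Amazon):

```python
def final_share(a, b, newcomers_per_round=100, rounds=50):
    """Deterministic toy model of two-sided platform competition.

    Each round, newcomers split between platforms in proportion to the
    square of each platform's size. Because value grows superlinearly
    with users, even a modest early lead snowballs toward dominance.
    """
    for _ in range(rounds):
        wa, wb = a * a, b * b  # quadratic weight: bigger is disproportionately more attractive
        a += newcomers_per_round * wa / (wa + wb)
        b += newcomers_per_round * wb / (wa + wb)
    return a / (a + b)  # larger platform's final market share

# Equal starts stay balanced; a 60/40 start drifts toward near-monopoly.
print(final_share(50, 50))  # 0.5
print(final_share(60, 40))  # well above 0.75
```

The point of the sketch is the one the essay makes: nothing about the better-service story is needed for concentration to occur, only feedback between the two sides of the platform.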

Forward-thinking legal scholars are helping us grasp these dynamics. For example, Rory van Loo has described the status of the “corporation as courthouse”—that is, when platforms like Amazon run dispute resolution schemes to settle conflicts between buyers and sellers. Van Loo describes both the efficiency gains that an Amazon settlement process might have over small claims court, and the potential pitfalls for consumers (such as opaque standards for deciding cases). I believe that, on top of such economic considerations, we may want to consider the political economic origins of e-commerce feudalism. For example, as consumer rights shrivel, it’s rational for buyers to turn to Amazon (rather than overwhelmed small claims courts) to press their case. The evisceration of class actions, the rise of arbitration, boilerplate contracts—all these make the judicial system an increasingly vestigial organ in consumer disputes. Individuals rationally turn to online giants for powers to impose order that libertarian legal doctrine stripped from the state. And in so doing, they reinforce the very dynamics that led to the state’s etiolation in the first place….(More)”.